WorldWideScience

Sample records for multi-tasking multi-processor telemetry

  1. A high speed multi-tasking, multi-processor telemetry system

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kung Chris [Univ. of Texas, El Paso, TX (United States)

    1996-12-31

    This paper describes a small, lightweight, multi-tasking, multi-processor telemetry system capable of collecting 32 channels of differential signals at a sampling rate of 6.25 kHz per channel. The system is designed to collect data from remote wind turbine research sites and transfer the data via wireless communication. A description of the operational theory, hardware components, and itemized cost is provided. Synchronization with other data acquisition systems and test data on data transmission rates are also given. 11 refs., 7 figs., 4 tabs.
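
    The aggregate data rate implied by these figures is modest; a back-of-the-envelope check is sketched below, assuming (hypothetically, since the abstract does not state it) 16-bit samples.

      # Hypothetical throughput estimate for the telemetry system described above.
      # Assumption (not stated in the abstract): each sample is a 16-bit (2-byte) value.
      channels = 32
      sample_rate_hz = 6250          # 6.25 kHz per channel
      bytes_per_sample = 2           # assumed ADC resolution

      samples_per_second = channels * sample_rate_hz
      bytes_per_second = samples_per_second * bytes_per_sample

      print(f"{samples_per_second} samples/s aggregate")    # 200000 samples/s
      print(f"{bytes_per_second / 1000:.0f} kB/s raw data")  # 400 kB/s before framing overhead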

  2. Multi-processor network implementations in Multibus II and VME

    International Nuclear Information System (INIS)

    Briegel, C.

    1992-01-01

    ACNET (Fermilab Accelerator Controls Network), a proprietary network protocol, is implemented in a multi-processor configuration for both Multibus II and VME. The implementations are contrasted by the bus protocol and software design goals. The Multibus II implementation provides for multiple processors running a duplicate set of tasks on each processor. For a network connected task, messages are distributed by a network round-robin scheduler. Further, messages can be stopped, continued, or re-routed for each task by user-callable commands. The VME implementation provides for multiple processors running one task across all processors. The process can either be fixed to a particular processor or dynamically allocated to an available processor depending on the scheduling algorithm of the multi-processing operating system. (author)
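
    A minimal sketch of the round-robin distribution idea described for the Multibus II implementation is given below; the queue structure and names are hypothetical illustrations, not taken from the ACNET code.

      from itertools import cycle
      from queue import Queue

      # Hypothetical model: the same network-connected task runs on every processor,
      # and each incoming message is handed to the next processor's copy in turn.
      class RoundRobinDispatcher:
          def __init__(self, processor_queues):
              self._queues = processor_queues          # one Queue per processor
              self._next = cycle(processor_queues)     # round-robin iterator

          def dispatch(self, message):
              next(self._next).put(message)            # hand the message to the next copy

      queues = [Queue() for _ in range(3)]              # e.g. three processors
      dispatcher = RoundRobinDispatcher(queues)
      for i in range(6):
          dispatcher.dispatch(f"msg-{i}")
      print([q.qsize() for q in queues])                # -> [2, 2, 2]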

  3. A survey of Tumult, a real-time multi-processor system

    International Nuclear Information System (INIS)

    Jansen, P.G.

    1986-01-01

    Tumult (Twente University MULTi processor system) is the name of an ongoing project aiming at the design and implementation of a modular extendible multiprocessor system. All memory is distributed and processors communicate in parallel via a fast and reliable local switching network instead of a shared bus. A distributed real-time operating system is being designed and implemented, consisting of a multi-tasking subsystem per processor. Processes can communicate via a message passing mechanism. Communication links and processes are dynamically created and disposed by the application. In this article a brief description of the system is given; communication aspects are emphasized. (Auth.)

  4. Safe and Efficient Support for Embedded Multi-Processors in Ada

    Science.gov (United States)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.
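
    In a fully partitioned scheme each task is statically bound to one processor and never migrates. The fragment below only illustrates that idea in a general-purpose language using Linux CPU affinity; it is not the Ada/Ravenscar mechanism proposed in the paper.

      import os
      import threading

      # Illustration of full partitioning: restrict each worker to one fixed CPU so the
      # OS scheduler never migrates it (Linux-specific: os.sched_setaffinity).
      def worker(cpu_id, body):
          os.sched_setaffinity(0, {cpu_id})   # pin the caller to a single CPU
          body()

      def periodic_task():
          pass  # placeholder for a periodic, analysable activity

      # Static task-to-CPU mapping decided offline, as schedulability analysis requires.
      mapping = {0: periodic_task, 1: periodic_task}
      threads = [threading.Thread(target=worker, args=(cpu, body))
                 for cpu, body in mapping.items()]
      for t in threads:
          t.start()
      for t in threads:
          t.join()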

  5. Who multi-tasks and why? Multi-tasking ability, perceived multi-tasking ability, impulsivity, and sensation seeking.

    Science.gov (United States)

    Sanbonmatsu, David M; Strayer, David L; Medeiros-Ward, Nathan; Watson, Jason M

    2013-01-01

    The present study examined the relationship between personality and individual differences in multi-tasking ability. Participants enrolled at the University of Utah completed measures of multi-tasking activity, perceived multi-tasking ability, impulsivity, and sensation seeking. In addition, they performed the Operation Span in order to assess their executive control and actual multi-tasking ability. The findings indicate that the persons who are most capable of multi-tasking effectively are not the persons who are most likely to engage in multiple tasks simultaneously. To the contrary, multi-tasking activity as measured by the Media Multitasking Inventory and self-reported cell phone usage while driving were negatively correlated with actual multi-tasking ability. Multi-tasking activity was positively correlated with participants' perceived multi-tasking ability, which was found to be significantly inflated. Participants with a strong approach orientation and a weak avoidance orientation--high levels of impulsivity and sensation seeking--reported greater multi-tasking behavior. Finally, the findings suggest that people often engage in multi-tasking because they are less able to block out distractions and focus on a singular task. Participants with less executive control--low scorers on the Operation Span task and persons high in impulsivity--tended to report higher levels of multi-tasking activity.

  6. Who multi-tasks and why? Multi-tasking ability, perceived multi-tasking ability, impulsivity, and sensation seeking.

    Directory of Open Access Journals (Sweden)

    David M Sanbonmatsu

    Full Text Available The present study examined the relationship between personality and individual differences in multi-tasking ability. Participants enrolled at the University of Utah completed measures of multi-tasking activity, perceived multi-tasking ability, impulsivity, and sensation seeking. In addition, they performed the Operation Span in order to assess their executive control and actual multi-tasking ability. The findings indicate that the persons who are most capable of multi-tasking effectively are not the persons who are most likely to engage in multiple tasks simultaneously. To the contrary, multi-tasking activity as measured by the Media Multitasking Inventory and self-reported cell phone usage while driving were negatively correlated with actual multi-tasking ability. Multi-tasking activity was positively correlated with participants' perceived multi-tasking ability, which was found to be significantly inflated. Participants with a strong approach orientation and a weak avoidance orientation--high levels of impulsivity and sensation seeking--reported greater multi-tasking behavior. Finally, the findings suggest that people often engage in multi-tasking because they are less able to block out distractions and focus on a singular task. Participants with less executive control--low scorers on the Operation Span task and persons high in impulsivity--tended to report higher levels of multi-tasking activity.

  7. Who Multi-Tasks and Why? Multi-Tasking Ability, Perceived Multi-Tasking Ability, Impulsivity, and Sensation Seeking

    Science.gov (United States)

    Sanbonmatsu, David M.; Strayer, David L.; Medeiros-Ward, Nathan; Watson, Jason M.

    2013-01-01

    The present study examined the relationship between personality and individual differences in multi-tasking ability. Participants enrolled at the University of Utah completed measures of multi-tasking activity, perceived multi-tasking ability, impulsivity, and sensation seeking. In addition, they performed the Operation Span in order to assess their executive control and actual multi-tasking ability. The findings indicate that the persons who are most capable of multi-tasking effectively are not the persons who are most likely to engage in multiple tasks simultaneously. To the contrary, multi-tasking activity as measured by the Media Multitasking Inventory and self-reported cell phone usage while driving were negatively correlated with actual multi-tasking ability. Multi-tasking activity was positively correlated with participants’ perceived multi-tasking ability, which was found to be significantly inflated. Participants with a strong approach orientation and a weak avoidance orientation – high levels of impulsivity and sensation seeking – reported greater multi-tasking behavior. Finally, the findings suggest that people often engage in multi-tasking because they are less able to block out distractions and focus on a singular task. Participants with less executive control - low scorers on the Operation Span task and persons high in impulsivity - tended to report higher levels of multi-tasking activity. PMID:23372720

  8. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
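
    The contention effect described here can be illustrated with a crude micro-benchmark: run the same memory-streaming loop on one process and then on several processes at once, and compare per-process throughput. The array sizes and process counts below are arbitrary choices for illustration, not those of the NASA study.

      import time
      import numpy as np
      from multiprocessing import Process, Queue

      # Crude memory-streaming kernel: repeatedly sweep a large array so performance
      # is dominated by the memory subsystem rather than by arithmetic.
      def stream(n_mib, reps, out):
          a = np.ones((n_mib * 1024 * 1024) // 8)   # ~n_mib MiB of float64
          t0 = time.perf_counter()
          s = 0.0
          for _ in range(reps):
              s += a.sum()                           # forces a full pass over the array
          out.put((n_mib * reps) / (time.perf_counter() - t0))  # MiB/s per process

      def run(nproc):
          q = Queue()
          ps = [Process(target=stream, args=(256, 20, q)) for _ in range(nproc)]
          for p in ps: p.start()
          for p in ps: p.join()
          return [q.get() for _ in ps]

      if __name__ == "__main__":
          print("1 process :", run(1))   # per-process bandwidth, uncontended
          print("4 processes:", run(4))  # per-process bandwidth typically drops under contention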

  9. Who Multi-Tasks and Why? Multi-Tasking Ability, Perceived Multi-Tasking Ability, Impulsivity, and Sensation Seeking

    OpenAIRE

    Sanbonmatsu, David M.; Strayer, David L.; Medeiros-Ward, Nathan; Watson, Jason M.

    2013-01-01

    The present study examined the relationship between personality and individual differences in multi-tasking ability. Participants enrolled at the University of Utah completed measures of multi-tasking activity, perceived multi-tasking ability, impulsivity, and sensation seeking. In addition, they performed the Operation Span in order to assess their executive control and actual multi-tasking ability. The findings indicate that the persons who are most capable of multi-tasking effectively are ...

  10. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    Science.gov (United States)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transfered in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
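
    A schematic of the master-slave division of the nonbonded force loop (the only part delegated to the slaves in this approach) might look as follows; the pair force and data layout are placeholders, not the original Fortran implementation.

      from multiprocessing import Pool
      import numpy as np

      # Placeholder pair force evaluated over a chunk of the Verlet neighbour list.
      def partial_forces(args):
          pos, pairs = args
          f = np.zeros_like(pos)
          for i, j in pairs:                          # each (i, j) comes from the neighbour list
              r = pos[i] - pos[j]
              fij = r / (np.dot(r, r) ** 2 + 1e-12)   # toy pair force, not a physical potential
              f[i] += fij
              f[j] -= fij
          return f

      def nonbonded_forces(pos, neighbour_list, nslaves=4):
          chunks = np.array_split(np.array(neighbour_list), nslaves)
          with Pool(nslaves) as pool:                 # the "slaves" only help with this loop
              parts = pool.map(partial_forces, [(pos, c) for c in chunks])
          return sum(parts)                           # the master accumulates the partial forces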

  11. On the effective parallel programming of multi-core processors

    NARCIS (Netherlands)

    Varbanescu, A.L.

    2010-01-01

    Multi-core processors are considered now the only feasible alternative to the large single-core processors which have become limited by technological aspects such as power consumption and heat dissipation. However, due to their inherent parallel structure and their diversity, multi-cores are

  12. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.

  13. Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

    Science.gov (United States)

    2013-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Application of Advanced Multi-Core Processor Technologies... [Extraction residue from a table comparing candidate processors by power class (Microchip PIC32/DSPIC, STM32, NXP LPC series, ARM Cortex, TI OMAP, TI Sitara, Broadcom BCM2835, FPGA options) omitted.] ...state-of-the-art information processing architectures. OBJECTIVES: Next-generation processor architectures (multi-core, multi-threaded) hold the

  14. Hardware Synchronization for Embedded Multi-Core Processors

    DEFF Research Database (Denmark)

    Stoif, Christian; Schoeberl, Martin; Liccardi, Benito

    2011-01-01

    Multi-core processors are about to conquer embedded systems — it is not the question of whether they are coming but how the architectures of the microcontrollers should look with respect to the strict requirements in the field. We present the step from one to multiple cores in this paper, establi...

  15. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing equipment of the Musashi reactor, and improving its performance as a research tool became indispensable: the sampling rate was to be increased by introducing arithmetic units based on a multi-Digital Signal Processor (DSP) system (cluster). In order to realize heterogeneous cluster-type multi-processor computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clusters where needed. The faster computing performance of this system was confirmed by simulation results for simultaneous execution of plural jobs and for pipeline processing between clusters, which showed that the system leads to effective use of the existing system and enhanced cost performance. (T. Tanaka)

  16. Prototype Sistem Multi-Telemetri Wireless untuk Mengukur Suhu Udara Berbasis Mikrokontroler ESP8266 pada Greenhouse

    OpenAIRE

    Hanum Shirotu Nida

    2017-01-01

    Wireless telemetry is the measurement of parameters of an object whose results are transmitted to another location without using cables (wireless), while multi-telemetry is a combination of several such telemetry links. This research designs a prototype wireless multi-telemetry system for measuring air temperature and humidity in a greenhouse using DHT11 sensors, with the sensor readings transmitted using the W...

  17. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    Science.gov (United States)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is chosen as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and that it can be applied to task scheduling optimization in other heterogeneous and distributed environments.
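
    The core assignment rule described, placing each ready task on the processor that yields the minimum earliest finish time, can be sketched independently of the CQPSO search; the cost model below is deliberately simplified and the task/cost values are invented for illustration.

      # Simplified list scheduling by earliest finish time (EFT) for a DAG task model.
      # exec_cost[task][proc] is a hypothetical execution-time table; communication
      # costs and the CQPSO priority search from the paper are omitted for brevity.
      def eft_schedule(priority_list, deps, exec_cost, n_proc):
          proc_free = [0.0] * n_proc          # time at which each processor becomes free
          finish = {}                         # task -> finish time
          placement = {}                      # task -> processor
          for task in priority_list:          # tasks already ordered by priority
              ready = max((finish[d] for d in deps.get(task, [])), default=0.0)
              # try every processor and keep the one with the minimum EFT
              best = min(range(n_proc),
                         key=lambda p: max(ready, proc_free[p]) + exec_cost[task][p])
              start = max(ready, proc_free[best])
              finish[task] = start + exec_cost[task][best]
              proc_free[best] = finish[task]
              placement[task] = best
          return placement, max(finish.values())   # mapping and overall makespan

      # Tiny example: T0 -> {T1, T2} -> T3 on two processors.
      cost = {"T0": [2, 3], "T1": [3, 2], "T2": [4, 4], "T3": [2, 1]}
      deps = {"T1": ["T0"], "T2": ["T0"], "T3": ["T1", "T2"]}
      print(eft_schedule(["T0", "T1", "T2", "T3"], deps, cost, 2))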

  18. Optimizing survivability of multi-state systems with multi-level protection by multi-processor genetic algorithm

    International Nuclear Information System (INIS)

    Levitin, Gregory; Dai Yuanshun; Xie Min; Leng Poh, Kim

    2003-01-01

    In this paper we consider vulnerable systems which can have different states corresponding to different combinations of available elements composing the system. Each state can be characterized by a performance rate, which is the quantitative measure of a system's ability to perform its task. Both the impact of external factors (stress) and internal causes (failures) affect system survivability, which is determined as the probability of meeting a given demand. In order to increase the survivability of the system, multi-level protection is applied to its subsystems. This means that a subsystem and its inner level of protection are in their turn protected by the protection of an outer level; this double-protected subsystem may have its own outer protection, and so forth. In such systems, the protected subsystems can be destroyed only if all of the levels of their protection are destroyed, and each level of protection can be destroyed only if all of the outer levels of protection are destroyed. We formulate the problem of finding the structure of a series-parallel multi-state system (including the choice of system elements, the structure of multi-level protection and the protection methods) that achieves a desired level of system survivability at minimal cost. An algorithm based on the universal generating function method is used for determination of the system survivability. A multi-processor version of a genetic algorithm is used as the optimization tool to solve the structure optimization problem. An application example is presented to illustrate the procedure presented in this paper.
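
    The nesting rule (an element is destroyed only if every protection level, from the outermost inwards, is breached) can be illustrated with a small probability calculation. The per-level breach probabilities below are invented for illustration and have nothing to do with the paper's universal generating function procedure.

      # Toy nested-protection model: an element is destroyed only if every protection
      # level, from the outermost inwards, is breached.  All probabilities are made up.
      def destruction_probability(level_breach_probs, element_breach_prob):
          p = element_breach_prob
          for q in level_breach_probs:     # each enclosing level must also be breached
              p *= q
          return p

      # Element with two nested protection levels.
      print(destruction_probability([0.4, 0.3], 0.5))       # -> 0.06
      # Survivability of this element against the attack:
      print(1 - destruction_probability([0.4, 0.3], 0.5))   # -> 0.94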

  19. Multi-processor data acquisition and monitoring systems for particle physics

    International Nuclear Information System (INIS)

    White, V.; Burch, B.; Eng, K.; Heinicke, P.; Pyatetsky, M.; Ritchie, D.

    1983-01-01

    A high speed distributed processing system, using PDP-11 and VAX processors, is being developed at Fermilab. The acquisition of data is done using one or more PDP-11s. Additional processors are connected to provide either data logging or extra data analysis capabilities. Within this framework, functional interchangeability of PDP-11 and VAX processors and of the PDP-11 operating systems, RT-11 and RSX-11M, has been maintained. Inter-processor connections have been implemented in a general way using the 5 megabit DR11-W hardware currently selected for the purpose. Using this approach the authors have been able to make use of several existing data acquisition and analysis packages, such as RT/MULTI, in a multi-processor system

  20. Multi-purpose ECG telemetry system.

    Science.gov (United States)

    Marouf, Mohamed; Vukomanovic, Goran; Saranovac, Lazar; Bozic, Miroslav

    2017-06-19

    The electrocardiogram (ECG) is one of the most important non-invasive tools for the diagnosis of cardiac diseases. Taking advantage of the developed telecommunication infrastructure, several approaches that address the development of telemetry cardiac devices were introduced recently. Telemetry ECG devices allow easy and fast ECG monitoring of patients with suspected cardiac issues. Choosing the right device with the desired working mode, signal quality, and device cost are still the main obstacles to massive usage of these devices. In this paper, we introduce the design, implementation, and validation of a multi-purpose telemetry system for recording, transmission, and interpretation of ECG signals in different recording modes. The system consists of an ECG device, a cloud-based analysis pipeline, and accompanying mobile applications for physicians and patients. The proposed ECG device's mechanical design allows laypersons to easily record post-event short-term ECG signals, using dry electrodes without any preparation. Moreover, patients can use the device to record long-term signals in loop and holter modes, using wet electrodes. In order to overcome the problem of signal quality fluctuation due to using different electrode types and different placements on the subject's chest, a customized ECG signal processing and interpretation pipeline is presented for each working mode. We present the evaluation of the novel short-term recorder design. Recording of an ECG signal was performed for 391 patients using a standard 12-lead gold-standard ECG and the proposed patient-activated short-term post-event recorder. In the validation phase, a sample of validation signals followed a peer-review process wherein two experts annotated the signals in terms of signal acceptability for diagnosis. We found that 96% of signals allow detection of arrhythmias and other abnormal signal changes. Additionally, we compared and presented the correlation coefficient and the automatic QRS delineation results

  1. ARTiS, an Asymmetric Real-Time Scheduler for Linux on Multi-Processor Architectures

    OpenAIRE

    Piel , Éric; Marquet , Philippe; Soula , Julien; Osuna , Christophe; Dekeyser , Jean-Luc

    2005-01-01

    The ARTiS system is a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems. It allows High Performance Computing and real-time processing to be mixed. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The implementation is available as a modification of the Linux kernel, especially focusing on (but not restricted to) the IA-64 architecture. The basic idea of ARTiS is to assign a selected se...

  2. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
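
    One of the characterization metrics mentioned, computational intensity, is commonly combined with memory bandwidth in a roofline-style bound on attainable performance. The sketch below shows that generic bound; the numbers are illustrative and are not measurements from the WRF kernels.

      # Roofline-style bound: attainable GFLOP/s is limited either by peak compute or
      # by (arithmetic intensity x memory bandwidth).  All numbers are illustrative.
      def attainable_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gbs):
          return min(peak_gflops, intensity_flops_per_byte * bandwidth_gbs)

      peak, bw = 100.0, 25.0                     # hypothetical processor: 100 GFLOP/s, 25 GB/s
      for ai in (0.5, 2.0, 8.0):                 # low, medium, high computational intensity
          print(f"AI={ai:>4} flop/byte -> {attainable_gflops(ai, peak, bw):6.1f} GFLOP/s")
      # Kernels below ~4 flop/byte are memory-bandwidth bound on this hypothetical machine.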

  3. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    Science.gov (United States)

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053

  4. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  5. Efficient gate set tomography on a multi-qubit superconducting processor

    Science.gov (United States)

    Nielsen, Erik; Rudinger, Kenneth; Blume-Kohout, Robin; Bestwick, Andrew; Bloom, Benjamin; Block, Maxwell; Caldwell, Shane; Curtis, Michael; Hudson, Alex; Orgiazzi, Jean-Luc; Papageorge, Alexander; Polloreno, Anthony; Reagor, Matt; Rubin, Nicholas; Scheer, Michael; Selvanayagam, Michael; Sete, Eyob; Sinclair, Rodney; Smith, Robert; Vahidpour, Mehrnoosh; Villiers, Marius; Zeng, William; Rigetti, Chad

    Quantum information processors with five or more qubits are becoming common. Complete, predictive characterization of such devices, e.g. via any form of tomography including gate set tomography, appears impossible because the parameter space is intractably large. Randomized benchmarking scales well, but cannot predict device behavior or diagnose failure modes. We introduce a new type of gate set tomography that uses an efficient ansatz to model physically plausible errors, but scales polynomially with the number of qubits. We will describe the theory behind this multi-qubit tomography and present experimental results from using it to characterize a multi-qubit processor made by Rigetti Quantum Computing. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's NNSA under contract DE-AC04-94AL85000.

  6. Multi-population genomic prediction using a multi-task Bayesian learning model.

    Science.gov (United States)

    Chen, Liuhong; Li, Changxi; Miller, Stephen; Schenkel, Flavio

    2014-05-03

    Genomic prediction in multiple populations can be viewed as a multi-task learning problem where each task is to derive a prediction equation for one population, and prediction can be improved by sharing information across populations. The goal of this study was to develop a multi-task Bayesian learning model for multi-population genomic prediction with a strategy to effectively share information across populations. Simulation studies and real data from Holstein and Ayrshire dairy breeds with phenotypes on five milk production traits were used to evaluate the proposed multi-task Bayesian learning model and compare it with a single-task model and a simple data pooling method. A multi-task Bayesian learning model was proposed for multi-population genomic prediction. Information was shared across populations through a common set of latent indicator variables while SNP effects were allowed to vary in different populations. Both simulation studies and real data analysis showed the effectiveness of the multi-task model in improving genomic prediction accuracy for the smaller Ayrshire breed. Simulation studies suggested that the multi-task model was most effective when the number of QTL was small (n = 20), with an increase of accuracy by up to 0.09 when QTL effects were lowly correlated between two populations (ρ = 0.2), and up to 0.16 when QTL effects were highly correlated (ρ = 0.8). When QTL genotypes were included for training and validation, the improvements were 0.16 and 0.22, respectively, for scenarios of low and high correlation of QTL effects between two populations. When the number of QTL was large (n = 200), improvement was small, with a maximum of 0.02 when QTL genotypes were not included for genomic prediction. Reduction in accuracy was observed for the simple pooling method when the number of QTL was small and the correlation of QTL effects between the two populations was low. For the real data, the multi-task model achieved an

  7. Debugging in a multi-processor environment

    International Nuclear Information System (INIS)

    Spann, J.M.

    1981-01-01

    The Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) consists of nine 32-bit minicomputers arranged in a tightly coupled distributed computer system utilizing shared memory as the data exchange medium. Debugging of more than one program in the multi-processor environment is a difficult process. This paper describes what new tools were developed and how the testing of software is performed in the SCDS for the MFTF project

  8. Tinuso: A processor architecture for a multi-core hardware simulation platform

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Karlsson, Sven

    2010-01-01

    Multi-core systems have the potential to improve performance, energy and cost properties of embedded systems but also require new design methods and tools to take advantage of the new architectures. Due to the limited accuracy and performance of pure software simulators, we are working on a cycle-accurate hardware simulation platform. We have developed the Tinuso processor architecture for this platform. Tinuso is a processor architecture optimized for FPGA implementation. The instruction set makes use of predicated instructions and supports C/C++ and assembly language programming. It is designed to be easily extendable to maintain the flexibility required for research on multi-core systems. Tinuso contains a co-processor interface to connect to a network interface. This interface allows for communication over an on-chip network. A clock frequency estimation study on a deeply pipelined Tinuso...

  9. System, methods and apparatus for program optimization for multi-threaded processor architectures

    Science.gov (United States)

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.

  10. DiFX: A software correlator for very long baseline interferometry using multi-processor computing environments

    OpenAIRE

    Deller, A. T.; Tingay, S. J.; Bailes, M.; West, C.

    2007-01-01

    We describe the development of an FX style correlator for Very Long Baseline Interferometry (VLBI), implemented in software and intended to run in multi-processor computing environments, such as large clusters of commodity machines (Beowulf clusters) or computers specifically designed for high performance computing, such as multi-processor shared-memory machines. We outline the scientific and practical benefits for VLBI correlation, these chiefly being due to the inherent flexibility of softw...

  11. Neurovision processor for designing intelligent sensors

    Science.gov (United States)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  12. Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    NARCIS (Netherlands)

    Moreira, O.

    2012-01-01

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor while each meets its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for

  13. Multi-task Vector Field Learning.

    Science.gov (United States)

    Lin, Binbin; Yang, Sen; Zhang, Chiyuan; Ye, Jieping; He, Xiaofei

    2012-01-01

    Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously and identifying the shared information among tasks. Most existing MTL methods focus on learning linear models under the supervised setting. We propose a novel semi-supervised and nonlinear approach for MTL using vector fields. A vector field is a smooth mapping from the manifold to the tangent spaces which can be viewed as a directional derivative of functions on the manifold. We argue that vector fields provide a natural way to exploit the geometric structure of data as well as the shared differential structure of tasks, both of which are crucial for semi-supervised multi-task learning. In this paper, we develop multi-task vector field learning (MTVFL) which learns the predictor functions and the vector fields simultaneously. MTVFL has the following key properties. (1) The vector fields MTVFL learns are close to the gradient fields of the predictor functions. (2) Within each task, the vector field is required to be as parallel as possible which is expected to span a low dimensional subspace. (3) The vector fields from all tasks share a low dimensional subspace. We formalize our idea in a regularization framework and also provide a convex relaxation method to solve the original non-convex problem. The experimental results on synthetic and real data demonstrate the effectiveness of our proposed approach.

  14. Prototype Sistem Multi-Telemetri Wireless untuk Mengukur Suhu Udara Berbasis Mikrokontroler ESP8266 pada Greenhouse

    Directory of Open Access Journals (Sweden)

    Hanum Shirotu Nida

    2017-07-01

    Full Text Available Wireless telemetry is the measurement of parameters of an object whose results are transmitted to another location without using cables (wireless), while multi-telemetry is a combination of several such telemetry links. This research designs a prototype wireless multi-telemetry system for measuring air temperature and humidity in a greenhouse using DHT11 sensors, with the sensor readings sent to a server by an ESP8266 WiFi module using the HTTP protocol. The study tested the DHT11 sensor readings, the ESP8266 heap memory, the ESP8266 transmission range, missing-data handling, and network stability. The test results show that the DHT11 sensor has an average measurement error of 0.92 °C for temperature and 3.1% for humidity. The ESP8266 WiFi module is able to buffer and send up to 100 readings and can transmit over a range of 50 meters. Missing-data handling uses the buffer to store readings while the server cannot be reached by the sensor node, so that no data are lost. The stability of data transmission, i.e., of the connection between the sensor node and the server, is affected by the number of access points communicating on the same channel in the vicinity of the server's access point.
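
    The missing-data handling described (buffer up to 100 readings while the server is unreachable, then flush) can be sketched as follows. The actual firmware runs on the ESP8266, so this Python fragment only models the logic, and the callback and field names are hypothetical.

      from collections import deque

      # Model of the sensor node's buffer-and-flush behaviour; the capacity of 100
      # readings matches the abstract, everything else is a hypothetical illustration.
      class TelemetryBuffer:
          def __init__(self, send, capacity=100):
              self._send = send                      # posts one reading, returns True on success
              self._buf = deque(maxlen=capacity)     # oldest readings are dropped once full

          def record(self, temperature_c, humidity_pct):
              self._buf.append({"t": temperature_c, "h": humidity_pct})
              self.flush()

          def flush(self):
              while self._buf:
                  if not self._send(self._buf[0]):   # server unreachable: keep buffering
                      return
                  self._buf.popleft()                # sent successfully: discard from buffer

      # Usage with a fake transport that fails while the server is "down".
      server_up = False
      sent = []
      buf = TelemetryBuffer(lambda r: server_up and (sent.append(r) or True))
      buf.record(28.5, 61.0)        # buffered, server down
      server_up = True
      buf.record(28.7, 60.5)        # both readings flushed once the server is reachable
      print(len(sent))              # -> 2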

  15. Manifold regularized multi-task feature selection for multi-modality classification in Alzheimer's disease.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2013-01-01

    Accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we proposed a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning framework, where each task focuses on the classification based on each modality. In order to capture the intrinsic relatedness among multiple tasks (i.e., modalities), we adopted a group sparsity regularizer, which ensures only a small number of features to be selected jointly. In addition, we introduced a new manifold based Laplacian regularization term to preserve the geometric distribution of original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline Magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method.
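
    The kind of objective described (per-modality losses, a group-sparsity term that selects features jointly, and a Laplacian term preserving each modality's geometry) can be written schematically as below. This is a generic form reconstructed from the description, not the exact formulation of the paper.

      \min_{W}\; \sum_{m=1}^{M} \big\| y_m - X_m w_m \big\|_2^2
      \;+\; \lambda_1 \, \| W \|_{2,1}
      \;+\; \lambda_2 \sum_{m=1}^{M} w_m^{\top} X_m^{\top} L_m X_m w_m ,
      \qquad
      \| W \|_{2,1} = \sum_{j} \Big( \sum_{m} W_{jm}^{2} \Big)^{1/2}

    Here X_m, y_m and w_m are the data matrix, labels and feature-weight vector for modality m, W = [w_1, ..., w_M] stacks the weights, and L_m is a graph Laplacian built from the samples of modality m; features whose entire row of W is driven to zero are discarded jointly across modalities.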

  16. Interference control by best-effort process duty-cycling in chip multi-processor systems for real-time medical image processing

    NARCIS (Netherlands)

    Westmijze, M.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Systems with chip multi-processors are currently used for several applications that have real-time requirements. In chip multi-processor architectures, many hardware resources such as parts of the cache hierarchy are shared between cores and by using such resources, applications can significantly

  17. Are women better than men at multi-tasking?

    OpenAIRE

    Stoet, Gijsbert; O’Connor, Daryl B.; Conner, Mark; Laws, Keith R.

    2013-01-01

    Background: There seems to be a common belief that women are better at multi-tasking than men, but there is practically no scientific research on this topic. Here, we tested whether women have better multi-tasking skills than men. Methods: In Experiment 1, we compared performance of 120 women and 120 men in a computer-based task-switching paradigm. In Experiment 2, we compared a different group of 47 women and 47 men on "paper-and-pencil" multi-tasking tests. ...

  18. Manifold Regularized Multi-Task Feature Selection for Multi-Modality Classification in Alzheimer’s Disease

    Science.gov (United States)

    Jie, Biao; Cheng, Bo

    2014-01-01

    Accurate diagnosis of Alzheimer’s disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we proposed a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning framework, where each task focuses on the classification based on each modality. In order to capture the intrinsic relatedness among multiple tasks (i.e., modalities), we adopted a group sparsity regularizer, which ensures only a small number of features to be selected jointly. In addition, we introduced a new manifold based Laplacian regularization term to preserve the geometric distribution of original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline Magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method. PMID:24505676

  19. Simulation of Particulate Flows on Multi-Processor Machines with Distributed Memory

    Energy Technology Data Exchange (ETDEWEB)

    Uhlmann, M.

    2004-07-01

    We present a method for the parallelization of an immersed boundary algorithm for particulate flows using the MPI communication standard. The treatment of the fluid phase uses the domain decomposition technique over a Cartesian processor grid. The solution of the Helmholtz problem is approximately factorized and relies upon a parallel tri-diagonal solver; the Poisson problem is solved by means of a parallel multi-grid technique similar to MUDPACK. For the solid phase we employ a master-slave technique where one processor handles all the particles contained in its Eulerian fluid sub-domain, and zero or more neighbor processors collaborate in the computation of particle-related quantities whenever a particle position overlaps the boundary of a sub-domain. The parallel efficiency for some preliminary computations is presented. (Author) 9 refs.

  20. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates

  1. NMR-MPar: A Fault-Tolerance Approach for Multi-Core and Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Vanessa Vargas

    2018-03-01

    Full Text Available Multi-core and many-core processors are a promising solution to achieve high performance while maintaining low power consumption. However, the degree of miniaturization makes them more sensitive to soft errors. To improve system reliability, this work proposes a fault-tolerance approach based on redundancy and partitioning principles called N-Modular Redundancy and M-Partitions (NMR-MPar). By combining both principles, this approach allows multi-/many-core processors to perform critical functions in mixed-criticality systems. Benefiting from the capabilities of these devices, NMR-MPar creates different partitions that perform independent functions. For critical functions, it is proposed that N partitions with the same configuration participate in an N-modular redundancy system. In order to validate the approach, a case study is implemented on the KALRAY Multi-Purpose Processing Array (MPPA-256) many-core processor running two parallel benchmark applications. The traveling salesman problem and matrix multiplication applications were selected to test different resources of the device. The effectiveness of NMR-MPar is assessed by software-implemented fault injection. For evaluation purposes, it is considered that the system is intended to be used in avionics. Results show the improvement of the application reliability by two orders of magnitude when implementing NMR-MPar on the system. Finally, this work opens the possibility of using massive parallelism for dependable applications in embedded systems.

  2. Design of massively parallel hardware multi-processors for highly-demanding embedded applications

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2013-01-01

    Many new embedded applications require complex computations to be performed to tight schedules, while at the same time demanding low energy consumption and low cost. For implementation of these highly-demanding applications, highly-optimized application-specific multi-processor system-on-a-chip

  3. Microprocessor multi-task monitor

    International Nuclear Information System (INIS)

    Ludemann, C.A.

    1983-01-01

    This paper describes a multi-task monitor program for microprocessors. Although written for the Intel 8085, it incorporates features that would be beneficial for implementation in other microprocessors used in controlling and monitoring experiments and accelerators. The monitor places permanent programs (tasks), arbitrarily located throughout ROM, in a priority-ordered queue. The programmer is provided with the flexibility to add new tasks or modified versions of existing tasks, without having to comply with previously defined task boundaries or having to reprogram all of ROM. Scheduling of tasks is triggered by timers, outside stimuli (interrupts), or inter-task communications. Context switching time is of the order of tenths of a millisecond.
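
    The scheduling structure described (a priority-ordered queue of resident tasks, activated by timers, interrupts, or inter-task messages) can be modelled compactly; the sketch below is a generic illustration, not the 8085 monitor itself.

      import heapq

      # Generic priority-ordered task queue: lower number = higher priority.
      class Monitor:
          def __init__(self):
              self._ready = []                 # heap of (priority, seq, task)
              self._seq = 0                    # tie-breaker keeps insertion order stable

          def trigger(self, priority, task):   # called by a timer, interrupt, or another task
              heapq.heappush(self._ready, (priority, self._seq, task))
              self._seq += 1

          def run(self):                       # dispatch loop: run the highest-priority ready task
              while self._ready:
                  _, _, task = heapq.heappop(self._ready)
                  task()

      m = Monitor()
      m.trigger(2, lambda: print("housekeeping task"))
      m.trigger(0, lambda: print("urgent interrupt handler"))
      m.trigger(1, lambda: print("data logging task"))
      m.run()   # -> urgent interrupt handler, data logging task, housekeeping task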

  4. A scalable single-chip multi-processor architecture with on-chip RTOS kernel

    NARCIS (Netherlands)

    Theelen, B.D.; Verschueren, A.C.; Reyes Suarez, V.V.; Stevens, M.P.J.; Nunez, A.

    2003-01-01

    Now that system-on-chip technology is emerging, single-chip multi-processors are becoming feasible. A key problem of designing such systems is the complexity of their on-chip interconnects and memory architecture. It is furthermore unclear at what level software should be integrated. An example of a

  5. Behavioral Simulation and Performance Evaluation of Multi-Processor Architectures

    Directory of Open Access Journals (Sweden)

    Ausif Mahmood

    1996-01-01

    Full Text Available The development of multi-processor architectures requires extensive behavioral simulations to verify the correctness of design and to evaluate its performance. A high level language can provide maximum flexibility in this respect if the constructs for handling concurrent processes and a time mapping mechanism are added. This paper describes a novel technique for emulating hardware processes involved in a parallel architecture such that an object-oriented description of the design is maintained. The communication and synchronization between hardware processes is handled by splitting the processes into their equivalent subprograms at the entry points. The proper scheduling of these subprograms is coordinated by a timing wheel which provides a time mapping mechanism. Finally, a high level language pre-processor is proposed so that the timing wheel and the process emulation details can be made transparent to the user.
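
    The timing-wheel mechanism mentioned (subprograms scheduled for future simulated time slots and dispatched as simulated time advances) is a standard structure; a minimal generic sketch, not the authors' pre-processor output, is shown below.

      # Minimal timing wheel for event-driven simulation: events are placed in the
      # slot for their scheduled tick and dispatched as simulated time advances.
      class TimingWheel:
          def __init__(self, slots=64):
              self._slots = [[] for _ in range(slots)]
              self._now = 0

          def schedule(self, delay_ticks, event):
              slot = (self._now + delay_ticks) % len(self._slots)   # assumes delay < wheel size
              self._slots[slot].append(event)

          def tick(self):                        # advance simulated time by one tick
              self._now = (self._now + 1) % len(self._slots)
              pending, self._slots[self._now] = self._slots[self._now], []
              for event in pending:
                  event()                        # resume the suspended hardware-process subprogram

      wheel = TimingWheel()
      wheel.schedule(2, lambda: print("process B resumes at t=2"))
      wheel.schedule(1, lambda: print("process A resumes at t=1"))
      for _ in range(3):
          wheel.tick()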

  6. An FPGA design flow for reconfigurable network-based multi-processor systems on chip

    NARCIS (Netherlands)

    Kumar, A.; Hansson, M.A; Huisken, J.; Corporaal, H.

    2007-01-01

    Multi-processor systems on chip (MPSoC) platforms are becoming increasingly more heterogeneous and are shifting towards a more communication-centric methodology. Networks on chip (NoC) have emerged as the design paradigm for scalable on-chip communication architectures. As the system complexity

  7. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary

  8. Improving multi-tasking ability through action videogames.

    Science.gov (United States)

    Chiappe, Dan; Conger, Mark; Liao, Janet; Caldwell, J Lynn; Vu, Kim-Phuong L

    2013-03-01

    The present study examined whether action videogames can improve multi-tasking in high workload environments. Two groups with no action videogame experience were pre-tested using the Multi-Attribute Task Battery (MATB). It consists of two primary tasks: tracking and fuel management, and two secondary tasks: systems monitoring and communication. One group served as a control group, while a second played action videogames a minimum of 5 h a week for 10 weeks. Both groups returned for a post-assessment on the MATB. We found the videogame treatment enhanced performance on secondary tasks, without interfering with the primary tasks. Our results demonstrate action videogames can increase people's ability to take on additional tasks by increasing attentional capacity. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Commodity multi-processor systems in the ATLAS level-2 trigger

    International Nuclear Information System (INIS)

    Abolins, M.; Blair, R.; Bock, R.; Bogaerts, A.; Dawson, J.; Ermoline, Y.; Hauser, R.; Kugel, A.; Lay, R.; Muller, M.; Noffz, K.-H.; Pope, B.; Schlereth, J.; Werner, P.

    2000-01-01

    Low cost SMP (Symmetric Multi-Processor) systems provide substantial CPU and I/O capacity. These features together with the ease of system integration make them an attractive and cost effective solution for a number of real-time applications in event selection. In ATLAS the authors consider them as intelligent input buffers (active ROB complex), as event flow supervisors or as powerful processing nodes. Measurements of the performance of one off-the-shelf commercial 4-processor PC with two PCI buses, equipped with commercial FPGA based data source cards (microEnable) and running commercial software are presented and mapped on such applications together with a long-term program of work. The SMP systems may be considered as an important building block in future data acquisition systems

  10. Commodity multi-processor systems in the ATLAS level-2 trigger

    CERN Document Server

    Abolins, M; Bock, R; Bogaerts, J A C; Dawson, J; Ermoline, Y; Hauser, R; Kugel, A; Lay, R; Müller, M; Noffz, K H; Pope, B; Schlereth, J L; Werner, P

    2000-01-01

    Low cost SMP (symmetric multi-processor) systems provide substantial CPU and I/O capacity. These features together with the ease of system integration make them an attractive and cost effective solution for a number of real-time applications in event selection. In ATLAS we consider them as intelligent input buffers (an "active" ROB complex), as event flow supervisors or as powerful processing nodes. Measurements of the performance of one off-the-shelf commercial 4- processor PC with two PCI buses, equipped with commercial FPGA based data source cards (microEnable) and running commercial software are presented and mapped on such applications together with a long-term programme of work. The SMP systems may be considered as an important building block in future data acquisition systems. (9 refs).

  11. Design of Networks-on-Chip for Real-Time Multi-Processor Systems-on-Chip

    DEFF Research Database (Denmark)

    Sparsø, Jens

    2012-01-01

    This paper addresses the design of networks-on-chips for use in multi-processor systems-on-chips - the hardware platforms used in embedded systems. These platforms typically have to guarantee real-time properties, and as the network is a shared resource, it has to provide service guarantees (bandwidth and/or latency) to different communication flows. The paper reviews some past work in this field and the lessons learned, and the paper discusses ongoing research conducted as part of the project "Time-predictable Multi-Core Architecture for Embedded Systems" (T-CREST), supported by the European...

  12. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  13. Quality-driven model-based design of multi-processor accelerators : an application to LDPC decoders

    NARCIS (Netherlands)

    Jan, Y.

    2012-01-01

    The recent spectacular progress in nano-electronic technology has enabled the implementation of very complex multi-processor systems on single chips (MPSoCs). However in parallel, new highly demanding complex embedded applications are emerging, in fields like communication and networking,

  14. Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition

    Science.gov (United States)

    Yin, Xi; Liu, Xiaoming

    2018-02-01

    This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
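
    The dynamic-weighting idea (a main identity loss plus automatically weighted side-task losses) can be written generically. The sketch below uses a softmax over free parameters as a schematic weighting rule; the paper's actual scheme may differ.

      import math

      # Schematic combination of a main loss with dynamically weighted side-task losses.
      # The weights are a softmax over free parameters theta and sum to 1 across side tasks.
      def combined_loss(main_loss, side_losses, theta):
          exps = [math.exp(t) for t in theta]
          weights = [e / sum(exps) for e in exps]
          return main_loss + sum(w * l for w, l in zip(weights, side_losses)), weights

      loss, w = combined_loss(0.8, [0.5, 0.3, 0.9], theta=[0.0, 0.0, 0.0])
      print(round(loss, 3), [round(x, 2) for x in w])   # equal weights before any learning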

  15. Ranking Performance Measures in Multi-Task Agencies

    DEFF Research Database (Denmark)

    Christensen, Peter Ove; Sabac, Florin; Tian, Joyce

    2010-01-01

    We derive sufficient conditions for ranking performance evaluation systems in multi-task agency models (using both optimal and linear contracts) in terms of a second-order stochastic dominance (SSD) condition on the likelihood ratios. The SSD condition can be replaced by a variance-covariance matrix...

  16. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles

    2016-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), will be delivered to its users in two phases with the first phase online now and the second phase expected in mid-2016. Cori Phase 2 will be based on the KNL architecture and will contain over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a great use-case for the KNL architecture and supercomputers like Cori. Simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this presentation we will give an overview of the ATLAS simulation application with details on its multi-thr...

  17. CMS readiness for multi-core workload scheduling

    Science.gov (United States)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  18. CMS Readiness for Multi-Core Workload Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Balcas, J. [Caltech; Hernandez, J. [Madrid, CIEMAT; Aftab Khan, F. [NCP, Islamabad; Letts, J. [UC, San Diego; Mason, D. [Fermilab; Verguilov, V. [CLMI, Sofia

    2017-11-22

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  19. Rheem: Enabling Multi-Platform Task Execution

    KAUST Repository

    Agrawal, Divy; Kruse, Sebastian; Ouzzani, Mourad; Papotti, Paolo; Quiane-Ruiz, Jorge-Arnulfo; Tang, Nan; Zaki, Mohammed J.; Ba, Lamine; Berti-Equille, Laure; Chawla, Sanjay; Elmagarmid, Ahmed; Hammady, Hossam; Idris, Yasser; Kaoudi, Zoi; Khayyat, Zuhair

    2016-01-01

    Many emerging applications, from domains such as healthcare and oil & gas, require several data processing systems for complex analytics. This demo paper showcases Rheem, a framework that provides multi-platform task execution for such applications. It features a three-layer data processing abstraction and a new query optimization approach for multi-platform settings. We will demonstrate the strengths of Rheem by using real-world scenarios from three different applications, namely, machine learning, data cleaning, and data fusion. © 2016 ACM.

  20. Rheem: Enabling Multi-Platform Task Execution

    KAUST Repository

    Agrawal, Divy

    2016-06-16

    Many emerging applications, from domains such as healthcare and oil & gas, require several data processing systems for complex analytics. This demo paper showcases Rheem, a framework that provides multi-platform task execution for such applications. It features a three-layer data processing abstraction and a new query optimization approach for multi-platform settings. We will demonstrate the strengths of Rheem by using real-world scenarios from three different applications, namely, machine learning, data cleaning, and data fusion. © 2016 ACM.

  1. Off-line mapping of multi-rate dependent task sets to many-core platforms

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Noulard, Eric; Pagetti, Claire

    2015-01-01

    This paper presents an approach to execute safety-critical applications on multi- and many-core processors in a predictable manner. We investigate three concrete platforms: the Intel Single-chip Cloud Computer, the Texas Instruments TMS320C6678 and the Tilera TILEmpower-Gx36. We define an execution model to safely execute dependent periodic task sets on these platforms. The four rules of the execution model entail that an off-line mapping of the application to the platform must be computed. The paper details our approach to automatically compute a valid mapping. Furthermore, we evaluate our...

  2. Multi-task feature learning by using trace norm regularization

    Directory of Open Access Journals (Sweden)

    Jiangmei Zhang

    2017-11-01

    Full Text Available Multi-task learning can extract the correlation of multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs a mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach by using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
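
    The trace (nuclear) norm ingredient can be illustrated with a short sketch (not the authors' implementation); the random matrices below merely stand in for the sub-task weight vectors stacked as columns of W.

      import numpy as np

      def prox_trace_norm(W, tau):
          """Singular-value soft thresholding: argmin_X 0.5*||X-W||_F^2 + tau*||X||_*."""
          U, s, Vt = np.linalg.svd(W, full_matrices=False)
          s_shrunk = np.maximum(s - tau, 0.0)        # shrink the singular values
          return (U * s_shrunk) @ Vt                 # rebuild a low-rank W

      # Example: one proximal gradient step on a (features x sub-tasks) weight matrix
      rng = np.random.default_rng(0)
      W = rng.normal(size=(20, 5))
      grad = rng.normal(size=W.shape)                # placeholder gradient of the data-fit term
      step, lam = 0.1, 0.5
      W = prox_trace_norm(W - step * grad, step * lam)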

  3. Simulation of Particulate Flows on Multi-Processor Machines with Distributed Memory

    International Nuclear Information System (INIS)

    Uhlmann, M.

    2004-01-01

    We present a method for the parallelization of an immersed boundary algorithm for particulate flows using the MPI standard of communication. The treatment of the fluid phase uses the domain decomposition technique over a Cartesian processor grid. The solution of the Helmholtz problem is approximately factorized and relies upon a parallel tri-diagonal solver; the Poisson problem is solved by means of the parallel multi-grid solver MUDPACK. For the solid phase we employ a master-slaves technique where one processor handles all the particles contained in its Eulerian fluid sub-domain and zero or more neighbor processors collaborate in the computation of particle-related quantities whenever a particle position overlaps the boundary of a sub-domain. The parallel efficiency for some preliminary computations is presented. (Author) 9 refs

  4. Multi-task feature selection in microarray data by binary integer programming.

    Science.gov (United States)

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
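
    The relaxation idea can be sketched as follows (a toy illustration under assumed data, not the paper's algorithm): the binary selection vector is relaxed to the unit box, a quadratic relevance-minus-redundancy objective is maximised by projected gradient ascent, and the k features with the largest relaxed scores are kept.

      import numpy as np

      def relaxed_feature_selection(relevance, redundancy, k, step=0.01, iters=500):
          """Maximise relevance@x - 0.5*x@redundancy@x over x in [0, 1]^d, then keep top-k."""
          x = np.full(relevance.size, 0.5)
          for _ in range(iters):
              grad = relevance - redundancy @ x
              x = np.clip(x + step * grad, 0.0, 1.0)   # projection onto the relaxed box
          return np.argsort(x)[-k:]                    # indices of the k largest relaxed scores

      rng = np.random.default_rng(1)
      relevance = rng.random(50)                       # stand-in per-feature relevance scores
      A = rng.normal(size=(50, 50))
      redundancy = A @ A.T / 50                        # stand-in positive semidefinite redundancy
      print(relaxed_feature_selection(relevance, redundancy, k=10))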

  5. Multi-Threaded Dense Linear Algebra Libraries for Low-Power Asymmetric Multicore Processors

    OpenAIRE

    Catalán, Sandra; Herrero, José R.; Igual, Francisco D.; Rodríguez-Sánchez, Rafael; Quintana-Ortí, Enrique S.

    2015-01-01

    Dense linear algebra libraries, such as BLAS and LAPACK, provide a relevant collection of numerical tools for many scientific and engineering applications. While there exist high performance implementations of the BLAS (and LAPACK) functionality for many current multi-threaded architectures,the adaption of these libraries for asymmetric multicore processors (AMPs)is still pending. In this paper we address this challenge by developing an asymmetry-aware implementation of the BLAS, based on the...

  6. 3D Seismic Imaging through Reverse-Time Migration on Homogeneous and Heterogeneous Multi-Core Processors

    Directory of Open Access Journals (Sweden)

    Mauricio Araya-Polo

    2009-01-01

    Full Text Available Reverse-Time Migration (RTM) is a state-of-the-art technique in seismic acoustic imaging, because of the quality and integrity of the images it provides. Oil and gas companies trust RTM with crucial decisions on multi-million-dollar drilling investments. But RTM requires vastly more computational power than its predecessor techniques, and this has somewhat hindered its practical success. On the other hand, although multi-core architectures promise to deliver unprecedented computational power, little attention has been devoted to mapping RTM efficiently to multi-cores. In this paper, we present a mapping of the RTM computational kernel to the IBM Cell/B.E. processor that reaches close-to-optimal performance. The kernel proves to be memory-bound and it achieves a 98% utilization of the peak memory bandwidth. Our Cell/B.E. implementation outperforms a traditional processor (PowerPC 970MP) in terms of performance (with a 15.0× speedup) and energy-efficiency (with a 10.0× increase in the GFlops/W delivered). Also, it is the fastest RTM implementation available to the best of our knowledge. These results increase the practical usability of RTM. Also, the RTM-Cell/B.E. combination proves to be a strong competitor in the seismic arena.

  7. Sistem Komunikasi Modul Sensor Jamak Berbasiskan Mikrokontroler Menggunakan Serial Rs-485 Mode Multi Processor Communication (Mpc

    Directory of Open Access Journals (Sweden)

    Suar wibawa

    2016-08-01

    Full Text Available The multi-sensor communication system uses standard RS-485 communication to connect each microcontroller-based data processing unit into a BUS topology network. The advantages of this communication system are: connectivity (easy to connect devices on a network), scalability (flexibility to expand the network), better resistance to noise, and easier maintenance. The system is built using a master-slave communication model. Because every device connected to this network can hear every data packet crossing it, each device would otherwise need to filter every data packet on the communication channel. The Multi Processor Communication (MPC) mode is applied to reduce the processor's burden in inspecting every data packet, so that a processor working on the slave side only needs to inspect the messages addressed to itself rather than every data packet crossing the communication channel.
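
    The address-filtering behaviour can be sketched in a few lines (the frame layout, address value and checksum rule below are assumptions for illustration, not the paper's actual protocol):

      MY_ADDRESS = 0x12

      def handle_frame(frame: bytes):
          # assumed layout: [address][length][payload...][checksum]
          if len(frame) < 3 or frame[0] != MY_ADDRESS:
              return None                              # not for this slave: ignore cheaply
          length = frame[1]
          payload = frame[2:2 + length]
          checksum = frame[2 + length]
          if (sum(frame[:2 + length]) & 0xFF) != checksum:
              return None                              # corrupted frame: drop it
          return payload                               # hand the payload to the application

      print(handle_frame(bytes([0x12, 0x02, 0xAA, 0xBB, (0x12 + 0x02 + 0xAA + 0xBB) & 0xFF])))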

  8. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    Science.gov (United States)

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  9. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    Directory of Open Access Journals (Sweden)

    Vito Trianni

    Full Text Available The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  10. Multi-ASIP Platform Synthesis for Event-Triggered Applications with Cost/Performance Trade-offs

    DEFF Research Database (Denmark)

    Gangadharan, Deepak; Micconi, Laura; Pop, Paul

    2013-01-01

    In this paper, we propose a technique to synthesize a cost-efficient distributed platform consisting of multiple Application Specific Instruction Set Processors (multi-ASIPs) running applications with strict timing constraints. Multi-ASIP platform synthesis is a non-trivial task for two reasons. Firstly, we need to know the WCET of tasks in target applications to derive platforms (including synthesized ASIPs) in which the tasks are schedulable. However, the WCET of tasks can be known only after the ASIPs are synthesized. We break this circular dependency by using a probability distribution

  11. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
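
    The work-farm pattern itself is easy to sketch with ordinary processes and queues standing in for the MPPA's hardware objects and self-synchronizing channels (illustrative only; the squaring step is a placeholder for the real per-item computation):

      from multiprocessing import Process, Queue

      def worker(inbox: Queue, outbox: Queue):
          while True:
              item = inbox.get()
              if item is None:                    # sentinel: shut the worker down
                  break
              outbox.put(item * item)             # stand-in for the real per-item work

      if __name__ == "__main__":
          inbox, outbox = Queue(), Queue()
          workers = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
          for w in workers:
              w.start()
          for item in range(100):                 # one input stream ...
              inbox.put(item)
          for _ in workers:
              inbox.put(None)
          results = [outbox.get() for _ in range(100)]   # ... and one output stream
          for w in workers:
              w.join()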

  12. Mathematical Methods and Algorithms of Mobile Parallel Computing on the Base of Multi-core Processors

    Directory of Open Access Journals (Sweden)

    Alexander B. Bakulev

    2012-11-01

    Full Text Available This article deals with mathematical models and algorithms that provide mobility for the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, which describes the computation process on multi-core processors.

  13. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task-parallelism and data-parallelism. Many problems need both parallelisms. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers using a programming paradigm which is an extension of array-programming to task management. Our language introduces arrays of independent tasks to be executed separately (MIMD) on subsets of processors of identical behaviour (SIMD), in order to translate the hierarchical inclusion of data-parallelism in task-parallelism. To manipulate tasks and data in a symmetrical way, we propose meta-operations which have the same behaviour on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit from the locally-shared memory, the hardware virtualization, and the multiplicity of communication networks. We also analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and require large floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular, and need no floating-point computation, but include numerical noise. Boltzmann lattice gases offer high numerical precision, but need floating points and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  14. High Fidelity, Numerical Investigation of Cross Talk in a Multi-Qubit Xmon Processor

    Science.gov (United States)

    Najafi-Yazdi, Alireza; Kelly, Julian; Martinis, John

    Unwanted electromagnetic interference between qubits, transmission lines, flux lines and other elements of a superconducting quantum processor poses a challenge in engineering such devices. This problem is exacerbated with scaling up the number of qubits. High fidelity, massively parallel computational toolkits, which can simulate the 3D electromagnetic environment and all features of the device, are instrumental in addressing this challenge. In this work, we numerically investigated the crosstalk between various elements of a multi-qubit quantum processor designed and tested by the Google team. The processor consists of 6 superconducting Xmon qubits with flux lines and gatelines. The device also consists of a Purcell filter for readout. The simulations are carried out with a high fidelity, massively parallel EM solver. We will present our findings regarding the sources of crosstalk in the device, as well as numerical model setup, and a comparison with available experimental data.

  15. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  16. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e. a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and road side infrastructure element on each link. Average speed and traffic volume data are collected at user-specified loop detector locations. Further, as a measure of safety the so-called Time To Collision (TTC) parameter is being recorded.
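
    The Time To Collision measure mentioned above can be illustrated with a small sketch; the exact definition used in the simulator is not given in the abstract, so the usual gap-over-closing-speed form is assumed here:

      def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
          """Seconds until the follower reaches the leader if both keep their current speeds."""
          closing_speed = v_follower_ms - v_leader_ms
          if closing_speed <= 0.0:
              return float("inf")                 # not closing in: no collision predicted
          return gap_m / closing_speed

      print(time_to_collision(gap_m=30.0, v_follower_ms=30.0, v_leader_ms=25.0))  # 6.0 s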

  17. Improving our understanding of multi-tasking in healthcare: Drawing together the cognitive psychology and healthcare literature.

    Science.gov (United States)

    Douglas, Heather E; Raban, Magdalena Z; Walter, Scott R; Westbrook, Johanna I

    2017-03-01

    Multi-tasking is an important skill for clinical work which has received limited research attention. Its impacts on clinical work are poorly understood. In contrast, there is substantial multi-tasking research in cognitive psychology, driver distraction, and human-computer interaction. This review synthesises evidence of the extent and impacts of multi-tasking on efficiency and task performance from health and non-healthcare literature, to compare and contrast approaches, identify implications for clinical work, and to develop an evidence-informed framework for guiding the measurement of multi-tasking in future healthcare studies. The results showed healthcare studies using direct observation have focused on descriptive studies to quantify concurrent multi-tasking and its frequency in different contexts, with limited study of impact. In comparison, non-healthcare studies have applied predominantly experimental and simulation designs, focusing on interleaved and concurrent multi-tasking, and testing theories of the mechanisms by which multi-tasking impacts task efficiency and performance. We propose a framework to guide the measurement of multi-tasking in clinical settings that draws together lessons from these siloed research efforts. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Using Multi-Core Systems for Rover Autonomy

    Science.gov (United States)

    Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.

    2010-01-01

    Task Objectives are: (1) Develop and demonstrate key capabilities for rover long-range science operations using multi-core computing, (a) Adapt three rover technologies to execute on SOA multi-core processor (b) Illustrate performance improvements achieved (c) Demonstrate adapted capabilities with rover hardware, (2) Targeting three high-level autonomy technologies (a) Two for onboard data analysis (b) One for onboard command sequencing/planning, (3) Technologies identified as enabling for future missions, (4)Benefits will be measured along several metrics: (a) Execution time / Power requirements (b) Number of data products processed per unit time (c) Solution quality

  19. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  20. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher level systems, monitoring and making decisions in real time, it must accomplish a set of requirements, such as: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  1. Multi-task pose-invariant face recognition.

    Science.gov (United States)

    Ding, Changxing; Xu, Chang; Tao, Dacheng

    2015-03-01

    Face images captured in unconstrained environments usually contain significant pose variation, which dramatically degrades the performance of algorithms designed to recognize frontal faces. This paper proposes a novel face identification framework capable of handling the full range of pose variations within ±90° of yaw. The proposed framework first transforms the original pose-invariant face recognition problem into a partial frontal face recognition problem. A robust patch-based face representation scheme is then developed to represent the synthesized partial frontal faces. For each patch, a transformation dictionary is learnt under the proposed multi-task learning scheme. The transformation dictionary transforms the features of different poses into a discriminative subspace. Finally, face matching is performed at patch level rather than at the holistic level. Extensive and systematic experimentation on FERET, CMU-PIE, and Multi-PIE databases shows that the proposed method consistently outperforms single-task-based baselines as well as state-of-the-art methods for the pose problem. We further extend the proposed algorithm for the unconstrained face verification problem and achieve top-level performance on the challenging LFW data set.

  2. Design concepts for a virtualizable embedded MPSoC architecture enabling virtualization in embedded multi-processor systems

    CERN Document Server

    Biedermann, Alexander

    2014-01-01

    Alexander Biedermann presents a generic hardware-based virtualization approach, which may transform an array of any off-the-shelf embedded processors into a multi-processor system with high execution dynamism. Based on this approach, he highlights concepts for the design of energy aware systems, self-healing systems as well as parallelized systems. For the latter, the novel so-called Agile Processing scheme is introduced by the author, which enables a seamless transition between sequential and parallel execution schemes. The design of such virtualizable systems is further aided by introduction

  3. Early Student Support for Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

    Science.gov (United States)

    2016-05-07

    Report documentation page (OMB No. 0704-0188), grant N00014-12-1-0298: Early Student Support for Application of Advanced Multi-Core Processor Technologies to Oceanographic Research. Recoverable abstract fragment: communications protocols (i.e. UART, I2C, and SPI), through the handing off of the data to the server APIs. By providing a common set of tools...

  4. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  5. A proposal toward a possibilistic multi-robot task allocation

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, J.

    2017-07-01

    One of the main problems to solve in multi-agent (or multi-robot) systems is to select the best robot or group of robots to carry out a specific task. This problem, referred to as Multi-Agent (robot) Task Allocation (MRTA), is still an open issue in real environments. Swarm intelligence methods provide very simple solutions for the MRTA problem. Among the most widely used swarm methods are the so-called Response Threshold algorithms, where the behavior of the system is modeled as a Markov chain and at each time step the robots select the next task to execute according to a transition probability function. Among other factors, this probability depends on a stimulus (for example the distance between the robot and the task). This classical probabilistic approach has several disadvantages: the transition function must meet the constraints of a probability distribution, the system only converges to a stationary state asymptotically, and so on. In order to overcome these problems, a new theoretical framework based on fuzzy (possibilistic) Markov chains was proposed [2]. As was proved, possibilistic Markov chains outperform the classical probabilistic ones when a Max-Min algebra is considered for matrix composition. For example, fuzzy Markov chains converge to a stable state in a finite number of steps, 10 times faster than their probabilistic counterpart. Moreover, they improve the predictions of the system under imprecise information. This paper will first review relevant work in MRTA, from both theoretical and experimental points of view. It will then summarize the aforementioned recent advances toward a new possibilistic swarm multi-robot task allocation framework. It will be seen how possibilistic Markov chains behave when other algebras are considered for matrix composition [1] and how the possibility transition function impacts the system's performance [3]. Finally, future work in this field will be proposed. (Author)
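
    The difference between the two chain types can be sketched as follows (the transition values are made up; this is an illustration of max-min composition, not the framework in [2]):

      import numpy as np

      P = np.array([[0.7, 0.3],                # probabilistic transition matrix (rows sum to 1)
                    [0.4, 0.6]])
      Pi = np.array([[0.9, 0.3],               # possibilistic transition matrix (degrees in [0, 1])
                     [0.5, 1.0]])

      def prob_step(x, P):
          return x @ P                          # ordinary sum-product composition

      def poss_step(x, Pi):
          # max-min composition: y_j = max_i min(x_i, Pi[i, j])
          return np.max(np.minimum(x[:, None], Pi), axis=0)

      x_prob = np.array([1.0, 0.0])
      x_poss = np.array([1.0, 0.0])
      for _ in range(5):
          x_prob = prob_step(x_prob, P)
          x_poss = poss_step(x_poss, Pi)
      print(x_prob, x_poss)                    # the possibilistic state settles after few steps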

  6. A proposal toward a possibilistic multi-robot task allocation

    International Nuclear Information System (INIS)

    Guerrero, J.

    2017-01-01

    One of the main problems to solve in multi-agent (or multi-robot) systems is to select the best robot or group of robots to carry out a specific task. This problem, referred to as Multi-Agent (robot) Task Allocation (MRTA), is still an open issue in real environments. Swarm intelligence methods provide very simple solutions for the MRTA problem. Among the most widely used swarm methods are the so-called Response Threshold algorithms, where the behavior of the system is modeled as a Markov chain and at each time step the robots select the next task to execute according to a transition probability function. Among other factors, this probability depends on a stimulus (for example the distance between the robot and the task). This classical probabilistic approach has several disadvantages: the transition function must meet the constraints of a probability distribution, the system only converges to a stationary state asymptotically, and so on. In order to overcome these problems, a new theoretical framework based on fuzzy (possibilistic) Markov chains was proposed [2]. As was proved, possibilistic Markov chains outperform the classical probabilistic ones when a Max-Min algebra is considered for matrix composition. For example, fuzzy Markov chains converge to a stable state in a finite number of steps, 10 times faster than their probabilistic counterpart. Moreover, they improve the predictions of the system under imprecise information. This paper will first review relevant work in MRTA, from both theoretical and experimental points of view. It will then summarize the aforementioned recent advances toward a new possibilistic swarm multi-robot task allocation framework. It will be seen how possibilistic Markov chains behave when other algebras are considered for matrix composition [1] and how the possibility transition function impacts the system's performance [3]. Finally, future work in this field will be proposed. (Author)

  7. Optimisation of LHCb Applications for Multi- and Manycore Job Submission

    CERN Document Server

    Rauschmayr, Nathalie; Graciani Diaz, Ricardo; Charpentier, Philippe

    The Worldwide LHC Computing Grid (WLCG) is the largest Computing Grid and is used by all Large Hadron Collider experiments in order to process their recorded data. It provides approximately 400k cores and storage. Nowadays, most of the resources consist of multi- and manycore processors. Conditions at the Large Hadron Collider experiments will change and much larger workloads and jobs consuming more memory are expected in the future. This has led to a paradigm shift which focuses on executing jobs as multiprocessor tasks in order to use multi- and manycore processors more efficiently. All experiments at CERN are currently investigating how such computing resources can be used more efficiently in terms of memory requirements and handling of concurrency. Until now, there are still many unsolved issues regarding software, scheduling, CPU accounting and task queues, which need to be solved by grid sites and experiments. This thesis develops a systematic approach to optimise the software of the LHCb experiment fo...

  8. Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores

    Directory of Open Access Journals (Sweden)

    Youngmin Kim

    2017-12-01

    Full Text Available In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost. The use of limited-energy batteries is closely tied to the lifetime of the sensor nodes, so efficient energy management is important for extending that lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores strongly requires attention to the problem of reducing power consumption in multi-core processors. In this paper, we propose an energy-efficient scheduling method for sensor nodes with a uniform multi-core processor. We extend the proposed T-Ler plane based scheduling for globally optimal scheduling of uniform multi-core and multi-processor systems to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection scheme for scheduling and a mapping method between tasks and processors are proposed to efficiently utilize dynamic power management. Experiments show the effectiveness of the proposed approach compared to other existing methods.
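
    The DPM decision that such schedulers build on can be illustrated with a simple rule (a generic sketch, not the proposed T-Ler plane scheduler): a core is put to sleep only when the idle interval until its next task release is long enough to amortize the sleep/wake-up overhead.

      def should_sleep(now, next_release, wakeup_latency, break_even_time):
          """True when sleeping saves energy despite the cost of waking up again."""
          idle_interval = next_release - now
          return idle_interval >= max(wakeup_latency, break_even_time)

      print(should_sleep(now=0.0, next_release=8.0, wakeup_latency=1.0, break_even_time=5.0))  # True
      print(should_sleep(now=0.0, next_release=3.0, wakeup_latency=1.0, break_even_time=5.0))  # False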

  9. Ranking Performance Measures in Multi-Task Agencies

    DEFF Research Database (Denmark)

    Christensen, Peter Ove; Sabac, Florin; Tian, Joyce

    We derive sufficient conditions for ranking performance evaluation systems in multi-task agency models using both optimal and linear contracts in terms of a second-order stochastic dominance (SSD) condition on the likelihood ratios. The SSD condition can be replaced by a variance-covariance matrix...

  10. Quality-Driven Model-Based Design of MultiProcessor Embedded Systems for Highlydemanding Applications

    DEFF Research Database (Denmark)

    Jozwiak, Lech; Madsen, Jan

    2013-01-01

    The recent spectacular progress in modern nano-dimension semiconductor technology has enabled implementation of a complete complex multi-processor system on a single chip (MPSoC), global networking and mobile wireless communication, and has facilitated fast progress in these areas. New important...... accessible or distant) objects, installations, machines or devices, or even implanted in the human or animal body, can serve as examples. However, many of the modern embedded applications impose very stringent functional and parametric demands. Moreover, the spectacular advances in microelectronics introduced...

  11. Energy Efficient Multi-Core Processing

    Directory of Open Access Journals (Sweden)

    Charles Leech

    2014-06-01

    Full Text Available This paper evaluates the present state of the art of energy-efficient embedded processor design techniques and demonstrates how small, variable-architecture embedded processors may exploit a run-time minimal architectural synthesis technique to achieve greater energy and area efficiency whilst maintaining performance. The picoMIPS architecture, inspired by the MIPS, is presented as an example of a minimal and energy-efficient processor. The picoMIPS is a variable-architecture RISC microprocessor with an application-specific minimised instruction set. Each implementation will contain only the necessary datapath elements in order to maximise area efficiency. Because of the relationship between logic gate count and power consumption, energy efficiency is also maximised; the system is therefore designed to perform a specific task in the most efficient processor-based form. The principles of the picoMIPS processor are illustrated with an example of the discrete cosine transform (DCT) and inverse DCT (IDCT) algorithms implemented in a multi-core context to demonstrate the concept of minimal architecture synthesis and how it can be used to produce an application-specific, energy-efficient processor.

  12. Development of a VME multi-processor system for plasma control at the JT-60 Upgrade

    International Nuclear Information System (INIS)

    Takahashi, M.; Kurihara, K.; Kawamata, Y.; Akasaka, H.; Kimura, T.

    1992-01-01

    Design and initial operation results are reported for a VME multi-processor system [1] for plasma control at a large fusion device, the JT-60 Upgrade, utilizing three 32-bit MC88100-based RISC computers and VME components. Development of the system was stimulated by requirements for faster and more accurate computation of the plasma position and current control. The RISC computers operate at 25 MHz along with two cache memory units (MC88200). We newly developed VME bus modules (an up/down counter, an analog-to-digital converter and a clock pulse generator) for measuring magnetic field and coil current and for synchronizing the processing in the three RISCs and the direct digital controllers (DDCs) of the magnet power supplies. We also confirmed that the speed of data transfer between the VME bus system and the DDCs through CAMAC highways satisfies the above requirements. In the initial operation of the JT-60 Upgrade, the VME multi-processor system has been proved to control the plasma position and current well, with a sampling period of 250 μsec and a delay of 500 μsec. (author)

  13. Multi-tasking and Arduino : why and how?

    NARCIS (Netherlands)

    Feijs, L.M.G.; Chen, L.L.; Djajadiningrat, T.; Feijs, L.M.G.; Fraser, S.; Hu, J.; Kyffin, S.; Steffen, D.

    2013-01-01

    In this article I argue that it is important to develop experiential prototypes which have multi-tasking capabilities. At the same time I show that for embedded prototype software based on the popular Arduino platform this is not too difficult. The approach is explained and illustrated using

  14. Multi-task learning with group information for human action recognition

    Science.gov (United States)

    Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang

    2018-04-01

    Human action recognition is an important and challenging task in computer vision research, due to the variations in human motion performance, interpersonal differences and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we firstly obtain group information through calculating the mutual information according to the latent relationship between Gaussian components and action categories, and clustering similar action categories into the same group by affinity propagation clustering. Additionally, in order to explore the relationships of related tasks, we incorporate group information into multi-task learning. Experimental results evaluated on two popular benchmarks (UCF50 and HMDB51 datasets) demonstrate the superiority of our proposed MTL-GI framework.
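
    The grouping step can be sketched with an off-the-shelf affinity propagation routine (the similarity values below are made-up stand-ins for the mutual-information-based similarities, and the use of scikit-learn is an assumption, not the authors' implementation):

      import numpy as np
      from sklearn.cluster import AffinityPropagation

      similarity = np.array([[1.0, 0.8, 0.1, 0.2],   # stand-in for the mutual-information
                             [0.8, 1.0, 0.2, 0.1],   # based similarity between action categories
                             [0.1, 0.2, 1.0, 0.7],
                             [0.2, 0.1, 0.7, 1.0]])
      groups = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(similarity)
      print(groups)    # categories sharing a label form a group (hence a related learning task)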

  15. Concurrent Learning of Control in Multi-agent Sequential Decision Tasks

    Science.gov (United States)

    2018-04-17

    The overall objective of this project was to develop multi-agent reinforcement learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized partially observable settings. The work addressed concurrent learning of policies in Dec-POMDPs, established performance bounds, and evaluated these algorithms both theoretically and empirically.

  16. Research status of multi-robot systems task allocation and uncertainty treatment

    Science.gov (United States)

    Li, Dahui; Fan, Qi; Dai, Xuefeng

    2017-08-01

    The multi-robot coordination algorithm has become a hot research topic in the field of robotics in recent years. It has a wide range of applications and good application prospects. This paper analyzes and summarizes the current research status of multi-robot coordination algorithms at home and abroad. From the perspectives of task allocation and the treatment of uncertainty, it discusses multi-robot coordination algorithms and presents the advantages and disadvantages of each commonly used method.

  17. Multi-robot Task Allocation for Search and Rescue Missions

    International Nuclear Information System (INIS)

    Hussein, Ahmed; Adel, Mohamed; Bakr, Mohamed; Shehata, Omar M; Khamis, Alaa

    2014-01-01

    Many researchers from academia and industry are attracted to investigate how to design and develop robust versatile multi-robot systems by solving a number of challenging and complex problems such as task allocation, group formation, self-organization and much more. In this study, the problem of multi-robot task allocation (MRTA) is tackled. MRTA is the problem of optimally allocating a set of tasks to a group of robots to optimize the overall system performance while being subjected to a set of constraints. A generic market-based approach is proposed in this paper to solve this problem. The efficacy of the proposed approach is quantitatively evaluated through simulation and real experimentation using heterogeneous Khepera-III mobile robots. The results from both simulation and experimentation indicate the high performance of the proposed algorithms and their applicability in search and rescue missions
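
    A minimal market-based allocation sketch in the spirit of the record above (robot positions, travel-cost bids and the greedy single-round auction are illustrative assumptions, not the paper's algorithm):

      import math

      robots = {"r1": (0.0, 0.0), "r2": (5.0, 5.0), "r3": (10.0, 0.0)}
      tasks = {"t1": (1.0, 1.0), "t2": (6.0, 4.0), "t3": (9.0, 1.0), "t4": (3.0, 3.0)}

      def bid(robot_pos, task_pos):
          return math.dist(robot_pos, task_pos)      # bid = travel cost (Euclidean distance)

      assignment = {}
      for task, t_pos in tasks.items():
          # every robot bids; the auctioneer awards the task to the cheapest bidder
          winner = min(robots, key=lambda r: bid(robots[r], t_pos))
          assignment[task] = winner
          robots[winner] = t_pos                     # the winner will end up at the task location
      print(assignment)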

  18. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  19. An Initial Investigation of Factors Affecting Multi-Task Performance

    National Research Council Canada - National Science Library

    Branscome, Tersa A; Swoboda, Jennifer C; Fatkin, Linda T

    2007-01-01

    This report presents the results of the first in a series of investigations designed to increase fundamental knowledge and understanding of the factors affecting multi-task performance in a military environment...

  20. Task-oriented control of Single-Master Multi-Slave Manipulator System

    International Nuclear Information System (INIS)

    Kosuge, Kazuhiro; Ishikawa, Jun; Furuta, Katsuhisa; Hariki, Kazuo; Sakai, Masaru.

    1994-01-01

    A master-slave manipulator system, in general, consists of a master arm manipulated by a human and a slave arm used for real tasks. Some tasks, such as manipulation of a heavy object, require two or more slave arms operated simultaneously. A Single-Master Multi-Slave Manipulator System consists of a master arm with six degrees of freedom and two or more slave arms, each of which has six or more degrees of freedom. In this system, the master arm controls the task-oriented variables using a Virtual Internal Model (VIM), based on the concept of 'Task-Oriented Control'. The VIM is a reference model driven by sensory information and used to describe the desired relation between the motion of the master arm and the task-oriented variables. The motion of the slave arms is controlled based on the task-oriented variables generated by the VIM, which tailors the system to meet specific tasks. A single-master multi-slave manipulator system having two slave arms has been experimentally developed and illustrates the concept. (author)

  1. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-11-09

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing ℓp,q mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intel 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers. © 2012 Springer Science+Business Media New York.
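
    The joint-sparsity ingredient shared by MTT and S-MTT can be illustrated by the proximal operator of the ℓ2,1 mixed norm (the p = 2, q = 1 case), which zeroes whole rows of the particle-representation matrix and so selects the same dictionary templates for all particles; this is a sketch of that single closed-form update, not the trackers' code.

      import numpy as np

      def prox_l21(Z, tau):
          """Row-wise soft thresholding: argmin_X 0.5*||X-Z||_F^2 + tau*sum_i ||X[i,:]||_2."""
          norms = np.linalg.norm(Z, axis=1, keepdims=True)
          scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
          return Z * scale                          # small rows are zeroed out entirely

      rng = np.random.default_rng(0)
      Z = rng.normal(size=(8, 4))                   # 8 dictionary templates x 4 particles
      print(prox_l21(Z, tau=1.0))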

  2. A Spatial Queuing-Based Algorithm for Multi-Robot Task Allocation

    Directory of Open Access Journals (Sweden)

    William Lenagh

    2015-08-01

    Full Text Available Multi-robot task allocation (MRTA) is an important area of research in autonomous multi-robot systems. The main problem in MRTA is to allocate a set of tasks to a set of robots so that the tasks can be completed by the robots while ensuring that a certain metric, such as the time required to complete all tasks, or the distance traveled, or the energy expended by the robots is reduced. We consider a scenario where tasks can appear dynamically and a task needs to be performed by multiple robots to be completed. We propose a new algorithm called SQ-MRTA (Spatial Queueing-MRTA) that uses a spatial queue-based model to allocate tasks between robots in a distributed manner. We have implemented the SQ-MRTA algorithm on accurately simulated models of Corobot robots within the Webots simulator for different numbers of robots and tasks and compared its performance with other state-of-the-art MRTA algorithms. Our results show that the SQ-MRTA algorithm is able to scale up with the number of tasks and robots in the environment, and it either outperforms or performs comparably with respect to other distributed MRTA algorithms.

  3. Keystone Business Models for Network Security Processors

    OpenAIRE

    Arthur Low; Steven Muegge

    2013-01-01

    Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor...

  4. Algorithm Design of CPCI Backboard's Interrupts Management Based on VxWorks' Multi-Tasks

    Science.gov (United States)

    Cheng, Jingyuan; An, Qi; Yang, Junfeng

    2006-09-01

    This paper begins with a brief introduction to the embedded real-time operating system VxWorks and the CompactPCI standard, then presents the programming interfaces for PCI (Peripheral Component Interconnect) configuration, interrupt handling and multi-task programming under VxWorks, and then places emphasis on a software framework for CPCI interrupt management based on multiple tasks. The method is sound in design and easy to adapt, and it ensures that all possible interrupts are handled in time, which makes it suitable for multi-channel, high-data-rate data acquisition systems with hard real-time requirements in high energy physics.

  5. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-06-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞}, q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.

  6. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  7. Dynamic Task Allocation in Multi-Hop Multimedia Wireless Sensor Networks with Low Mobility

    Directory of Open Access Journals (Sweden)

    Klaus Moessner

    2013-10-01

    Full Text Available This paper presents a task allocation-oriented framework to enable efficient in-network processing and cost-effective multi-hop resource sharing for dynamic multi-hop multimedia wireless sensor networks with low node mobility, e.g., pedestrian speeds. The proposed system incorporates a fast task reallocation algorithm to quickly recover from possible network service disruptions, such as node or link failures. An evolutional self-learning mechanism based on a genetic algorithm continuously adapts the system parameters in order to meet the desired application delay requirements, while also achieving a sufficiently long network lifetime. Since the algorithm runtime incurs considerable time delay while updating task assignments, we introduce an adaptive window size to limit the delay periods and ensure an up-to-date solution based on node mobility patterns and device processing capabilities. To the best of our knowledge, this is the first study that yields multi-objective task allocation in a mobile multi-hop wireless environment under dynamic conditions. Simulations are performed in various settings, and the results show considerable performance improvement in extending network lifetime compared to heuristic mechanisms. Furthermore, the proposed framework provides noticeable reduction in the frequency of missing application deadlines.

  8. The Multi-Feature Hypothesis: Connectionist Guidelines for L2 Task Design

    Science.gov (United States)

    Moonen, Machteld; de Graaff, Rick; Westhoff, Gerard; Brekelmans, Mieke

    2014-01-01

    This study focuses on the effects of task type on the retention and ease of activation of second language (L2) vocabulary, based on the multi-feature hypothesis (Moonen, De Graaff, & Westhoff, 2006). Two tasks were compared: a writing task and a list-learning task. It was hypothesized that performing the writing task would yield higher…

  9. Identifying beneficial task relations for multi-task learning in deep neural networks

    DEFF Research Database (Denmark)

    Bingel, Joachim; Søgaard, Anders

    2017-01-01

    Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP...

  10. SHARP: A multi-mission AI system for spacecraft telemetry monitoring and diagnosis

    Science.gov (United States)

    Lawson, Denise L.; James, Mark L.

    1989-01-01

    The Spacecraft Health Automated Reasoning Prototype (SHARP) is a system designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for the SHARP system demonstration which will occur during Voyager's encounter with the planet Neptune in August, 1989, in parallel with real-time Voyager operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. A brief introduction is given to the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory. The current method of operation for monitoring the Voyager Telecommunications subsystem is described, and the difficulties associated with the existing technology are highlighted. The approach taken in the SHARP system to overcome the current limitations is also described, as well as both the conventional and artificial intelligence solutions developed in SHARP.

  11. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  12. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  13. Application of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the OECD/NRC BWR turbine trip benchmark and its performance on multi-processor computers

    International Nuclear Information System (INIS)

    Langenbuch, S.; Schmidt, K.D.; Velkov, K.

    2003-01-01

    The OECD/NRC BWR Turbine Trip (TT) Benchmark is investigated to perform code-to-code comparison of coupled codes including a comparison to measured data which are available from turbine trip experiments at Peach Bottom 2. This Benchmark problem for a BWR over-pressure transient represents a challenging application of coupled codes which integrate 3-dimensional neutron kinetics into thermal-hydraulic system codes for best-estimate simulation of plant transients. This transient represents a typical application of coupled codes which are usually performed on powerful workstations using a single CPU. Nowadays, multi-CPU systems are much more readily available. Indeed, powerful workstations already provide 4 to 8 CPUs, and computer centers give access to multi-processor systems with CPU counts on the order of 16 up to several hundred. Therefore, the performance of the coupled code Athlet-Quabox/Cubbox on multi-processor systems is studied. Different application cases place different requirements on code efficiency, because the amount of computer time spent in different parts of the code varies. This paper presents the main results of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the BWR TT Benchmark together with evaluations of the code performance on multi-processor computers. (authors)

  14. Hierarchical DSE for multi-ASIP platforms

    DEFF Research Database (Denmark)

    Micconi, Laura; Corvino, Rosilde; Gangadharan, Deepak

    2013-01-01

    This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs...

  15. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  16. Dynamic configuration management of a multi-standard and multi-mode reconfigurable multi-ASIP architecture for turbo decoding

    Science.gov (United States)

    Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe

    2017-12-01

    The multiplication of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domain of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have rarely considered dynamic reconfiguration in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.

  17. Multi-robot task allocation based on two dimensional artificial fish swarm algorithm

    Science.gov (United States)

    Zheng, Taixiong; Li, Xueqin; Yang, Liangyi

    2007-12-01

    The problem of task allocation for multiple robots is to allocate a relatively large number of tasks to a relatively small number of robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm-based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector. Thus, each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and the task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.

  18. Peer Pressure in Multi-Dimensional Work Tasks

    OpenAIRE

    Felix Ebeling; Gerlinde Fellner; Johannes Wahlig

    2012-01-01

    We study the influence of peer pressure in multi-dimensional work tasks theoretically and in a controlled laboratory experiment. Thereby, workers face peer pressure in only one work dimension. We find that effort provision increases in the dimension where peer pressure is introduced. However, not all of this increase translates into a productivity gain, since the effect is partly offset by a decrease of effort in the work dimension without peer pressure. Furthermore, this tradeoff is stronger...

  19. Indistinguishability Operators Applied to Task Allocation Problems in Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    José Guerrero

    2017-09-01

    Full Text Available In this paper we show an application of indistinguishability operators to model response functions. Such functions are used in the mathematical modeling of the task allocation problem in multi-agent systems when the stimulus, perceived by the agent, to perform a task is assessed by means of the response threshold model. In particular, we propose this kind of operators to represent a response function when the stimulus only depends on the distance between the agent and a determined task, since we prove that two celebrated response functions used in the literature can be reproduced by appropriate indistinguishability operators when the stimulus only depends on the distance to each task that must be carried out. Despite the fact there is currently no systematic method to generate response functions, this paper provides, for the first time, a theoretical foundation to generate them and study their properties. To validate the theoretical results, the aforementioned indistinguishability operators have been used to simulate, with MATLAB, the allocation of a set of tasks in a multi-robot system with fuzzy Markov chains.

  20. SHARP: A multi-mission artificial intelligence system for spacecraft telemetry monitoring and diagnosis

    Science.gov (United States)

    Lawson, Denise L.; James, Mark L.

    1989-01-01

    The Spacecraft Health Automated Reasoning Prototype (SHARP) is a system designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. Telecommunications link analysis of the Voyager 2 spacecraft is the initial focus for the SHARP system demonstration which will occur during Voyager's encounter with the planet Neptune in August, 1989, in parallel with real time Voyager operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. A brief introduction is given to the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory. The current method of operation for monitoring the Voyager Telecommunications subsystem is described, and the difficulties associated with the existing technology are highlighted. The approach taken in the SHARP system to overcome the current limitations is also described, as well as both the conventional and artificial intelligence solutions developed in SHARP.

  1. Power-Energy Simulation for Multi-Core Processors in Bench-marking

    Directory of Open Access Journals (Sweden)

    Mona A. Abou-Of

    2017-01-01

    Full Text Available At the microarchitectural level, a multi-core processor, as a complex System on Chip, has sophisticated on-chip components including cores, shared caches, interconnects and system controllers such as memory and ethernet controllers. At the technological level, architects should consider the device types forecast in the International Technology Roadmap for Semiconductors (ITRS). Energy simulation enables architects to study two important metrics simultaneously. Timing is a key element of CPU performance that imposes constraints on the CPU target clock frequency. Power and the resulting heat impose more severe design constraints, such as core clustering, while the semiconductor industry keeps providing more transistors in the die area in pace with Moore’s law. Energy simulators provide a solution to this serious challenge. Energy is modelled either by combining a performance benchmarking tool with a power simulator or by an integrated framework comprising both a performance simulator and a power profiling system. This article presents and assesses trade-offs between different architectures using four-core battery-powered mobile systems by running a custom-made and a standard benchmark tool. The experimental results confirm the Energy/Frequency convexity rule over a range of frequency settings for different numbers of enabled cores. The reported results show that increasing the number of cores has a great effect on increasing the power consumption; however, minimum energy dissipation occurs at a lower frequency, which reduces the power consumption. Despite that, increasing the number of cores also increases the effective core count, which is reflected in better processor performance.
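
    The Energy/Frequency convexity rule referred to above can be illustrated with a toy model in which dynamic power grows with f·V² (voltage scaling roughly linearly with frequency), static power is constant, and execution time shrinks as 1/f; total energy then has an interior minimum. All constants in the sketch below are arbitrary assumptions, not measurements from the article.

      import numpy as np

      def energy(freq_ghz, work=1.0, c_dyn=1.0, p_static=0.3, v0=0.6, k_v=0.4):
          """Toy energy model: E(f) = (P_dyn(f) + P_static) * T(f)."""
          volt = v0 + k_v * freq_ghz            # assumed linear voltage/frequency scaling
          p_dyn = c_dyn * freq_ghz * volt ** 2  # dynamic power ~ f * V^2
          t_exec = work / freq_ghz              # execution time shrinks with frequency
          return (p_dyn + p_static) * t_exec

      freqs = np.linspace(0.4, 3.0, 27)
      energies = [energy(f) for f in freqs]
      best = freqs[int(np.argmin(energies))]
      print(f"minimum-energy frequency in this toy model: {best:.2f} GHz")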

  2. Synthetic Synchronisation: From Attention and Multi-Tasking to Negative Capability and Judgment

    Science.gov (United States)

    Stables, Andrew

    2013-01-01

    Educational literature has tended to focus, explicitly and implicitly, on two kinds of task orientation: the ability either to focus on a single task, or to multi-task. A third form of orientation characterises many highly successful people. This is the ability to combine several tasks into one: to "kill two (or more) birds with one…

  3. Improved sensitivity and specificity for resting state and task fMRI with multiband multi-echo EPI compared to multi-echo EPI at 7T.

    NARCIS (Netherlands)

    Boyacioglu, R.; Schulz, J.; Koopmans, P.J.; Barth, M.; Norris, David Gordon

    2015-01-01

    A multiband multi-echo (MBME) sequence is implemented and compared to a matched standard multi-echo (ME) protocol to investigate the potential improvement in sensitivity and spatial specificity at 7 T for resting state and task fMRI. ME acquisition is attractive because BOLD sensitivity is less

  4. Interactive high-resolution isosurface ray casting on multicore processors.

    Science.gov (United States)

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor consisting of a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested up to the maximum size of the main memory of our platform.
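
    The dynamic allocation of groups of ray-casting tasks described above can be sketched as worker threads repeatedly pulling the next pixel-tile index from a shared dispenser, so faster threads simply process more tiles. This Python sketch only illustrates the load-balancing idea; the tile granularity, thread count and render_tile placeholder are assumptions, not the authors' implementation.

      import itertools
      import threading

      def render_tile(tile_id):
          # Placeholder for casting rays through one small group of contiguous pixels.
          return sum(i * i for i in range(1000 + (tile_id % 7) * 500))

      def dynamic_render(n_tiles=256, n_threads=8):
          """Threads repeatedly grab the next unprocessed tile index (dynamic load balancing)."""
          next_tile = itertools.count()           # shared tile dispenser
          lock = threading.Lock()
          results = [None] * n_tiles

          def worker():
              while True:
                  with lock:                      # serialize the "take next tile" step
                      tile = next(next_tile)
                  if tile >= n_tiles:
                      return
                  results[tile] = render_tile(tile)

          threads = [threading.Thread(target=worker) for _ in range(n_threads)]
          for t in threads: t.start()
          for t in threads: t.join()
          return results

      print(len(dynamic_render()))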

  5. Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.

    Science.gov (United States)

    Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2017-06-01

    Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics research problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.

  6. VIRTUS: a multi-processor system in FASTBUS

    International Nuclear Information System (INIS)

    Ellett, J.; Jackson, R.; Ritter, R.; Schlein, P.; Yaeger, D.; Zweizig, J.

    1986-01-01

    VIRTUS is a system of parallel MC68000-based processors interconnected by FASTBUS that is used either on-line as an intelligent trigger component or off-line for full event processing. Each processor receives the complete set of data from one event. The host computer, a VAX 11/780, down-line loads all software to the processors, controls and monitors the functioning of all processors, and writes processed data to tape. Instructions, programs, and data are transferred among the processors and the host in the form of fixed format, variable length data blocks. (Auth.)

  7. Multi spectral scaling data acquisition system

    International Nuclear Information System (INIS)

    Behere, Anita; Patil, R.D.; Ghodgaonkar, M.D.; Gopalakrishnan, K.R.

    1997-01-01

    In nuclear spectroscopy applications, it is often desired to acquire data at high rate with high resolution. With the availability of low cost computers, it is possible to make a powerful data acquisition system with minimum hardware and software development, by designing a PC plug-in acquisition board. But in using the PC processor for data acquisition, the PC can not be used as a multitasking node. Keeping this in view, PC plug-in acquisition boards with on-board processor find tremendous applications. Transputer based data acquisition board has been designed which can be configured as a high count rate pulse height MCA or as a Multi Spectral Scaler. Multi Spectral Scaling (MSS) is a new technique, in which multiple spectra are acquired in small time frames and are then analyzed. This paper describes the details of this multi spectral scaling data acquisition system. 2 figs

  8. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation...... of the skyline. However, they do not sufficiently minimize dominance tests and so are not competitive with state-of-the-art sequential algorithms. In this paper, we introduce a novel multi-core skyline algorithm, Hybrid, which processes points in blocks. It maintains a shared, global skyline among all threads...
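
    For readers unfamiliar with the operator, the sketch below shows the underlying dominance test and a naive sequential skyline computation (assuming smaller values are better in every dimension); it is a baseline illustration only, not the block-based multi-core Hybrid algorithm proposed in the paper.

      def dominates(p, q):
          """p dominates q if p is no worse in every dimension and strictly better in one."""
          return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

      def skyline(points):
          """Return the points not dominated by any other point (naive O(n^2) baseline)."""
          return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

      # Example: trade-off between price and distance; only optimal trade-offs survive.
      print(skyline([(3, 7), (4, 6), (5, 5), (6, 6), (2, 9)]))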

  9. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    Full Text Available This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a linear computational cost increase with the number of robots that make up the team and allows the solving of highly complex assignment problems in dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows the reusing of the auction-based strategy to achieve different coordination levels.
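
    A minimal flavour of the auction-based strategy described above is sketched below: each free robot bids its estimated cost (here, travel distance) for every open task and the auctioneer repeatedly awards the globally cheapest bid. The cost function and the greedy clearing rule are simplifying assumptions for illustration and do not reproduce the paper's hierarchical sub-auction policy.

      import math

      def auction_assign(robots, tasks):
          """Greedy sequential single-item auctions: lowest bid (travel distance) wins."""
          open_tasks, free_robots, assignment = dict(tasks), dict(robots), {}
          while open_tasks and free_robots:
              bids = [(math.hypot(rp[0] - tp[0], rp[1] - tp[1]), r, t)
                      for r, rp in free_robots.items()
                      for t, tp in open_tasks.items()]
              cost, robot, task = min(bids)          # auctioneer awards the cheapest bid
              assignment[task] = robot
              del free_robots[robot], open_tasks[task]
          return assignment

      print(auction_assign({"r1": (0, 0), "r2": (5, 5)},
                           {"clean_hall": (1, 1), "clean_lab": (6, 4)}))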

  10. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    Science.gov (United States)

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Man-made qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, the hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three usual methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judge importance degree, and then the trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated based on the multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility.
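
    The ranking step described above can be sketched as a TOPSIS procedure in which the usual Euclidean distance to the ideal and anti-ideal solutions is replaced by a relative entropy distance. The decision matrix, weights and the exact entropy formulation below are assumptions for illustration and may differ from the paper.

      import numpy as np

      def topsis_relative_entropy(matrix, weights):
          """Rank alternatives (rows) by closeness to the ideal solution.

          `matrix` holds benefit-type indicator values; `weights` are the balanced
          indicator weights. Distance to the ideal/anti-ideal uses a relative
          entropy measure instead of the Euclidean metric.
          """
          eps = 1e-12
          norm = matrix / (np.linalg.norm(matrix, axis=0) + eps)   # vector normalization
          v = norm * weights                                       # weighted normalized values
          ideal, anti = v.max(axis=0), v.min(axis=0)

          def rel_entropy(a, b):
              a, b = a + eps, b + eps
              return np.sum(a * np.log(a / b) + b - a)             # generalized KL divergence

          d_plus = np.array([rel_entropy(ideal, row) for row in v])
          d_minus = np.array([rel_entropy(anti, row) for row in v])
          closeness = d_minus / (d_plus + d_minus + eps)
          return np.argsort(-closeness), closeness                 # best task first

      order, scores = topsis_relative_entropy(
          np.array([[0.7, 0.9, 0.4], [0.5, 0.6, 0.8], [0.9, 0.3, 0.7]]),
          np.array([0.5, 0.3, 0.2]))
      print(order, np.round(scores, 3))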

  11. Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study.

    Science.gov (United States)

    Liu, Qi; Xu, Qian; Zheng, Vincent W; Xue, Hong; Cao, Zhiwei; Yang, Qiang

    2010-04-10

    Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key mechanism in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with the mRNA targets and repress their translation to proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, even though such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms for different datasets and experimental conditions can often provide new clues on the design of potent siRNAs. An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences which encompass several RNAi experiments recently conducted by different research groups. By using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy prediction is obtained. The 19 most popular biological features for siRNA were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that the siRNA binding efficacy on different messenger RNAs (mRNAs) has a different conditional distribution, thus the multi-task learning can be conducted by viewing tasks at an "mRNA"-level rather than at the "experiment"-level. Such distribution diversity derived from siRNAs bound to different mRNAs helps indicate that the properties of the target mRNA have important implications for the siRNA binding efficacy. The knowledge gained from our study provides useful insights on how to analyze various cross-platform RNAi data for uncovering

  12. Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study

    Directory of Open Access Journals (Sweden)

    Xue Hong

    2010-04-01

    Full Text Available Abstract Background Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key mechanism in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with the mRNA targets and repress their translation to proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, even though such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms for different datasets and experimental conditions can often provide new clues on the design of potent siRNAs. Results An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences which encompass several RNAi experiments recently conducted by different research groups. By using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy prediction is obtained. The 19 most popular biological features for siRNA were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that the siRNA binding efficacy on different messenger RNAs (mRNAs) has a different conditional distribution, thus the multi-task learning can be conducted by viewing tasks at an "mRNA"-level rather than at the "experiment"-level. Such distribution diversity derived from siRNAs bound to different mRNAs helps indicate that the properties of the target mRNA have important implications for the siRNA binding efficacy. Conclusions The knowledge gained from our study provides useful insights on how to

  13. The communication processor of TUMULT-64

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Jansen, P.G.

    1988-01-01

    Tumult (Twente University MULTi-processor system) is a modular extendible multi-processor system designed and implemented at the Twente University of Technology in co-operation with Oce Nederland B.V. and the Dr. Neher Laboratories (Dutch PTT). Characteristics of the hardware are: MIMD type,

  14. Fuzzy logic based power-efficient real-time multi-core system

    CERN Document Server

    Ahmed, Jameel; Najam, Shaheryar; Najam, Zohaib

    2017-01-01

    This book focuses on identifying the performance challenges involved in computer architectures, optimal configuration settings and analysing their impact on the performance of multi-core architectures. Proposing a power and throughput-aware fuzzy-logic-based reconfiguration for Multi-Processor Systems on Chip (MPSoCs) in both simulation and real-time environments, it is divided into two major parts. The first part deals with the simulation-based power and throughput-aware fuzzy logic reconfiguration for multi-core architectures, presenting the results of a detailed analysis on the factors impacting the power consumption and performance of MPSoCs. In turn, the second part highlights the real-time implementation of fuzzy-logic-based power-efficient reconfigurable multi-core architectures for Intel and LEON3 processors.

  15. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Su, Chun-Yi [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)

    2014-12-16

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access

  16. Combined radar and telemetry system

    Energy Technology Data Exchange (ETDEWEB)

    Rodenbeck, Christopher T.; Young, Derek; Chou, Tina; Hsieh, Lung-Hwa; Conover, Kurt; Heintzleman, Richard

    2017-08-01

    A combined radar and telemetry system is described. The combined radar and telemetry system includes a processing unit that executes instructions, where the instructions define a radar waveform and a telemetry waveform. The processor outputs a digital baseband signal based upon the instructions, where the digital baseband signal is based upon the radar waveform and the telemetry waveform. A radar and telemetry circuit transmits, simultaneously, a radar signal and telemetry signal based upon the digital baseband signal.

  17. Optimization and parallelization of B-spline based orbital evaluations in QMC on multi/many-core shared memory processors

    OpenAIRE

    Mathuriya, Amrita; Luo, Ye; Benali, Anouar; Shulenburger, Luke; Kim, Jeongnim

    2016-01-01

    B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on multi/many-core shared memory processors. To increase SIMD efficiency and bandwidth utilization, we first apply data layout transfo...

  18. Real-time multi-task operators support system

    International Nuclear Information System (INIS)

    Wang He; Peng Minjun; Wang Hao; Cheng Shouyu

    2005-01-01

    Developments in computer software and hardware technology and in information processing, as well as accumulated design experience and feedback from Nuclear Power Plant (NPP) operation, have created a good opportunity to develop an integrated Operator Support System. The Real-time Multi-task Operator Support System (RMOSS) has been built to support the operator's decision making process during normal and abnormal operations. RMOSS consists of five subtasks: the Data Collection and Validation Task (DCVT), the Operation Monitoring Task (OMT), the Fault Diagnostic Task (FDT), the Operation Guideline Task (OGT) and the Human Machine Interface Task (HMIT). RMOSS uses a rule-based expert system and an Artificial Neural Network (ANN). The rule-based expert system is used to identify predefined events in static conditions and to track the operation guideline through data processing. In dynamic conditions, a Back-Propagation Neural Network, trained with a Genetic Algorithm, is adopted for fault diagnosis. The embedded real-time operating system VxWorks and its integrated environment Tornado II are used for RMOSS software cross-development. VxGUI is used to design the HMI. All of the task programs are written in C. The task tests and functional evaluation of RMOSS have been carried out on a real-time full-scope simulator. Evaluation results show that each task of RMOSS is capable of accomplishing its functions. (authors)

  19. Position-aware deep multi-task learning for drug-drug interaction extraction.

    Science.gov (United States)

    Zhou, Deyu; Miao, Lei; He, Yulan

    2018-05-01

    A drug-drug interaction (DDI) is a situation in which a drug affects the activity of another drug synergistically or antagonistically when being administered together. The information of DDIs is crucial for healthcare professionals to prevent adverse drug events. Although some known DDIs can be found in purposely-built databases such as DrugBank, most information is still buried in scientific publications. Therefore, automatically extracting DDIs from biomedical texts is sorely needed. In this paper, we propose a novel position-aware deep multi-task learning approach for extracting DDIs from biomedical texts. In particular, sentences are represented as a sequence of word embeddings and position embeddings. An attention-based bidirectional long short-term memory (BiLSTM) network is used to encode each sentence. The relative position information of words with the target drugs in text is combined with the hidden states of BiLSTM to generate the position-aware attention weights. Moreover, the tasks of predicting whether or not two drugs interact with each other and further distinguishing the types of interactions are learned jointly in multi-task learning framework. The proposed approach has been evaluated on the DDIExtraction challenge 2013 corpus and the results show that with the position-aware attention only, our proposed approach outperforms the state-of-the-art method by 0.99% for binary DDI classification, and with both position-aware attention and multi-task learning, our approach achieves a micro F-score of 72.99% on interaction type identification, outperforming the state-of-the-art approach by 1.51%, which demonstrates the effectiveness of the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. A lock circuit for a multi-core processor

    DEFF Research Database (Denmark)

    2015-01-01

    An integrated circuit comprising multiple processor cores and a lock circuit that comprises a queue register with respective bits set or reset via respective connections dedicated to respective processor cores, whereby the queue register identifies those among the multiple processor cores...... that are enqueued in the queue register. Furthermore, the integrated circuit comprises a current register and a selector circuit configured to select a processor core and identify that processor core by a value in the current register. A selected processor core is a prioritized processor core among the cores...... configured with an integrated circuit; and a silicon die configured with an integrated circuit....

  1. Debugging multi-core systems-on-chip

    NARCIS (Netherlands)

    Vermeulen, B.; Goossens, K.G.W.; Kornaros, G.

    2010-01-01

    In this chapter, we introduced three fundamental reasons why debugging a multi-processor SoC is intrinsically difficult: (1) limited internal observability, (2) asynchronicity, and (3) non-determinism.

  2. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    Directory of Open Access Journals (Sweden)

    Lian-Hui Li

    Full Text Available The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Man-made qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, the hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three usual methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judge importance degree, and then the trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated based on the multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility.

  3. UTLEON3 Exploring Fine-Grain Multi-Threading in FPGAs

    CERN Document Server

    Daněk, Martin; Kohout, Lukáš; Sýkora, Jaroslav; Bartosinski, Roman

    2013-01-01

    This book describes a specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs. Describes and documents a working SPARC v8, with fine-grain multithreading and fast context switching; Provides VHDL sources for the described processor; Describes a latency-tolerant framework for coupling hardware accelerators to microthreaded processor pipelines; Includes programming by example in the micro-threaded assembly language.

  4. An Investigation of Factors Affecting Multi-Task Performance in an Immersive Environment

    National Research Council Canada - National Science Library

    Branscome, Teresa A; Grynovicki, Jock O

    2007-01-01

    This report presents the results of a study included in a series of investigations designed to increase fundamental knowledge and understanding of the factors affecting multi-task performance in a military environment...

  5. Identification and Analysis of Multi-tasking Product Information Search Sessions with Query Logs

    Directory of Open Access Journals (Sweden)

    Xiang Zhou

    2016-09-01

    Full Text Available Purpose: This research aims to identify product search tasks in online shopping and analyze the characteristics of consumer multi-tasking search sessions. Design/methodology/approach: The experimental dataset contains 8,949 queries of 582 users from 3,483 search sessions. A sequential comparison of the Jaccard similarity coefficient between two adjacent search queries and hierarchical clustering of queries is used to identify search tasks. Findings: (1) Users issued a similar number of queries (1.43 to 1.47) with similar lengths (7.3-7.6 characters) per task in mono-tasking and multi-tasking sessions, and (2) Users spent more time on average in sessions with more tasks, but spent less time for each task when the number of tasks increased in a session. Research limitations: The task identification method that relies only on query terms does not completely reflect the complex nature of consumer shopping behavior. Practical implications: These results provide an exploratory understanding of the relationships among multiple shopping tasks, and can be useful for product recommendation and shopping task prediction. Originality/value: The originality of this research is its use of query clustering with online shopping task identification and analysis, and the analysis of product search session characteristics.
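
    The task-identification step described above compares the term sets of adjacent queries with the Jaccard similarity coefficient; a minimal sketch of that comparison is shown below, with a hypothetical similarity threshold standing in for the hierarchical clustering used in the study.

      def jaccard(q1, q2):
          """Jaccard similarity between the term sets of two queries."""
          a, b = set(q1.lower().split()), set(q2.lower().split())
          return len(a & b) / len(a | b) if a | b else 0.0

      def split_into_tasks(queries, threshold=0.2):
          """Start a new task whenever adjacent queries share too few terms (hypothetical threshold)."""
          tasks = [[queries[0]]]
          for prev, cur in zip(queries, queries[1:]):
              if jaccard(prev, cur) >= threshold:
                  tasks[-1].append(cur)
              else:
                  tasks.append([cur])
          return tasks

      session = ["running shoes", "nike running shoes men", "coffee maker", "drip coffee maker"]
      print(split_into_tasks(session))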

  6. Data Stream Processing Study in a Multichannel Telemetry Data Registering System

    Directory of Open Access Journals (Sweden)

    I. M. Sidyakin

    2015-01-01

    Full Text Available The paper presents the results of research aimed at improving the reliability of transmission of telemetry information (TMI) over a noisy communication channel from the object of telemeasurement to the telemetry system for collecting and processing data. It considers the case where the quality of the received information changes over time due to movement of the object relative to the receiving station or other factors that change the noise characteristics of the channel, up to total loss of communication during some time intervals. To improve the reliability of transmission and ensure continuous communication with the object, it is proposed to use a multi-channel system to record the TMI. This system consists of several telemetry stations which simultaneously register the data stream transmitted from the telemetry object. At its output, the multichannel system generates a single stream of TMI for the user, comprising the most reliable pieces of the information received at all inputs of the system. The paper investigates the task of constructing a multi-channel registration scheme for telemetry information (TMI) that provides simultaneous reception of the telemeasurement data by multiple telemetry stations and forms a single TMI stream containing the most reliable pieces of received data on the basis of a quality analysis of the information being received. In a multichannel TMI registering system there are three main factors affecting the quality of the single output stream of information: (1) the quality of the method used for protection against errors during transmission over the noisy communication channel; (2) the efficiency of the synchronization process for telemetry frames in the received flow of information; and (3) the efficiency of the criteria applied to form a single output stream from the multiple input streams coming from the different stations of the discussed multichannel TMI registering system. In the paper, in practical
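
    The formation of a single output stream from several registered streams, as described above, can be illustrated with a simple per-frame selection rule: for each frame slot, keep the copy whose integrity check passed and whose quality score is best. The frame fields (CRC flag, SNR) and the selection criterion are assumptions for illustration, not the criteria studied in the paper.

      def merge_channels(channels):
          """Merge per-channel frame lists into one output stream.

          channels: list of lists of frames; each frame is a dict with
          'data', 'crc_ok' (bool) and 'snr' (float, higher is better).
          Missing frames are represented by None.
          """
          merged = []
          for frames in zip(*channels):                    # same frame index on every channel
              candidates = [f for f in frames if f is not None and f["crc_ok"]]
              if candidates:
                  merged.append(max(candidates, key=lambda f: f["snr"])["data"])
              else:
                  merged.append(None)                      # unrecoverable frame slot
          return merged

      ch1 = [{"data": b"A", "crc_ok": True,  "snr": 12.0}, None]
      ch2 = [{"data": b"A", "crc_ok": True,  "snr": 18.5},
             {"data": b"B", "crc_ok": True,  "snr":  9.0}]
      print(merge_channels([ch1, ch2]))   # -> [b'A', b'B']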

  7. Exploring Task Mappings on Heterogeneous MPSoCs using a Bias-Elitist Genetic Algorithm

    NARCIS (Netherlands)

    Quan, W.; Pimentel, A.D.

    2014-01-01

    Exploration of task mappings plays a crucial role in achieving high performance in heterogeneous multi-processor system-on-chip (MPSoC) platforms. The problem of optimally mapping a set of tasks onto a set of given heterogeneous processors for maximal throughput has been known, in general, to be

  8. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms

    Science.gov (United States)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.

    2017-12-01

    Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p  =  0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
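
    The joint training of a main diagnostic task with an auxiliary task over shared features, as described above, amounts to minimizing a weighted sum of per-task losses through a shared representation. The NumPy sketch below shows only that loss structure with a tiny shared layer and two logistic heads; the layer sizes, auxiliary-task weight and random data are assumptions, and the actual work fine-tunes a pretrained deep CNN on mammograms.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(64, 32))                    # stand-in for extracted image features
      y_main = rng.integers(0, 2, size=64)             # malignant vs benign (main task)
      y_aux = rng.integers(0, 2, size=64)              # auxiliary task label (e.g., modality)

      W_shared = rng.normal(scale=0.1, size=(32, 16))  # shared "backbone" layer
      w_main, w_aux = np.zeros(16), np.zeros(16)       # one logistic head per task

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      lr, aux_weight = 0.05, 0.3                       # aux_weight balances the two tasks
      for _ in range(200):
          H = np.tanh(X @ W_shared)                    # shared representation
          p_main, p_aux = sigmoid(H @ w_main), sigmoid(H @ w_aux)
          # Joint loss gradients: main cross-entropy + weighted auxiliary cross-entropy.
          g_main, g_aux = p_main - y_main, aux_weight * (p_aux - y_aux)
          # Backpropagate through the shared layer so both tasks shape the same features.
          dH = np.outer(g_main, w_main) + np.outer(g_aux, w_aux)
          W_shared -= lr * X.T @ (dH * (1.0 - H ** 2)) / len(X)
          w_main -= lr * H.T @ g_main / len(X)
          w_aux -= lr * H.T @ g_aux / len(X)

      print("main-task training accuracy:",
            np.mean((sigmoid(np.tanh(X @ W_shared) @ w_main) > 0.5) == y_main))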

  9. Multi-Task Vehicle Detection with Region-of-Interest Voting.

    Science.gov (United States)

    Chu, Wenqing; Liu, Yao; Shen, Chen; Cai, Deng; Hua, Xian-Sheng

    2017-10-12

    Vehicle detection is a challenging problem in autonomous driving systems, due to its large structural and appearance variations. In this paper, we propose a novel vehicle detection scheme based on multi-task deep convolutional neural networks (CNN) and region-of-interest (RoI) voting. In the design of CNN architecture, we enrich the supervised information with subcategory, region overlap, bounding-box regression and category of each training RoI as a multi-task learning framework. This design allows the CNN model to share visual knowledge among different vehicle attributes simultaneously, thus detection robustness can be effectively improved. In addition, most existing methods consider each RoI independently, ignoring the clues from its neighboring RoIs. In our approach, we utilize the CNN model to predict the offset direction of each RoI boundary towards the corresponding ground truth. Then each RoI can vote those suitable adjacent bounding boxes which are consistent with this additional information. The voting results are combined with the score of each RoI itself to find a more accurate location from a large number of candidates. Experimental results on the real-world computer vision benchmarks KITTI and the PASCAL2007 vehicle dataset show that our approach achieves superior performance in vehicle detection compared with other existing published works.

  10. A Low-cost Multi-channel Analogue Signal Generator

    CERN Document Server

    Müller, F; The ATLAS collaboration; Shen, W; Stamen, R

    2009-01-01

    A scalable multi-channel analogue signal generator is presented. It uses a commercial low-cost graphics card with multiple outputs in a standard PC as signal source. Each color signal serves as independent channel to generate an analogue signal. A custom-built external PCB was developed to adjust the graphics card output voltage levels for a specific task, which needed differential signals. The system furthermore comprises a software package to program the signal shape. The signal generator was successfully used as independent test bed for the ATLAS Level-1 Trigger Pre-Processor, providing up to 16 analogue signals.

  11. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF...... filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample or about 1 billion operations per second for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature...... interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage...

  12. Control system of the inspection robots group applying auctions and multi-criteria analysis for task allocation

    Science.gov (United States)

    Panfil, Wawrzyniec; Moczulski, Wojciech

    2017-10-01

    This paper presents a control system for a group of mobile robots intended to carry out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots so that the committed inspection tasks are realized. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. In missions characterized by a much larger number of tasks than robots, however, it seems preferable for the robots (instead of the tasks) to be the subjects of the auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation: simultaneously assessing both robot-to-task fitness and the task's attractiveness for the robot should positively affect the overall effectiveness of the multi-robot system. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, together with several task-allocation methods. A multi-criteria analysis method is proposed that combines two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for a robot among the other tasks. Furthermore, task-allocation methods applying this multi-criteria analysis are proposed. Both the elaborated system and the proposed task-allocation methods were verified in simulated experiments; the object under test was a group of inspection mobile robots forming a virtual counterpart of a real mobile-robot group.
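
    A minimal sketch of the two-sided scoring idea described above, with hypothetical data structures: each robot-task pair is rated both by the robot's competitive position for the task among the other robots and by the task's attractiveness for that robot among the remaining tasks, after which tasks are auctioned off greedily.

```python
# Illustrative sketch (hypothetical fitness matrix and scoring rule, not the
# paper's exact method): two-sided multi-criteria scoring plus greedy auction.
def allocate(fitness):
    """fitness[r][t]: how well robot r suits task t (higher is better)."""
    n_robots, n_tasks = len(fitness), len(fitness[0])
    assignment = {}
    free_tasks = set(range(n_tasks))
    for r in range(n_robots):
        if not free_tasks:
            break
        def score(t):
            # robot's competitive position for task t among all robots
            position = sum(fitness[r][t] >= fitness[o][t] for o in range(n_robots))
            # task's attractiveness for robot r among the remaining tasks
            attract = sum(fitness[r][t] >= fitness[r][u] for u in free_tasks)
            return position * attract
        best = max(free_tasks, key=score)
        assignment[r] = best
        free_tasks.remove(best)
    return assignment

print(allocate([[0.9, 0.2, 0.4], [0.3, 0.8, 0.5]]))   # {0: 0, 1: 1}
```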

  13. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    Full Text Available A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle in cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to the memory occupancy, the parallelization efficiency and the required programming effort. Such strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node one). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.

  14. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    Science.gov (United States)

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in a multi-robot system; it is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks, because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm obtains better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
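
    The four mutation operators named in the abstract act on a task sequence encoded as a permutation; a minimal sketch (the encoding and random seed are illustrative assumptions) is:

```python
# Illustrative sketch of the swap, insertion, inversion and displacement
# mutation operators on a task sequence encoded as a Python list.
import random

def swap(seq):
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    return seq

def insertion(seq):
    i, j = random.sample(range(len(seq)), 2)
    gene = seq.pop(i)
    seq.insert(j, gene)
    return seq

def inversion(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    seq[i:j + 1] = reversed(seq[i:j + 1])
    return seq

def displacement(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    segment = seq[i:j + 1]
    rest = seq[:i] + seq[j + 1:]
    k = random.randint(0, len(rest))
    return rest[:k] + segment + rest[k:]

random.seed(0)
print(displacement(inversion(swap(list(range(8))))))
```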

  15. Multi-processor developments in the United States for future high energy physics experiments and accelerators

    International Nuclear Information System (INIS)

    Gaines, I.

    1988-03-01

    The use of multi-processors for analysis and high-level triggering in High Energy Physics experiments, pioneered by the early emulator systems, has reached maturity, in particular with the multiple microprocessor systems in use at Fermilab. It is widely acknowledged that such systems will fulfill the major portion of the computing needs of future large experiments. Recent developments at Fermilab's Advanced Computer Program will make such systems even more powerful, cost-effective, and easier to use than they are at present. The next generation of microprocessors, already available, will provide CPU power of about one VAX 780 equivalent/$300, while supporting most VMS FORTRAN extensions and large (>8MB) amounts of memory. Low-cost, high-density mass storage devices (based on video tape cartridge technology) will allow parallel I/O to remove potential I/O bottlenecks in systems of over 1000 VAX-equivalent processors. New interconnection schemes and system software will allow more flexible topologies and extremely high data bandwidth, especially for on-line systems. This talk will summarize the work at the Advanced Computer Program and the rest of the US in this field. 3 refs., 4 figs

  16. Multi-fuel reformers for fuel cells used in transportation. Phase 1: Multi-fuel reformers

    Science.gov (United States)

    1994-05-01

    DOE has established the goal, through the Fuel Cells in Transportation Program, of fostering the rapid development and commercialization of fuel cells as economic competitors for the internal combustion engine. Central to this goal is a safe feasible means of supplying hydrogen of the required purity to the vehicular fuel cell system. Two basic strategies are being considered: (1) on-board fuel processing whereby alternative fuels such as methanol, ethanol or natural gas stored on the vehicle undergo reformation and subsequent processing to produce hydrogen, and (2) on-board storage of pure hydrogen provided by stationary fuel processing plants. This report analyzes fuel processor technologies, types of fuel and fuel cell options for on-board reformation. As the Phase 1 of a multi-phased program to develop a prototype multi-fuel reformer system for a fuel cell powered vehicle, the objective of this program was to evaluate the feasibility of a multi-fuel reformer concept and to select a reforming technology for further development in the Phase 2 program, with the ultimate goal of integration with a DOE-designated fuel cell and vehicle configuration. The basic reformer processes examined in this study included catalytic steam reforming (SR), non-catalytic partial oxidation (POX) and catalytic partial oxidation (also known as Autothermal Reforming, or ATR). Fuels under consideration in this study included methanol, ethanol, and natural gas. A systematic evaluation of reforming technologies, fuels, and transportation fuel cell applications was conducted for the purpose of selecting a suitable multi-fuel processor for further development and demonstration in a transportation application.

  17. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Science.gov (United States)

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images

  18. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Directory of Open Access Journals (Sweden)

    Guan Yu

    Full Text Available Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and

  19. The Heidelberg POLYP - a flexible and fault-tolerant poly-processor

    International Nuclear Information System (INIS)

    Maenner, R.; Deluigi, B.

    1981-01-01

    The Heidelberg poly-processor system POLYP is described. It is intended to be used in nuclear physics for reprocessing of experimental data, in high energy physics as a second-stage trigger processor, and generally in other applications requiring high computing power. The POLYP system consists of any number of I/O processors, processor modules (possibly of different types), global memory segments, and a host processor. All modules (up to several hundred) are connected by a multiple common-data-bus system; all processors, additionally, by a multiple sync-bus system for processor/task scheduling. All hardware and software are designed to be decentralized and free of bottlenecks. Most hardware faults, such as single-bit errors in memory or multi-bit errors during transfers, are automatically corrected. Defective modules, buses, etc. can be removed with only a graceful degradation of the system throughput. (orig.)

  20. Transfer and Multi-task Learning in QSAR Modeling: Advances and Challenges

    Directory of Open Access Journals (Sweden)

    Rodolfo S. Simões

    2018-02-01

    Full Text Available Medicinal chemistry projects involve several steps aimed at developing a new drug, such as the analysis of biological targets related to a given disease, the discovery and development of drug candidates for these targets, and parallel biological tests to validate drug effectiveness and side effects. Approaches such as quantitative structure-activity relationship (QSAR) studies involve the construction of predictive models that relate a set of descriptors of a series of chemical compounds to their biological activities with respect to one or more targets in the human body. Datasets used for QSAR analyses are generally characterized by a small number of samples, which makes it more difficult to build accurate predictive models. In this context, transfer and multi-task learning techniques are very suitable, since they bring information from related QSAR models to bear on the same biological target, reducing the effort and cost of generating new chemical compounds. Therefore, this review presents the main features of transfer and multi-task learning studies, as well as some applications and their potential in drug design projects.
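
    As a hedged illustration of the multi-task idea in a QSAR-like setting (not taken from the review), the sketch below fits a mean-regularized multi-task ridge regression in which every target shares a common weight vector and keeps a small task-specific deviation; all data, names and regularization values are synthetic.

```python
# Illustrative sketch (not from the paper): mean-regularized multi-task ridge
# regression for several small datasets, one per biological target.
import numpy as np

def multi_task_ridge(tasks, lam_shared=0.1, lam_specific=1.0):
    """tasks: list of (X, y) pairs, one per target; returns shared weights and
    per-task deviations."""
    d = tasks[0][0].shape[1]
    T = len(tasks)
    rows, ys = [], []
    for t, (X, y) in enumerate(tasks):
        # Augmented design: first d columns -> shared w0, then d columns per task.
        block = np.zeros((X.shape[0], d * (T + 1)))
        block[:, :d] = X                              # shared part
        block[:, d * (t + 1):d * (t + 2)] = X         # task-specific part
        rows.append(block)
        ys.append(y)
    A, y = np.vstack(rows), np.concatenate(ys)
    reg = np.diag([lam_shared] * d + [lam_specific] * (d * T))
    w = np.linalg.solve(A.T @ A + reg, A.T @ y)
    return w[:d], w[d:].reshape(T, d)                 # w0, per-task deviations

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 5)), rng.normal(size=(15, 5))
w_true = rng.normal(size=5)
w0, V = multi_task_ridge([(X1, X1 @ w_true), (X2, X2 @ (w_true + 0.1))])
```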

  1. An optimal multi-channel memory controller for real-time systems

    NARCIS (Netherlands)

    Gomony, M.D.; Akesson, K.B.; Goossens, K.G.W.

    2013-01-01

    Optimal utilization of a multi-channel memory, such as Wide IO DRAM, as shared memory in multi-processor platforms depends on the mapping of memory clients to the memory channels, the granularity at which the memory requests are interleaved in each channel, and the bandwidth and memory capacity

  2. Metacognition of Multi-Tasking: How Well Do We Predict the Costs of Divided Attention?

    OpenAIRE

    Finley, Jason R.; Benjamin, Aaron S.; McCarley, Jason S.

    2014-01-01

    Risky multi-tasking, such as texting while driving, may occur because people misestimate the costs of divided attention. In two experiments, participants performed a computerized visual-manual tracking task in which they attempted to keep a mouse cursor within a small target that moved erratically around a circular track. They then separately performed an auditory n-back task. After practicing both tasks separately, participants received feedback on their single-task tracking performance and ...

  3. A Multi-task Principal Agent Model for Knowledge Contribution of Enterprise Staff

    Directory of Open Access Journals (Sweden)

    Chengyi LE

    2016-10-01

    Full Text Available According to the different behavior characteristics of the knowledge contribution of enterprise employees, a multi-task principal-agent relationship for knowledge contribution between the enterprise and its employees is established based on principal-agent theory, analyzing staff knowledge-contribution behavior in terms of knowledge creation and knowledge participation. Based on this, a multi-task principal-agent model for the knowledge contribution of enterprise staff is developed to formulate the information asymmetry in knowledge contribution. A set of incentive measures is then derived from the theoretical model, aiming to promote knowledge contribution in the enterprise. The results show that staff knowledge-creation behavior and active participation behavior influence and further promote each other. The enterprise should set respective target levels for both knowledge-creation contribution and knowledge-participation contribution and make them irreplaceable to each other. This work contributes primarily to the development of the literature on knowledge management and principal-agent theory. In addition, the applicability of the findings will be improved by further empirical analysis.
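
    For readers unfamiliar with the underlying machinery, a standard textbook two-task linear-contract formulation (not necessarily the authors' exact model) is sketched below, with e_1 interpreted as knowledge-creation effort and e_2 as knowledge-participation effort.

```latex
% Standard two-task principal-agent sketch (textbook form, assumed here):
\begin{align*}
x_i &= e_i + \varepsilon_i, \qquad \varepsilon_i \sim N(0,\sigma_i^2), \quad i=1,2
  && \text{(observable signals)}\\
w &= \alpha + \beta_1 x_1 + \beta_2 x_2
  && \text{(linear incentive contract)}\\
C(e_1,e_2) &= \tfrac{1}{2}c_1 e_1^2 + \tfrac{1}{2}c_2 e_2^2 - \delta e_1 e_2,
  \quad \delta>0
  && \text{(efforts reinforce each other)}\\
\mathrm{CE}_{\text{agent}} &= \alpha + \beta_1 e_1 + \beta_2 e_2 - C(e_1,e_2)
  - \tfrac{\rho}{2}\left(\beta_1^2\sigma_1^2 + \beta_2^2\sigma_2^2\right)
\end{align*}
```

    The agent chooses e_1 and e_2 to maximize the certainty equivalent, and the principal chooses the incentive weights beta_1 and beta_2; a positive interaction term delta captures the mutual reinforcement between creation and participation reported in the abstract.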

  4. Multi-core System Architecture for Safety-critical Control Applications

    DEFF Research Database (Denmark)

    Li, Gang

    and size, and high power consumption. Increasing the frequency of a processor is becoming painful due to the explosive power consumption. Furthermore, components integrated into a single-core processor have to be certified to the highest SIL, because no isolation is provided in a traditional single...... certification cost. Meanwhile, hardware platforms with improved processing power are required to execute applications of larger size. To tackle the two issues mentioned above, state-of-the-art approaches use more Electronic Control Units (ECUs) in a federated architecture or increase......-core processor. A promising alternative to improve processing power and provide isolation is to adopt a multi-core architecture with on-chip isolation. In general, a specific multi-core architecture can facilitate the development and certification of safety-related systems, due to its physical isolation between...

  5. Transcranial direct current stimulation facilitates cognitive multi-task performance differentially depending on anode location and subtask.

    Directory of Open Access Journals (Sweden)

    Melissa eScheldrup

    2014-09-01

    Full Text Available There is a need to facilitate acquisition of real world cognitive multi-tasks that require long periods of training (e.g., air traffic control, intelligence analysis, medicine). Non-invasive brain stimulation – specifically transcranial Direct Current Stimulation (tDCS) – has promise as a method to speed multi-task training. We hypothesized that during acquisition of the complex multi-task Space Fortress, subtasks that require focused attention on ship control would benefit from tDCS aimed at the dorsal attention network while subtasks that require redirection of attention would benefit from tDCS aimed at the right hemisphere ventral attention network. We compared effects of 30 min prefrontal and parietal stimulation to right and left hemispheres on subtask performance during the first 45 min of training. The strongest effects both overall and for ship flying (control and velocity subtasks) were seen with a right parietal (C4 to left shoulder) montage, shown by modeling to induce an electric field that includes nodes in both dorsal and ventral attention networks. This is consistent with the re-orienting hypothesis that the ventral attention network is activated along with the dorsal attention network if a new, task-relevant event occurs while visuospatial attention is focused (Corbetta et al., 2008). No effects were seen with anodes over sites that stimulated only dorsal (C3) or only ventral (F10) attention networks. The speed subtask (update memory for symbols) benefited from an F9 anode over left prefrontal cortex. These results argue for development of tDCS as a training aid in real world settings where multi-tasking is critical.

  6. Transcranial direct current stimulation facilitates cognitive multi-task performance differentially depending on anode location and subtask.

    Science.gov (United States)

    Scheldrup, Melissa; Greenwood, Pamela M; McKendrick, Ryan; Strohl, Jon; Bikson, Marom; Alam, Mahtab; McKinley, R Andy; Parasuraman, Raja

    2014-01-01

    There is a need to facilitate acquisition of real world cognitive multi-tasks that require long periods of training (e.g., air traffic control, intelligence analysis, medicine). Non-invasive brain stimulation-specifically transcranial Direct Current Stimulation (tDCS)-has promise as a method to speed multi-task training. We hypothesized that during acquisition of the complex multi-task Space Fortress, subtasks that require focused attention on ship control would benefit from tDCS aimed at the dorsal attention network while subtasks that require redirection of attention would benefit from tDCS aimed at the right hemisphere ventral attention network. We compared effects of 30 min prefrontal and parietal stimulation to right and left hemispheres on subtask performance during the first 45 min of training. The strongest effects both overall and for ship flying (control and velocity subtasks) were seen with a right parietal (C4, reference to left shoulder) montage, shown by modeling to induce an electric field that includes nodes in both dorsal and ventral attention networks. This is consistent with the re-orienting hypothesis that the ventral attention network is activated along with the dorsal attention network if a new, task-relevant event occurs while visuospatial attention is focused (Corbetta et al., 2008). No effects were seen with anodes over sites that stimulated only dorsal (C3) or only ventral (F10) attention networks. The speed subtask (update memory for symbols) benefited from an F9 anode over left prefrontal cortex. These results argue for development of tDCS as a training aid in real world settings where multi-tasking is critical.

  7. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow leads to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  8. Algorithm-Dependent Generalization Bounds for Multi-Task Learning.

    Science.gov (United States)

    Liu, Tongliang; Tao, Dacheng; Song, Mingli; Maybank, Stephen J

    2017-02-01

    Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.
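
    For context, the classical single-task uniform-stability bound that this line of work builds on (a known result, not the paper's sharper MTL bounds) has the following form.

```latex
% Classical uniform-stability bound: for a \beta-uniformly-stable algorithm A
% with loss bounded by M, with probability at least 1-\delta over a sample S
% of size n,
\[
  R(A_S) \;\le\; \widehat{R}(A_S) \;+\; 2\beta
  \;+\; \bigl(4n\beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2n}} .
\]
```

    The paper's contribution is to show that, under mild assumptions on shared feature structures, the MTL-specific analysis yields bounds of order O(1/n) for a particular task and O(1/T) for the average performance over T tasks.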

  9. Risk assessments using the Strain Index and the TLV for HAL, Part II: Multi-task jobs and prevalence of CTS.

    Science.gov (United States)

    Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun

    2018-02-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance.

  10. Scheduling with Group Dynamics: a Multi-Robot Task Allocation Algorithm based on Vacancy Chains

    National Research Council Canada - National Science Library

    Dahl, Torbjorn S; Mataric, Maja J; Sukhatme, Gaurav S

    2002-01-01

    .... We present a multi-robot task allocation algorithm that is sensitive to group dynamics. Our algorithm is based on vacancy chains, a resource distribution process common in human and animal societies...

  11. Deep Multi-Task Learning for Tree Genera Classification

    Science.gov (United States)

    Ko, C.; Kang, J.; Sohn, G.

    2018-05-01

    The goal for our paper is to classify tree genera using airborne Light Detection and Ranging (LiDAR) data with Convolution Neural Network (CNN) - Multi-task Network (MTN) implementation. Unlike Single-task Network (STN) where only one task is assigned to the learning outcome, MTN is a deep learning architect for learning a main task (classification of tree genera) with other tasks (in our study, classification of coniferous and deciduous) simultaneously, with shared classification features. The main contribution of this paper is to improve classification accuracy from CNN-STN to CNN-MTN. This is achieved by introducing a concurrence loss (Lcd) to the designed MTN. This term regulates the overall network performance by minimizing the inconsistencies between the two tasks. Results show that we can increase the classification accuracy from 88.7 % to 91.0 % (from STN to MTN). The second goal of this paper is to solve the problem of small training sample size by multiple-view data generation. The motivation of this goal is to address one of the most common problems in implementing deep learning architecture, the insufficient number of training data. We address this problem by simulating training dataset with multiple-view approach. The promising results from this paper are providing a basis for classifying a larger number of dataset and number of classes in the future.
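
    A minimal sketch of such a concurrence term (the exact form of Lcd in the paper may differ; the class list, lambda and probabilities below are illustrative): it penalises disagreement between the probability mass the genus head assigns to coniferous genera and the coniferous/deciduous head's prediction.

```python
# Illustrative sketch (hypothetical formulation): multi-task loss with a
# concurrence term linking the genus head and the coniferous/deciduous head.
import numpy as np

CONIFEROUS = np.array([1, 1, 0, 0, 0])      # e.g. pine, spruce, maple, oak, birch

def cross_entropy(p, y):
    return -np.log(p[y] + 1e-12)

def mtn_loss(p_genus, y_genus, p_conif, y_conif, lam=0.5):
    l_main = cross_entropy(p_genus, y_genus)                     # genus task
    l_aux = cross_entropy(np.array([p_conif, 1.0 - p_conif]),
                          0 if y_conif else 1)                   # conif/decid task
    # Concurrence: genus-head mass on coniferous classes should agree with the
    # auxiliary head's coniferous probability.
    l_cd = (float(p_genus @ CONIFEROUS) - p_conif) ** 2
    return l_main + l_aux + lam * l_cd

print(mtn_loss(np.array([0.6, 0.2, 0.1, 0.05, 0.05]), 0, 0.7, True))
```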

  12. A queueing model of pilot decision making in a multi-task flight management situation

    Science.gov (United States)

    Walden, R. S.; Rouse, W. B.

    1977-01-01

    Allocation of decision making responsibility between pilot and computer is considered and a flight management task, designed for the study of pilot-computer interaction, is discussed. A queueing theory model of pilot decision making in this multi-task, control and monitoring situation is presented. An experimental investigation of pilot decision making and the resulting model parameters are discussed.

  13. Pruning techniques for multi-objective system-level design space exploration

    NARCIS (Netherlands)

    Piscitelli, R.

    2014-01-01

    System-level design space exploration (DSE), which is performed early in the design process, is of eminent importance to the design of complex multi-processor embedded system architectures. During system-level DSE, system parameters like, e.g., the number and type of processors, the type and size of

  14. Problems With Deployment of Multi-Domained, Multi-Homed Mobile Networks

    Science.gov (United States)

    Ivancic, William D.

    2008-01-01

    This document describes numerous problems associated with deployment of multi-homed mobile platforms consisting of multiple networks and traversing large geographical areas. The purpose of this document is to provide insight to real-world deployment issues and provide information to groups that are addressing many issues related to multi-homing, policy-base routing, route optimization and mobile security - particularly those groups within the Internet Engineering Task Force.

  15. Multi-view L2-SVM and its multi-view core vector machine.

    Science.gov (United States)

    Huang, Chengquan; Chung, Fu-lai; Wang, Shitong

    2016-03-01

    In this paper, a novel L2-SVM-based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has flexibility like ν-SVC, in the sense that the number of the yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which can realize fast training on large-scale multi-view datasets, with asymptotically linear time complexity in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Effects of two doses of glucose and a caffeine–glucose combination on cognitive performance and mood during multi-tasking

    Science.gov (United States)

    Scholey, Andrew; Savage, Karen; O'Neill, Barry V; Owen, Lauren; Stough, Con; Priestley, Caroline; Wetherell, Mark

    2014-01-01

    Background This study assessed the effects of two doses of glucose and a caffeine–glucose combination on mood and performance of an ecologically valid, computerised multi-tasking platform. Materials and methods Following a double-blind, placebo-controlled, randomised, parallel-groups design, 150 healthy adults (mean age 34.78 years) consumed drinks containing placebo, 25 g glucose, 60 g glucose or 60 g glucose with 40 mg caffeine. They completed a multi-tasking framework at baseline and then 30 min following drink consumption with mood assessments immediately before and after the multi-tasking framework. Blood glucose and salivary caffeine were co-monitored. Results The caffeine–glucose group had significantly better total multi-tasking scores than the placebo or 60 g glucose groups and were significantly faster at mental arithmetic tasks than either glucose drink group. There were no significant treatment effects on mood. Caffeine and glucose levels confirmed compliance with overnight abstinence/fasting, respectively, and followed the predicted post-drink patterns. Conclusion These data suggest that co-administration of glucose and caffeine allows greater allocation of attentional resources than placebo or glucose alone. At present, we cannot rule out the possibility that the effects are due to caffeine alone. Future studies should aim at disentangling caffeine and glucose effects. PMID:25196040

  17. Effects of two doses of glucose and a caffeine-glucose combination on cognitive performance and mood during multi-tasking.

    Science.gov (United States)

    Scholey, Andrew; Savage, Karen; O'Neill, Barry V; Owen, Lauren; Stough, Con; Priestley, Caroline; Wetherell, Mark

    2014-09-01

    This study assessed the effects of two doses of glucose and a caffeine-glucose combination on mood and performance of an ecologically valid, computerised multi-tasking platform. Following a double-blind, placebo-controlled, randomised, parallel-groups design, 150 healthy adults (mean age 34.78 years) consumed drinks containing placebo, 25 g glucose, 60 g glucose or 60 g glucose with 40 mg caffeine. They completed a multi-tasking framework at baseline and then 30 min following drink consumption with mood assessments immediately before and after the multi-tasking framework. Blood glucose and salivary caffeine were co-monitored. The caffeine-glucose group had significantly better total multi-tasking scores than the placebo or 60 g glucose groups and were significantly faster at mental arithmetic tasks than either glucose drink group. There were no significant treatment effects on mood. Caffeine and glucose levels confirmed compliance with overnight abstinence/fasting, respectively, and followed the predicted post-drink patterns. These data suggest that co-administration of glucose and caffeine allows greater allocation of attentional resources than placebo or glucose alone. At present, we cannot rule out the possibility that the effects are due to caffeine alone. Future studies should aim at disentangling caffeine and glucose effects. © 2014 The Authors. Human Psychopharmacology: Clinical and Experimental published by John Wiley & Sons, Ltd.

  18. The Multi-energy High precision Data Processor Based on AD7606

    Science.gov (United States)

    Zhao, Chen; Zhang, Yanchi; Xie, Da

    2017-11-01

    This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information from multi-energy systems, forming the information platform of the energy Internet at Laogang, where electricity is the major energy source. Combined with information fusion technologies, the paper analyzes the collected data to improve the overall scheduling capability and reliability of the energy system.

  19. Hardware/Software Co-design for Heterogeneous Multi-core Platforms: The hArtes Toolchain

    CERN Document Server

    2012-01-01

    This book describes the results and outcome of the FP6 project, known as hArtes, which focuses on the development of an integrated tool chain targeting a heterogeneous multi-core platform comprising a general-purpose processor (ARM or PowerPC), a DSP (the Diopsis) and an FPGA. The tool chain takes existing source code and proposes transformations and mappings such that legacy code can easily be ported to a modern, multi-core platform. Benefits of the hArtes approach, described in this book, include: Uses a familiar programming paradigm: hArtes proposes a familiar programming paradigm which is compatible with widely used programming practice, irrespective of the target platform. Enables users to view multiple cores as a single processor: the hArtes approach abstracts away the heterogeneity as well as the multi-core aspect of the underlying hardware so the developer can view the platform as consisting of a single, general-purpose processor. Facilitates easy porting of existing applications: hArtes provid...

  20. Multiple optical code-label processing using multi-wavelength frequency comb generator and multi-port optical spectrum synthesizer.

    Science.gov (United States)

    Moritsuka, Fumi; Wada, Naoya; Sakamoto, Takahide; Kawanishi, Tetsuya; Komai, Yuki; Anzai, Shimako; Izutsu, Masayuki; Kodate, Kashiko

    2007-06-11

    In optical packet switching (OPS) and optical code division multiple access (OCDMA) systems, label generation and processing are key technologies. Recently, several label processors have been proposed and demonstrated. However, in order to recognize N different labels, N separate devices are required. Here, we propose and experimentally demonstrate a large-scale, multiple optical code (OC)-label generation and processing technology based on a multi-port, fully tunable optical spectrum synthesizer (OSS) and a multi-wavelength electro-optic frequency comb generator. The OSS can generate 80 different OC-labels simultaneously and can perform 80-parallel matched filtering. We also demonstrated its application to OCDMA.

  1. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, Jacob [ORNL; Kerekes, Ryan A [ORNL; ST Charles, Jesse Lee [ORNL; Buckner, Mark A [ORNL

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Dopplersensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high-parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to a precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core

  2. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    International Nuclear Information System (INIS)

    Barhen, Jacob; Kerekes, Ryan A.; St Charles, Jesse Lee; Buckner, Mark A.

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Dopplersensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high-parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to a precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core

  3. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    Directory of Open Access Journals (Sweden)

    Youngmin Kim

    2016-07-01

    Full Text Available Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  4. Software protocol design: Communication and control in a multi-task robot machine for ITER vacuum vessel assembly and maintenance

    International Nuclear Information System (INIS)

    Li, Ming; Wu, Huapeng; Handroos, Heikki; Yang, Guangyou; Wang, Yongbo

    2015-01-01

    Highlights: • A high-level protocol is proposed for the data inter-transmission. • The protocol design is task-oriented for the robot control in the software system. • The protocol functions as a role of middleware in the software. • The protocol running stand-alone as an independent process in the software provides greater security. • Providing a reference design protocol for the multi-task robot machine in the industry. - Abstract: A specific communication and control protocol for software design of a multi-task robot machine is proposed. In order to fulfill the requirements on the complicated multi machining functions and the high performance motion control, the software design of robot is divided into two main parts accordingly, which consists of the user-oriented HMI part and robot control-oriented real-time control system. The two parts of software are deployed in the different hardware for the consideration of run-time performance, which forms a client–server-control architecture. Therefore a high-level task-oriented protocol is designed for the data inter-communication between the HMI part and the control system part, in which all the transmitting data related to a machining task is divided into three categories: trajectory-oriented data, task control-oriented data and status monitoring-oriented data. The protocol consists of three sub-protocols accordingly – a trajectory protocol, task control protocol and status protocol – which are deployed over the Ethernet and run as independent processes in both the client and server computers. The protocols are able to manage the vast amounts of data streaming due to the multi machining functions in a more efficient way. Since the protocol is functioning in the software as a role of middleware, and providing the data interface standards for the developing groups of two parts of software, it also permits greater focus of both software parts developers on their own requirements-oriented design. By

  5. Software protocol design: Communication and control in a multi-task robot machine for ITER vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ming, E-mail: ming.li@lut.fi [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Yang, Guangyou [School of Mechanical Engineering, Hubei University of Technology, Wuhan (China); Wang, Yongbo [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland)

    2015-10-15

    Highlights: • A high-level protocol is proposed for the data inter-transmission. • The protocol design is task-oriented for the robot control in the software system. • The protocol functions as a role of middleware in the software. • The protocol running stand-alone as an independent process in the software provides greater security. • Providing a reference design protocol for the multi-task robot machine in the industry. - Abstract: A specific communication and control protocol for software design of a multi-task robot machine is proposed. In order to fulfill the requirements on the complicated multi machining functions and the high performance motion control, the software design of robot is divided into two main parts accordingly, which consists of the user-oriented HMI part and robot control-oriented real-time control system. The two parts of software are deployed in the different hardware for the consideration of run-time performance, which forms a client–server-control architecture. Therefore a high-level task-oriented protocol is designed for the data inter-communication between the HMI part and the control system part, in which all the transmitting data related to a machining task is divided into three categories: trajectory-oriented data, task control-oriented data and status monitoring-oriented data. The protocol consists of three sub-protocols accordingly – a trajectory protocol, task control protocol and status protocol – which are deployed over the Ethernet and run as independent processes in both the client and server computers. The protocols are able to manage the vast amounts of data streaming due to the multi machining functions in a more efficient way. Since the protocol is functioning in the software as a role of middleware, and providing the data interface standards for the developing groups of two parts of software, it also permits greater focus of both software parts developers on their own requirements-oriented design. By
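
    A minimal sketch of the message-category split described above, with hypothetical field names (the real protocol's fields and framing are not specified in the abstract): each of the three categories is carried as a length-prefixed JSON frame suitable for an Ethernet/TCP link between the HMI part and the real-time control system.

```python
# Illustrative sketch (field names are assumptions, not the actual protocol):
# trajectory, task-control and status messages as length-prefixed JSON frames.
import json
import struct

def encode(category, payload):
    """category: 'trajectory' | 'task_control' | 'status'."""
    body = json.dumps({"category": category, "payload": payload}).encode()
    return struct.pack(">I", len(body)) + body        # 4-byte big-endian length

def decode(frame):
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length])

frame = encode("task_control", {"task_id": 7, "command": "start_machining"})
print(decode(frame))
```

    Keeping the three sub-protocols as separate framed streams, each handled by its own process, mirrors the middleware role the abstract describes: the HMI and control-system developers only need to agree on the frame format, not on each other's internals.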

  6. Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research

    Science.gov (United States)

    Arnegard, Ruth J.; Comstock, J. R., Jr.

    1991-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and the freedom to use nonpilot test subjects. Features not found in existing computer-based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  7. A proposal to manage multi-task dialogs in conversational interfaces

    Directory of Open Access Journals (Sweden)

    David GRIOL

    2016-11-01

    Full Text Available The emergence of smart devices and recent advances in spoken language technology are currently extending the use of conversational interfaces and spoken interaction to perform many tasks. The dialog management task of a conversational interface consists of selecting the next system response considering the user's actions, the dialog history, and the results of accessing the data repositories. In this paper we describe a dialog management technique adapted to multi-task conversational systems. In our proposal, specialized dialog models are used to deal with each specific subtask or dialog objective for which the dialog system has been designed. The practical application of the proposed technique to develop a dialog system acting as a customer support service shows that the use of these specialized dialog models increases the quality and number of successful interactions with the system in comparison with developing a single dialog model.
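
    A minimal sketch of the dispatching idea (the subtasks, model names and actions are invented for illustration): a dialog manager keeps one specialised dialog model per subtask and lets the model for the detected subtask choose the next system action.

```python
# Illustrative sketch (hypothetical subtasks and actions): routing each user
# turn to the specialised dialog model for that subtask.
class BillingModel:
    def next_action(self, state, user_act):
        return "ask_invoice_number" if "invoice" not in state else "explain_charge"

class TechSupportModel:
    def next_action(self, state, user_act):
        return "run_diagnostic" if "issue" in user_act else "ask_problem_description"

class DialogManager:
    def __init__(self):
        self.models = {"billing": BillingModel(), "support": TechSupportModel()}
        self.state = {}

    def respond(self, subtask, user_act):
        model = self.models[subtask]          # specialised model per subtask
        return model.next_action(self.state, user_act)

dm = DialogManager()
print(dm.respond("support", {"issue": "no dial tone"}))   # run_diagnostic
```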

  8. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Full Text Available Abstract Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
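
    The parallelisation idea can be illustrated with a short sketch (in Python rather than the authors' Java, and without transactional memory): the assignment step of k-means, which dominates run time for large matrices, is split across worker processes.

```python
# Illustrative sketch: parallelising the k-means assignment step across cores.
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    chunk, centroids = args
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, iters=10, workers=4):
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, workers)
    with Pool(workers) as pool:
        for _ in range(iters):
            labels = np.concatenate(
                pool.map(assign_chunk, [(c, centroids) for c in chunks]))
            centroids = np.array(
                [X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                 for j in range(k)])
    return labels, centroids

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10000, 20))
    labels, centroids = parallel_kmeans(X, k=5)
```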

  9. Artificial Immune Systems as a Modern Tool for Solving Multi-Purpose Optimization Tasks in the Field of Logistics

    Directory of Open Access Journals (Sweden)

    Skitsko Volodymyr I.

    2017-03-01

    Full Text Available The article investigates various aspects of the functioning of artificial immune systems and their use for solving different tasks. The analysis of the studied literature showed that artificial immune systems are nowadays combined, in particular, with genetic algorithms, the particle swarm optimization method, artificial neural networks, etc., to solve different tasks; however, little attention has been paid to solving economic tasks. The article presents the basic terminology of artificial immune systems; the steps of the clonal selection algorithm are described, and brief descriptions of the negative selection algorithm, the immune network algorithm and the dendritic algorithm are given; conceptual aspects of the use of an artificial immune system for solving multi-purpose optimization problems are formulated, and an example of solving a problem in the field of logistics is described. Artificial immune systems, as a means of solving various weakly structured, multi-criteria and multi-purpose economic tasks, in particular in the sphere of logistics, are a promising tool that requires further research. Therefore, it is advisable in the future to focus on the use of various existing immune algorithms for solving various economic problems.
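
    A minimal sketch of the clonal selection loop mentioned above, applied to a toy logistics-style objective (the route-length cost, population sizes and mutation scheme are illustrative assumptions, not the article's algorithm):

```python
# Illustrative sketch of clonal selection: select the best cells, clone them,
# hypermutate clones (more strongly for weaker cells) and refresh the population.
import random

def cost(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def mutate(route, strength):
    r = route[:]
    for _ in range(strength):                 # more swaps for lower-affinity cells
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
    return r

def clonal_selection(dist, pop_size=20, n_select=5, n_clones=4, generations=200):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: cost(r, dist))          # highest affinity first
        clones = []
        for rank, cell in enumerate(pop[:n_select]):
            for _ in range(n_clones):
                clones.append(mutate(cell, strength=rank + 1))
        pop = sorted(pop + clones, key=lambda r: cost(r, dist))[:pop_size - 2]
        pop += [random.sample(range(n), n) for _ in range(2)]   # receptor editing
    return min(pop, key=lambda r: cost(r, dist))

random.seed(1)
dist = [[abs(i - j) for j in range(6)] for i in range(6)]
print(clonal_selection(dist))
```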

  10. Harnessing the Benefits of Bimanual and Multi-finger Input for Supporting Grouping Tasks on Interactive Tabletops

    OpenAIRE

    Geyer, Florian; Höchtl, Anita; Reiterer, Harald

    2012-01-01

    In this paper we describe an experimental study investigating the use of bimanual and multi-finger input for grouping items spatially on a tabletop interface. In a single-user setup, we compared two typical interaction techniques supporting this task. We studied the grouping and regrouping performance in general and the use of bimanual and multi-finger input in particular. Our results show that the traditional container concept may not be an adequate fit for interactive tabletops. Rather, we d...

  11. A High Performance Multi-Core FPGA Implementation for 2D Pixel Clustering for the ATLAS Fast TracKer (FTK) Processor

    CERN Document Server

    Sotiropoulou, C-L; The ATLAS collaboration; Beretta, M; Gkaitatzis, S; Kordas, K; Nikolaidis, S; Petridou, C; Volpi, G

    2014-01-01

    The high performance multi-core 2D pixel clustering FPGA implementation used for the input system of the ATLAS Fast TracKer (FTK) processor is presented. The input system for the FTK processor will receive data from the Pixel and micro-strip detector read-out drivers (RODs) at 760 Gbps, the full rate of level-1 triggers. Clustering is required as a method to reduce the high rate of the received data before further processing, as well as to determine the cluster centroid for obtaining the best spatial measurement. Our implementation targets the pixel detectors and uses a 2D-clustering algorithm that takes advantage of a moving window technique to minimize the logic required for cluster identification. The design is fully generic and the cluster detection window size can be adjusted for optimizing the cluster identification process. The implementation can be parallelized by instantiating multiple cores to identify different clusters independently, thus exploiting more FPGA resources. This flexibility mak...
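
    A software analogue of the clustering step (illustrative only; it ignores the FPGA moving-window implementation and simply groups neighbouring hits and reports centroids) looks like this:

```python
# Software analogue of 2D pixel clustering (not the FPGA logic): group
# neighbouring hits above threshold and report each cluster's centroid.
from collections import deque

def cluster_hits(hits):
    """hits: set of (column, row) pixels above threshold."""
    remaining, clusters = set(hits), []
    while remaining:
        seed = remaining.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            c, r = queue.popleft()
            for dc in (-1, 0, 1):
                for dr in (-1, 0, 1):
                    n = (c + dc, r + dr)
                    if n in remaining:
                        remaining.remove(n)
                        queue.append(n)
                        members.append(n)
        cx = sum(c for c, _ in members) / len(members)
        cy = sum(r for _, r in members) / len(members)
        clusters.append(((cx, cy), members))
    return clusters

print(cluster_hits({(0, 0), (0, 1), (1, 1), (5, 5)}))
```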

  12. AMD's 64-bit Opteron processor

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included. Biographies: David Rich. David directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  13. A multi-rate DPSK modem for free-space laser communications

    Science.gov (United States)

    Spellmeyer, N. W.; Browne, C. A.; Caplan, D. O.; Carney, J. J.; Chavez, M. L.; Fletcher, A. S.; Fitzgerald, J. J.; Kaminsky, R. D.; Lund, G.; Hamilton, S. A.; Magliocco, R. J.; Mikulina, O. V.; Murphy, R. J.; Rao, H. G.; Scheinbart, M. S.; Seaver, M. M.; Wang, J. P.

    2014-03-01

    The multi-rate DPSK format, which enables efficient free-space laser communications over a wide range of data rates, is finding applications in NASA's Laser Communications Relay Demonstration. We discuss the design and testing of an efficient and robust multi-rate DPSK modem, including aspects of the electrical, mechanical, thermal, and optical design. The modem includes an optically preamplified receiver, a 0.5-W average power transmitter, a LEON3 rad-hard microcontroller that provides the command and telemetry interface and supervisory control, and a Xilinx Virtex-5 rad-hard reprogrammable FPGA that both supports the high-speed data flow to and from the modem and controls the modem's analog and digital subsystems. For additional flexibility, the transmitter and receiver can be configured to support operation with multi-rate PPM waveforms.

  14. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free processor access to data in memory and processor-to-processor data exchange based on an enhanced MESH network. The ABC95 instruction set includes control instructions, scalar instructions and vector instructions; the network instructions are the main focus here. A programming environment for ABC95 array computer assembly language is designed, and a VC++-based programming environment for the ABC95 array computer is presented. It includes functions to load ABC95 array computer programs and data, to store results, to run programs, and so on. In particular, the data type for conflict-free access on the ABC95 array computer is defined. The results show that these technologies support effective program development for the ABC95 array computer.

  15. The processor farm for online triggering and full event reconstruction of the HERA-B experiment at HERA

    International Nuclear Information System (INIS)

    Gellrich, A.; Dippel, R.; Gensch, U.; Kowallik, R.; Legrand, I.C.; Leich, H.; Sun, F.; Wegner, P.

    1996-01-01

    The main goal of the HERA-B experiment, which starts taking data in 1998, is to study CP violation in B decays. This article describes the concept and the planned implementation of a multi-processor system, called a processor farm, as the last part of the data acquisition and trigger system of the HERA-B experiment. The third level trigger task and a full online event reconstruction will be performed on this processor farm, consisting of more than 100 powerful RISC processors which are based on commercial hardware boards. Control will be handled by a real-time operating system which provides a software development environment, including FORTRAN and C compilers. (author)

  16. An Investigation of Factors Affecting Multi-Task Performance in an Immersive Environment

    Science.gov (United States)

    2007-12-01

    Study objective (recovered from the report): to identify non-invasive psychological and physiological measures, including heart rate variability (HRV), of cognitive readiness in a multi-task environment.

  17. RTEMS SMP and MTAPI for Efficient Multi-Core Space Applications on LEON3/LEON4 Processors

    Science.gov (United States)

    Cederman, Daniel; Hellstrom, Daniel; Sherrill, Joel; Bloom, Gedare; Patte, Mathieu; Zulianello, Marco

    2015-09-01

    This paper presents the final result of a European Space Agency (ESA) activity aimed at improving the software support for LEON processors used in SMP configurations. One of the benefits of using a multicore system in an SMP configuration is that in many instances it is possible to better utilize the available processing resources by load balancing between cores. This however comes with the cost of having to synchronize operations between cores, leading to increased complexity. While in an AMP system one can use multiple instances of operating systems that are only uni-processor capable, an SMP system requires the operating system to be written to support multicore systems. In this activity we have improved and extended the SMP support of the RTEMS real-time operating system and ensured that it fully supports the multicore capable LEON processors. The targeted hardware in the activity has been the GR712RC, a dual-core LEON3FT processor, and the functional prototype of ESA's Next Generation Multiprocessor (NGMP), a quad-core LEON4 processor. The final version of the NGMP is now available as a product under the name GR740. An implementation of the Multicore Task Management API (MTAPI) has been developed as part of this activity to aid in the parallelization of applications for RTEMS SMP. It allows for simplified development of parallel applications using the task-based programming model. An existing space application, the Gaia Video Processing Unit, has been ported to RTEMS SMP using the MTAPI implementation to demonstrate the feasibility and usefulness of multicore processors for space payload software. The activity is funded by ESA under contract 4000108560/13/NL/JK. Gedare Bloom is supported in part by NSF CNS-0934725.

  18. The research and application of multi-biometric acquisition embedded system

    Science.gov (United States)

    Deng, Shichao; Liu, Tiegen; Guo, Jingjing; Li, Xiuyan

    2009-11-01

    Identification technology based on multiple biometrics can greatly improve applicability, reliability and resistance to falsification. This paper presents a multi-biometric system built on an embedded platform. Three capture daughter boards obtain different biometrics, one each for fingerprint, iris and vein of the back of the hand. An FPGA (Field Programmable Gate Array) is designed as a coprocessor, which configures the three daughter boards on request and provides the data path between the DSP (digital signal processor) and the daughter boards. The DSP is the master processor; its functions include controlling the biometric information acquisition, extracting features as required, and comparing the results with the local database or a data server through network communication. The advantages of this system are that it can acquire three different biometrics in real time, that it can flexibly extract complex features from the raw data of different biometrics according to different purposes and algorithms, and that the network interface on the core board offers a solution for large data scales. Because this embedded system has high stability, reliability and flexibility and fits different data scales, it can satisfy the demands of multi-biometric recognition.

  19. Communication costs in a multi-tiered MPSoC

    NARCIS (Netherlands)

    van de Burgwal, M.D.; Smit, Gerardus Johannes Maria

    2008-01-01

    The amount of digital processing required for phased array beamformers is very large. It requires many parallel processors, which can be organized in a multi-tiered structure. Communication costs differ for each of the stages in such an architecture. For example, communication costs from the antenna

  20. A Fault Detection Mechanism in a Data-flow Scheduled Multithreaded Processor

    NARCIS (Netherlands)

    Fu, J.; Yang, Q.; Poss, R.; Jesshope, C.R.; Zhang, C.

    2014-01-01

    This paper designs and implements the Redundant Multi-Threading (RMT) in a Data-flow scheduled MultiThreaded (DMT) multicore processor, called Data-flow scheduled Redundant Multi-Threading (DRMT). Meanwhile, it presents Asynchronous Output Comparison (AOC) for RMT techniques to avoid fault detection

  1. Robust Online Multi-Task Learning with Correlative and Personalized Structures

    KAUST Repository

    Yang, Peng

    2017-06-29

    Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL works under the offline setting and suffers from expensive training cost and poor scalability. To address such issues, online learning techniques have been applied to solve MTL problems. However, most existing algorithms of online MTL constrain task relatedness into a presumed structure via a single weight matrix, which is a strict restriction that does not always hold in practice. In this paper, we propose a robust online MTL framework that overcomes this restriction by decomposing the weight matrix into two components: the first one captures the low-rank common structure among tasks via a nuclear norm; the second one identifies the personalized patterns of outlier tasks via a group lasso. Theoretical analysis shows the proposed algorithm can achieve a sub-linear regret with respect to the best linear model in hindsight. However, the nuclear norm that simply adds all nonzero singular values together may not be a good low-rank approximation. To improve the results, we use a log-determinant function as a non-convex rank approximation. Experimental results on a number of real-world applications also verify the efficacy of our approaches.
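
    The decomposition described above rests on two proximal operators: singular-value soft-thresholding for the nuclear-norm (low-rank) component and row-wise soft-thresholding for the group-lasso (outlier-task) component. A minimal numpy sketch of just these two operators, under the assumption of a tasks-by-features weight matrix, is shown below; it is illustrative and not the authors' online algorithm.

```python
import numpy as np

def prox_nuclear(W, tau):
    """Singular-value soft-thresholding: prox of tau * ||W||_* (low-rank component)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_group_lasso(W, tau):
    """Row-wise soft-thresholding: prox of tau * sum_i ||W_i||_2 (outlier-task component)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Toy usage: split a (tasks x features) weight matrix into the two components.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 50))
low_rank_part = prox_nuclear(W, tau=1.0)
outlier_part = prox_group_lasso(W - low_rank_part, tau=0.5)
print(low_rank_part.shape, outlier_part.shape)
```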

  2. Robust Online Multi-Task Learning with Correlative and Personalized Structures

    KAUST Repository

    Yang, Peng; Zhao, Peilin; Gao, Xin

    2017-01-01

    Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL works under the offline setting and suffers from expensive training cost and poor scalability. To address such issues, online learning techniques have been applied to solve MTL problems. However, most existing algorithms of online MTL constrain task relatedness into a presumed structure via a single weight matrix, which is a strict restriction that does not always hold in practice. In this paper, we propose a robust online MTL framework that overcomes this restriction by decomposing the weight matrix into two components: the first one captures the low-rank common structure among tasks via a nuclear norm; the second one identifies the personalized patterns of outlier tasks via a group lasso. Theoretical analysis shows the proposed algorithm can achieve a sub-linear regret with respect to the best linear model in hindsight. However, the nuclear norm that simply adds all nonzero singular values together may not be a good low-rank approximation. To improve the results, we use a log-determinant function as a non-convex rank approximation. Experimental results on a number of real-world applications also verify the efficacy of our approaches.

  3. Carotid flow pulsatility is higher in women with greater decrement in gait speed during multi-tasking.

    Science.gov (United States)

    Gonzales, Joaquin U; James, C Roger; Yang, Hyung Suk; Jensen, Daniel; Atkins, Lee; Al-Khalil, Kareem; O'Boyle, Michael

    2017-05-01

    Central arterial hemodynamics is associated with cognitive impairment. Reductions in gait speed during walking while performing concurrent tasks, known as dual-tasking (DT) or multi-tasking (MT), are thought to reflect a cognitive cost that exceeds the neural capacity to share resources. We hypothesized that central vascular function would associate with decrements in gait speed during DT or MT. Gait speed was measured using a motion capture system in 56 women (30-80y) without mild-cognitive impairment. Dual-tasking was considered walking at a fast pace while balancing a tray. Multi-tasking was the DT condition plus subtracting by serial 7's. Applanation tonometry was used for measurement of aortic stiffness and central pulse pressure. Doppler-ultrasound was used to measure blood flow velocity and β-stiffness index in the common carotid artery. The percent change in gait speed was larger for MT than DT (14.1±11.2 vs. 8.7±9.6%). Carotid flow pulsatility and resistance were higher in women with greater decrement (third tertile) than in women with less decrement (first tertile) in gait speed during MT after adjusting for age, gait speed, and task error. Carotid pulse pressure and β-stiffness did not contribute to these tertile differences. Elevated carotid flow pulsatility and resistance are characteristics found in healthy women that show lower cognitive capacity to walk and perform multiple concurrent tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Low-communication parallel quantum multi-target preimage search

    NARCIS (Netherlands)

    Banegas, G.S.; Bernstein, D.J.; Adams, Carlisle; Camenisch, Jan

    2017-01-01

    The most important pre-quantum threat to AES-128 is the 1994 van Oorschot–Wiener “parallel rho method”, a low-communication parallel pre-quantum multi-target preimage-search algorithm. This algorithm uses a mesh of p small processors, each running for approximately 2^128/(pt) fast steps, to

  5. Interleaved Subtask Scheduling on Multi Processor SOC

    NARCIS (Netherlands)

    Zhe, M.

    2006-01-01

    The ever-progressing semiconductor processing technique has integrated more and more embedded processors on a single system-on-a-chip (SoC). With such powerful SoC platforms, and also due to the stringent time-to-market deadlines, many functionalities which used to be implemented in ASICs are

  6. Upgrade of the PreProcessor System for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Khomich, A

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger is a hardware-based pipelined system designed to identify high-pT objects in the ATLAS calorimeters within a fixed latency of 2.5 µs. It consists of three subsystems: the PreProcessor, which conditions and digitizes analogue signals, and two digital processors. The majority of the PreProcessor's tasks are performed on a dense Multi-Chip Module (MCM) consisting of FADCs, time-adjustment and digital processing ASICs, and LVDS serialisers designed and implemented in ten-year-old technologies. An MCM substitute, based on today's components (dual channel FADCs and FPGA), is being developed to profit from state-of-the-art electronics and to enhance the flexibility of the digital processing. Development and first test results are presented.

  7. Upgrade of the PreProcessor System for the ATLAS LVL1 Calorimeter Trigger

    CERN Document Server

    Khomich, A; The ATLAS collaboration

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger is a hardware-based pipelined system designed to identify high-pT objects in the ATLAS calorimeters within a fixed latency of 2.5 µs. It consists of three subsystems: the PreProcessor, which conditions and digitizes analogue signals, and two digital processors. The majority of the PreProcessor's tasks are performed on a dense Multi-Chip Module (MCM) consisting of FADCs, time-adjustment and digital processing ASICs, and LVDS serializers designed and implemented in ten-year-old technologies. An MCM substitute, based on today's components (dual channel FADCs and FPGA), is being developed to profit from state-of-the-art electronics and to enhance the flexibility of the digital processing. Development and first test results are presented.

  8. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of stone blocks is much larger than the size of mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
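
    The processor-farming pattern itself is a plain master-worker scheme: the master assembles the macro-scale problem while slaves solve independent RVE homogenizations. A hedged Python multiprocessing sketch of that pattern is given below; the RVE solver is a placeholder, not the coupled heat and moisture solver of the paper.

```python
from multiprocessing import Pool

# Placeholder meso-scale computation: each worker homogenizes one representative
# volume element (RVE). The real coupled heat/moisture solve is not shown.
def homogenize_rve(task):
    element_id, macro_state = task
    effective_properties = {"conductivity": 1.0 + 0.01 * macro_state}  # toy result
    return element_id, effective_properties

def master_step(macro_states, n_workers=4):
    """One macro-scale step: farm out all RVE problems, then gather the results."""
    tasks = list(enumerate(macro_states))
    with Pool(processes=n_workers) as pool:
        results = pool.map(homogenize_rve, tasks)   # slaves work independently
    # The master assembles the homogenized properties back into the macro problem.
    return dict(results)

if __name__ == "__main__":
    print(master_step(macro_states=[20.0, 21.5, 19.8, 22.0]))
```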

  9. Intermediate Frequency Digital Receiver Based on Multi-FPGA System

    Directory of Open Access Journals (Sweden)

    Chengchang Zhang

    2016-01-01

    Full Text Available Aiming at the high-cost, large-size, and inflexibility problems of traditional analog intermediate frequency receivers in the aerospace telemetry, tracking, and command (TTC) system, we have proposed a new intermediate frequency (IF) digital receiver based on a Multi-FPGA system in this paper. Digital beam forming (DBF) is realized by the coordinate rotation digital computer (CORDIC) algorithm. An experimental prototype has been developed on a compact Multi-FPGA system with three FPGAs to receive 16 channels of IF digital signals. Our experimental results show that our proposed scheme is able to provide great convenience for the design of IF digital receivers, which offers a valuable reference for real-time, low power, high density, and small size receiver design.
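
    CORDIC evaluates rotations, and hence the phase shifts needed for digital beam forming, using only shifts and adds. The floating-point Python sketch below illustrates the rotation-mode iteration; the fixed-point word lengths and pipelining of the actual FPGA design are omitted, and the iteration count is an arbitrary choice.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate the vector (x, y) by `angle` radians with the CORDIC iteration.

    Each step uses only additions and halvings (shifts in hardware)."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Gain correction: product of cos(atan(2^-i)) over all iterations
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * gain, y * gain

# Example: rotating (1, 0) by 30 degrees approximates (cos 30°, sin 30°)
print(cordic_rotate(1.0, 0.0, math.radians(30)))
```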

  10. Interval prediction for graded multi-label classification

    CERN Document Server

    Lastra, Gerardo; Bahamonde, Antonio

    2014-01-01

    Multi-label was introduced as an extension of multi-class classification. The aim is to predict a set of classes (called labels in this context) instead of a single one, namely the set of relevant labels. If membership to the set of relevant labels is defined to a certain degree, the learning task is called graded multi-label classification. These learning tasks can be seen as a set of ordinal classifications. Hence, recommender systems can be considered as multi-label classification tasks. In this paper, we present a new type of nondeterministic learner that, for each instance, tries to predict at the same time the true grade for each label. When the classification is uncertain for a label, however, the hypotheses predict a set of consecutive grades, i.e., an interval. The goal is to keep the set of predicted grades as small as possible; while still containing the true grade. We shall see that these classifiers take advantage of the interrelations of labels. The result is that, with quite narrow intervals, i...

  11. Resting-state brain activity in the motor cortex reflects task-induced activity: A multi-voxel pattern analysis.

    Science.gov (United States)

    Kusano, Toshiki; Kurashige, Hiroki; Nambu, Isao; Moriguchi, Yoshiya; Hanakawa, Takashi; Wada, Yasuhiro; Osu, Rieko

    2015-08-01

    It has been suggested that resting-state brain activity reflects task-induced brain activity patterns. In this study, we examined whether neural representations of specific movements can be observed in the resting-state brain activity patterns of motor areas. First, we defined two regions of interest (ROIs) to examine brain activity associated with two different behavioral tasks. Using multi-voxel pattern analysis with regularized logistic regression, we designed a decoder to detect voxel-level neural representations corresponding to the tasks in each ROI. Next, we applied the decoder to resting-state brain activity. We found that the decoder discriminated resting-state neural activity with accuracy comparable to that associated with task-induced neural activity. The distribution of learned weighted parameters for each ROI was similar for resting-state and task-induced activities. Large weighted parameters were mainly located on conjunctive areas. Moreover, the accuracy of detection was higher than that for a decoder whose weights were randomly shuffled, indicating that the resting-state brain activity includes multi-voxel patterns similar to the neural representation for the tasks. Therefore, these results suggest that the neural representation of resting-state brain activity is more finely organized and more complex than conventionally considered.
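
    The decoding step described above amounts to fitting a regularized logistic-regression classifier on task-labelled voxel patterns and then applying it to resting-state volumes. A schematic scikit-learn version with synthetic arrays standing in for the ROI voxel data is sketched below; the array shapes and parameters are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are fMRI volumes, columns are voxels in one ROI.
n_voxels = 200
task_patterns = rng.normal(size=(120, n_voxels))    # task-induced activity
task_labels = rng.integers(0, 2, size=120)           # which of the two tasks
rest_patterns = rng.normal(size=(300, n_voxels))     # resting-state activity

# L2-regularized logistic regression as the multi-voxel pattern decoder
decoder = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
decoder.fit(task_patterns, task_labels)

# Apply the task-trained decoder to resting-state volumes and inspect the
# predicted "task-like" states and the learned voxel weights.
rest_predictions = decoder.predict(rest_patterns)
voxel_weights = decoder.coef_.ravel()
print(rest_predictions[:10], voxel_weights.shape)
```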

  12. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    Directory of Open Access Journals (Sweden)

    Xiangyu Li

    2017-02-01

    Full Text Available This paper proposes a scheduling and power management solution for an energy-harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications is data-dependent, we introduce a branch-handling mechanism into the solution as well. The experimental result shows that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and use more than 99.9% of the power budget.

  13. Computer controlled multi-leaf conformation radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuda, T [Tokyo Metropolitan Komagome Hospital (Japan); Inamura, K

    1981-10-01

    A conformation radiotherapy system with 5-split collimators, whose openings can be controlled symmetrically by computerized techniques during rotational irradiation by a linear accelerator, has been developed. An outline of the system performance and its clinical applications is described as follows. 1. Profile of the system: The hardware is composed of three parts, namely, the multi-split collimator, the electronic data processor, and the interface between those two parts. 1) The multi-leaf collimator is composed of 5 pairs (10 leaves) of diaphragms. It can be mounted on the X-ray head of a linear accelerator when used, and can be dismounted after its use. 2) The electronic data processor sends control signals to the collimator according to the 5-leaf target volume data previously stored on a mini-floppy disc through the curve digitizer. This part is composed of a) a dedicated micro processor, b) an I/O expansion unit, c) a color CRT display with keyboard, d) a dual mini-floppy disc unit, e) a curve digitizer and f) a digital plotter for recording and verification of the resulting accuracy. 2. Performance of the system: 1) Maximum field size: 15 cm x 15 cm at isocenter. 2) Maximum elongation ratio of the target volume: 3 : 1 when the longer diameter is 15 cm. 3) Control accuracy: Within ±3 mm deviation from the planned beam focus at isocenter. 3. Clinical application: The method of treatment planning and the clinical advantages of this irradiation method are explained by presenting clinical experiences such as treating brain tumor and rectal cancer.

  14. Computer controlled multi-leaf conformation radiotherapy

    International Nuclear Information System (INIS)

    Matsuda, Tadayoshi; Inamura, Kiyonari.

    1981-01-01

    A conformation radiotherapy system with 5-split collimators, whose openings can be controlled symmetrically by computerized techniques during rotational irradiation by a linear accelerator, has been developed. An outline of the system performance and its clinical applications is described as follows. 1. Profile of the system: The hardware is composed of three parts, namely, the multi-split collimator, the electronic data processor, and the interface between those two parts. 1) The multi-leaf collimator is composed of 5 pairs (10 leaves) of diaphragms. It can be mounted on the X-ray head of a linear accelerator when used, and can be dismounted after its use. 2) The electronic data processor sends control signals to the collimator according to the 5-leaf target volume data previously stored on a mini-floppy disc through the curve digitizer. This part is composed of a) a dedicated micro processor, b) an I/O expansion unit, c) a color CRT display with keyboard, d) a dual mini-floppy disc unit, e) a curve digitizer and f) a digital plotter for recording and verification of the resulting accuracy. 2. Performance of the system: 1) Maximum field size: 15 cm x 15 cm at isocenter. 2) Maximum elongation ratio of the target volume: 3 : 1 when the longer diameter is 15 cm. 3) Control accuracy: Within ±3 mm deviation from the planned beam focus at isocenter. 3. Clinical application: The method of treatment planning and the clinical advantages of this irradiation method are explained by presenting clinical experiences such as treating brain tumor and rectal cancer. (author)

  15. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
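
    For the interconnect, each node of a five-dimensional torus has ten nearest neighbours, reached by wrapping ±1 along each dimension. The small helper below illustrates that wrap-around addressing; the torus dimensions are chosen arbitrarily for the example and do not reflect the patented machine.

```python
def torus_neighbors(coord, dims):
    """Return the 2*len(dims) nearest neighbours of `coord` on a torus with sizes `dims`."""
    neighbors = []
    for axis in range(len(dims)):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]   # wrap-around (torus) link
            neighbors.append(tuple(n))
    return neighbors

# Example: an arbitrary 4x4x4x4x4 five-dimensional torus; each node has 10 neighbours.
print(torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 4)))
```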

  16. Immune networks: multi-tasking capabilities at medium load

    Science.gov (United States)

    Agliari, E.; Annibale, A.; Barra, A.; Coolen, A. C. C.; Tantari, D.

    2013-08-01

    Associative network models featuring multi-tasking properties have been introduced recently and studied in the low-load regime, where the number P of simultaneously retrievable patterns scales with the number N of nodes as P ∼ log N. In addition to their relevance in artificial intelligence, these models are increasingly important in immunology, where stored patterns represent strategies to fight pathogens and nodes represent lymphocyte clones. They allow us to understand the crucial ability of the immune system to respond simultaneously to multiple distinct antigen invasions. Here we develop further the statistical mechanical analysis of such systems, by studying the medium-load regime, P ∼ N^δ with δ ∈ (0, 1]. We derive three main results. First, we reveal the nontrivial architecture of these networks: they exhibit a high degree of modularity and clustering, which is linked to their retrieval abilities. Second, by solving the model we demonstrate for δ frameworks are required to achieve effective retrieval.

  17. Uncertainty Flow Facilitates Zero-Shot Multi-Label Learning in Affective Facial Analysis

    Directory of Open Access Journals (Sweden)

    Wenjun Bai

    2018-02-01

    Full Text Available Featured Application: The proposed Uncertainty Flow framework may benefit facial analysis through improved discriminability in multi-label affective classification tasks. Moreover, the framework also allows efficient model training and between-task knowledge transfer. Applications that rely heavily on continuous prediction of emotional valence, e.g., monitoring prisoners' emotional stability in jail, can benefit directly from our framework. Abstract: To lower the dependency of affective facial analysis on single labels, multi-label affective learning is needed. The impediment to practical implementation of existing multi-label algorithms is the scarcity of scalable multi-label training datasets. To resolve this, an inductive transfer learning based framework, i.e., Uncertainty Flow, is put forward in this research to allow knowledge transfer from a single-labelled emotion recognition task to a multi-label affective recognition task. That is, the model uncertainty, which can be quantified in Uncertainty Flow, is distilled from a single-label learning task. The distilled model uncertainty ensures the later efficient zero-shot multi-label affective learning. On the theoretical side, within our proposed Uncertainty Flow framework, the feasibility of applying weakly informative priors, e.g., uniform and Cauchy priors, is fully explored in this research. More importantly, based on the derived weight uncertainty, three sets of prediction-related uncertainty indexes, i.e., soft-max uncertainty, pure uncertainty and uncertainty plus, are proposed to produce reliable and accurate multi-label predictions. Validated on our manually annotated evaluation dataset, i.e., the multi-label annotated FER2013, our proposed Uncertainty Flow for multi-label facial expression analysis exhibited superiority to conventional multi-label learning algorithms and multi-label compatible neural networks. The success of our

  18. Continuous Video Modeling to Prompt Completion of Multi-Component Tasks by Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Purrazzella, Kaitlin; Purrazzella, Kimberly

    2014-01-01

    This investigation examined the ability of four adults with moderate intellectual disability to complete multi-component tasks using continuous video modeling. Continuous video modeling, which is a newly researched application of video modeling, presents video in a "looping" format which automatically repeats playing of the video while…

  19. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction trace driven cycle-accurate many-core multi-threading micro-architecture simulation platform where we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally CASPER is designed to accommodate cycle-accurate models of hardware controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER: Chipwide Dynamic Voltage and Frequency Scaling, and Performance-Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9

  20. Risk assessments using the Strain Index and the TLV for HAL, Part I: Task and multi-task job exposure classifications.

    Science.gov (United States)

    Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun

    2017-12-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, TWA, Typical, and Peak job exposure techniques all have
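
    The TWA, Peak and Typical techniques differ only in how per-task exposures are combined into a job-level value. The toy sketch below computes the three combinations and applies a three-category classification; the cut-points and scores are placeholders for illustration, not the SI or TLV for HAL limits.

```python
def job_exposure(task_scores, task_hours):
    """Combine per-task exposure scores into job-level values in three ways."""
    total = sum(task_hours)
    twa = sum(s * h for s, h in zip(task_scores, task_hours)) / total  # time-weighted average
    peak = max(task_scores)                                            # worst task defines the job
    typical = task_scores[task_hours.index(max(task_hours))]           # task occupying most time
    return twa, peak, typical

def classify(score, low_cut=3.0, high_cut=7.0):
    """Three-category classification with placeholder cut-points."""
    if score < low_cut:
        return "low"
    return "medium" if score < high_cut else "high"

# A job made of three tasks with different exposure scores and daily durations
scores, hours = [2.0, 5.0, 9.0], [4.0, 3.0, 1.0]
for name, value in zip(("TWA", "Peak", "Typical"), job_exposure(scores, hours)):
    print(name, round(value, 2), classify(value))
```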

  1. Simulation of a nuclear measurement system around a multi-task mode real-time monitor

    International Nuclear Information System (INIS)

    De Grandi, G.; Ouiguini, R.

    1983-01-01

    When debugging and testing hardware and software for the automation of systems, the unavailability of the system itself poses significant logistical problems. A simulator of the system to be automated, built around a multi-task-mode real-time monitor and allowing the automation software to be debugged without the physical presence of the system to be automated, is proposed in the present report

  2. A Randomized Controlled ERP Study on the Effects of Multi-Domain Cognitive Training and Task Difficulty on Task Switching Performance in Older Adults

    Science.gov (United States)

    Küper, Kristina; Gajewski, Patrick D.; Frieg, Claudia; Falkenstein, Michael

    2017-01-01

    Executive functions are subject to a marked age-related decline, but have been shown to benefit from cognitive training interventions. As of yet, it is, however, still relatively unclear which neural mechanism can mediate training-related performance gains. In the present electrophysiological study, we examined the effects of multi-domain cognitive training on performance in an untrained cue-based task switch paradigm featuring Stroop color words: participants either had to indicate the word meaning of Stroop stimuli (word task) or perform the more difficult task of color naming (color task). One-hundred and three older adults (>65 years old) were randomly assigned to a training group receiving a 4-month multi-domain cognitive training, a passive no-contact control group or an active (social) control group receiving a 4-month relaxation training. For all groups, we recorded performance and EEG measures before and after the intervention. For the cognitive training group, but not for the two control groups, we observed an increase in response accuracy at posttest, irrespective of task and trial type. No training-related effects on reaction times were found. Cognitive training was also associated with an overall increase in N2 amplitude and a decrease of P2 latency on single trials. Training-related performance gains were thus likely mediated by an enhancement of response selection and improved access to relevant stimulus-response mappings. Additionally, cognitive training was associated with an amplitude decrease in the time window of the target-locked P3 at fronto-central electrodes. An increase in the switch positivity during advance task preparation emerged after both cognitive and relaxation training. Training-related behavioral and event-related potential (ERP) effects were not modulated by task difficulty. The data suggest that cognitive training increased slow negative potentials during target processing which enhanced the N2 and reduced a subsequent P3-like

  3. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    Science.gov (United States)

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  4. Age-related effects on postural control under multi-task conditions.

    Science.gov (United States)

    Granacher, Urs; Bridenbaugh, Stephanie A; Muehlbauer, Thomas; Wehrle, Anja; Kressig, Reto W

    2011-01-01

    Changes in postural sway and gait patterns due to simultaneously performed cognitive interference (CI) and/or motor interference (MI) tasks have previously been reported and are associated with an increased risk of falling in older adults. The objectives of this study were to investigate the effects of a CI and/or MI task on static and dynamic postural control in young and elderly subjects, and to find out whether there is an association between measures of static and dynamic postural control while concurrently performing the CI and/or MI task. A total of 36 healthy young (n = 18; age: 22.3 ± 3.0 years; BMI: 21.0 ± 1.6 kg/m²) and elderly adults (n = 18; age: 73.5 ± 5.5 years; BMI: 24.2 ± 2.9 kg/m²) participated in this study. Static postural control was measured during bipedal stance, and dynamic postural control was obtained while walking on an instrumented walkway. Irrespective of the task condition, i.e. single-task or multiple tasks, elderly participants showed larger center-of-pressure displacements and greater stride-to-stride variability than younger participants. Associations between measures of static and dynamic postural control were found only under the single-task condition in the elderly. Age-related deficits in the postural control system seem to be primarily responsible for the observed results. The weak correlations detected between static and dynamic measures could indicate that fall-risk assessment should incorporate dynamic measures under multi-task conditions, and that skills like erect standing and walking are independent of each other and may have to be trained complementarily. Copyright © 2010 S. Karger AG, Basel.

  5. Multi-party Quantum Computation

    OpenAIRE

    Smith, Adam

    2001-01-01

    We investigate definitions of and protocols for multi-party quantum computing in the scenario where the secret data are quantum systems. We work in the quantum information-theoretic model, where no assumptions are made on the computational power of the adversary. For the slightly weaker task of verifiable quantum secret sharing, we give a protocol which tolerates any t < n/4 cheating parties (out of n). This is shown to be optimal. We use this new tool to establish that any multi-party quantu...

  6. Implicit Unstructured Aerodynamics on Emerging Multi- and Many-Core HPC Architectures

    KAUST Repository

    Al Farhan, Mohammed A.; Kaushik, Dinesh K.; Keyes, David E.

    2017-01-01

    Instruction, Multiple Data (SIMD) for hundreds of threads per node. We explore thread-level performance optimizations on state-of-the-art multi- and many-core Intel processors, including the second generation of Xeon Phi, Knights Landing (KNL). We study

  7. Multi-Robot Remote Interaction with FS-MAS

    Directory of Open Access Journals (Sweden)

    Yunliang Jiang

    2013-02-01

    Full Text Available The need to reduce bandwidth and improve productivity, autonomy and scalability in multi-robot teleoperation has been recognized for a long time. In this article we propose a novel finite state machine mobile agent based network interaction service model, namely FS-MAS. This model consists of three finite state machines: the Finite State Mobile Agent (FS-Agent), which is the basic service module; the Service Content Finite State Machine (Content-FS), which uses the XML language to define workflows and describe service content and the service computation process; and the Mobile Agent computation model Finite State Machine (MACM-FS), used to describe the service implementation. Finally, we apply this service model to a multi-robot system, initially realizing the completion of complex tasks in the form of multi-robot scheduling. This demonstrates that the robots' intelligence is greatly improved, and it provides a wide solution space for critical issues such as task division, rational and efficient use of resources and multi-robot collaboration.

  8. LDRD final report : a lightweight operating system for multi-core capability class supercomputers.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Hudson, Trammell B. (OS Research); Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.; Brightwell, Ronald Brian

    2010-09-01

    The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

  9. Virtualized Multi-Mission Operations Center (vMMOC) and its Cloud Services

    Science.gov (United States)

    Ido, Haisam Kassim

    2017-01-01

    This presentation will cover the current and future technical and organizational opportunities and challenges of virtualizing a multi-mission operations center. The full deployment of Goddard Space Flight Center's (GSFC) Virtualized Multi-Mission Operations Center (vMMOC) is nearly complete. The Space Science Mission Operations (SSMO) organization's spacecraft ACE, Fermi, LRO, MMS(4), OSIRIS-REx, SDO, SOHO, Swift, and Wind are in the process of being fully migrated to the vMMOC. The benefits of the vMMOC will be the normalization and the standardization of IT services, mission operations, maintenance, and development as well as ancillary services and policies such as collaboration tools, change management systems, and IT Security. The vMMOC will also provide operational efficiencies regarding hardware, IT domain expertise, training, maintenance and support. The presentation will also cover SSMO's secure Situational Awareness Dashboard in an integrated, fleet-centric, cloud-based web services fashion. Additionally the SSMO Telemetry as a Service (TaaS) will be covered, which allows authorized users and processes to access telemetry for the entire SSMO fleet, and for the entirety of each spacecraft's history. Both services leverage cloud services in a secure FISMA High and FedRamp environment, and also leverage distributed object stores in order to house and provide the telemetry. The services are also in the process of leveraging the cloud computing services' elasticity and horizontal scalability. In the design phase is the Navigation as a Service (NaaS), which will provide a standardized, efficient, and normalized service for the fleet's space flight dynamics operations. Additional future services that may be considered are Ground Segment as a Service (GSaaS), Telemetry and Command as a Service (TCaaS), Flight Software Simulation as a Service, etc.

  10. The RADAR Test Methodology: Evaluating a Multi-Task Machine Learning System with Humans in the Loop

    Science.gov (United States)

    2006-10-01

    The simulated environment included static websites and an e-commerce vendor portal; the "corpus" consists of the email and world-state content, the latter consisting of facts. The evaluation design incorporated learned-fact variation and the opportunity to induce a substantial crisis workload, and the simulated conference itself was a 4-day, multi-track technical conference.

  11. Hardware processors for pattern recognition tasks in experiments with wire chambers

    International Nuclear Information System (INIS)

    Verkerk, C.

    1975-01-01

    Hardware processors for pattern recognition tasks in experiments with multiwire proportional chambers or drift chambers are described. They vary from simple ones used for deciding in real time if particle trajectories are straight to complex ones for recognition of curved tracks. Schematics and block-diagrams of different processors are shown

  12. Cross-domain and multi-task transfer learning of deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis

    Science.gov (United States)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-02-01

    We propose a cross-domain, multi-task transfer learning framework to transfer knowledge learned from non-medical images by a deep convolutional neural network (DCNN) to a medical image recognition task while improving the generalization by multi-task learning of auxiliary tasks. A first-stage cross-domain transfer learning was initiated from an ImageNet-trained DCNN to a mammography-trained DCNN. 19,632 regions-of-interest (ROI) from 2,454 mass lesions were collected from two imaging modalities: digitized-screen film mammography (SFM) and full-field digital mammography (DM), and split into training and test sets. In the multi-task transfer learning, the DCNN learned the mass classification task simultaneously from the training set of SFM and DM. The best transfer network for mammography was selected from three transfer networks with different numbers of convolutional layers frozen. The performance of single-task and multi-task transfer learning on an independent SFM test set in terms of the area under the receiver operating characteristic curve (AUC) was 0.78±0.02 and 0.82±0.02, respectively. In the second-stage cross-domain transfer learning, a set of 12,680 ROIs from 317 mass lesions on DBT were split into validation and independent test sets. We first studied the data requirements for the first-stage mammography-trained DCNN by varying the mammography training data from 1% to 100% and evaluated its learning on the DBT validation set in inference mode. We found that the entire available mammography set provided the best generalization. The DBT validation set was then used to train only the last four fully connected layers, resulting in an AUC of 0.90±0.04 on the independent DBT test set.
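
    The second-stage transfer described above freezes the earlier feature-extraction layers and retrains only the final fully connected layers. The PyTorch sketch below shows that freezing pattern on a small stand-in CNN; the architecture, shapes and file names are placeholders, not the paper's DCNN.

```python
import torch
import torch.nn as nn

# Stand-in DCNN: two conv blocks followed by fully connected layers (placeholder).
class SmallDCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallDCNN()
# First-stage weights would be loaded here, e.g. from a mammography-trained model
# (hypothetical file name): model.load_state_dict(torch.load("mammo_pretrained.pt"))

# Freeze the convolutional feature extractor; train only the fully connected layers.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)

x = torch.randn(8, 1, 64, 64)                       # toy batch standing in for DBT ROIs
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()
```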

  13. Measuring Performance of Soft Real-Time Tasks on Multi-core Systems

    OpenAIRE

    Rafiq, Salman

    2011-01-01

    Multi-core platforms are well established, and they are slowly moving into the area of embedded and real-time systems. Nowadays to take advantage of multi-core systems in terms of throughput, soft real-time applications are run together with general purpose applications under an operating system such as Linux. But due to shared hardware resources in multi-core architectures, it is likely that these applications will interfere and compete with each other. This can cause slower response times f...

  14. Research on monitoring system of water resources in irrigation region based on multi-agent

    International Nuclear Information System (INIS)

    Zhao, T H; Wang, D S

    2012-01-01

    Irrigation agriculture is the basis of agriculture and rural economic development in China. Informatization of water resources in irrigated areas will make full use of existing water resources and greatly increase the benefit of irrigation agriculture. However, the water resource information systems of many irrigated areas in China are still not very sound at present, which leads to the waste of a lot of water resources. This paper analyzes the existing water resource monitoring systems of irrigated areas, introduces multi-agent theory, and sets up a multi-agent-based water resource monitoring system for irrigated areas. The system is composed of a monitoring multi-agent federation, a telemetry multi-agent federation, and the GSM communication network between them. It can make full use of the intelligence and communication coordination within the multi-agent federations, greatly improve the timeliness of dynamic monitoring and control of water resources in irrigated areas, provide an information service for the sustainable development of irrigated areas, and lay a foundation for realizing a high level of water resource informatization in irrigated areas.

  15. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  16. Hybrid Multi-Layer Network Control for Emerging Cyber-Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Summerhill, Richard [Internet2, Washington, DC (United States); Lehman, Tom [Univ. of Southern California, Los Angeles, CA (United States). Information Sciences Inst. (ISI); Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical & Computer Engineering; Boyd, Eric [Univ. Corporation for Advanced Internet Development (UCAID), Washington, DC (United States)

    2009-08-14

    There were four basic task areas identified for the Hybrid-MLN project. They are: Multi-Layer, Multi-Domain, Control Plane Architecture and Implementation; Heterogeneous DataPlane Testing; Simulation; Project Publications, Reports, and Presentations.

  17. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms

    Directory of Open Access Journals (Sweden)

    Sungchan Kim

    2017-08-01

    Full Text Available In sensory swarms, minimizing energy consumption under a performance constraint is one of the key objectives. One possible approach to this problem is to monitor the application workload, which is subject to change at runtime, and to adjust the system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for the memory intensity of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique.
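
    The heuristic sketched in the abstract predicts execution time from the measured IPC and then picks the frequency/core setting with the lowest predicted energy that still meets the performance requirement. A schematic Python version is given below; the power-model coefficients, candidate settings and the Amdahl-style core model are invented placeholders, not the authors' calibrated model.

```python
# Schematic version of the knob selection described above. All numeric values
# below are placeholders, not measured coefficients.
FREQS_MHZ = [200, 400, 600, 800]
CORES = [1, 2, 4]

def predicted_time_s(instructions, ipc, freq_mhz, cores, parallel_fraction=0.8):
    """Execution-time prediction from measured IPC, with a simple Amdahl-style core model."""
    cycles = instructions / ipc
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return cycles / (freq_mhz * 1e6) / speedup

def predicted_power_w(freq_mhz, cores, c_dyn=1e-9, v=1.0, p_static_per_core=0.05):
    """Toy power model: a dynamic term per core plus static leakage."""
    return cores * (c_dyn * freq_mhz * 1e6 * v * v + p_static_per_core)

def choose_setting(instructions, ipc, deadline_s):
    """Pick the (frequency, cores) pair with the lowest predicted energy that meets the deadline."""
    best = None
    for f in FREQS_MHZ:
        for c in CORES:
            t = predicted_time_s(instructions, ipc, f, c)
            if t > deadline_s:
                continue
            energy = predicted_power_w(f, c) * t
            if best is None or energy < best[0]:
                best = (energy, f, c)
    return best  # None if no setting meets the deadline

print(choose_setting(instructions=5e8, ipc=0.8, deadline_s=2.0))
```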

  18. Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors

    Science.gov (United States)

    2013-02-01

    The Playstation 3 with 6 available SPU cores outperforms the Intel Xeon processor (with 4 cores) by about 1.9 times for the HTM model and by 2.4 times for the Dean model. Runtime breakdowns of the HTM and Dean models are reported for the Cell processor (in the Playstation 3) and the Intel Xeon processor (4 threads).

  19. A Multi-Agent Framework for Coordination of Intelligent Assistive Technologies

    DEFF Research Database (Denmark)

    Valente, Pedro Ricardo da Nova; Hossain, S.; Groenbaek, B.

    2010-01-01

    Intelligent care for the future is the IntelliCare project's main priority. This paper describes the design of a generic multi-agent framework for coordination of intelligent assistive technologies. The paper overviews technologies and software systems suitable for context awareness ... and housekeeping tasks, especially for performing a multi-robot cleaning-task activity. It also describes work conducted in the design of a multi-agent platform for coordination of intelligent assistive technologies. Instead of using traditional robot odometry estimation methods, we have tested an independent...

  20. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    Directory of Open Access Journals (Sweden)

    Pielot Rainer

    2010-01-01

    Full Text Available Abstract Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE, a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  1. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    Science.gov (United States)

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods: canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and to provide the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks: sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to contribute real multi-task fMRI data, both of which were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative regions for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to some competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
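
    The two-step idea (CCA to link the two task datasets, then ICA to sharpen the joint sources) can be sketched with scikit-learn as below. This is only a schematic stand-in for the authors' joint BSS formulation, and random arrays replace the SM/SB contrast images.

      # Hedged sketch of a CCA-then-ICA pipeline for two-task fusion.
      # Not the authors' exact model; random data stand in for SM/SB contrast maps.
      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_subjects, n_voxels, n_comp = 100, 500, 8
      X_sm = rng.standard_normal((n_subjects, n_voxels))   # task 1 contrast images
      X_sb = rng.standard_normal((n_subjects, n_voxels))   # task 2 contrast images

      # Step 1: CCA finds maximally correlated subject-loading profiles across tasks.
      cca = CCA(n_components=n_comp)
      A_sm, A_sb = cca.fit_transform(X_sm, X_sb)           # canonical variates (subjects x comp)

      # Step 2: ICA applied jointly to the back-projected spatial maps of both tasks,
      # to sharpen the sources associated with each canonical pair.
      maps = np.concatenate([X_sm.T @ A_sm, X_sb.T @ A_sb], axis=0)  # (2*voxels, comp)
      ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
      sources = ica.fit_transform(maps)
      sources_sm, sources_sb = np.split(sources, 2)        # per-task spatial sources

      print(sources_sm.shape, sources_sb.shape)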

  2. Matrix multiplication with a hypercube algorithm on multi-core processor cluster

    Directory of Open Access Journals (Sweden)

    José Crispín Zavala-Díaz

    2015-01-01

    Full Text Available The Dekel, Nassimi and Sahni (hypercube) matrix multiplication algorithm is analysed, modified and implemented on a cluster of multi-core processors, where the number of processors used is smaller than the n³ required by the algorithm. 2³, 4³ and 8³ processing units are used to multiply matrices of order 10×10, 10²×10² and 10³×10³. The results of the mathematical model of the modified algorithm and those obtained from computational experiments show that acceptable parallel speedup and efficiency can be achieved as a function of the number of processing units used. It is also shown that the influence of the external communication link between nodes decreases if a combination of the communication channels available between the cores of a multi-core cluster is used.
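
    The record reports parallel speedup and efficiency as a function of the number of processing units. The sketch below does not implement the Dekel-Nassimi-Sahni hypercube algorithm; it only shows, with a simple row-block decomposition and Python multiprocessing, how such speedup and efficiency figures are typically measured.

      # Hedged sketch: row-block parallel matrix multiply and speedup/efficiency
      # measurement. A simplified decomposition, not the DNS hypercube algorithm.
      import time
      import numpy as np
      from multiprocessing import Pool

      N = 600
      np.random.seed(0)            # workers re-create identical A, B under spawn
      A = np.random.rand(N, N)
      B = np.random.rand(N, N)

      def multiply_rows(rows):
          lo, hi = rows
          return A[lo:hi] @ B      # each worker multiplies a block of rows

      def parallel_matmul(p):
          bounds = np.linspace(0, N, p + 1, dtype=int)
          chunks = list(zip(bounds[:-1], bounds[1:]))
          with Pool(p) as pool:
              blocks = pool.map(multiply_rows, chunks)
          return np.vstack(blocks)

      if __name__ == "__main__":
          # Note: NumPy's BLAS may itself be multi-threaded, so measured speedups
          # are illustrative only.
          t0 = time.perf_counter(); C1 = A @ B; t_serial = time.perf_counter() - t0
          for p in (2, 4, 8):
              t0 = time.perf_counter(); Cp = parallel_matmul(p); t_p = time.perf_counter() - t0
              speedup = t_serial / t_p
              print(f"p={p}: speedup={speedup:.2f}  efficiency={speedup / p:.2f}",
                    "ok" if np.allclose(Cp, C1) else "MISMATCH")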

  3. Co-Labeling for Multi-View Weakly Labeled Learning.

    Science.gov (United States)

    Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W

    2016-06-01

    It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi

  4. Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations

    Science.gov (United States)

    Bernaschi, M.; Bisson, M.; Salvadore, F.

    2014-10-01

    We present and compare the performances of two many-core architectures: the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture: the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performances of an Intel MIC change dramatically depending on (apparently) minor details. Another issue is that to obtain a reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performances of the single system. As to the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good by using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.

  5. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

    Full Text Available Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in a cluster. During clustering, certain difficulties in dealing with high dimensional data are ubiquitous and abundant. Previous work using anonymization methods for high dimensional data spaces failed to address the problem of dimensionality reduction when non-binary databases are included. In this work we study methods of dimensionality reduction for non-binary databases. Analyzing the behavior of dimensionality reduction for non-binary databases results in performance improvement with the help of tag based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of dimensions. Additionally, with the quantum distribution on multi-cluster dimensions, the solution for relevancy of attributes and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag based features. Multi-clustering tag based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  6. Artificial emotion triggered stochastic behavior transitions with motivational gain effects for multi-objective robot tasks

    Science.gov (United States)

    Dağlarli, Evren; Temeltaş, Hakan

    2007-04-01

    This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model is developed as the mathematical background for stochastic emotional and behavior transitions. The motivation module of the architecture is considered as a behavioral gain-effect generator for achieving multi-objective robot tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.

  7. Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system

    Science.gov (United States)

    Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei

    2017-08-01

    The derivation of joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize maximum water deficit and maximize water supply reliability. The water-supply system of Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and aggregated standard operating policy for both water supply and hydropower generation due to the use of hedging mechanism and effective coordination among multiple objectives.

  8. Stochastic multi-period multi-product multi-objective Aggregate Production Planning model in multi-echelon supply chain

    Directory of Open Access Journals (Sweden)

    Kaveh Khalili-Damghani

    2017-07-01

    Full Text Available In this paper a multi-period multi-product multi-objective aggregate production planning (APP) model is proposed for an uncertain multi-echelon supply chain considering financial risk, customer satisfaction, and human resource training. Three conflicting objective functions and several sets of real constraints are considered concurrently in the proposed APP model. Some parameters of the proposed model are assumed to be uncertain and handled through a two-stage stochastic programming (TSSP) approach. The proposed TSSP is solved using three multi-objective solution procedures, i.e., the goal attainment technique, the modified ε-constraint method, and the STEM method. The whole procedure is applied in an automotive resin and oil supply chain as a real case study wherein the efficacy and applicability of the proposed approaches are illustrated in comparison with existing experimental production planning methods.

  9. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    Science.gov (United States)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    Nowadays, using satellites in space to observe the ground is an important and major method of obtaining ground information. With the development of science and technology in the space field, many areas such as the military and the economy have a growing requirement for space technology because of satellites' wide coverage, timeliness, and freedom from area and national limits. At the same time, because of the wide use of all kinds of satellites, sensors, repeater satellites and ground receiving stations, ground control systems are now facing great challenges. Therefore, how to make the best use of satellite resources becomes an important problem for ground control systems. Satellite scheduling is to distribute the resources to all tasks without conflict so as to complete as many tasks as possible and meet users' requirements, under the constraints of satellites, sensors and ground receiving stations. Considering the size of the task, tasks can be divided into point tasks and area tasks; this paper only considers point targets. In this paper, a description of the satellite scheduling problem and a brief introduction to the theory of satellite scheduling are first given. We also analyze the restrictions of resources and tasks in scheduling satellites. The input and output flow of the scheduling process is also briefly described. On the basis of these analyses, we put forward a scheduling model named the multi-variable optimization model for multi-satellite and point target tasks on swinging mode. In the multi-variable optimization model, the scheduling problem is transformed into a parametric optimization problem; the parameter we wish to optimize is the swinging angle of every time-window. In view of efficiency and accuracy, some important problems relating to satellite scheduling, such as the angle relation between satellites and ground targets, positive

  10. Industrial applications of multi-functional, multi-phase reactors

    NARCIS (Netherlands)

    Harmsen, G.J.; Chewter, L.A.

    1999-01-01

    To reveal trends in the design and operation of multi-functional, multi-phase reactors, this paper describes, in historical sequence, three industrial applications of multi-functional, multi-phase reactors developed and operated by Shell Chemicals during the last five decades. For each case, we

  11. Multi-bunch energy compensation in the NLC bunch compressor

    International Nuclear Information System (INIS)

    Zimmermann, F.; Raubenheimer, T.O.; Thomson, K.A.

    1996-06-01

    The task of the NLC bunch compressor is to reduce the length of each bunch in a train of 90 bunches from 4 mm, at extraction from the damping ring, to about 100 μm, suitable for injection into the X-band main linac. This task is complicated by longitudinal long-range wake fields and the multi-bunch beam loading in the various accelerating sections of the compressor. One possible approach to compensate the multi-bunch beam loading is to add two RF systems with slightly different frequencies (' Δf' scheme) to each accelerating section, as first proposed by Kikuchi. This paper summarizes the choice of parameters for three such compensating sections, and presents simulation results of combined single- and multi-bunch dynamics for four different NLC versions. The multi-bunch energy compensation is shown to be straightforward and its performance to be satisfactory

  12. Optical Array Processor: Laboratory Results

    Science.gov (United States)

    Casasent, David; Jackson, James; Vaerewyck, Gerard

    1987-01-01

    A Space Integrating (SI) Optical Linear Algebra Processor (OLAP) is described and laboratory results on its performance in several practical engineering problems are presented. The applications include its use in the solution of a nonlinear matrix equation for optimal control and a parabolic Partial Differential Equation (PDE), the transient diffusion equation with two spatial variables. Frequency-multiplexed, analog and high accuracy non-base-two data encoding are used and discussed. A multi-processor OLAP architecture is described and partitioning and data flow issues are addressed.

  13. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints. ... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designated to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel, and compare...

  14. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for some specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have some limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
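
    The core idea — summarising each block of a reduced-resolution representation with a variation statistic instead of an average — can be illustrated with a few lines of NumPy. The synthetic field and block size below are arbitrary choices, not data from the study.

      # Hedged sketch: mean- vs. variation-preserving block reduction of a 2D field.
      import numpy as np

      def block_reduce(field, bs, stat):
          """Reduce a 2D array by applying `stat` over non-overlapping bs x bs blocks."""
          h, w = field.shape
          blocks = field[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
          return stat(blocks, axis=(1, 3))

      rng = np.random.default_rng(1)
      # Synthetic field: smooth background plus a patch of high local variability.
      field = np.tile(np.linspace(0, 1, 256), (256, 1))
      field[96:160, 96:160] += rng.normal(0, 0.5, (64, 64))

      low_mean = block_reduce(field, 16, np.mean)   # averaging smooths the variability away
      low_std = block_reduce(field, 16, np.std)     # a variation statistic keeps it visible

      print("block means over the noisy patch:\n", low_mean[6:10, 6:10].round(2))
      print("block std devs over the noisy patch:\n", low_std[6:10, 6:10].round(2))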

  15. Immune networks: multi-tasking capabilities at medium load

    International Nuclear Information System (INIS)

    Agliari, E; Annibale, A; Barra, A; Coolen, A C C; Tantari, D

    2013-01-01

    Associative network models featuring multi-tasking properties have been introduced recently and studied in the low-load regime, where the number P of simultaneously retrievable patterns scales with the number N of nodes as P ∼ log N. In addition to their relevance in artificial intelligence, these models are increasingly important in immunology, where stored patterns represent strategies to fight pathogens and nodes represent lymphocyte clones. They allow us to understand the crucial ability of the immune system to respond simultaneously to multiple distinct antigen invasions. Here we develop further the statistical mechanical analysis of such systems, by studying the medium-load regime, P ∼ N δ with δ ∈ (0, 1]. We derive three main results. First, we reveal the nontrivial architecture of these networks: they exhibit a high degree of modularity and clustering, which is linked to their retrieval abilities. Second, by solving the model we demonstrate for δ < 1 the existence of large regions in the phase diagram where the network can retrieve all stored patterns simultaneously. Finally, in the high-load regime δ = 1 we find that the system behaves as a spin-glass, suggesting that finite-connectivity frameworks are required to achieve effective retrieval. (paper)

  16. Programs Lucky and LuckyC - 3D parallel transport codes for the multi-group transport equation solution for XYZ geometry by Pm Sn method

    International Nuclear Information System (INIS)

    Moriakov, A.; Vasyukhno, V.; Netecha, M.; Khacheresov, G.

    2003-01-01

    Powerful supercomputers are available today. MBC-1000M is one of the Russian supercomputers and may be used through remote access. The programs LUCKY and LUCKY C were created for multi-processor systems. These programs use algorithms created especially for such computers and the MPI (message passing interface) service for exchanges between processors. LUCKY solves shielding tasks by the multigroup discrete ordinates method. LUCKY C solves criticality tasks by the same method. Only XYZ orthogonal geometry is available; with small spatial steps to approximate the discrete operator, this geometry may be used as a universal one to describe complex geometrical structures. Cross-section libraries are used up to the P8 approximation by Legendre polynomials for nuclear data in GIT format. The programming language is Fortran-90. 'Vector' processors could give a time gain of up to 30 times, but unfortunately MBC-1000M does not have such processors. Nevertheless, sufficient parallel efficiency was obtained with 'space' (LUCKY) and 'space and energy' (LUCKY C) parallelization. The AUTOCAD program is used to check the geometry after treatment of the input data. The programs have a powerful geometry module, which is a convenient tool to describe any geometry. Output results may be processed by graphic programs on a personal computer. (authors)

  17. Anisotropic multi-scale fluid registration: evaluation in magnetic resonance breast imaging

    International Nuclear Information System (INIS)

    Crum, W R; Tanner, C; Hawkes, D J

    2005-01-01

    Registration using models of compressible viscous fluids has not found the general application of some other techniques (e.g., free-form-deformation (FFD)) despite its ability to model large diffeomorphic deformations. We report on a multi-resolution fluid registration algorithm which improves on previous work by (a) directly solving the Navier-Stokes equation at the resolution of the images (b) accommodating image sampling anisotropy using semi-coarsening and implicit smoothing in a full multi-grid (FMG) solver and (c) exploiting the inherent multi-resolution nature of FMG to implement a multi-scale approach. Evaluation is on five magnetic resonance (MR) breast images subject to six biomechanical deformation fields over 11 multi-resolution schemes. Quantitative assessment is by tissue overlaps and target registration errors and by registering using the known correspondences rather than image features to validate the fluid model. Context is given by comparison with a validated FFD algorithm and by application to images of volunteers subjected to large applied deformation. The results show that fluid registration of 3D breast MR images to sub-voxel accuracy is possible in minutes on a 1.6 GHz Linux-based Athlon processor with coarse solutions obtainable in a few tens of seconds. Accuracy and computation time are comparable to FFD techniques validated for this application

  18. A multi-GPU implementation of a D2Q37 lattice Boltzmann code

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Scagliarini, Andrea; Schifano, S.F.; Toschi, F.; Tripiccione, R.; Wyrzykowski, R.; Dongarra, J.; Karczewski, K.; Wasniewski, J.

    2012-01-01

    We describe a parallel implementation of a compressible Lattice Boltzmann code on a multi-GPU cluster based on Nvidia Fermi processors. We analyze how to optimize the algorithm for GP-GPU architectures, describe the implementation choices that we have adopted and compare our performance results with

  19. Software defined radio (SDR) architecture for concurrent multi-satellite communications

    Science.gov (United States)

    Maheshwarappa, Mamatha R.

    generic software methodology for both ground and space applications that will remain unaltered despite new evolutions in hardware, and supports concurrent multi-standard, multi-channel and multi-rate telemetry signals.

  20. Symplectic multi-particle tracking on GPUs

    Science.gov (United States)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on the Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation helps save more than a factor of two total computing time in comparison to the CPU implementation.
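
    A minimal drift-kick (leapfrog-style) symplectic update for a particle ensemble is sketched below. The record's tracking model is far more detailed, but this is the kind of array-level structure that maps well to GPUs (e.g., by swapping NumPy for CuPy); the lattice parameters and beam sizes are illustrative assumptions.

      # Hedged sketch: symplectic drift-kick tracking of a particle ensemble.
      # Swapping `numpy` for `cupy` would give a GPU version of the same code.
      import numpy as np

      def track(z, dp, n_turns, k=0.02, drift_len=1.0):
          """One-turn map per iteration: half drift, thin-lens kick, half drift."""
          for _ in range(n_turns):
              z = z + 0.5 * drift_len * dp       # drift (position update from momentum)
              dp = dp - k * z                    # kick  (momentum update from position)
              z = z + 0.5 * drift_len * dp       # drift
          return z, dp

      rng = np.random.default_rng(2)
      n_particles = 1_000_000
      z = rng.normal(0.0, 1e-3, n_particles)     # longitudinal position [m]
      dp = rng.normal(0.0, 1e-4, n_particles)    # relative momentum deviation

      z, dp = track(z, dp, n_turns=1000)
      print("rms z:", z.std(), "rms dp:", dp.std())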

  1. The Multi-Attribute Task Battery II (MATB-II) Software for Human Performance and Workload Research: A User's Guide

    Science.gov (United States)

    Santiago-Espada, Yamira; Myer, Robert R.; Latorella, Kara A.; Comstock, James R., Jr.

    2011-01-01

    The Multi-Attribute Task Battery (MAT Battery), a computer-based task designed to evaluate operator performance and workload, has been redeveloped to operate in Windows XP Service Pack 3, Windows Vista and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options including a graphical user interface for controlling modes of operation. MATB-II can be executed either in training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows set-up of the default timeouts for the tasks, and the flow rates of the pumps and tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt

  2. Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2009-05-01

    Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding a target(s) at an unknown location(s) using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots for locating target(s) in high risk environments. The approach was developed and analyzed on multiple robot single and multiple target search. The approach was further enhanced by the multi-robot-multi-target search in noisy environments. The experimental results demonstrated how the availability of radio frequency signal can significantly affect robot search time to reach a target.
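
    A compact sketch of the search loop is given below: a standard particle swarm velocity update in which a weight derived from received signal strength (RSS) scales the pull toward the best-known position. The RSS model, weighting form and constants are illustrative assumptions, not the authors' exact algorithm.

      # Hedged sketch: distributed-PSO-style search where an RSS-derived weight
      # scales the pull toward the best-known position. Illustrative constants only.
      import numpy as np

      rng = np.random.default_rng(3)
      TARGET = np.array([8.0, 3.0])              # unknown to the robots

      def rss(pos, noise=1.0):
          """Toy received-signal-strength model: log-distance path loss plus noise."""
          d = np.linalg.norm(pos - TARGET, axis=-1) + 1e-6
          return -20 * np.log10(d) + rng.normal(0, noise, d.shape)

      n_robots, steps = 12, 200
      pos = rng.uniform(-10, 10, (n_robots, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), rss(pos)

      for _ in range(steps):
          gbest = pbest[np.argmax(pbest_val)]
          # Adaptive weight: robots with stronger signal histories pull harder to gbest.
          w_rss = 1.0 / (1.0 + np.exp(-(pbest_val - pbest_val.mean())))
          r1, r2 = rng.random((2, n_robots, 1))
          vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * w_rss[:, None] * (gbest - pos)
          pos = pos + np.clip(vel, -1.0, 1.0)    # limit robot step size
          val = rss(pos)
          better = val > pbest_val
          pbest[better], pbest_val[better] = pos[better], val[better]

      print("best estimate of target:", pbest[np.argmax(pbest_val)].round(2))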

  3. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high performance processors and advanced mathematical computations, it is possible to develop high performing simulators for complicated Multi Input Multi Output (MIMO) systems like quadruple tank systems, aircraft, boilers, etc. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification and lower order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.

  4. Binaural unmasking of multi-channel stimuli in bilateral cochlear implant users.

    Science.gov (United States)

    Van Deun, Lieselot; van Wieringen, Astrid; Francart, Tom; Büchner, Andreas; Lenarz, Thomas; Wouters, Jan

    2011-10-01

    Previous work suggests that bilateral cochlear implant users are sensitive to interaural cues if experimental speech processors are used to preserve accurate interaural information in the electrical stimulation pattern. Binaural unmasking occurs in adults and children when an interaural delay is applied to the envelope of a high-rate pulse train. Nevertheless, for speech perception, binaural unmasking benefits have not been demonstrated consistently, even with coordinated stimulation at both ears. The present study aimed at bridging the gap between basic psychophysical performance on binaural signal detection tasks on the one hand and binaural perception of speech in noise on the other hand. Therefore, binaural signal detection was expanded to multi-channel stimulation and biologically relevant interaural delays. A harmonic complex, consisting of three sinusoids (125, 250, and 375 Hz), was added to three 125-Hz-wide noise bands centered on the sinusoids. When an interaural delay of 700 μs was introduced, an average BMLD of 3 dB was established. Outcomes are promising in view of real-life benefits. Future research should investigate the generalization of the observed benefits for signal detection to speech perception in everyday listening situations and determine the importance of coordination of bilateral speech processors and accentuation of envelope cues.

  5. Multi-Language and Multi-Purpose Educational Tool for Kids

    DEFF Research Database (Denmark)

    Holmen, Hee; Valente, Andrea; Marchetti, E.

    2005-01-01

    ‘Crazipes’ is one of the prototype games within SMAALL, a multi-language and multi-purpose games project for young kids of age 3-5 years old. The main goal of SMAALL is to expose young learners in multi-purpose and multi-module games. In the prototype of Crazipes, the game is designed to teach fo...

  6. Multi-target-qubit unconventional geometric phase gate in a multi-cavity system.

    Science.gov (United States)

    Liu, Tong; Cao, Xiao-Zhi; Su, Qi-Ping; Xiong, Shao-Jie; Yang, Chui-Ping

    2016-02-22

    Cavity-based large scale quantum information processing (QIP) may involve multiple cavities and require performing various quantum logic operations on qubits distributed in different cavities. Geometric-phase-based quantum computing has drawn much attention recently, which offers advantages against inaccuracies and local fluctuations. In addition, multiqubit gates are particularly appealing and play important roles in QIP. We here present a simple and efficient scheme for realizing a multi-target-qubit unconventional geometric phase gate in a multi-cavity system. This multiqubit phase gate has a common control qubit but different target qubits distributed in different cavities, which can be achieved using a single-step operation. The gate operation time is independent of the number of qubits and only two levels for each qubit are needed. This multiqubit gate is generic, e.g., by performing single-qubit operations, it can be converted into two types of significant multi-target-qubit phase gates useful in QIP. The proposal is quite general, which can be used to accomplish the same task for a general type of qubits such as atoms, NV centers, quantum dots, and superconducting qubits.

  7. Multi-target consensus circle pursuit for multi-agent systems via a distributed multi-flocking method

    Science.gov (United States)

    Pei, Huiqin; Chen, Shiming; Lai, Qiang

    2016-12-01

    This paper studies the multi-target consensus pursuit problem of multi-agent systems. For solving the problem, a distributed multi-flocking method is designed based on partial information exchange, which is employed to realise the pursuit of multiple targets and the uniform distribution of the number of pursuing agents over the dynamic targets. Combining with the proposed circle formation control strategy, agents can adaptively choose the target to form the different circle formation groups accomplishing a multi-target pursuit. The speed state of pursuing agents in each group converges to the same value. A Lyapunov approach is utilised to analyse the stability of multi-agent systems. In addition, a sufficient condition is given for achieving the dynamic target consensus pursuit, which is then analysed. Finally, simulation results verify the effectiveness of the proposed approaches.

  8. Research on monitoring system of water resources in Shiyang River Basin based on Multi-agent

    Science.gov (United States)

    Zhao, T. H.; Yin, Z.; Song, Y. Z.

    2012-11-01

    The Shiyang River Basin is the most populous area among the Hexi inland river basins in Gansu province, with a relatively developed economy, the highest degree of development and utilization of water resources, the most prominent water conflicts, and the worst ecological environment problems; the contradiction between people and water in the basin is constantly aggravated. This paper combines multi-agent technology with a water resources monitoring system, establishing a management center, a telemetry agent federation, and the communication network between them, which together compose the Shiyang River Basin water resources monitoring system. By taking advantage of multi-agent system intelligence and communication coordination, the timeliness of basin water resources monitoring is improved.

  9. Research on monitoring system of water resources in Shiyang River Basin based on Multi-agent

    International Nuclear Information System (INIS)

    Zhao, T h; Yin, Z; Song, Y Z

    2012-01-01

    The Shiyang River Basin is the most populous area among the Hexi inland river basins in Gansu province, with a relatively developed economy, the highest degree of development and utilization of water resources, the most prominent water conflicts, and the worst ecological environment problems; the contradiction between people and water in the basin is constantly aggravated. This paper combines multi-agent technology with a water resources monitoring system, establishing a management center, a telemetry agent federation, and the communication network between them, which together compose the Shiyang River Basin water resources monitoring system. By taking advantage of multi-agent system intelligence and communication coordination, the timeliness of basin water resources monitoring is improved.

  10. Combining on-hardware prototyping and high-level simulation for DSE of multi-ASIP systems

    NARCIS (Netherlands)

    Meloni, P.; Pomata, S.; Raffo, L.; Piscitelli, R.; Pimentel, A.D.; McAllister, J.; Bhattacharyya, S.

    2012-01-01

    Modern heterogeneous multi-processor embedded systems very often expose to the designer a large number of degrees of freedom, related to the application partitioning/mapping and to the component- and system-level architecture composition. The number is even larger when the designer targets systems

  11. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  12. Multi-location gram-positive and gram-negative bacterial protein subcellular localization using gene ontology and multi-label classifier ensemble.

    Science.gov (United States)

    Wang, Xiao; Zhang, Jun; Li, Guo-Zheng

    2015-01-01

    It has become a very important and full of challenge task to predict bacterial protein subcellular locations using computational methods. Although there exist a lot of prediction methods for bacterial proteins, the majority of these methods can only deal with single-location proteins. But unfortunately many multi-location proteins are located in the bacterial cells. Moreover, multi-location proteins have special biological functions capable of helping the development of new drugs. So it is necessary to develop new computational methods for accurately predicting subcellular locations of multi-location bacterial proteins. In this article, two efficient multi-label predictors, Gpos-ECC-mPLoc and Gneg-ECC-mPLoc, are developed to predict the subcellular locations of multi-label gram-positive and gram-negative bacterial proteins respectively. The two multi-label predictors construct the GO vectors by using the GO terms of homologous proteins of query proteins and then adopt a powerful multi-label ensemble classifier to make the final multi-label prediction. The two multi-label predictors have the following advantages: (1) they improve the prediction performance of multi-label proteins by taking the correlations among different labels into account; (2) they ensemble multiple CC classifiers and further generate better prediction results by ensemble learning; and (3) they construct the GO vectors by using the frequency of occurrences of GO terms in the typical homologous set instead of using 0/1 values. Experimental results show that Gpos-ECC-mPLoc and Gneg-ECC-mPLoc can efficiently predict the subcellular locations of multi-label gram-positive and gram-negative bacterial proteins respectively. Gpos-ECC-mPLoc and Gneg-ECC-mPLoc can efficiently improve prediction accuracy of subcellular localization of multi-location gram-positive and gram-negative bacterial proteins respectively. The online web servers for Gpos-ECC-mPLoc and Gneg-ECC-mPLoc predictors are freely accessible
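
    The ensemble-of-classifier-chains step over GO-term frequency vectors can be sketched with scikit-learn as follows; random arrays stand in for real GO annotations and subcellular-location labels, and the base classifier is an assumption rather than the predictors' exact choice.

      # Hedged sketch: multi-label prediction with an ensemble of classifier chains
      # over GO-term frequency features. Random data stand in for real GO vectors.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.multioutput import ClassifierChain

      rng = np.random.default_rng(4)
      n_proteins, n_go_terms, n_locations = 300, 120, 5

      X = rng.poisson(0.3, (n_proteins, n_go_terms)).astype(float)   # GO term counts
      X /= X.sum(axis=1, keepdims=True) + 1e-9                       # frequencies, not 0/1 flags
      W = rng.standard_normal((n_go_terms, n_locations))
      Y = ((X @ W + rng.normal(0, 0.5, (n_proteins, n_locations))) > 0.5).astype(int)

      # Ensemble of chains: each chain orders the labels differently, so label
      # correlations are exploited; predictions are averaged and thresholded.
      chains = [ClassifierChain(LogisticRegression(max_iter=1000), order="random",
                                random_state=i) for i in range(10)]
      for chain in chains:
          chain.fit(X, Y)
      Y_prob = np.mean([chain.predict_proba(X) for chain in chains], axis=0)
      print("predicted multi-label matrix shape:", (Y_prob >= 0.5).shape)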

  13. Multi-model-based Access Control in Construction Projects

    Directory of Open Access Journals (Sweden)

    Frank Hilbert

    2012-04-01

    Full Text Available During the execution of large scale construction projects performed by Virtual Organizations (VO, relatively complex technical models have to be exchanged between the VO members. For linking the trade and transfer of these models, a so-called multi-model container format was developed. Considering the different skills and tasks of the involved partners, it is not necessary for them to know all the models in every technical detailing. Furthermore, the model size can lead to a delay in communication. In this paper an approach is presented for defining model cut-outs according to the current project context. Dynamic dependencies to the project context as well as static dependencies on the organizational structure are mapped in a context-sensitive rule. As a result, an approach for dynamic filtering of multi-models is obtained which ensures, together with a filtering service, that the involved VO members get a simplified view of complex multi-models as well as sufficient permissions depending on their tasks.

  14. Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data

    Science.gov (United States)

    Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted

    2013-01-01

    Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to an especially designed software tool.

  15. Collaborative-Hybrid Multi-Layer Network Control for Emerging Cyber-Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Tom [USC; Ghani, Nasir [UNM; Boyd, Eric [UCAID

    2010-08-31

    At a high level, there were four basic task areas identified for the Hybrid-MLN project. They are: o Multi-Layer, Multi-Domain, Control Plane Architecture and Implementation, including OSCARS layer2 and InterDomain Adaptation, Integration of LambdaStation and Terapaths with Layer2 dynamic provisioning, Control plane software release, Scheduling, AAA, security architecture, Network Virtualization architecture, Multi-Layer Network Architecture Framework Definition; o Heterogeneous DataPlane Testing; o Simulation; o Project Publications, Reports, and Presentations.

  16. When global rule reversal meets local task switching: The neural mechanisms of coordinated behavioral adaptation to instructed multi-level demand changes.

    Science.gov (United States)

    Shi, Yiquan; Wolfensteller, Uta; Schubert, Torsten; Ruge, Hannes

    2018-02-01

    Cognitive flexibility is essential to cope with changing task demands and often it is necessary to adapt to combined changes in a coordinated manner. The present fMRI study examined how the brain implements such multi-level adaptation processes. Specifically, on a "local," hierarchically lower level, switching between two tasks was required across trials while the rules of each task remained unchanged for blocks of trials. On a "global" level regarding blocks of twelve trials, the task rules could reverse or remain the same. The current task was cued at the start of each trial while the current task rules were instructed before the start of a new block. We found that partly overlapping and partly segregated neural networks play different roles when coping with the combination of global rule reversal and local task switching. The fronto-parietal control network (FPN) supported the encoding of reversed rules at the time of explicit rule instruction. The same regions subsequently supported local task switching processes during actual implementation trials, irrespective of rule reversal condition. By contrast, a cortico-striatal network (CSN) including supplementary motor area and putamen was increasingly engaged across implementation trials and more so for rule reversal than for nonreversal blocks, irrespective of task switching condition. Together, these findings suggest that the brain accomplishes the coordinated adaptation to multi-level demand changes by distributing processing resources either across time (FPN for reversed rule encoding and later for task switching) or across regions (CSN for reversed rule implementation and FPN for concurrent task switching). © 2017 Wiley Periodicals, Inc.

  17. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR compared with state-of-the-art algorithms.
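
    A drastically simplified, single-view sketch of the reconstruction idea is shown below: each test image is reconstructed in an RBF kernel space from the training images of one class at a time, using a plain ridge penalty in place of the paper's multi-view multi-sparsity regulariser, and the class with the smallest reconstruction error wins. The dataset and kernel parameters are illustrative.

      # Hedged sketch: single-view kernel reconstruction classification with a ridge
      # penalty (a simplification of MMKR's multi-view multi-sparsity model).
      import numpy as np
      from sklearn.datasets import load_digits
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import train_test_split

      X, y = load_digits(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      gamma, lam = 1e-3, 1e-2

      def recon_error(Xc, Xte):
          """Error of reconstructing each test point from class samples Xc in the RKHS."""
          Kcc = rbf_kernel(Xc, Xc, gamma=gamma)
          Kcx = rbf_kernel(Xc, Xte, gamma=gamma)                 # (n_c, n_test)
          alpha = np.linalg.solve(Kcc + lam * np.eye(len(Xc)), Kcx)
          kxx = np.ones(Xte.shape[0])                            # k(x, x) = 1 for RBF
          return kxx - 2 * np.sum(alpha * Kcx, axis=0) + np.sum(alpha * (Kcc @ alpha), axis=0)

      errors = np.stack([recon_error(Xtr[ytr == c], Xte) for c in np.unique(ytr)])
      pred = np.unique(ytr)[np.argmin(errors, axis=0)]
      print("accuracy:", (pred == yte).mean())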

  18. Multi-Aspect Group Formation using Facility Location Analysis

    NARCIS (Netherlands)

    Neshati, Mahmood; Beigy, Hamid; Hiemstra, Djoerd

    2012-01-01

    In this paper, we propose an optimization framework to retrieve an optimal group of experts to perform a given multi-aspect task/project. Each task needs a diverse set of skills and the group of assigned experts should be able to collectively cover all required aspects of the task. We consider three

  19. Measuring multi-configurational character by orbital entanglement

    Science.gov (United States)

    Stein, Christopher J.; Reiher, Markus

    2017-09-01

    One of the most critical tasks at the very beginning of a quantum chemical investigation is the choice of either a multi- or single-configurational method. Naturally, many proposals exist to define a suitable diagnostic of the multi-configurational character for various types of wave functions in order to assist this crucial decision. Here, we present a new orbital-entanglement-based multi-configurational diagnostic termed Zs(1). The correspondence of orbital entanglement and static (or non-dynamic) electron correlation permits the definition of such a diagnostic. We chose our diagnostic to meet important requirements such as well-defined limits for pure single-configurational and multi-configurational wave functions. The Zs(1) diagnostic can be evaluated from a partially converged, but qualitatively correct, and therefore inexpensive density matrix renormalisation group wave function as in our recently presented automated active orbital selection protocol. Its robustness and the fact that it can be evaluated at low cost make this diagnostic a practical tool for routine applications.

  20. The evaluation of multi-structure, multi-atlas pelvic anatomy features in a prostate MR lymphography CAD system

    Science.gov (United States)

    Meijs, M.; Debats, O.; Huisman, H.

    2015-03-01

    In prostate cancer, the detection of metastatic lymph nodes indicates progression from localized disease to metastasized cancer. The detection of positive lymph nodes is, however, a complex and time consuming task for experienced radiologists. Assistance of a two-stage Computer-Aided Detection (CAD) system in MR Lymphography (MRL) is not yet feasible due to the large number of false positives in the first stage of the system. By introducing a multi-structure, multi-atlas segmentation, using an affine transformation followed by a B-spline transformation for registration, the organ location is given by a mean density probability map. The atlas segmentation is semi-automatically drawn with ITK-SNAP, using Active Contour Segmentation. Each anatomic structure is identified by a label number. Registration is performed using Elastix, using Mutual Information and an Adaptive Stochastic Gradient optimization. The dataset consists of the MRL scans of ten patients, with lymph nodes manually annotated in consensus by two expert readers. The feature map of the CAD system consists of the Multi-Atlas and various other features (e.g. Normalized Intensity and multi-scale Blobness). The voxel-based Gentleboost classifier is evaluated using ROC analysis with cross validation. We show in a set of 10 studies that adding multi-structure, multi-atlas anatomical structure likelihood features improves the quality of the lymph node voxel likelihood map. Multiple structure anatomy maps may thus make MRL CAD more feasible.

  1. A New Multi-Sensor Track Fusion Architecture for Multi-Sensor Information Integration

    Science.gov (United States)

    2004-09-01

    Performing organization: Lockheed Martin Aeronautical Systems Company, Marietta, GA. ... tracking process and degrades the track accuracy. ARCHITECTURE OF MULTI-SENSOR TRACK FUSION MODEL: The Alpha

  2. Preparation and evaluation of highly drug-loaded fine globular granules using a multi-functional rotor processor.

    Science.gov (United States)

    Iwao, Yasunori; Kimura, Shin-Ichiro; Ishida, Masayuki; Mise, Ryohei; Yamada, Masaki; Namiki, Noriyuki; Noguchi, Shuji; Itai, Shigeru

    2015-01-01

    The manufacture of highly drug-loaded fine globular granules eventually applied for orally disintegrating tablets has been investigated using a unique multi-functional rotor processor with acetaminophen, which was used as a model drug substance. Experimental design and statistical analysis were used to evaluate potential relationships between three key operating parameters (i.e., the binder flow rate, atomization pressure and rotating speed) and a series of associated micromeritics (i.e., granule mean size, proportion of fine particles (106-212 µm), flowability, roundness and water content). The results of multiple linear regression analysis revealed several trends, including (1) the binder flow rate and atomization pressure had significant positive and negative effects on the granule mean size value, Carr's flowability index, granular roundness and water content, respectively; (2) the proportion of fine particles was positively affected by the product of interaction between the binder flow rate and atomization pressure; and (3) the granular roundness was negatively and positively affected by the product of interactions between the binder flow rate and the atomization pressure, and the binder flow rate and rotating speed, respectively. The results of this study led to the identification of optimal operating conditions for the preparation of granules, and could therefore be used to provide important information for the development of processes for the manufacture of highly drug-loaded fine globular granules.

  3. Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning.

    Science.gov (United States)

    Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve

    2017-12-01

    In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (MIL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse -graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the MIL. Experiments and analyses in many practical applications prove the effectiveness of the M IL.

  4. Modeling activity recognition of multi resident using label combination of multi label classification in smart home

    Science.gov (United States)

    Mohamed, Raihani; Perumal, Thinagaran; Sulaiman, Md Nasir; Mustapha, Norwati; Zainudin, M. N. Shah

    2017-10-01

    Owing to human-centric concerns and its non-obtrusive nature, ambient sensor technology has been selected, accepted and embedded in smart environments in a resilient style. Everyday human activities are gradually becoming more complex, which complicates the inference of activities when multiple residents share the same smart environment. Current solutions focus on separate models for residents, activities and interactions. Some studies use data association and extra auxiliary graphical nodes to model human tracking information in an environment, and some produce a separate framework to incorporate the auxiliary interaction feature model. Thus, recognizing the activities and which resident performs each activity at the same time in the smart home is vital for smart home development and future applications. This paper addresses the above issue by considering a simple and efficient method using the multi-label classification framework. This effort eliminates time-consuming steps and simplifies a lot of pre-processing tasks compared with previous approaches. Application to the multi-resident multi-label learning problem in smart homes shows that Label Combination (LC) using a Decision Tree (DT) as base classifier can tackle the above problems.
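
    The Label Combination (label power-set) strategy with a decision tree base classifier can be sketched in a few lines of scikit-learn; the sensor matrix and the two per-resident activity labels below are synthetic placeholders, not the smart-home data used in the study.

      # Hedged sketch: Label Combination (label power-set) multi-label classification
      # with a decision tree, as used for multi-resident activity recognition.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(5)
      n_events, n_sensors = 500, 20
      X = rng.integers(0, 2, (n_events, n_sensors))              # ambient sensor readings
      # Two labels per event: activity of resident A and activity of resident B (toy rules).
      Y = np.column_stack([X[:, :5].sum(axis=1) % 3,
                           X[:, 5:12].sum(axis=1) % 4])

      # Label Combination: encode each label vector as one combined class...
      combo, y_combo = np.unique(Y, axis=0, return_inverse=True)

      clf = DecisionTreeClassifier(random_state=0).fit(X, y_combo)

      # ...and decode predictions back into the per-resident labels.
      Y_pred = combo[clf.predict(X)]
      print("exact-match ratio:", (Y_pred == Y).all(axis=1).mean())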

  5. Paper Prototyping: The Surplus Merit of a Multi-Method Approach

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2015-07-01

    Full Text Available This article describes a multi-method approach for usability testing. The approach combines paper prototyping and think-aloud with two supplemental methods: advanced scribbling and a handicraft task. The method of advanced scribbling instructs the participants to use different colors for marking important, unnecessary and confusing elements in a paper prototype. In the handicraft task the participants build a paper prototype of their ideal version. Both methods deliver additional information on the needs and expectations of the potential users and provide helpful indicators for clarifying complex or contradictory findings. The multi-method approach and its surplus benefit are illustrated by a pilot study on the redesign of the homepage of a library 2.0. The findings provide positive evidence for the applicability of the advanced scribbling and the handicraft task as well as for the surplus merit of the multi-method approach. The article closes with a discussion and outlook. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150379

  6. Design of Smart Multi-Functional Integrated Aviation Photoelectric Payload

    Science.gov (United States)

    Zhang, X.

    2018-04-01

    To support small UAVs on reconnaissance missions, we have developed a smart multi-functional integrated aviation photoelectric payload. The payload weighs only 1 kg and comprises a two-axis stabilized platform with a visible-light task payload, an infrared task payload, laser pointers and a video tracker, allowing it to complete reconnaissance tasks over the target area in both the visible and infrared bands. Its light weight, small size, full feature set and high integration greatly reduce the constraints on the UAV platform carrying the payload, making it suitable for a wider range of applications. Users of this smart multi-functional integrated aviation photoelectric payload can therefore better perform tasks such as pinpointing ground targets, artillery calibration, strike-damage assessment and customs surveillance.

  7. Locality-Aware Task Scheduling and Data Distribution for OpenMP Programs on NUMA Systems and Manycore Processors

    Directory of Open Access Journals (Sweden)

    Ananya Muddukrishna

    2015-01-01

    Full Text Available Performance degradation due to nonuniform data access latencies has worsened on NUMA systems and can now be felt on-chip in manycore processors. Distributing data across NUMA nodes and manycore processor caches is necessary to reduce the impact of nonuniform latencies. However, techniques for distributing data are error-prone and fragile and require low-level architectural knowledge. Existing task scheduling policies favor quick load-balancing at the expense of locality and ignore NUMA node/manycore cache access latencies while scheduling. Locality-aware scheduling, in conjunction with or as a replacement for existing scheduling, is necessary to minimize NUMA effects and sustain performance. We present a data distribution and locality-aware scheduling technique for task-based OpenMP programs executing on NUMA systems and manycore processors. Our technique relieves the programmer from thinking of NUMA system/manycore processor architecture details by delegating data distribution to the runtime system and uses task data dependence information to guide the scheduling of OpenMP tasks to reduce data stall times. We demonstrate our technique on a four-socket AMD Opteron machine with eight NUMA nodes and on the TILEPro64 processor and identify that data distribution and locality-aware task scheduling improve performance up to 69% for scientific benchmarks compared to default policies and yet provide an architecture-oblivious approach for programmers.

  8. Very wide register : an asymmetric register file organization for low power embedded processors.

    NARCIS (Netherlands)

    Raghavan, P.; Lambrechts, A.; Jayapala, M.; Catthoor, F.; Verkest, D.T.M.L.; Corporaal, H.

    2007-01-01

    In current embedded systems processors, multi-ported register files are one of the most power hungry parts of the processor, even when they are clustered. This paper presents a novel register file architecture, which has single ported cells and asymmetric interfaces to the memory and to the

  9. A meta-ontological framework for multi-agent systems design

    OpenAIRE

    Sokolova, Marina; Fernández Caballero, Antonio

    2007-01-01

    The paper introduces an approach to using a meta-ontology framework for complex multi-agent systems design, and illustrates it in an application related to ecological-medical issues. The described shared ontology is pooled from private sub-ontologies, which represent a problem area ontology, an agent ontology, a task ontology, an ontology of interactions, and the multi-agent system architecture ontology.

  10. IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.

    Science.gov (United States)

    Ha, Vi Q; Lykotrafitis, George

    2016-12-08

    We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
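
    The message-passing pattern mentioned above (inter-processor communication for long-range particle interactions) can be illustrated with a minimal mpi4py exchange between neighbouring ranks. This is only a generic sketch of the idea; IMPETUS itself is a C++ program and its actual communication scheme is not shown here:

```python
# Generic nearest-neighbour exchange with mpi4py (run: mpiexec -n 4 python exchange.py)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a block of hypothetical particle positions
local = np.random.rand(5, 3)

# Send the local block to the next rank and receive from the previous one
dest, source = (rank + 1) % size, (rank - 1) % size
incoming = np.empty_like(local)
comm.Sendrecv(sendbuf=local, dest=dest, recvbuf=incoming, source=source)

print(f"rank {rank}: received {len(incoming)} particles from rank {source}")
```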

  11. Protocol to assess the neurophysiology associated with multi-segmental postural coordination

    International Nuclear Information System (INIS)

    Lomond, Karen V; Henry, Sharon M; Jacobs, Jesse V; Hitt, Juvena R; Horak, Fay B; Cohen, Rajal G; Schwartz, Daniel; Dumas, Julie A; Naylor, Magdalena R; Watts, Richard; DeSarno, Michael J

    2013-01-01

    Anticipatory postural adjustments (APAs) stabilize potential disturbances to posture caused by movement. Impaired APAs are common with disease and injury. Brain functions associated with generating APAs remain uncertain due to a lack of paired tasks that require similar limb motion from similar postural orientations, but differ in eliciting an APA while also being compatible with brain imaging techniques (e.g., functional magnetic resonance imaging; fMRI). This study developed fMRI-compatible tasks differentiated by the presence or absence of APAs during leg movement. Eighteen healthy subjects performed two leg movement tasks, supported leg raise (SLR) and unsupported leg raise (ULR), to elicit isolated limb motion (no APA) versus multi-segmental coordination patterns (including APA), respectively. Ground reaction forces under the feet and electromyographic activation amplitudes were assessed to determine the coordination strategy elicited for each task. Results demonstrated that the ULR task elicited a multi-segmental coordination that was either minimized or absent in the SLR task, indicating that it would serve as an adequate control task for fMRI protocols. A pilot study with a single subject performing each task in an MRI scanner demonstrated minimal head movement in both tasks and brain activation patterns consistent with an isolated limb movement for the SLR task versus multi-segmental postural coordination for the ULR task. (note)

  12. Multi-criteria objective based climate change impact assessment for multi-purpose multi-reservoir systems

    Science.gov (United States)

    Müller, Ruben; Schütze, Niels

    2014-05-01

    Water resources systems with reservoirs are expected to be sensitive to climate change. Assessment studies that analyze the impact of climate change on the performance of reservoirs can be divided in two groups: (1) Studies that simulate the operation under projected inflows with the current set of operational rules. Due to non-adapted operational rules the future performance of these reservoirs can be underestimated and the impact overestimated. (2) Studies that optimize the operational rules for best adaption of the system to the projected conditions before the assessment of the impact. The latter allows for estimating more realistically future performance and adaption strategies based on new operation rules are available if required. Multi-purpose reservoirs serve various, often conflicting functions. If all functions cannot be served simultaneously at a maximum level, an effective compromise between multiple objectives of the reservoir operation has to be provided. Yet under climate change the historically preferred compromise may no longer be the most suitable compromise in the future. Therefore a multi-objective based climate change impact assessment approach for multi-purpose multi-reservoir systems is proposed in the study. Projected inflows are provided in a first step using a physically based rainfall-runoff model. In a second step, a time series model is applied to generate long-term inflow time series. Finally, the long-term inflow series are used as driving variables for a simulation-based multi-objective optimization of the reservoir system in order to derive optimal operation rules. As a result, the adapted Pareto-optimal set of diverse best compromise solutions can be presented to the decision maker in order to assist him in assessing climate change adaption measures with respect to the future performance of the multi-purpose reservoir system. The approach is tested on a multi-purpose multi-reservoir system in a mountainous catchment in Germany. A

  13. A combinatorial approach to multi-skill workforce scheduling

    NARCIS (Netherlands)

    Firat, M.; Hurkens, C.A.J.

    2010-01-01

    This paper deals with scheduling complex tasks with an inhomogeneous set of resources. The problem is to assign technicians to tasks with multi-level skill requirements. Here the requirements are merely the presence of a set of technicians that possess the necessary capabilities. An additional

  14. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  15. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
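
    The shared-trunk-plus-heterogeneous-heads structure described for DMTL can be sketched, in a drastically simplified form, as a small PyTorch module. The layer sizes and attribute heads below are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class TinyDMTL(nn.Module):
    """Toy shared-feature CNN with heterogeneous attribute heads (illustrative only)."""
    def __init__(self):
        super().__init__()
        # Shared feature learning for all attributes
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Category-specific heads: ordinal (age), nominal (gender), nominal (race)
        self.age_head = nn.Linear(32, 1)      # regression-style ordinal output
        self.gender_head = nn.Linear(32, 2)   # binary classification
        self.race_head = nn.Linear(32, 4)     # multi-class classification

    def forward(self, x):
        f = self.trunk(x)
        return self.age_head(f), self.gender_head(f), self.race_head(f)

model = TinyDMTL()
age, gender, race = model(torch.randn(8, 3, 64, 64))
print(age.shape, gender.shape, race.shape)   # (8, 1) (8, 2) (8, 4)
```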

  16. Timing system for multi-bunch/multi-train operation at ATF

    International Nuclear Information System (INIS)

    Naito, T.; Hayano, H.; Urakawa, J.; Imai, T.

    2000-01-01

    A timing system has been constructed for multi-bunch/multi-train operation at KEK-ATF. The linac accelerates a multi-bunch beam of 20 bunches with 2.8 ns spacing. The Damping Ring stores up to 5 multi-bunch trains. The timing system is required to provide flexible operation modes and bucket selection. A personal computer is used for manipulating the timing. The performance of the kicker magnets at injection/extraction is a key issue for multi-train operation. The hardware and the test results are presented. (author)

  17. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  18. Complex engineering objects construction using Multi-D innovative technology

    International Nuclear Information System (INIS)

    Agafonov, Alexey

    2013-01-01

    Multi-D technology is an integrated innovative project management system for a construction of complex engineering objects based on a construction process simulation using an intellectual 3D model. Multi-D technology includes: • The unified schedule of E+P+C; • The schedule of loading of human resources, machines & mechanisms; • The budget of expenses and the income integrated with the schedule; • 3D model; • Multi-D model; • Weekly-daily tasks (with 4th level schedules); • Control system of interaction of Customer-EPC(m) company - Contractors; • Change and configuration management system

  19. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. Particularly, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of dB loss in signal to noise ratio at a bit error rate (BER) of 10(exp -6).

  20. Multi-agent cooperation rescue algorithm based on influence degree and state prediction

    Science.gov (United States)

    Zheng, Yanbin; Ma, Guangfu; Wang, Linlin; Xi, Pengxue

    2018-04-01

    Aiming at multi-agent cooperative rescue in disasters, a multi-agent cooperative rescue algorithm based on influence degree and state prediction is proposed. First, based on the influence of scene information on the collaborative task, an influence degree function is used to filter the information. Second, the selected information is used to predict the state of the system and the behavior of the agents. Finally, the prediction results are used to guide the cooperative behavior of the agents and improve the efficiency of individual collaboration. The simulation results show that this algorithm can effectively solve the multi-agent cooperative rescue problem and ensure efficient completion of the task.

  1. Telemetry Timing Analysis for Image Reconstruction of Kompsat Spacecraft

    Directory of Open Access Journals (Sweden)

    Jin-Ho Lee

    2000-06-01

    Full Text Available The KOMPSAT (KOrea Multi-Purpose SATellite has two optical imaging instruments called EOC (Electro-Optical Camera and OSMI (Ocean Scanning Multispectral Imager. The image data of these instruments are transmitted to ground station and restored correctly after post-processing with the telemetry data transferred from KOMPSAT spacecraft. The major timing information of the KOMPSAT is OBT (On-Board Time which is formatted by the on-board computer of the spacecraft, based on 1Hz sync. pulse coming from the GPS receiver involved. The OBT is transmitted to ground station with the house-keeping telemetry data of the spacecraft while it is distributed to the instruments via 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data would have direct relation to the image data of the instruments, which should be well explained to get a more accurate image. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis for the correct restoration of the EOC and OSMI image data at ground station.

  2. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Full Text Available Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.
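
    The general idea, scoring labels first and then ranking features only against the influential label subset, can be sketched as follows. The entropy-based label scoring and mutual-information feature ranking below are an illustrative approximation, not the authors' exact algorithm:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_features(X, Y, n_labels=5, n_features=10, seed=0):
    """Keep the highest-entropy labels, then rank features by the summed
    mutual information with that label subset (rough sketch only)."""
    # Empirical entropy of each binary label column
    p = np.clip(Y.mean(axis=0), 1e-12, 1 - 1e-12)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    top_labels = np.argsort(entropy)[::-1][:n_labels]

    # Accumulate mutual information between features and the selected labels
    score = np.zeros(X.shape[1])
    for j in top_labels:
        score += mutual_info_classif(X, Y[:, j], random_state=seed)
    return np.argsort(score)[::-1][:n_features]

# Synthetic data: 200 samples, 40 features, 12 binary labels
rng = np.random.default_rng(0)
X = rng.random((200, 40))
Y = (rng.random((200, 12)) > 0.7).astype(int)
print(select_features(X, Y))
```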

  3. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement on the single-temporal one. The study is framed in the context of probabilistic Bayesian decision-making that is appropriate to take rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents a fundamental knowledge on future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead-time and when, more likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operative deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected on a time series of six years. MCP-MT improves over the original models' forecasts: the peak overestimation and the rising limb delayed forecast, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with a reduced mean error on peak stage from 45 to 5 cm and an increased coefficient of persistence from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.

  4. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  5. Correlation between a 2D Channelized Hotelling Observer and Human Observers in a Low-contrast Detection Task with Multi-slice Reading in CT

    Science.gov (United States)

    Yu, Lifeng; Chen, Baiyu; Kofler, James M.; Favazza, Christopher P.; Leng, Shuai; Kupinski, Matthew A.; McCollough, Cynthia H.

    2017-01-01

    Purpose Model observers have been successfully developed and used to assess the quality of static 2D CT images. However, radiologists typically read images by paging through multiple 2D slices (i.e. multi-slice reading). The purpose of this study was to correlate human and model observer performance in a low-contrast detection task performed using both 2D and multi-slice reading, and to determine if the 2D model observer still correlate well with human observer performance in multi-slice reading. Methods A phantom containing 18 low-contrast spheres (6 sizes × 3 contrast levels) was scanned on a 192-slice CT scanner at 5 dose levels (CTDIvol = 27, 13.5, 6.8, 3.4, and 1.7 mGy), each repeated 100 times. Images were reconstructed using both filtered-backprojection (FBP) and an iterative reconstruction (IR) method (ADMIRE, Siemens). A 3D volume of interest (VOI) around each sphere was extracted and placed side-by-side with a signal-absent VOI to create a 2-alternative forced choice (2AFC) trial. Sixteen 2AFC studies were generated, each with 100 trials, to evaluate the impact of radiation dose, lesion size and contrast, and reconstruction methods on object detection. In total, 1600 trials were presented to both model and human observers. Three medical physicists acted as human observers and were allowed to page through the 3D volumes to make a decision for each 2AFC trial. The human observer performance was compared with the performance of a multi-slice channelized Hotelling observer (CHO_MS), which integrates multi-slice image data, and with the performance of previously validated CHO, which operates on static 2D images (CHO_2D). For comparison, the same 16 2AFC studies were also performed in a 2D viewing mode by the human observers and compared with the multi-slice viewing performance and the two CHO models. Results Human observer performance was well correlated with the CHO_2D performance in the 2D viewing mode (Pearson product-moment correlation coefficient R=0
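
    The 2D CHO used for comparison can be summarized in a short NumPy sketch: images are projected onto a small set of channels, a Hotelling template is formed in channel space, and the proportion of correct 2AFC decisions is counted. The Gaussian channels and synthetic images below are illustrative assumptions, not the study's phantom data:

```python
import numpy as np

def cho_2afc_pc(signal_imgs, noise_imgs, channels):
    """Simplified 2D channelized Hotelling observer on a 2AFC task.
    signal_imgs, noise_imgs: (n, H, W); channels: (H, W, n_ch)."""
    n, H, W = signal_imgs.shape
    U = channels.reshape(H * W, -1)                 # channel matrix
    vs = signal_imgs.reshape(n, -1) @ U             # channel outputs, signal present
    vn = noise_imgs.reshape(n, -1) @ U              # channel outputs, signal absent

    # Hotelling template in channel space: w = S^-1 (mean_s - mean_n)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))

    # Fraction of 2AFC trials in which the signal-present image scores higher
    return np.mean(vs @ w > vn @ w)

# Tiny synthetic example with three Gaussian channels (hypothetical parameters)
rng = np.random.default_rng(0)
H = W = 32
yy, xx = np.mgrid[:H, :W]
r2 = (yy - H / 2) ** 2 + (xx - W / 2) ** 2
channels = np.stack([np.exp(-r2 / (2 * s ** 2)) for s in (2, 4, 8)], axis=-1)
noise = rng.normal(0, 1, (200, H, W))
signal = noise + 0.5 * np.exp(-r2 / 18)[None]
print("proportion correct:", cho_2afc_pc(signal, noise, channels))
```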

  6. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of dB loss in SNR at the probability of an incorrect decoding for a block of 10(exp -6). Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds, bandwidth efficiency, coding gain, and decoding complexity.

  7. Analysis and synthesis of multi-qubit, multi-mode quantum devices

    Energy Technology Data Exchange (ETDEWEB)

    Solgun, Firat

    2015-03-27

    In this thesis we propose new methods in multi-qubit multi-mode circuit quantum electrodynamics (circuit-QED) architectures. First we describe a direct parity measurement method for three qubits, which can be realized in 2D circuit-QED with a possible extension to four qubits in a 3D circuit-QED setup for the implementation of the surface code. In Chapter 3 we show how to derive Hamiltonians and compute relaxation rates of the multi-mode superconducting microwave circuits consisting of single Josephson junctions using an exact impedance synthesis technique (the Brune synthesis) and applying previous formalisms for lumped element circuit quantization. In the rest of the thesis we extend our method to multi-junction (multi-qubit) multi-mode circuits through the use of state-space descriptions which allows us to quantize any multiport microwave superconducting circuit with a reciprocal lossy impedance response.

  8. Optimization of the coherence function estimation for multi-core central processing unit

    Science.gov (United States)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples, and it is analyzed with respect to its software implementation and computational issues. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution relative to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken: acceleration indicators and average CPU utilization improved significantly, reflecting the high degree of parallelism of the implemented routines. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
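
    Since the coherence function gamma^2_xy(f) = |P_xy(f)|^2 / (P_xx(f) P_yy(f)) is computed independently for each channel pair, the evaluation parallelizes naturally across cores. The sketch below uses SciPy and a process pool to illustrate the idea; it is not the authors' optimized implementation, and the sampling rate and data are hypothetical:

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import coherence

FS = 10_000                           # sampling rate in Hz (hypothetical)
np.random.seed(0)
signals = np.random.randn(8, 2**16)   # 8 synthetic vibration channels

def pair_coherence(pair):
    i, j = pair
    f, Cxy = coherence(signals[i], signals[j], fs=FS, nperseg=4096)
    return (i, j), Cxy.max()

if __name__ == "__main__":
    pairs = [(i, j) for i in range(len(signals)) for j in range(i + 1, len(signals))]
    with Pool() as pool:              # one worker per CPU core by default
        results = pool.map(pair_coherence, pairs)
    print(f"computed coherence for {len(results)} channel pairs")
```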

  9. Artificial intelligence for multi-mission planetary operations

    Science.gov (United States)

    Atkinson, David J.; Lawson, Denise L.; James, Mark L.

    1990-01-01

    A brief introduction is given to an automated system called the Spacecraft Health Automated Reasoning Prototype (SHARP). SHARP is designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for evaluation of the prototype in a real-time operations setting during the Voyager spacecraft encounter with Neptune in August, 1989. The preliminary results of the SHARP project and plans for future application of the technology are discussed.

  10. Multi-view clustering via multi-manifold regularized non-negative matrix factorization.

    Science.gov (United States)

    Zong, Linlin; Zhang, Xianchao; Zhao, Long; Yu, Hong; Zhao, Qianli

    2017-04-01

    Non-negative matrix factorization based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, non-negative matrix factorization fails to preserve the locally geometrical structure of the data space. In this paper, we propose a multi-manifold regularized non-negative matrix factorization framework (MMNMF) which can preserve the locally geometrical structure of the manifolds for multi-view clustering. MMNMF incorporates consensus manifold and consensus coefficient matrix with multi-manifold regularization to preserve the locally geometrical structure of the multi-view data space. We use two methods to construct the consensus manifold and two methods to find the consensus coefficient matrix, which leads to four instances of the framework. Experimental results show that the proposed algorithms outperform existing non-negative matrix factorization based algorithms for multi-view clustering. Copyright © 2017 Elsevier Ltd. All rights reserved.
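
    A core building block of such methods is NMF with a graph (manifold) regularizer on the coefficient matrix. The following NumPy sketch shows standard GNMF-style multiplicative updates for a single view and a single affinity graph; the full MMNMF consensus machinery across views is not reproduced here:

```python
import numpy as np

def graph_regularized_nmf(X, A, k=5, lam=0.1, iters=200, seed=0):
    """Single-view NMF X ~ U @ V.T with a graph regularizer on V
    (GNMF-style multiplicative updates). A is the sample affinity matrix."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(A.sum(axis=1))            # degree matrix of the affinity graph
    eps = 1e-9
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * A @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V

# Toy "view": 50 non-negative features for 30 samples, plus a symmetric affinity
rng = np.random.default_rng(1)
X = rng.random((50, 30))
A = (rng.random((30, 30)) > 0.8).astype(float)
A = np.maximum(A, A.T)
U, V = graph_regularized_nmf(X, A)
print(U.shape, V.shape)                    # (50, 5) (30, 5)
```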

  11. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    Science.gov (United States)

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
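
    The flavour of a pipelined task graph, with tasks connected by queues so that stages overlap in time, can be conveyed with a tiny Python sketch. This illustrates only the general pattern, not the actual HTGS C++ API:

```python
import queue
import threading

SENTINEL = None

def stage(fn, q_in, q_out):
    """Run one pipeline task: pull work, process it, push the result downstream."""
    while True:
        item = q_in.get()
        if item is SENTINEL:
            q_out.put(SENTINEL)
            break
        q_out.put(fn(item))

# A toy three-queue, two-stage graph: produce -> double -> increment
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)),
]
for t in threads:
    t.start()
for tile in range(10):        # e.g. image tiles streamed into the graph
    q1.put(tile)
q1.put(SENTINEL)
for t in threads:
    t.join()

results = []
while True:
    item = q3.get()
    if item is SENTINEL:
        break
    results.append(item)
print(results)                # [1, 3, 5, ..., 19]
```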

  12. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, non-dominated sorting based genetic algorithm (NSGA - II) is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics namely, the number of runs, the maximum run length, the mean run sum and the mean run length are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
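
    At the heart of NSGA-II is non-dominated sorting of candidate solutions. A minimal sketch of extracting the first Pareto front for a minimization problem is shown below; the objective values are hypothetical placeholders, and the full S-O framework is of course much richer:

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives to be minimized)."""
    F = np.asarray(objectives, dtype=float)
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if j is no worse in every objective and better in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical two-objective values, e.g. (deficit-run-sum error, statistic mismatch)
objs = [[3.0, 7.0], [2.0, 8.0], [4.0, 4.0], [3.5, 7.5], [5.0, 3.0]]
print(pareto_front(objs))   # -> [0 1 2 4]: the non-dominated compromise solutions
```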

  13. Biomorphic Multi-Agent Architecture for Persistent Computing

    Science.gov (United States)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  14. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    Science.gov (United States)

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely under studied, and represents an area where further research is needed to…

  15. The bit slice micro-processor 'GESPRO' as a project in the UA2 experiment

    CERN Document Server

    Becam, C; Delanghe, J; Fest, H M; Lecoq, J; Martin, H; Mencik, M; MerkeI, B; Meyer, J M; Perrin, M; Plothow, H; Rampazzo, J P; Schittly, A

    1981-01-01

    The bit slice micro-processor GESPRO is a CAMAC module plugged into a standard Elliot system crate via which it communicates as a slave with its host computer. It has full control of CAMAC as a master unit. GESPRO is a 24 bit machine with multi-mode memory addressing capacity of 64K words. The micro-processor structure uses 5 buses including pipe-line registers to mask access time and 16 interrupt levels. The micro-program memory capacity is 2K (RAM) words of 48 bits each. A special hardwired module allows floating point, as well as integer, multiplication of 24*24 bits, result in 48 bits, in about 200 ns. This micro-processor could be used in the UA2 data acquisition chain and trigger system for the following tasks: (a) online data reduction, i.e. to read DURANDAL, process the information resulting in accepting or rejecting the event; (b) readout and analysis of the accepted data; (c) preprocess the data. The UA2 version of GESPRO is under construction, programs and micro-programs are under development. Hard...

  16. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.

  17. Software Product Lines for Multi-Cloud Microservices-Based Applications

    OpenAIRE

    Sousa , Gustavo; Rudametkin , Walter; Duchien , Laurence

    2016-01-01

    International audience; Multi-cloud computing is the use of resources and services from multiple independent cloud providers. It is used to avoid vendor lock-in, comply with location regulations, and optimize reliability, performance and costs. Microservices is an architectural style becoming increasingly used in cloud computing as it allows for better resources usage. However, building multi-cloud systems is a very complex and time consuming task, which calls for automation and supporting to...

  18. DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors

    Directory of Open Access Journals (Sweden)

    Kaufmann Michael

    2004-09-01

    Full Text Available Abstract Background Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Results Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. Conclusions By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
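
    Because the pair-wise alignments are mutually independent, distributing them over processors is embarrassingly parallel. The sketch below shows the distribution pattern with a Python process pool and a trivial placeholder score standing in for a DIALIGN-style segment alignment (the sequences are made up):

```python
from itertools import combinations
from multiprocessing import Pool

SEQS = {"s1": "ACGTGGA", "s2": "ACGTTGA", "s3": "ACGAGGA", "s4": "TCGTGGA"}

def align_pair(pair):
    """Placeholder pair-wise score (identity count); a real segment-based
    alignment of the two sequences would run here instead."""
    a, b = pair
    score = sum(x == y for x, y in zip(SEQS[a], SEQS[b]))
    return (a, b), score

if __name__ == "__main__":
    pairs = list(combinations(SEQS, 2))
    with Pool() as pool:              # distribute independent pairs to workers
        pair_scores = dict(pool.map(align_pair, pairs))
    print(pair_scores)                # input for the later multiple-alignment step
```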

  19. The software design of multi-branch, multi-point remote monitoring system for temperature measurement based on MSP430 and DS18B20

    International Nuclear Information System (INIS)

    Yu Jun; Yan Yu

    2009-01-01

    This paper presents a system that can acquire remote temperature measurement data from 40 monitoring points through the RS-232 serial port and the intranet. The system hardware consists of TI's MSP430F149 mixed-signal processor and a UA7000A network module. Digital DS18B20 temperature sensors are used, so the structure is simple and easy to expand; the sensors send out the temperature data directly, and the MSP430F149 offers ultra-low power consumption and a high degree of integration. Built around the MSP430F149, the multi-branch, multi-point temperature measurement system is powerful, simple in structure, highly reliable and strongly resistant to interference. The client software, designed in the Microsoft Visual C++ 6.0 environment, is user-friendly and easy to use. The monitoring system performs real-time remote monitoring of the 40 temperature measurement points across its 4 branches. (authors)

  20. Structural analysis of ITER multi-purpose deployer

    International Nuclear Information System (INIS)

    Manuelraj, Manoah Stephen; Dutta, Pramit; Gotewal, Krishan Kumar; Rastogi, Naveen; Tesini, Alessandro; Choi, Chang-Hwan

    2016-01-01

    Highlights: • System modelling for structural analysis of the Multi-Purpose Deployer (MPD). • Finite element modeling of the Multi-Purpose Deployer (MPD). • Static, modal and seismic response analysis of the Multi-Purpose Deployer (MPD). • Iterative structural analysis and design update to satisfy the structural criteria. • Modal analysis for various kinematic configurations. • Reaction force calculations on the interfacing systems. - Abstract: The Multi-Purpose Deployer (MPD) is a general purpose ITER in-vessel remote handling (RH) system. The main handling equipment, known as the MPD Transporter, consists of a series of linked bodies, which provide anchoring to the vacuum vessel port and an articulated multi-degree of freedom motion to perform various in-vessel maintenance tasks. During the in-vessel operations, the structural integrity of the system should be guaranteed against various operational and seismic loads. This paper presents the structural analysis results of the concept design of the MPD Transporter considering the seismic events. Static structural, modal and frequency response spectrum analyses have been performed to verify the structural integrity of the system, and to provide reaction forces to the interfacing systems such as vacuum vessel and cask. Iterative analyses and design updates are carried out based on the reference design of the system to improve the structural behavior of the system. The frequency responses of the system in various kinematics and payloads are assessed.

  1. Multi-kilowatt modularized spacecraft power processing system development

    International Nuclear Information System (INIS)

    Andrews, R.E.; Hayden, J.H.; Hedges, R.T.; Rehmann, D.W.

    1975-07-01

    A review of existing information pertaining to spacecraft power processing systems and equipment was accomplished with a view towards applicability to the modularization of multi-kilowatt power processors. Power requirements for future spacecraft were determined from the NASA mission model-shuttle systems payload data study which provided the limits for modular power equipment capabilities. Three power processing systems were compared to evaluation criteria to select the system best suited for modularity. The shunt regulated direct energy transfer system was selected by this analysis for a conceptual design effort which produced equipment specifications, schematics, envelope drawings, and power module configurations

  2. Component Functional Allocations of the ESF Multi-loop Controller for the KNICS ESF-CCS Design

    International Nuclear Information System (INIS)

    Hur, Seop; Choi, Jong Kyun; Kim, Dong Hoon; Kim, Ho; Kim, Seong Tae

    2006-01-01

    The safety-related components in nuclear power plants are traditionally controlled by single-loop controllers. Traditional single-loop controller systems utilize dedicated processors for each component, but component independence is compromised through the sharing of power supplies, auxiliary logic modules and auxiliary I/O cards. In the new design of the ESF-CCS, multi-loop controllers with data networks are widely used. Since components are assigned to ESF-CCS functional groups in a manner consistent with their process relationship, the effects of failures are predictable and manageable. Therefore, the key issue in the design of the multi-loop controller is to allocate the components to each multi-loop controller through plant and function analysis and grouping. This paper deals with an ESF component functional allocation which is performed through allocation criteria and a fault analysis

  3. A Theory of Tax Avoidance - Managerial Incentives for Tax Planning in a Multi-Task Principal-Agent Model

    OpenAIRE

    Ewert, Ralf; Niemann, Rainer

    2014-01-01

    We derive determinants of tax avoidance by means of a multi-task principal-agent model. We extend prevailing models by integrating both corporate and individual income taxation as well as by including tax planning effort in the agent’s action portfolio. Our model shows novel and apparently paradoxical results regarding the impact of increased tax rates on efforts, risks, and incentive schemes. First, the principal’s after-tax profit can increase with a higher corporate tax rate. Second, t...

  4. A diagram retrieval method with multi-label learning

    Science.gov (United States)

    Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi

    2015-01-01

    In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of compare operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experiment results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.

  5. Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian

    2014-01-01

    This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with ...... per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s....

  6. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    Full Text Available In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization "ATMO" that combines the flexibility of multi-objective fitness functions with the power of a Binary Particle Swarm Optimization algorithm "BPSO", for searching the "optimum" number of the thresholds and simultaneously the optimal thresholds of three criteria: the between-class variances criterion, the minimum error criterion and the entropy criterion. Some examples of test images are presented to compare our segmentation method, based on the multi-objective optimization approach with Otsu’s, Kapur’s and Kittler’s methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu’s, Kapur’s and Kittler’s methods.
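
    For comparison with the between-class-variance criterion mentioned above, classical multi-level Otsu thresholding (with a fixed number of classes) can be computed directly, for example with scikit-image. The BPSO-based multi-objective search for the number of thresholds itself is not shown:

```python
import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu

image = data.camera()                     # built-in 8-bit test image

# Classic multi-level Otsu: maximize between-class variance for a fixed class count
thresholds = threshold_multiotsu(image, classes=3)
segmented = np.digitize(image, bins=thresholds)

print("thresholds:", thresholds)          # two grey levels splitting three classes
print("pixels per class:", np.bincount(segmented.ravel()))
```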

  7. Multi-Hop Link Capacity of Multi-Route Multi-Hop MRC Diversity for a Virtual Cellular Network

    Science.gov (United States)

    Daou, Imane; Kudoh, Eisuke; Adachi, Fumiyuki

    In virtual cellular network (VCN), proposed for high-speed mobile communications, the signal transmitted from a mobile terminal is received by some wireless ports distributed in each virtual cell and relayed to the central port that acts as a gateway to the core network. In this paper, we apply the multi-route MHMRC diversity in order to decrease the transmit power and increase the multi-hop link capacity. The transmit power, the interference power and the link capacity are evaluated for DS-CDMA multi-hop VCN by computer simulation. The multi-route MHMRC diversity can be applied to not only DS-CDMA but also other access schemes (i. e. MC-CDMA, OFDM, etc.).

  8. APPLICATION OF THE MODELS-3 COMMUNITY MULTI-SCALE AIR QUALITY (CMAQ) MODEL SYSTEM TO SOS/NASHVILLE 1999

    Science.gov (United States)

    The Models-3 Community Multi-scale Air Quality (CMAQ) model, first released by the USEPA in 1999 (Byun and Ching. 1999), continues to be developed and evaluated. The principal components of the CMAQ system include a comprehensive emission processor known as the Sparse Matrix O...

  9. A Modular Pipelined Processor for High Resolution Gamma-Ray Spectroscopy

    Science.gov (United States)

    Veiga, Alejandro; Grunfeld, Christian

    2016-02-01

    The design of a digital signal processor for gamma-ray applications is presented in which a single ADC input can simultaneously provide temporal and energy characterization of gamma radiation for a wide range of applications. Applying pipelining techniques, the processor is able to manage and synchronize very large volumes of streamed real-time data. Its modular user interface provides a flexible environment for experimental design. The processor can fit in a medium-sized FPGA device operating at ADC sampling frequency, providing an efficient solution for multi-channel applications. Two experiments are presented in order to characterize its temporal and energy resolution.

  10. Keystone Business Models for Network Security Processors

    Directory of Open Access Journals (Sweden)

    Arthur Low

    2013-07-01

    Full Text Available Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” models nor the silicon intellectual-property licensing (“IP-licensing”) models allow small technology companies to successfully compete. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the "business ecosystem model for network security processors", includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.

  11. A frame simulator for data produced by 'multi-accumulation' readout detectors

    Science.gov (United States)

    Bonoli, Carlotta; Bortoletto, Favio; Giro, Enrico; Corcione, Leonardo; Ligori, Sebastiano; Nicastro, Luciano

    2010-07-01

    A simulator of data frames produced by 'multi-accumulation' readout detectors has been developed during the feasibility study for the NIS spectrograph, part of the European Euclid mission. The software can emulate various readout strategies, allowing the efficiency of different sampling techniques to be compared. Special care is given to two crucial aspects: the minimization of noise and the effects produced by cosmic-ray hits. The resulting readout noise is analyzed as a function of the background sources, native detector characteristics and readout strategy, while image deterioration by cosmic rays is treated by simulating hits and assessing their correction efficiency under varying readout modalities. Simulated 'multi-accumulation' frames, typical of multiplexer-based detectors, are an ideal tool for testing the efficiency of cosmic-ray rejection techniques. In the present case cosmic rays are added to each raw frame according to the rates and energies expected in the operational L2 region and in the chosen exposure time. The efficiency of the cosmic-ray identification and correction procedures can also be easily tested in terms of memory occupancy and telemetry rates.
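
    The essence of a multi-accumulation (sample-up-the-ramp) simulation with a cosmic-ray hit can be captured in a few lines of NumPy: non-destructive reads accumulate signal plus read noise, a hit appears as a sudden jump, and an outlier test on read-to-read differences recovers the flux. All parameter values below are hypothetical; this is not the NIS simulator itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n_reads, flux, read_noise = 16, 5.0, 3.0     # reads per ramp, e-/read, e- RMS

# Non-destructive reads accumulate signal; each read adds fresh read noise
ramp = np.cumsum(rng.poisson(flux, n_reads).astype(float))
ramp += rng.normal(0, read_noise, n_reads)

# Inject a cosmic-ray hit as a sudden jump part-way up the ramp
hit_read, hit_charge = 9, 200.0
ramp[hit_read:] += hit_charge

# Simple rejection: flag the read-to-read difference that is a gross outlier,
# then estimate the slope (flux) from the remaining differences
diffs = np.diff(ramp)
mad = np.median(np.abs(diffs - np.median(diffs)))
ok = np.abs(diffs - np.median(diffs)) < 5 * 1.4826 * mad
print("estimated flux:", diffs[ok].mean(), "e-/read (true:", flux, ")")
```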

  12. Multi-Modal Traveler Information System - Gateway Functional Requirements

    Science.gov (United States)

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  13. Box-Particle Cardinality Balanced Multi-Target Multi-Bernoulli Filter

    OpenAIRE

    L. Song; X. Zhao

    2014-01-01

    As a generalization of particle filtering, the box-particle filter (Box-PF) has the potential to process measurements affected by bounded errors of unknown distributions and biases. Inspired by the Box-PF, a novel implementation for multi-target tracking, called the box-particle cardinality balanced multi-target multi-Bernoulli (Box-CBMeMBer) filter, is presented in this paper. More importantly, to eliminate the negative effect of clutter on the estimation of the number of targets, an improved generali...

  14. Grammar-Based Multi-Frontal Solver for One Dimensional Isogeometric Analysis with Multiple Right-Hand-Sides

    KAUST Repository

    Kuźnik, Krzysztof

    2013-06-01

    This paper introduces a grammar-based model for developing a multi-thread multi-frontal parallel direct solver for the one-dimensional isogeometric finite element method. The model includes the integration of B-splines for construction of the element local matrices and the multi-frontal solver algorithm. The integration and the solver algorithm are partitioned into basic indivisible tasks, namely the grammar productions, that can be executed sequentially. The partial order of execution of the basic tasks is analyzed to provide the scheduling for the execution of the concurrent integration and multi-frontal solver algorithm. This graph grammar analysis allows for optimal concurrent execution of all tasks. The model has been implemented and tested on an NVIDIA CUDA GPU, delivering logarithmic execution time for linear, quadratic, cubic and higher-order B-splines. Thus, the CUDA implementation delivers the optimal performance predicted by our graph grammar analysis. We utilize the solver for multiple right-hand sides related to the solution of non-stationary or inverse problems.
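    As a rough illustration of executing tasks according to a partial order (not the paper's actual grammar productions, element matrices or CUDA kernels), the sketch below groups a few invented integration and elimination tasks into levels whose members may run concurrently.

```python
# Illustrative sketch: derive a concurrent schedule from a partial order of
# basic tasks. Task names and dependencies are invented placeholders for the
# grammar productions of an integration + multi-frontal elimination pipeline.
from collections import defaultdict, deque

deps = {  # task -> set of tasks it depends on
    "integrate_e1": set(), "integrate_e2": set(), "integrate_e3": set(),
    "merge_12": {"integrate_e1", "integrate_e2"},
    "eliminate_12": {"merge_12"},
    "merge_root": {"eliminate_12", "integrate_e3"},
    "solve_root": {"merge_root"},
}

def concurrent_levels(deps):
    """Group tasks into levels; tasks within one level may execute in parallel."""
    indeg = {t: len(d) for t, d in deps.items()}
    users = defaultdict(list)
    for t, d in deps.items():
        for p in d:
            users[p].append(t)
    ready = deque(t for t, n in indeg.items() if n == 0)
    levels = []
    while ready:
        level = sorted(ready)
        ready.clear()
        levels.append(level)
        for t in level:
            for u in users[t]:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return levels

for i, level in enumerate(concurrent_levels(deps)):
    print(f"step {i}: run concurrently -> {level}")
```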

  15. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it greatly simplifies the management of flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is higher by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, the accuracy is still limited by a lack of sample diversity. The majority of misclassifications come from the medium grade. Image-processing-based bud detection is useful for reducing these misclassifications, increasing the accuracy of flower grading to approximately 93.9%.
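    A compact PyTorch sketch can make the multi-input idea concrete. The network below processes several images of the same flower through separate convolutional branches and concatenates their features before a three-grade classifier; layer sizes, the number of views and all other details are assumptions for illustration, not the BjfuGloxinia architecture.

```python
# Minimal PyTorch sketch of a multi-input CNN: each flower is graded from
# several images processed by separate convolutional branches whose features
# are concatenated before a three-class (grade) classifier. All layer sizes
# and the number of views are illustrative assumptions.
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    def __init__(self, n_inputs=3, n_classes=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_inputs)])
        self.classifier = nn.Linear(32 * n_inputs, n_classes)

    def forward(self, images):  # images: list of (B, 3, H, W) tensors, one per view
        feats = [branch(x) for branch, x in zip(self.branches, images)]
        return self.classifier(torch.cat(feats, dim=1))

model = MultiInputCNN()
views = [torch.randn(4, 3, 64, 64) for _ in range(3)]  # batch of 4, three views each
print(model(views).shape)  # -> torch.Size([4, 3]): one score per grade
```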

  16. Virtualization and emulation of a CAN device on a multi-processor system on chip

    NARCIS (Netherlands)

    Breaban, G.D.; Koedam, M.L.P.J.; Stuijk, S.; Goossens, K.G.W.

    The increasing number of applications implemented on modern vehicles leads to the use of multi-core platforms in the automotive field. As the number of I/O interfaces offered by these platforms is typically lower than the number of integrated applications, a solution is needed to provide access to

  17. Time synchronization for an emulated CAN device on a Multi-Processor System on Chip

    NARCIS (Netherlands)

    Breaban, G.; Koedam, M.; Stuijk, S.; Goossens, K.G.W.

    2017-01-01

    The increasing number of applications implemented on modern vehicles leads to the use of multi-core platforms in the automotive field. As the number of I/O interfaces offered by these platforms is typically lower than the number of integrated applications, a solution is needed to provide access to

  18. Multi-Touch Tables and Collaborative Learning

    Science.gov (United States)

    Higgins, Steve; Mercier, Emma; Burd, Liz; Joyce-Gibbons, Andrew

    2012-01-01

    The development of multi-touch tables, an emerging technology for classroom learning, offers valuable opportunities to explore how its features can be designed to support effective collaboration in schools. In this study, small groups of 10- to 11-year-old children undertook a history task where they had to connect various pieces of information…

  19. Multi-wavelength and multi-colour temporal and spatial optical solitons

    DEFF Research Database (Denmark)

    Kivshar, Y. S.; Sukhorukov, A. A.; Ostrovskaya, E. A.

    2000-01-01

    We present an overview of several novel types of multi-component envelope solitary waves that appear in fiber and waveguide nonlinear optics. In particular, we describe multi-channel solitary waves in bit-parallel-wavelength fiber transmission systems for high-performance computer networks, multi-color parametric spatial solitary waves due to cascaded nonlinearities of quadratic materials, and quasiperiodic envelope solitons in Fibonacci optical superlattices.

  20. Multi-Branch Fully Convolutional Network for Face Detection

    KAUST Repository

    Bai, Yancheng; Ghanem, Bernard

    2017-01-01

    Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully

  1. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the Maximum Likelihood Estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance
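    The nonparametric step mentioned above can be illustrated with a generic Kaplan-Meier product-limit estimator applied to made-up right-censored transition times; this is not the satellite database nor the paper's actual fits, and the subsequent Weibull MLE stage is omitted.

```python
# Minimal sketch of the Kaplan-Meier estimator for right-censored
# time-to-transition data (times in years; event=1 means the subsystem
# transitioned to a degraded/failed state, event=0 means censored).
# The data below are made up for illustration.
import numpy as np

times  = np.array([0.5, 1.2, 2.0, 2.0, 3.5, 4.1, 5.0, 6.3, 7.7, 9.0])
events = np.array([1,   1,   0,   1,   1,   0,   1,   0,   1,   0  ])

def kaplan_meier(times, events):
    """Return event times and the estimated survivor function S(t)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    s, survival, uniq = 1.0, [], []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # n_i: still under observation
        d = np.sum((times == t) & (events == 1))  # d_i: events at time t
        s *= 1.0 - d / at_risk
        uniq.append(t)
        survival.append(s)
    return np.array(uniq), np.array(survival)

t, S = kaplan_meier(times, events)
for ti, si in zip(t, S):
    print(f"S({ti:.1f}) = {si:.3f}")
```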

  2. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    Science.gov (United States)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several

  3. Multi-scale and multi-orientation medical image analysis

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Deserno, T.M.

    2011-01-01

    Inspired by multi-scale and multi-orientation mechanisms recognized in the first stages of our visual system, this chapter gives a tutorial overview of the basic principles. Images are discrete, measured data. The optimal aperture for an observation with as few artefacts as possible is derived

  4. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    Science.gov (United States)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective which needs to be aligned with objective machine metrics.

  5. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  6. Multi-objective optimization of linear multi-state multiple sliding window system

    International Nuclear Information System (INIS)

    Konak, Abdullah; Kulturel-Konak, Sadan; Levitin, Gregory

    2012-01-01

    This paper considers the optimal element sequencing in a linear multi-state multiple sliding window system that consists of n linearly ordered multi-state elements. Each multi-state element can have different states: from complete failure up to perfect functioning. A performance rate is associated with each state. A failure of type i (1≤i≤I) occurs in the system if the cumulative performance of any r_i consecutive elements is lower than w_i. The element sequence strongly affects the probability of any type of system failure. The sequence that minimizes the probability of a certain type of failure can yield a high probability of other failure types. Therefore the optimization problem for the multiple sliding window system is essentially multi-objective. The paper formulates and solves the multi-objective optimization problem for multiple sliding window systems. A multi-objective Genetic Algorithm is used as the optimization engine. Illustrative examples are presented.
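    The failure condition described above translates directly into a short check over a candidate element sequence. The sketch below evaluates which failure types occur for a given arrangement; the performance rates and the (r_i, w_i) pairs are illustrative placeholders, and the genetic-algorithm search itself is not shown.

```python
# Sketch of the failure condition: a type-i failure occurs when the total
# performance of some r_i consecutive elements drops below w_i.
# Element performance rates and the (r_i, w_i) pairs are illustrative.
import numpy as np

def sliding_window_failures(perf, windows):
    """perf: performance rates of the linearly ordered elements.
    windows: list of (r_i, w_i); returns the failure types that occur."""
    perf = np.asarray(perf, dtype=float)
    failed = []
    for i, (r, w) in enumerate(windows, start=1):
        sums = np.convolve(perf, np.ones(r), mode="valid")  # all r-length window sums
        if np.any(sums < w):
            failed.append(i)
    return failed

perf = [3.0, 0.0, 1.0, 4.0, 2.0, 0.0, 0.0, 5.0]   # current element performance rates
windows = [(2, 1.0), (3, 4.0)]                    # (r_i, w_i) per failure type
print(sliding_window_failures(perf, windows))     # -> [1, 2]
```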

  7. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.

    Science.gov (United States)

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-11-06

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.

  8. Evaluation of the multi-sums for large scale problems

    International Nuclear Information System (INIS)

    Bluemlein, J.; Hasselhuhn, A.; Schneider, C.

    2012-02-01

    A large class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them, from inside to outside, into representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated with new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  9. Evaluation of the multi-sums for large scale problems

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation

    2012-02-15

    A large class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them, from inside to outside, into representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated with new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  10. Multi-purpose passive debugging for embedded wireless

    DEFF Research Database (Denmark)

    Hansen, Morten Tranberg

    Debugging embedded wireless systems can be cumbersome and hard due to low visibility. To ease the task of debugging we propose a multi-purpose passive debugging framework, called TinyDebug, for developing embedded wireless systems. TinyDebug is designed to be used throughout the entire system...

  11. A theoretical framework for negotiating the path of emergency management multi-agency coordination.

    Science.gov (United States)

    Curnin, Steven; Owen, Christine; Paton, Douglas; Brooks, Benjamin

    2015-03-01

    Multi-agency coordination represents a significant challenge in emergency management. The need for liaison officers working in strategic level emergency operations centres to play organizational boundary spanning roles within multi-agency coordination arrangements that are enacted in complex and dynamic emergency response scenarios creates significant research and practical challenges. The aim of the paper is to address a gap in the literature regarding the concept of multi-agency coordination from a human-environment interaction perspective. We present a theoretical framework for facilitating multi-agency coordination in emergency management that is grounded in human factors and ergonomics using the methodology of core-task analysis. As a result we believe the framework will enable liaison officers to cope more efficiently within the work domain. In addition, we provide suggestions for extending the theory of core-task analysis to an alternate high reliability environment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. Fermilab advanced computer program multi-microprocessor project

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Biel, J.

    1985-06-01

    Fermilab's Advanced Computer Program is constructing a powerful 128 node multi-microprocessor system for data analysis in high-energy physics. The system will use commercial 32-bit microprocessors programmed in Fortran-77. Extensive software supports easy migration of user applications from a uniprocessor environment to the multiprocessor and provides sophisticated program development, debugging, and error handling and recovery tools. This system is designed to be readily copied, providing computing cost effectiveness of below $2200 per VAX 11/780 equivalent. The low cost, commercial availability, compatibility with off-line analysis programs, and high data bandwidths (up to 160 MByte/sec) make the system an ideal choice for applications to on-line triggers as well as an offline data processor

  13. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  14. D1.3 -- Short Report on the First Draft Multi-link Channel Model

    DEFF Research Database (Denmark)

    Pedersen, Troels; Raulefs, Ronald; Steinboeck, Gerhard

    This deliverable is a preliminary report on the activities towards multi-link channel models. It summarizes the activities and achievements of the investigations of WP1 Task 1.2 in the first year of the project. In this deliverable, the work focuses on the characterization of the cross-correlation of multi...

  15. Balance training with multi-task exercises improves fall-related self-efficacy, gait, balance performance and physical function in older adults with osteoporosis: a randomized controlled trial.

    Science.gov (United States)

    Halvarsson, Alexandra; Franzén, Erika; Ståhle, Agneta

    2015-04-01

    To evaluate the effects of a balance training program including dual- and multi-task exercises on fall-related self-efficacy, fear of falling, gait and balance performance, and physical function in older adults with osteoporosis with an increased risk of falling and to evaluate whether additional physical activity would further improve the effects. Randomized controlled trial, including three groups: two intervention groups (Training, or Training+Physical activity) and one Control group, with a 12-week follow-up. Stockholm County, Sweden. Ninety-six older adults, aged 66-87, with verified osteoporosis. A specific and progressive balance training program including dual- and multi-task three times/week for 12 weeks, and physical activity for 30 minutes, three times/week. Fall-related self-efficacy (Falls Efficacy Scale-International), fear of falling (single-item question - 'In general, are you afraid of falling?'), gait speed with and without a cognitive dual-task at preferred pace and fast walking (GAITRite®), balance performance tests (one-leg stance, and modified figure of eight), and physical function (Late-Life Function and Disability Instrument). Both intervention groups significantly improved their fall-related self-efficacy as compared to the controls (p ≤ 0.034, 4 points) and improved their balance performance. Significant differences over time and between groups in favour of the intervention groups were found for walking speed with a dual-task (p=0.003), at fast walking speed (p=0.008), and for advanced lower extremity physical function (p=0.034). This balance training program, including dual- and multi-task, improves fall-related self-efficacy, gait speed, balance performance, and physical function in older adults with osteoporosis. © The Author(s) 2014.

  16. Identification of Time-Varying Pilot Control Behavior in Multi-Axis Control Tasks

    Science.gov (United States)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2012-01-01

    Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was performed by 9 general aviation pilots who had to perform a simultaneous roll and pitch control task with time-varying aircraft dynamics. In 8 different conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics. This indicates that some cross coupling exists in the perception and control processes between the roll and pitch axes.
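    A minimal numerical illustration of the time-dependent sigmoid parameterization referred to above is given below: a single model parameter (for example, a pilot gain) blends from an initial to a final value through a logistic transition governed by a growth factor. The functional form follows the general description in the abstract; all numbers are placeholders rather than estimated pilot-model parameters.

```python
# Sketch of a time-dependent sigmoid parameter: a model parameter that
# transitions between an initial and a final value along a logistic curve.
# The values are illustrative, not estimates from the experiment.
import numpy as np

def sigmoid_parameter(t, p_start, p_end, t_m, g):
    """Parameter value at time t; t_m is the transition midpoint, g the
    growth factor controlling how fast the transition occurs."""
    return p_start + (p_end - p_start) / (1.0 + np.exp(-g * (t - t_m)))

t = np.linspace(0.0, 90.0, 7)            # seconds into the run
gain = sigmoid_parameter(t, p_start=0.8, p_end=1.6, t_m=45.0, g=0.5)
print(np.round(gain, 3))
```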

  17. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains three different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and slightly improves the classification accuracy. In addition, recent deep-learning techniques such as ReLU are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
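    The multi-scale layer can be sketched in a few lines of PyTorch: three parallel convolutions with different kernel sizes over the same input, concatenated and followed by ReLU and dropout. Kernel sizes, channel counts and the 103-band input (suggestive of the Pavia scene) are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal PyTorch sketch of a multi-scale convolution block: three parallel
# convolutions with different kernel sizes over the same hyperspectral patch,
# concatenated and passed through ReLU and dropout. Channel counts, kernel
# sizes and the 103-band input are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=103, out_ch=32):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)])
        self.act = nn.ReLU()
        self.drop = nn.Dropout(0.5)  # randomly deactivates activations during training

    def forward(self, x):
        x = torch.cat([path(x) for path in self.paths], dim=1)
        return self.drop(self.act(x))

block = MultiScaleBlock()
patch = torch.randn(8, 103, 9, 9)   # batch of 9x9 pixel neighbourhoods
print(block(patch).shape)           # -> torch.Size([8, 96, 9, 9])
```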

  18. MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH

    Data.gov (United States)

    National Aeronautics and Space Administration — MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Multispectral remote sensing images have...

  19. Measuring Multi-tasking Ability

    Science.gov (United States)

    2003-07-01

    sociological factors pertaining to social structures and values. For example, telecommuting, job-sharing, and families' attempts to decrease the amount... achievement strivings (actively working hard to achieve goals), and polychronicity (the preference for working on more than one task at a time) with MT... Joslyn note (2000), this description of ADM makes it sound exceedingly easy. However, nothing could be farther from the truth. The task qualifies as an MT

  20. A longitudinal multi-bunch feedback system using parallel digital signal processors

    International Nuclear Information System (INIS)

    Sapozhnikov, L.; Fox, J.D.; Olsen, J.J.; Oxoby, G.; Linscott, I.; Drago, A.; Serio, M.

    1994-01-01

    A programmable longitudinal feedback system based on four AT&T 1610 digital signal processors has been developed as a component of the PEP-II R&D program. This longitudinal quick prototype is a proof of concept for the PEP-II system and implements full-speed bunch-by-bunch signal processing for storage rings with bunch spacings of 4 ns. The design incorporates a phase-detector-based front end that digitizes the oscillation phases of bunches at the 250 MHz crossing rate, four programmable signal processors that compute correction signals, and a 250-MHz hold buffer/kicker driver stage that applies correction signals back on the beam. The design implements a general-purpose, table-driven downsampler that allows the system to be operated at several accelerator facilities. The hardware architecture of the signal processing is described, and the software algorithms used in the feedback signal computation are discussed. The system configuration used for tests at the LBL Advanced Light Source is presented

  1. Evaluating the scalability of HEP software and multi-core hardware

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A

    2011-01-01

    As researchers have reached the practical limits of processor performance improvements by frequency scaling, it is clear that the future of computing lies in the effective utilization of parallel and multi-core architectures. Since this significant change in computing is well underway, it is vital for HEP programmers to understand the scalability of their software on modern hardware and the opportunities for potential improvements. This work aims to quantify the benefit of new mainstream architectures to the HEP community through practical benchmarking on recent hardware solutions, including the usage of parallelized HEP applications.

  2. Advanced graphical user interface for multi-physics simulations using AMST

    Science.gov (United States)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.

  3. Multi-digit handwritten sindhi numerals recognition using som neural network

    International Nuclear Information System (INIS)

    Chandio, A.A.; Jalbani, A.H.; Awan, S.A.

    2017-01-01

    In this research paper, a multi-digit Sindhi handwritten numeral recognition system using a SOM neural network is presented. Handwritten digit recognition is a challenging task on which a great deal of research has been carried out over many years. Remarkable work has been done on the recognition of isolated handwritten characters as well as digits in many languages such as English, Arabic, Devanagari, Chinese, Urdu and Pashto. However, the literature reviewed does not show any comparable work for Sindhi numeral recognition. The recognition of Sindhi digits is difficult due to varied writing styles and different font sizes. Therefore, a SOM (Self-Organizing Map) neural network method is used, which can recognize digits with various writing styles and different font sizes. Only one sample is required to train the network for each pair of multi-digit numerals. A database of 4000 multi-digit samples, consisting of two-digit numerals from 10 to 50 and other matching numerals, was collected from 50 users, and the experimental results of the proposed method show that an accuracy of 86.89% is achieved. (author)
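    The core of a SOM can be written in a few lines of numpy, which may help clarify the training idea: each input vector pulls its best-matching map unit and that unit's neighbours towards itself. Map size, learning rate, neighbourhood radius and the random stand-in "images" below are all illustrative assumptions, not the system described in the paper.

```python
# Minimal numpy sketch of a Self-Organizing Map update rule of the kind used
# for handwritten-numeral recognition. All sizes, rates and the random input
# vectors are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
grid_w, grid_h, dim = 10, 10, 28 * 28          # 10x10 map, flattened 28x28 input
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.5, radius=2.0):
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)     # best matching unit
    grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * radius ** 2))[..., None]       # neighbourhood weights
    weights[...] = weights + lr * h * (x - weights)           # pull map towards x

for x in rng.random((200, dim)):               # stand-in for digit images
    train_step(x)
print("trained weight grid shape:", weights.shape)
```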

  4. Multi-armed spirals and multi-pairs antispirals in spatial rock–paper–scissors games

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Luo-Luo, E-mail: jiangluoluo@gmail.com [College of Physics and Electronic Information Engineering, Wenzhou University, Wenzhou 325035 (China); College of Physics and Technology, Guangxi Normal University, Guilin, Guangxi 541004 (China); Wang, Wen-Xu [School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287 (United States); Department of Physics, Beijing Normal University, Beijing 100875 (China); Lai, Ying-Cheng [School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287 (United States); Department of Physics, Arizona State University, Tempe, AZ 85287 (United States); Ni, Xuan [School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287 (United States)

    2012-07-09

    We study the formation of multi-armed spirals and multi-pairs antispirals in spatial rock–paper–scissors games with mobile individuals. We discover a set of seed distributions of species, which is able to produce multi-armed spirals and multi-pairs antispirals with a finite number of arms and pairs based on stochastic processes. The joint spiral waves are also predicted by a theoretical model based on partial differential equations associated with specific initial conditions. The spatial entropy of patterns is introduced to differentiate the multi-armed spirals and multi-pairs antispirals. For the given mobility, the spatial entropy of multi-armed spirals is higher than that of single armed spirals. The stability of the waves is explored with respect to individual mobility. Particularly, we find that both two armed spirals and one pair antispirals transform to the single armed spirals. Furthermore, multi-armed spirals and multi-pairs antispirals are relatively stable for intermediate mobility. The joint spirals with lower numbers of arms and pairs are relatively more stable than those with higher numbers of arms and pairs. In addition, comparing to large amount of previous work, we employ the no flux boundary conditions which enables quantitative studies of pattern formation and stability in the system of stochastic interactions in the absence of excitable media. -- Highlights: ► Multi-armed spirals and multi-pairs antispirals are observed. ► Patterns are predicted by computer simulations and partial differential equations. ► The spatial entropy of patterns is introduced. ► Patterns are relatively stable for intermediate mobility. ► The joint spirals with lower numbers of arms and pairs are relatively more stable.

  5. Multi-armed spirals and multi-pairs antispirals in spatial rock–paper–scissors games

    International Nuclear Information System (INIS)

    Jiang, Luo-Luo; Wang, Wen-Xu; Lai, Ying-Cheng; Ni, Xuan

    2012-01-01

    We study the formation of multi-armed spirals and multi-pairs antispirals in spatial rock–paper–scissors games with mobile individuals. We discover a set of seed distributions of species, which is able to produce multi-armed spirals and multi-pairs antispirals with a finite number of arms and pairs based on stochastic processes. The joint spiral waves are also predicted by a theoretical model based on partial differential equations associated with specific initial conditions. The spatial entropy of patterns is introduced to differentiate the multi-armed spirals and multi-pairs antispirals. For the given mobility, the spatial entropy of multi-armed spirals is higher than that of single armed spirals. The stability of the waves is explored with respect to individual mobility. Particularly, we find that both two armed spirals and one pair antispirals transform to the single armed spirals. Furthermore, multi-armed spirals and multi-pairs antispirals are relatively stable for intermediate mobility. The joint spirals with lower numbers of arms and pairs are relatively more stable than those with higher numbers of arms and pairs. In addition, comparing to large amount of previous work, we employ the no flux boundary conditions which enables quantitative studies of pattern formation and stability in the system of stochastic interactions in the absence of excitable media. -- Highlights: ► Multi-armed spirals and multi-pairs antispirals are observed. ► Patterns are predicted by computer simulations and partial differential equations. ► The spatial entropy of patterns is introduced. ► Patterns are relatively stable for intermediate mobility. ► The joint spirals with lower numbers of arms and pairs are relatively more stable.

  6. Towards Coordination and Control of Multi-robot Systems

    DEFF Research Database (Denmark)

    Quottrup, Michael Melholt

    This thesis focuses on control and coordination of mobile multi-robot systems (MRS). MRS can often deal with tasks that are difficult to accomplish with a single robot. One of the challenges is the need to control, coordinate and synchronize the operation of several robots to perform some specified task. This calls for new strategies and methods which allow the desired system behavior to be specified in a formal and succinct way. Two different frameworks for the coordination and control of MRS have been investigated. Framework I - A network of robots is modeled as a network of multi...... a requirement specification in Computational Tree Logic (CTL) for a network of robots. The result is a set of motion plans for the robots which satisfy the specification. Framework II - A framework for controller synthesis for a single robot with respect to requirement specification in Linear-time Temporal...

  7. Multi-modal distraction: insights from children's limited attention.

    Science.gov (United States)

    Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia

    2015-03-01

    How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Multi-valley effective mass theory for device-level modeling of open quantum dynamics

    Science.gov (United States)

    Jacobson, N. Tobias; Baczewski, Andrew D.; Frees, Adam; Gamble, John King; Montano, Ines; Moussa, Jonathan E.; Muller, Richard P.; Nielsen, Erik

    2015-03-01

    Simple models for semiconductor-based quantum information processors can provide useful qualitative descriptions of device behavior. However, as experimental implementations have matured, more specific guidance from theory has become necessary, particularly in the form of quantitatively reliable yet computationally efficient modeling. Besides modeling static device properties, improved characterization of noisy gate operations requires a more sophisticated description of device dynamics. Making use of recent developments in multi-valley effective mass theory, we discuss device-level simulations of the open system quantum dynamics of a qubit interacting with phonons and other noise sources. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  9. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the Cp-1 global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N)p2) for the one dimensional (1D) case, O(Np2) for the two dimensional (2D) case, and O(N4/3p2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 1283 unknowns and linear B-splines with C0 global continuity, and 15% for a 3D example with 643 unknowns and quartic B-splines with C3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somehow limited.
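    For quick back-of-the-envelope checks, the quoted asymptotic costs can be wrapped in a tiny helper. This reproduces only the scaling laws stated above, up to an unknown constant factor; it is not the solvers' measured cost model.

```python
# Tiny helper reproducing the theoretical scalings quoted in the abstract
# (up to an unknown constant factor): O(log(N) p^2) in 1D, O(N p^2) in 2D,
# O(N^(4/3) p^2) in 3D, for N degrees of freedom and B-spline order p.
import math

def multifrontal_cost(N, p, dim):
    if dim == 1:
        return math.log(N) * p ** 2
    if dim == 2:
        return N * p ** 2
    if dim == 3:
        return N ** (4.0 / 3.0) * p ** 2
    raise ValueError("dim must be 1, 2 or 3")

print(multifrontal_cost(128 ** 3, p=1, dim=3))   # linear B-splines, 128^3 unknowns
print(multifrontal_cost(64 ** 3, p=4, dim=3))    # quartic B-splines, 64^3 unknowns
```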

  10. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications (COMMON)

    Energy Technology Data Exchange (ETDEWEB)

    Vokkarane, Vinod [University of Massachusetts

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multidomain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  11. Middleware for multi-client and multi-server mobile applications

    NARCIS (Netherlands)

    Rocha, B.P.S.; Rezende, C.G.; Loureiro, A.A.F.

    2007-01-01

    With popularization of mobile computing, many developers have faced problems due to great heterogeneity of devices. To address this issue, we present in this work a middleware for multi-client and multi-server mobile applications. We assume that the middleware at the server side has no resource

  12. Multi-dimensional Fuzzy Euler Approximation

    Directory of Open Access Journals (Sweden)

    Yangyang Hao

    2017-05-01

    Full Text Available Multi-dimensional fuzzy differential equations driven by a multi-dimensional Liu process have been intensively applied in many fields. However, we cannot obtain the analytic solution of every multi-dimensional fuzzy differential equation, so it is necessary to discuss numerical results in most situations. This paper focuses on numerical methods for multi-dimensional fuzzy differential equations. The multi-dimensional fuzzy Taylor expansion is given; based on this expansion, a numerical method designed to solve multi-dimensional fuzzy differential equations via a multi-dimensional Euler method is presented, and its local convergence is also discussed.
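    The Euler scheme underlying the method can be sketched for an ordinary vector-valued system: y_{k+1} = y_k + h*f(t_k, y_k) applied componentwise. The snippet below shows only this deterministic multi-dimensional Euler step; the fuzzy and Liu-process aspects of the paper are not modelled, and the right-hand side is an arbitrary illustrative system.

```python
# Sketch of the multi-dimensional (vector-valued) Euler scheme. The fuzzy and
# Liu-process terms of the paper are omitted; the right-hand side below is an
# arbitrary illustrative system (a simple harmonic oscillator).
import numpy as np

def euler(f, y0, t0, t1, n_steps):
    """Advance the vector ODE y' = f(t, y) from t0 to t1 in n_steps steps."""
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: np.array([y[1], -y[0]])       # y'' = -y written as a first-order system
print(euler(f, y0=[1.0, 0.0], t0=0.0, t1=np.pi, n_steps=1000))  # approx. [-1, 0]
```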

  13. Instance annotation for multi-instance multi-label learning

    Science.gov (United States)

    F. Briggs; X.Z. Fern; R. Raich; Q. Lou

    2013-01-01

    Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen...

  14. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    Science.gov (United States)

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final than middle, while the Chinese group showed the opposite and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  15. Multi-office engineering

    International Nuclear Information System (INIS)

    Cowle, E.S.; Hall, L.D.; Koss, P.; Saheb, E.; Setrakian, V.

    1995-01-01

    This paper addresses the viability of multi-office project engineering as has been made possible in a large part by the computer age. Brief discussions are provided on two past projects describing the authors' initial efforts at multi-office engineering, and an in-depth discussion is provided on a current Bechtel project that demonstrates their multi-office engineering capabilities. Efficiencies and cost savings associated with executing an engineering project from multiple office locations was identified as a viable and cost-effective execution approach. The paper also discusses how the need for multi-office engineering came about, what is required to succeed, and where they are going from here. Furthermore, it summarizes the benefits to their clients and to Bechtel

  16. Extended multi-configuration quasi-degenerate perturbation theory: the new approach to multi-state multi-reference perturbation theory.

    Science.gov (United States)

    Granovsky, Alexander A

    2011-06-07

    The distinctive desirable features, both mathematically and physically meaningful, for all partially contracted multi-state multi-reference perturbation theories (MS-MR-PT) are explicitly formulated. The original approach to MS-MR-PT theory, called extended multi-configuration quasi-degenerate perturbation theory (XMCQDPT), having most, if not all, of the desirable properties is introduced. The new method is applied at the second order of perturbation theory (XMCQDPT2) to the 1(1)A(')-2(1)A(') conical intersection in allene molecule, the avoided crossing in LiF molecule, and the 1(1)A(1) to 2(1)A(1) electronic transition in cis-1,3-butadiene. The new theory has several advantages compared to those of well-established approaches, such as second order multi-configuration quasi-degenerate perturbation theory and multi-state-second order complete active space perturbation theory. The analysis of the prevalent approaches to the MS-MR-PT theory performed within the framework of the XMCQDPT theory unveils the origin of their common inherent problems. We describe the efficient implementation strategy that makes XMCQDPT2 an especially useful general-purpose tool in the high-level modeling of small to large molecular systems. © 2011 American Institute of Physics

  17. Optimization of multi-branch switched diversity systems

    KAUST Repository

    Nam, Haewoon

    2009-10-01

    A performance optimization based on the optimal switching threshold(s) for a multi-branch switched diversity system is discussed in this paper. For the conventional multi-branch switched diversity system with a single switching threshold, the optimal switching threshold is a function of both the average channel SNR and the number of diversity branches, where computing the optimal switching threshold is not a simple task when the number of diversity branches is high. The newly proposed multi-branch switched diversity system is based on a sequence of switching thresholds, instead of a single switching threshold, where a different diversity branch uses a different switching threshold for signal comparison. Thanks to the fact that each switching threshold in the sequence can be optimized only based on the number of the remaining diversity branches, the proposed system makes it easy to find these switching thresholds. Furthermore, some selected numerical and simulation results show that the proposed switched diversity system with the sequence of optimal switching thresholds outperforms the conventional system with the single optimal switching threshold. © 2009 IEEE.
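    The threshold-sequence idea can be illustrated with a small switch-and-examine sketch: branches are examined in order, each against its own threshold, and the first branch that qualifies is selected (falling back to the last branch examined if none does). The threshold values below are arbitrary placeholders, not the optimized sequence derived in the paper.

```python
# Sketch of switch-and-examine combining with a per-branch threshold sequence.
# Threshold values are arbitrary placeholders, not the optimized ones.
def select_branch(snrs_db, thresholds_db):
    """snrs_db and thresholds_db are aligned lists, one entry per branch."""
    for k, (snr, thr) in enumerate(zip(snrs_db, thresholds_db)):
        if snr >= thr:
            return k, snr            # stop switching once a branch is "good enough"
    return len(snrs_db) - 1, snrs_db[-1]

snrs = [2.1, 4.8, 9.5, 7.0]          # instantaneous branch SNRs (dB)
thresholds = [8.0, 6.5, 5.0, 3.0]    # a decreasing per-branch threshold sequence
print(select_branch(snrs, thresholds))   # -> (2, 9.5)
```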

  18. On the role of cost-sensitive learning in multi-class brain-computer interfaces.

    Science.gov (United States)

    Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick

    2010-06-01

    Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.

  19. Multi-segmental movement patterns reflect juggling complexity and skill level.

    Science.gov (United States)

    Zago, Matteo; Pacifici, Ilaria; Lovecchio, Nicola; Galli, Manuela; Federolf, Peter Andreas; Sforza, Chiarella

    2017-08-01

    The juggling action of six expert and six intermediate jugglers was recorded with a motion capture system and decomposed into its fundamental components through Principal Component Analysis. The aim was to quantify trends in movement dimensionality, multi-segmental patterns and rhythmicity as a function of proficiency level and task complexity. Dimensionality was quantified in terms of Residual Variance, while the Relative Amplitude was introduced to account for individual differences in movement components. We observed that experience-related modifications in multi-segmental actions exist, such as the progressive reduction of error-correction movements, especially in the complex task condition. The systematic identification of motor patterns sensitive to the acquisition of specific experience could accelerate the learning process. Copyright © 2017 Elsevier B.V. All rights reserved.
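    The dimensionality measure referred to above can be illustrated with scikit-learn: after a PCA of the marker-trajectory matrix, the residual variance is the share of variance not explained by the first k principal movement components. The random matrix below merely stands in for real motion-capture data.

```python
# Sketch of a PCA-based residual-variance measure for multi-segment motion
# data. The random matrix stands in for real motion-capture trajectories.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames, signals = 600, 36            # e.g. 12 markers x 3 coordinates
X = rng.normal(size=(frames, signals))

pca = PCA().fit(X)
explained = np.cumsum(pca.explained_variance_ratio_)
for k in (1, 3, 5, 10):
    print(f"residual variance after {k} components: {1.0 - explained[k - 1]:.3f}")
```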

  20. Estimation of surface soil moisture and roughness from multi-angular ASAR imagery in the Watershed Allied Telemetry Experimental Research (WATER

    Directory of Open Access Journals (Sweden)

    S. G. Wang

    2011-05-01

    Full Text Available Radar remote sensing has demonstrated its applicability to the retrieval of basin-scale soil moisture. The mechanism of radar backscattering from soils is complicated and strongly influenced by surface roughness. Additionally, retrieval of soil moisture using AIEM (advanced integral equation model)-like models is a classic example of an underdetermined problem due to a lack of credible known soil roughness distributions at a regional scale. Characterization of this roughness is therefore crucial for an accurate derivation of soil moisture based on backscattering models. This study aims to simultaneously obtain surface roughness parameters (standard deviation of surface height σ and correlation length cl), along with soil moisture, from multi-angular ASAR images by using a two-step retrieval scheme based on the AIEM. The method first uses a semi-empirical relationship that relates the roughness slope Zs (Zs = σ²/cl) and the difference in backscattering coefficient (Δσ) between two ASAR images acquired with different incidence angles. Meanwhile, by using an experimental statistical relationship between σ and cl, both of these parameters can be estimated. Then, the deduced roughness parameters were used for the retrieval of soil moisture in association with the AIEM. An evaluation of the proposed method was performed in an experimental area in the middle stream of the Heihe River Basin, where the Watershed Allied Telemetry Experimental Research (WATER) took place. It is demonstrated that the proposed method can achieve reliable estimation of soil water content. The key challenge is the presence of vegetation cover, which significantly impacts the estimates of surface roughness and soil moisture.
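    The first retrieval step can be sketched around the stated roughness slope Zs = σ²/cl: given Zs and an assumed empirical σ-cl relation, both roughness parameters can be recovered before feeding the AIEM. The linear relation cl = a*sigma + b used below and every numeric value are purely hypothetical placeholders, not the study's calibrated relationship.

```python
# Sketch of the roughness-parameter step: Zs = sigma^2 / cl links the rms
# height sigma and correlation length cl; with a (hypothetical) empirical
# relation cl = a*sigma + b, both parameters can be recovered from Zs.
def roughness_slope(sigma_cm, cl_cm):
    return sigma_cm ** 2 / cl_cm

def invert_roughness(zs_cm, a=8.0, b=2.0):
    """Solve sigma^2 = zs * (a*sigma + b) for sigma, then recover cl."""
    # quadratic: sigma^2 - zs*a*sigma - zs*b = 0, take the positive root
    disc = (zs_cm * a) ** 2 + 4.0 * zs_cm * b
    sigma = 0.5 * (zs_cm * a + disc ** 0.5)
    return sigma, a * sigma + b

zs = roughness_slope(sigma_cm=0.8, cl_cm=8.4)
print("Zs =", round(zs, 4))
print("recovered (sigma, cl):", invert_roughness(zs))   # -> approx. (0.8, 8.4)
```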

  1. Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms

    Science.gov (United States)

    2009-05-01

    ... (base pair) Watson-Crick strand pairs that bind perfectly within pairs, but poorly across pairs. A variety of DNA strand hybridization metrics... (AFRL-RI-RS-TR-2009-131, Final Technical Report, May 2009, covering Jun 08 – Feb 09.)

  2. A Multi-Objective Learning to re-Rank Approach to Optimize Online Marketplaces for Multiple Stakeholders

    OpenAIRE

    Nguyen, Phong; Dines, John; Krasnodebski, Jan

    2017-01-01

    Multi-objective recommender systems address the difficult task of recommending items that are relevant to multiple, possibly conflicting, criteria. However these systems are most often designed to address the objective of one single stakeholder, typically, in online commerce, the consumers whose input and purchasing decisions ultimately determine the success of the recommendation systems. In this work, we address the multi-objective, multi-stakeholder, recommendation problem involving one or ...

  3. Post-error response inhibition in high math-anxious individuals: Evidence from a multi-digit addition task.

    Science.gov (United States)

    Núñez-Peña, M Isabel; Tubau, Elisabet; Suárez-Pellicioni, Macarena

    2017-06-01

    The aim of the study was to investigate how high math-anxious (HMA) individuals react to errors in an arithmetic task. Twenty HMA and 19 low math-anxious (LMA) individuals were presented with a multi-digit addition verification task and were given response feedback. Post-error adjustment measures (response time and accuracy) were analyzed in order to study differences between groups when faced with errors in an arithmetical task. Results showed that both HMA and LMA individuals were slower to respond following an error than following a correct answer. However, post-error accuracy effects emerged only for the HMA group, showing that they were also less accurate after having committed an error than after giving the right answer. Importantly, these differences were observed only when individuals needed to repeat the same response given in the previous trial. These results suggest that, for HMA individuals, errors caused reactive inhibition of the erroneous response, facilitating performance if the next problem required the alternative response but hampering it if the response was the same. This stronger reaction to errors could be a factor contributing to the difficulties that HMA individuals experience in learning math and doing math tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A quantum-classical simulation of a multi-surface multi-mode ...

    Indian Academy of Sciences (India)

    Multi-surface multi-mode quantum dynamics; parallelized quantum-classical approach; TDDVR method. ... quantum-classical simulation of molecular systems is a great challenge for ... on a multiple-core cluster with shared memory using OpenMP-based ...

  5. Closed-Loop Neuroprosthesis for Reach-to-Grasp Assistance: Combining Adaptive Multi-channel Neuromuscular Stimulation with a Multi-joint Arm Exoskeleton.

    Science.gov (United States)

    Grimm, Florian; Gharabaghi, Alireza

    2016-01-01

    Stroke patients with severe motor deficits cannot execute task-oriented rehabilitation exercises with their affected upper extremity. Advanced rehabilitation technology may support them in performing such reach-to-grasp movements. The challenge is, however, to provide assistance as needed, while maintaining the participants' commitment during the exercises. In this feasibility study, we introduced a closed-loop neuroprosthesis for reach-to-grasp assistance which combines adaptive multi-channel neuromuscular stimulation with a multi-joint arm exoskeleton. Eighteen severely affected chronic stroke patients were assisted by a gravity-compensating, seven-degree-of-freedom exoskeleton which was attached to the paretic arm for performing reach-to-grasp exercises resembling activities of daily living in a virtual environment. During the exercises, adaptive electrical stimulation was applied to seven different muscles of the upper extremity in a performance-dependent way to enhance the task-oriented movement trajectory. The stimulation intensity was individualized for each targeted muscle and remained subthreshold, i.e., induced no overt support. Closed-loop neuromuscular stimulation could be well integrated into the exoskeleton-based training, and increased the task-related range of motion (p = 0.0004) and movement velocity (p = 0.015), while preserving accuracy. The highest relative stimulation intensity was required to facilitate the grasping function. The facilitated range of motion correlated with the upper extremity Fugl-Meyer Assessment score of the patients (p = 0.028). Combining adaptive multi-channel neuromuscular stimulation with antigravity assistance amplifies the residual motor capabilities of severely affected stroke patients during rehabilitation exercises and may thus provide a customized training environment for patient-tailored support while preserving the participants' engagement.

  6. Closed-Loop Neuroprosthesis for Reach-to-Grasp Assistance: Combining Adaptive Multi-channel Neuromuscular Stimulation with a Multi-joint Arm Exoskeleton

    Science.gov (United States)

    Grimm, Florian; Gharabaghi, Alireza

    2016-01-01

    Stroke patients with severe motor deficits cannot execute task-oriented rehabilitation exercises with their affected upper extremity. Advanced rehabilitation technology may support them in performing such reach-to-grasp movements. The challenge is, however, to provide assistance as needed, while maintaining the participants' commitment during the exercises. In this feasibility study, we introduced a closed-loop neuroprosthesis for reach-to-grasp assistance which combines adaptive multi-channel neuromuscular stimulation with a multi-joint arm exoskeleton. Eighteen severely affected chronic stroke patients were assisted by a gravity-compensating, seven-degree-of-freedom exoskeleton which was attached to the paretic arm for performing reach-to-grasp exercises resembling activities of daily living in a virtual environment. During the exercises, adaptive electrical stimulation was applied to seven different muscles of the upper extremity in a performance-dependent way to enhance the task-oriented movement trajectory. The stimulation intensity was individualized for each targeted muscle and remained subthreshold, i.e., induced no overt support. Closed-loop neuromuscular stimulation could be well integrated into the exoskeleton-based training, and increased the task-related range of motion (p = 0.0004) and movement velocity (p = 0.015), while preserving accuracy. The highest relative stimulation intensity was required to facilitate the grasping function. The facilitated range of motion correlated with the upper extremity Fugl-Meyer Assessment score of the patients (p = 0.028). Combining adaptive multi-channel neuromuscular stimulation with antigravity assistance amplifies the residual motor capabilities of severely affected stroke patients during rehabilitation exercises and may thus provide a customized training environment for patient-tailored support while preserving the participants' engagement. PMID:27445658

  7. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow and other areas covered by bright non-cloud objects. Besides, MSCN produces more detailed cloud masks than the compared deep cloud detection convolution network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  8. Interactive Approach for Multi-Level Multi-Objective Fractional Programming Problems with Fuzzy Parameters

    Directory of Open Access Journals (Sweden)

    M.S. Osman

    2018-03-01

    Full Text Available In this paper, an interactive approach for solving multi-level multi-objective fractional programming (ML-MOFP) problems with fuzzy parameters is presented. The proposed interactive approach extends the work of Shi and Xia (1997). In the first phase, the numerical crisp model of the ML-MOFP problem is developed at a confidence level without changing the fuzzy gist of the problem. Then, the linear model for the ML-MOFP problem is formulated. In the second phase, the interactive approach simplifies the linear multi-level multi-objective model by converting it into separate multi-objective programming problems. Each separate multi-objective programming problem of the linear model is then solved by the ∊-constraint method and the concept of satisfactoriness. Finally, illustrative examples and comparisons with previous approaches are used to demonstrate the feasibility of the proposed approach.
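
    For readers unfamiliar with the ∊-constraint method mentioned above, the sketch below illustrates it on a made-up two-objective linear program using SciPy; the problem data are placeholders and are not taken from the ML-MOFP formulation in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative two-objective LP: maximize f1 = x1 + 2*x2 and f2 = 3*x1 + x2
# subject to x1 + x2 <= 4, x1, x2 >= 0.  (Toy data, not from the paper.)
A_ub = [[1.0, 1.0]]
b_ub = [4.0]
bounds = [(0, None), (0, None)]

def epsilon_constraint(eps):
    """Optimize f1 while constraining f2 >= eps (epsilon-constraint method)."""
    # linprog minimizes, so negate f1; f2 >= eps becomes -f2 <= -eps.
    res = linprog(c=[-1.0, -2.0],
                  A_ub=A_ub + [[-3.0, -1.0]],
                  b_ub=b_ub + [-eps],
                  bounds=bounds, method="highs")
    return res.x if res.success else None

# Sweep epsilon to trace an approximate Pareto front.
for eps in np.linspace(0.0, 12.0, 7):
    x = epsilon_constraint(eps)
    if x is not None:
        f1, f2 = x[0] + 2 * x[1], 3 * x[0] + x[1]
        print(f"eps={eps:5.1f}  x={x}  f1={f1:.2f}  f2={f2:.2f}")
```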

  9. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    Science.gov (United States)

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
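
    The OSPA metric used for evaluation above combines an optimal assignment between estimated and true targets with a cardinality penalty. A minimal sketch, assuming 2-D point targets and arbitrarily chosen cutoff c and order p, might look as follows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between two sets of 2-D target positions.
    X, Y: arrays of shape (m, 2) and (n, 2); c: cutoff, p: order."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                      # convention: X is the smaller set
        X, Y, m, n = Y, X, n, m
    # pairwise distances, truncated at the cutoff c
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2), c)
    row, col = linear_sum_assignment(D ** p)          # optimal sub-pattern assignment
    loc_term = (D[row, col] ** p).sum()
    card_term = (n - m) * c ** p                      # cardinality penalty
    return ((loc_term + card_term) / n) ** (1.0 / p)

# toy example: two estimated tracks vs. three ground-truth targets
est = np.array([[0.0, 0.0], [5.0, 5.0]])
truth = np.array([[0.5, 0.0], [5.0, 4.0], [20.0, 20.0]])
print(ospa(est, truth))
```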

  10. Improving Multi-Instance Multi-Label Learning by Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Ying Yin

    2016-05-01

    Full Text Available Multi-instance multi-label learning is a learning framework where every object is represented by a bag of instances and associated with multiple labels simultaneously. The existing degeneration strategy-based methods often suffer from some common drawbacks: (1) the user-specified parameter for the number of clusters may affect effectiveness; (2) SVM may bring a high computational cost when utilized as the classifier builder. In this paper, we propose an algorithm, namely multi-instance multi-label extreme learning machine (MIMLELM), to address these problems. To the best of our knowledge, we are the first to utilize ELM for the MIML problem and to compare ELM and SVM on MIML. Extensive experiments have been conducted on real datasets and synthetic datasets. The results show that MIMLELM tends to achieve better generalization performance at a higher learning speed.

  11. Self-powered information measuring wireless networks using the distribution of tasks within multicore processors

    Science.gov (United States)

    Zhuravska, Iryna M.; Koretska, Oleksandra O.; Musiyenko, Maksym P.; Surtel, Wojciech; Assembay, Azat; Kovalev, Vladimir; Tleshova, Akmaral

    2017-08-01

    The article presents basic approaches to developing self-powered information measuring wireless networks (SPIM-WN) for critical applications that distribute tasks within multicore processors, based on the interaction of movable components both in data transmission and in the wireless transfer of energy coming from polymetric sensors. A basic mathematical model of task scheduling in multiprocessor systems was adapted to schedule and allocate tasks among the cores of a single-chip computer (SoC) in order to increase the energy efficiency of SPIM-WN objects.
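
    The scheduling idea can be illustrated with a generic sketch: a greedy longest-processing-time allocation of tasks to cores, using balanced per-core load as a stand-in for the energy objective. This is not the modernized model from the article; the task costs and core count below are made up.

```python
import heapq

def allocate_tasks(task_costs, n_cores):
    """Greedy longest-processing-time allocation of tasks to cores.
    Balancing per-core load is one common proxy for reducing the energy
    spent on lightly loaded cores (illustrative only)."""
    # heap of (current load, core id, assigned task indices)
    cores = [(0.0, c, []) for c in range(n_cores)]
    heapq.heapify(cores)
    for idx in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        load, cid, tasks = heapq.heappop(cores)      # least-loaded core so far
        heapq.heappush(cores, (load + task_costs[idx], cid, tasks + [idx]))
    return sorted(cores, key=lambda t: t[1])

# hypothetical per-task execution costs (ms) on a 4-core SoC
costs = [8.0, 3.5, 6.2, 1.1, 4.4, 7.3, 2.2]
for load, core, tasks in allocate_tasks(costs, 4):
    print(f"core {core}: load={load:.1f} ms, tasks={tasks}")
```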

  12. Modul.LES: a multi-compartment, multi-organism aquatic life support system as experimental platform for research in ∆g

    Science.gov (United States)

    Hilbig, Reinhard; Anken, Ralf; Grimm, Dennis

    In view of space exploration and long-term satellite missions, a new generation of multi-modular, multi-organism bioregenerative life support systems with different experimental units (Modul.LES) is planned, and subunits are under construction. Modul.LES will be managed via telemetry and remote control and is therefore a fully automated experimental platform for different kinds of investigations. After several forerunner projects like AquaCells (2005), C.E.B.A.S. (1998, 2003) or Aquahab (OHB-System AG), the Oreochromis Mossambicus Euglena Gracilis Aquatic Habitat (OmegaHab) was successfully flown in 2007 in the course of the FOTON-M3 mission. It was a 3-chamber controlled life support system (CLSS), comprising a bioreactor with the green algae Euglena gracilis, a fish chamber with larval cichlid fish Oreochromis mossambicus and a filter chamber with biodegrading bacteria. The sensory supervision of housekeeping management was registered and controlled by telemetry. Additionally, all scientific data and videos of the organisms aboard were stored and sequentially transmitted to relay stations. Based on the effective performance of OmegaHab, this system was chosen for a reflight on Bion-M1 in 2012. As Bion-M1 is a long-term mission (approx. 4 weeks), this CLSS (OmegaHab-XP) has to be redesigned and refurbished with enhanced performance. The number of chambers has been increased from 3 to 4: an algae bioreactor, a fish tank for adult and larval fish (hatchery inserted), a nutrition chamber with higher plants and crustaceans, and a filter chamber. OmegaHab-XP is a fully automated system with an extended satellite downlink for video monitoring and housekeeping data acquisition, but no uplink for remote control. OmegaHab-XP provides numerous physical and chemical parameters which will be monitored regarding the state of the biological processes and thus enables automated control aboard. Besides the two basic parameters oxygen content and temperature, products of the

  13. Multi generations in the workforce: Building collaboration

    Directory of Open Access Journals (Sweden)

    Vasanthi Srinivasan

    2012-03-01

    Full Text Available Organisations the world over, in today's rapid-growth context, are faced with the challenge of understanding a multi-generational workforce and devising policies and processes to build collaboration between generations. In its first part, this article synthesises the literature on generational studies, with emphasis on the definition of generations and the characteristics of the generational cohorts. It emphasises that such studies are embedded in the socio-economic-cultural context and that India-specific scholarship must take into account the demographic and economic variations across the country. It then discusses the challenges of multiple generations in the Indian workforce, their impact on leadership styles and managerial practices, and the task of building inter-generational collaboration, with an eminent panel of practitioners and researchers.

  14. A multi-component matrix loop algebra and a unified expression of the multi-component AKNS hierarchy and the multi-component BPT hierarchy

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2005-01-01

    A set of multi-component matrix Lie algebras is constructed and devoted to obtaining a new loop algebra Ā_{M-1}. From this, an isospectral problem is established. By making use of the Tu scheme, a Liouville-integrable multi-component hierarchy of soliton equations is generated, which possesses bi-Hamiltonian structures. As its reduction cases, the multi-component AKNS hierarchy and the formalism of the multi-component BPT hierarchy are given, respectively

  15. Multi-label literature classification based on the Gene Ontology graph

    Directory of Open Access Journals (Sweden)

    Lu Xinghua

    2008-12-01

    Full Text Available Abstract. Background: The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. Results: In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Conclusion: Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate
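
    One simple way to exploit the ontology graph, in the spirit of the hierarchical methods compared in the record above, is to post-process flat per-term scores so that a child term never scores higher than its parents (the true-path rule). The sketch below assumes a toy three-term fragment and made-up classifier scores; it is not the stochastic algorithm studied in the paper.

```python
def hierarchically_consistent(scores, parents):
    """Propagate scores down a DAG so that score[child] <= min(score[parent]).
    scores: {term: flat classifier score}; parents: {term: [parent terms]}.
    All parents are assumed to be scored terms as well."""
    # topological order: repeatedly emit terms whose parents are all done
    remaining, done, order = set(scores), set(), []
    while remaining:
        ready = [t for t in remaining if all(p in done for p in parents.get(t, []))]
        if not ready:              # guard against cycles / missing parents
            break
        order.extend(ready)
        done.update(ready)
        remaining.difference_update(ready)
    adjusted = {}
    for term in order:
        cap = min((adjusted[p] for p in parents.get(term, [])), default=1.0)
        adjusted[term] = min(scores[term], cap)
    return adjusted

# toy GO-like fragment: "binding" -> "protein binding" -> "kinase binding"
parents = {"protein binding": ["binding"], "kinase binding": ["protein binding"]}
flat = {"binding": 0.4, "protein binding": 0.7, "kinase binding": 0.9}
print(hierarchically_consistent(flat, parents))
# the two child terms are capped at 0.4, respecting the true-path rule
```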

  16. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    Science.gov (United States)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
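
    The storage-reduction idea mentioned above (transform field data with a wavelet and keep only the significant coefficients) can be illustrated with a one-level Haar transform in NumPy; the signal, threshold, and retained fraction below are arbitrary and only hint at the multi-resolution machinery described in the record.

```python
import numpy as np

def haar_forward(x):
    """One level of a 1-D Haar wavelet transform (orthonormal)."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# toy "field data"; real use would apply this per block of a simulation dump
rng = np.random.default_rng(0)
field = np.cumsum(rng.normal(size=1024))           # smooth-ish signal
approx, detail = haar_forward(field)
thresh = np.quantile(np.abs(detail), 0.9)          # keep only the largest 10% of details
detail_c = np.where(np.abs(detail) >= thresh, detail, 0.0)
recon = haar_inverse(approx, detail_c)
kept = np.count_nonzero(detail_c) + len(approx)
print(f"coefficients kept: {kept}/{len(field)}, max error: {np.max(np.abs(recon - field)):.3f}")
```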

  17. Optimal task partition and state-dependent loading in heterogeneous two-element work sharing system

    International Nuclear Information System (INIS)

    Levitin, Gregory; Xing, Liudong; Ben-Haim, Hanoch; Dai, Yuanshun

    2016-01-01

    Many real-world systems such as multi-channel data communication, multi-path flow transmission and multi-processor computing systems have work sharing attributes where system elements perform different portions of the same task simultaneously. Motivated by these applications, this paper models a heterogeneous work-sharing system with two non-repairable elements. When one element fails, the other element takes over the uncompleted task of the failed element upon finishing its own part; the load level of the remaining operating element can change at the time of the failure, which further affects its performance, failure behavior and operation cost. Considering these dynamics, mission success probability (MSP), expected mission completion time (EMCT) and expected cost of successful mission (ECSM) are first derived. Further, optimization problems are formulated and solved, which find optimal task partition and element load levels maximizing MSP, minimizing EMCT or minimizing ECSM. Effects of element reliability, performance, operation cost on the optimal solutions are also investigated through examples. Results of this work can facilitate a tradeoff analysis of different mission performance indices for heterogeneous work-sharing systems. - Highlights: • A heterogeneous work-sharing system with two non-repairable elements is considered. • The optimal work distribution and element loading problem is formulated and solved. • Effects of element reliability, performance, operation cost on the optimal solutions are investigated.
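
    A Monte Carlo sketch of a deliberately simplified version of the work-sharing model above (constant failure rates, no load-dependent rate change after takeover, hypothetical parameters) shows how MSP and EMCT can be estimated for a given task split x; the paper's analytical formulation and cost index are not reproduced here.

```python
import random

def simulate(x, w=100.0, perf=(2.0, 1.5), lam=(0.002, 0.003),
             t_max=60.0, runs=20000, seed=1):
    """Simplified two-element work-sharing model: element i receives x_i*w of
    the task, works at speed perf[i], and fails after an exponential time with
    rate lam[i].  A surviving element takes over the other's unfinished work
    once its own part is done and the other element has failed.  Returns the
    estimated mission success probability and the expected completion time of
    successful missions (toy parameters, not the paper's formulation)."""
    rng = random.Random(seed)
    shares = (x * w, (1.0 - x) * w)
    succ, total_t = 0, 0.0
    for _ in range(runs):
        fail = [rng.expovariate(l) for l in lam]
        own = [shares[i] / perf[i] for i in (0, 1)]          # own completion times
        done = [fail[i] > own[i] for i in (0, 1)]
        if all(done):
            t = max(own)
        elif any(done):
            i = 0 if done[0] else 1                          # surviving element
            j = 1 - i
            work_left = shares[j] - fail[j] * perf[j]        # j failed mid-task
            t = max(own[i], fail[j]) + work_left / perf[i]
            if fail[i] < t:                                  # may fail during takeover
                t = None
        else:
            t = None
        if t is not None and t <= t_max:
            succ += 1
            total_t += t
    return succ / runs, (total_t / succ if succ else float("nan"))

for x in (0.3, 0.45, 0.6):
    msp, emct = simulate(x)
    print(f"x={x:.2f}  MSP={msp:.3f}  EMCT={emct:.1f}")
```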

  18. Scalar multi-wormholes

    International Nuclear Information System (INIS)

    Egorov, A I; Kashargin, P E; Sushkov, Sergey V

    2016-01-01

    In 1921 Bach and Weyl derived the method of superposition to construct new axially symmetric vacuum solutions of general relativity. In this paper we extend the Bach–Weyl approach to non-vacuum configurations with massless scalar fields. Considering a phantom scalar field with negative kinetic energy, we construct a multi-wormhole solution describing an axially symmetric superposition of N wormholes. The solution found is static, everywhere regular and has no event horizons. These features sharply distinguish the multi-wormhole configuration from other axially symmetric vacuum solutions, which inevitably contain gravitationally inert singular structures, such as ‘struts’ and ‘membranes’, that keep the two bodies apart to make a stable configuration. However, the multi-wormholes are static without any singular struts. Instead, the stationarity of the multi-wormhole configuration is provided by the phantom scalar field with negative kinetic energy. Another unusual property is that the multi-wormhole spacetime has a complicated topological structure. Namely, in the spacetime there exist 2N asymptotically flat regions connected by throats. (paper)

  19. Multi-color and artistic dithering

    OpenAIRE

    Ostromoukhov, Victor; Hersch, Roger D.

    1999-01-01

    A multi-color dithering algorithm is proposed, which converts a barycentric combination of color intensities into a multi-color non-overlapping surface coverage. Multi-color dithering is a generalization of standard bi-level dithering. Combined with tetrahedral color separation, multi-color dithering makes it possible to print images made of a set of non-standard inks. In contrast to most previous color halftoning methods, multi-color dithering ensures by construction that the different selec...

  20. Egalitarianism in Multi-Choice Games

    NARCIS (Netherlands)

    Brânzei, R.; Llorca, N.; Sánchez-Soriano, J.; Tijs, S.H.

    2007-01-01

    In this paper we introduce the equal division core for arbitrary multi-choice games and the constrained egalitarian solution for convex multi-choice games, using a multi-choice version of the Dutta-Ray algorithm for traditional convex games. These egalitarian solutions for multi-choice games have

  1. Multi parton interactions and multi parton distributions in QCD

    International Nuclear Information System (INIS)

    Diehl, M.

    2012-01-01

    After a brief recapitulation of the general interest of parton densities, we discuss multiple hard interactions and multi parton distributions. We report on recent theoretical progress in their QCD description, on outstanding conceptual problems and on possibilities to use multi parton distributions as a laboratory to test and improve our understanding of hadron structure. (author)

  2. Evaluation of the Intel Westmere-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing the 6-core “Westmere-EP” processor with Intel’s previous generation of the same microarchitecture, the “Nehalem-EP”. The former is produced in a new 32nm process, the latter in 45nm. Both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores via Simultaneous Multi-Threading (SMT), the cache sizes available, the memory configuration installed, as well...

  3. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    Directory of Open Access Journals (Sweden)

    Anthony Hoak

    2017-03-01

    Full Text Available We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  4. Multi-criteria appraisal of multi-modal urban public transport systems

    NARCIS (Netherlands)

    Keyvan Ekbatani, M.; Cats, O.

    2015-01-01

    This study proposes a multi-criteria decision making (MCDM) modelling framework for the appraisal of multi-modal urban public transportation services. MCDM is commonly used to obtain choice alternatives that satisfy a range of performance indicators. The framework embraces both compensatory and

  5. Cooperative control of multi-agent systems optimal and adaptive design approaches

    CERN Document Server

    Lewis, Frank L; Hengster-Movric, Kristian; Das, Abhijit

    2014-01-01

    Task complexity, communication constraints, flexibility and energy-saving concerns are all factors that may require a group of autonomous agents to work together in a cooperative manner. Applications involving such complications include mobile robots, wireless sensor networks, unmanned aerial vehicles (UAVs), spacecraft, and so on. In such networked multi-agent scenarios, the restrictions imposed by the communication graph topology can pose severe problems in the design of cooperative feedback control systems.  Cooperative control of multi-agent systems is a challenging topic for both control theorists and practitioners and has been the subject of significant recent research. Cooperative Control of Multi-Agent Systems extends optimal control and adaptive control design methods to multi-agent systems on communication graphs.  It develops Riccati design techniques for general linear dynamics for cooperative state feedback design, cooperative observer design, and cooperative dynamic output feedback design.  B...

  6. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Directory of Open Access Journals (Sweden)

    Juan A. Corrales

    2011-10-01

    Full Text Available Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach.

  7. Closure relations for the multi-species Euler system. Construction and study of relaxation schemes for the multi-species and multi-components Euler systems; Relations de fermeture pour le systeme des equations d'Euler multi-especes. Construction et etude de schemas de relaxation en multi-especes et en multi-constituants

    Energy Technology Data Exchange (ETDEWEB)

    Dellacherie, St. [CEA Saclay, Dir. de l' Energie Nucleaire DEN/SFNME/LMPE, Lab. de Modelisation Physique et de l' Enrichissement, 91 - Gif sur Yvette (France); Rency, N. [Paris-11 Univ., CNRS UMR 8628, 91 - Orsay (France)

    2001-07-01

    After having recalled the formal convergence of the semi-classical multi-species Boltzmann equations toward the multi-species Euler system (i.e. a mixture of gases having the same velocity), we generalize to this system the closure relations proposed by B. Despres and by F. Lagoutiere for the multi-component Euler system (i.e. a mixture of non-miscible fluids having the same velocity). Then, we extend the energy relaxation schemes proposed by F. Coquel and by B. Perthame for the numerical resolution of the mono-species Euler system to the multi-species isothermal Euler system and to the multi-component isobar-isothermal Euler system. This makes it possible to obtain a class of entropic schemes under a CFL criterion. In the multi-component case, this class of entropic schemes may offer a way to treat interface problems and, in turn, the numerical mixture area by using a Lagrange + projection scheme. Nevertheless, a good projection stage still has to be found in the multi-component case. Finally, in the last chapter, we discuss, through the study of a dynamical system, a system proposed by R. Abgrall and by R. Saurel for the numerical resolution of the multi-component Euler system.

  8. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System.

    Science.gov (United States)

    Zhang, Zhen; Ma, Cheng; Zhu, Rong

    2017-08-23

    Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and achieved amazing success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementation of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have drawn a number of applications from scientists. In this paper, we propose a FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output, (MIMO) real-time, temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs, which is flexible, highly energy-efficient, and can be applied in many areas.

  9. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System

    Directory of Open Access Journals (Sweden)

    Zhen Zhang

    2017-08-01

    Full Text Available Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and achieved amazing success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementation of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have drawn a number of applications from scientists. In this paper, we propose a FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO), real-time, temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs, which is flexible, highly energy-efficient, and can be applied in many areas.

  10. Efficient Execution of Video Applications on Heterogeneous Multi- and Many-Core Processors

    NARCIS (Netherlands)

    Pereira de Azevedo Filho, A.

    2011-01-01

    In this dissertation we present methodologies and evaluations aimed at increasing the efficiency of video coding applications for heterogeneous many-core processors composed of SIMD-only, scratchpad-memory-based cores. Our contributions span three different fronts: thread-level parallelism

  11. Quantitative multi-modal NDT data analysis

    International Nuclear Information System (INIS)

    Heideklang, René; Shokouhi, Parisa

    2014-01-01

    A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity

  12. Literature Survey on Technical Issues and Insights of Multi-Unit PSA

    International Nuclear Information System (INIS)

    Baek, Sejin; Park, Soyoung; Heo, Gyunyoung

    2016-01-01

    The need to consider the risk impact of multiple units at a single site increased after the accident at Fukushima Daiichi in March 2011. This means that we have to consider single-unit initiators impacting the other units as well as simultaneous accidents involving multiple units on the same site. This kind of technical concern is particularly serious in the Republic of Korea, where multiple units had to be located in high-density population areas due to geographical features. The Nuclear Safety and Security Commission (NSSC) in the Republic of Korea has been trying to identify the state of the art of international and domestic regulations and techniques on multi-unit risk assessment and is planning the road map for the related safety research. However, finding a commonly accepted methodology along with safety criteria for multi-unit PSA has not been an easy task up to now. This paper summarizes and analyzes related international and domestic journal papers, conference papers and reports about multi-unit PSA, classifying them into thematic categories, to understand the technical tendencies of multi-unit PSA. In addition, some insights obtained from this classification are presented. This paper investigated the technical trends of multi-unit PSA by collecting international and domestic journal papers, conference papers and reports, and analyzing them. From the literature survey, a few statistics, technical issues, and insights were summarized. Both fundamental and practical research is needed to find a globally accepted methodology to calculate and determine quantitative objectives for a multi-unit PSA. We expect that this paper can be shared to aid understanding of the current status of multi-unit PSA

  13. Literature Survey on Technical Issues and Insights of Multi-Unit PSA

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Sejin; Park, Soyoung; Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of)

    2016-10-15

    The need to consider the risk impact of multiple units at a single site increased after the accident at Fukushima Daiichi in March 2011. This means that we have to consider single-unit initiators impacting the other units as well as simultaneous accidents involving multiple units on the same site. This kind of technical concern is particularly serious in the Republic of Korea, where multiple units had to be located in high-density population areas due to geographical features. The Nuclear Safety and Security Commission (NSSC) in the Republic of Korea has been trying to identify the state of the art of international and domestic regulations and techniques on multi-unit risk assessment and is planning the road map for the related safety research. However, finding a commonly accepted methodology along with safety criteria for multi-unit PSA has not been an easy task up to now. This paper summarizes and analyzes related international and domestic journal papers, conference papers and reports about multi-unit PSA, classifying them into thematic categories, to understand the technical tendencies of multi-unit PSA. In addition, some insights obtained from this classification are presented. This paper investigated the technical trends of multi-unit PSA by collecting international and domestic journal papers, conference papers and reports, and analyzing them. From the literature survey, a few statistics, technical issues, and insights were summarized. Both fundamental and practical research is needed to find a globally accepted methodology to calculate and determine quantitative objectives for a multi-unit PSA. We expect that this paper can be shared to aid understanding of the current status of multi-unit PSA.

  14. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task due to its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix, then it adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. Empirical study demonstrates that MLC-LRR achieves better performance in annotating images than these comparison methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.

  15. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    Science.gov (United States)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensors in the optimization can reduce the searching space in the optimization to make the proposed method more effective. Moreover, how to select a most optimal sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.
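
    The non-dominated sorting step at the heart of NSGA-II, which ranks candidate sensor layouts into Pareto fronts, can be sketched as follows; the objective values are placeholders, whereas in the study they would come from the response covariance sensitivity and response independence measures.

```python
def non_dominated_sort(objs):
    """Fast non-dominated sorting (core of NSGA-II) for minimization problems.
    objs: list of objective tuples; returns a list of fronts (lists of indices)."""
    n = len(objs)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    S = [[] for _ in range(n)]          # solutions dominated by i
    counts = [0] * n                    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# placeholder objectives for six candidate sensor layouts:
# (negated covariance-sensitivity score, redundancy measure), both minimized
candidates = [(-3.0, 0.8), (-2.5, 0.4), (-3.2, 1.1), (-1.0, 0.2), (-2.9, 0.9), (-3.2, 1.1)]
print(non_dominated_sort(candidates))
```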

  16. Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection

    Directory of Open Access Journals (Sweden)

    Liogienė Tatjana

    2016-07-01

    Full Text Available The intensive research of speech emotion recognition has introduced a huge collection of speech emotion features. Large feature sets complicate the speech emotion recognition task. Among various feature selection and transformation techniques for one-stage classification, multiple classifier systems have been proposed. The main idea of multiple classifiers is to arrange the emotion classification process in stages. Besides parallel and serial cases, the hierarchical arrangement of multi-stage classification is most widely used for speech emotion recognition. In this paper, we present a sequential-forward-feature-selection-based multi-stage classification scheme. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) techniques were employed for every stage of the multi-stage classification scheme. Experimental testing of the proposed scheme was performed using the German and Lithuanian emotional speech datasets. Sequential-feature-selection-based multi-stage classification outperformed the single-stage scheme by 12–42 % for different emotion sets. The multi-stage scheme has shown higher robustness to the growth of the emotion set. The decrease in recognition rate with the increase in emotion set for the multi-stage scheme was lower by 10–20 % in comparison with the single-stage case. Differences between SFS and SFFS employment for feature selection were negligible.
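
    A plain sequential forward selection loop wrapped around a cross-validated classifier conveys the idea behind SFS; the dataset and classifier below are generic scikit-learn placeholders, not the German or Lithuanian emotional speech features used in the study.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sequential_forward_selection(X, y, estimator, k):
    """Greedy SFS: at each step add the feature whose inclusion maximizes
    cross-validated accuracy.  (SFFS would add a conditional backward step.)"""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        scores = []
        for f in remaining:
            cols = selected + [f]
            acc = cross_val_score(estimator, X[:, cols], y, cv=5).mean()
            scores.append((acc, f))
        best_acc, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
        print(f"added feature {best_f}, CV accuracy {best_acc:.3f}")
    return selected

X, y = load_iris(return_X_y=True)          # placeholder data, not speech features
print(sequential_forward_selection(X, y, KNeighborsClassifier(), k=2))
```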

  17. Design and multi-physics optimization of rotary MRF brakes

    Science.gov (United States)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, calculations for each particle become excessive when the number of particles and the complexity of the problem increase. As a result, the execution speed can be too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and the conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party have been inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than the conventional PSO and has also provided small, lightweight, high-impedance rotary MRF brake designs.
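
    A toy version of the subpopulation idea is sketched below: several sub-swarms search independently and exchange their best positions at fixed intervals, loosely mimicking the hunting-party behavior described above. The objective is a standard test function, not the multi-physics MRF brake model, and all hyperparameters are arbitrary.

```python
import numpy as np

def sub_swarm_pso(f, dim=4, n_subs=3, sub_size=10, iters=100,
                  w=0.7, c1=1.5, c2=1.5, share_every=10, seed=0):
    """PSO split into several subpopulations: each sub-swarm searches
    independently and the overall best position is shared between sub-swarms
    every `share_every` iterations (a toy version of the divided-labor idea)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_subs, sub_size, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(f, 2, pos)
    gbest = [pbest[s][pbest_val[s].argmin()].copy() for s in range(n_subs)]
    for it in range(iters):
        for s in range(n_subs):
            r1, r2 = rng.random(pos[s].shape), rng.random(pos[s].shape)
            vel[s] = (w * vel[s] + c1 * r1 * (pbest[s] - pos[s])
                      + c2 * r2 * (gbest[s] - pos[s]))
            pos[s] += vel[s]
            vals = np.apply_along_axis(f, 1, pos[s])
            improved = vals < pbest_val[s]
            pbest[s][improved] = pos[s][improved]
            pbest_val[s][improved] = vals[improved]
            gbest[s] = pbest[s][pbest_val[s].argmin()].copy()
        if (it + 1) % share_every == 0:        # transfer information between parties
            best = min(gbest, key=f).copy()
            gbest = [best.copy() for _ in range(n_subs)]
    return min(gbest, key=f)

sphere = lambda x: float(np.sum(x ** 2))      # placeholder objective function
print(sub_swarm_pso(sphere))
```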

  18. E-Fulfillment and Multi-Channel Distribution – A Review

    NARCIS (Netherlands)

    N.A.H. Agatz (Niels); M. Fleischmann (Moritz); J.A.E.E. van Nunen (Jo)

    2006-01-01

    textabstractThis review addresses the specific supply chain management issues of Internet fulfillment in a multi-channel environment. It provides a systematic overview of managerial planning tasks and reviews corresponding quantitative models. In this way, we aim to enhance the understanding of

  19. Multi-Attribute Vickrey Auctions when Utility Functions are Unknown

    NARCIS (Netherlands)

    Máhr, T.; De Weerdt, M.M.

    2006-01-01

    Multi-attribute auctions allow negotiations over multiple attributes besides price. For example in task allocation, service providers can define their service by means of multiple attributes, such as quality of service, deadlines, or delay penalties. Auction mechanisms assume that the players have

  20. Organization of Multi-controller Interaction in Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Sergey V. Morzhov

    2018-01-01

    Full Text Available Software Defined Networking (SDN) is a promising paradigm for network management. It centralizes network intelligence on a dedicated server, which runs a network operating system and is called the SDN controller. It was assumed that such an architecture should provide improved network performance and monitoring. However, the centralized control architecture of SDNs brings novel challenges to reliability, scalability, fault tolerance and interoperability. These problems are especially acute for large data center networks and can be solved by combining SDN controllers into clusters, called multi-controllers. Multi-controller architectures have become very important for SDN-enabled networks. This paper gives a comprehensive overview of SDN multi-controller architectures. The authors review several of the most popular distributed controllers in order to indicate their strengths and weaknesses, and investigate and classify the approaches used. The paper explains in detail the differences among various types of multi-controller architectures, the distribution method and the communication system. Furthermore, it presents already implemented architectures and some examples of architectures under consideration, describing their design, communication process, and performance results. The authors give their own classification of multi-controllers and claim that, despite undeniable advantages, all reviewed controllers have serious drawbacks which must be eliminated. These drawbacks hamper the development of multi-controllers and their widespread adoption in corporate networks. In the end, the authors conclude that it is currently impossible to find a solution capable of solving all the tasks assigned to it adequately and fully. The article is published in the authors’ wording.

  1. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  2. A Resource Logic for Multi-Agent Plan Merging

    NARCIS (Netherlands)

    De Weerdt, M.M.; Bos, A.; Tonino, H.; Witteveen, C.

    2003-01-01

    In a multi-agent system, agents are carrying out certain tasks by executing plans. Consequently, the problem of finding a plan, given a certain goal, has been given a lot of attention in the literature. Instead of concentrating on this problem, the focus of this paper is on cooperation between

  3. Reduced but broader prefrontal activity in patients with schizophrenia during n-back working memory tasks: a multi-channel near-infrared spectroscopy study.

    Science.gov (United States)

    Koike, Shinsuke; Takizawa, Ryu; Nishimura, Yukika; Kinou, Masaru; Kawasaki, Shingo; Kasai, Kiyoto

    2013-09-01

    Caudal regions of the prefrontal cortex, including the dorsolateral (DLPFC) and ventrolateral (VLPFC) prefrontal cortex, are involved in essential cognitive functions such as working memory. In contrast, more rostral regions, such as the frontopolar cortex (FpC), have integrative functions among cognitive functions and thereby contribute crucially to real-world social activity. Previous functional magnetic resonance imaging studies have shown patients with schizophrenia had different DLPFC activity pattern in response to cognitive load changes compared to healthy controls; however, the spatial relationship between the caudal and rostral prefrontal activation has not been evaluated under less-constrained conditions. Twenty-six patients with schizophrenia and 26 age-, sex-, and premorbid-intelligence-matched healthy controls participated in this study. Hemodynamic changes during n-back working memory tasks with different cognitive loads were measured using multi-channel near-infrared spectroscopy (NIRS). Healthy controls showed significant task-related activity in the bilateral VLPFC and significant task-related decreased activity in the DLPFC, with greater signal changes when the task required more cognitive load. In contrast, patients with schizophrenia showed activation in the more rostral regions, including bilateral DLPFC and FpC. Neither decreased activity nor greater activation in proportion to elevated cognitive load occurred. This multi-channel NIRS study demonstrated that activation intensity did not increase in patients with schizophrenia associated with cognitive load changes, suggesting hypo-frontality as cognitive impairment in schizophrenia. On the other hand, patients had broader prefrontal activity in areas such as the bilateral DLPFC and FpC regions, thus suggesting a hyper-frontality compensatory response. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Design of a multi beam klystron cavity from its single beam parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kant, Deepender, E-mail: dkc@ceeri.ernet.in; Joshi, L. M. [CSIR-Central Electronics Engineering Research Institute, Pilani (India); Janyani, Vijay [Department of ECE, MNIT, Jaipur (India)

    2016-03-09

    The klystron is a well-known microwave amplifier which uses the kinetic energy of an electron beam for amplification of the RF signal. There are some limitations of the conventional single beam klystron, such as high operating voltage, low efficiency and bulky size at higher power levels, which are very effectively handled in the Multi Beam Klystron (MBK), which uses multiple low-perveance electron beams for RF interaction. Each beam propagates along its individual transit path through a resonant cavity structure. Multi-beam klystron cavity design is a critical task due to the asymmetric cavity structure and can be simulated only by 3D codes. The present paper discusses the design of multi beam RF cavities for klystrons operating at 2856 MHz (S-band) and 5 GHz (C-band), respectively. The design approach uses some scaling laws for finding the electron beam parameters of the multi beam device from their single beam counterparts. The scaled beam parameters are then used for finding the design parameters of the multi beam cavities. The design of the desired multi beam cavity can be optimized through iterative simulations in CST Microwave Studio.

  5. Design of a multi beam klystron cavity from its single beam parameters

    International Nuclear Information System (INIS)

    Kant, Deepender; Joshi, L. M.; Janyani, Vijay

    2016-01-01

    The klystron is a well-known microwave amplifier which uses the kinetic energy of an electron beam for amplification of the RF signal. There are some limitations of the conventional single beam klystron, such as high operating voltage, low efficiency and bulky size at higher power levels, which are very effectively handled in the Multi Beam Klystron (MBK), which uses multiple low-perveance electron beams for RF interaction. Each beam propagates along its individual transit path through a resonant cavity structure. Multi-beam klystron cavity design is a critical task due to the asymmetric cavity structure and can be simulated only by 3D codes. The present paper discusses the design of multi beam RF cavities for klystrons operating at 2856 MHz (S-band) and 5 GHz (C-band), respectively. The design approach uses some scaling laws for finding the electron beam parameters of the multi beam device from their single beam counterparts. The scaled beam parameters are then used for finding the design parameters of the multi beam cavities. The design of the desired multi beam cavity can be optimized through iterative simulations in CST Microwave Studio.

  6. Multi-agent: a technique to implement geo-visualization of networked virtual reality

    Science.gov (United States)

    Lin, Zhiyong; Li, Wenjing; Meng, Lingkui

    2007-06-01

    Networked Virtual Reality (NVR) is a system based on network connection and shared spatial information, whose demands cannot be fully met by the existing architectures and application patterns of VR. In this paper, we propose a new architecture of NVR based on a Multi-Agent framework, which includes detailed definitions of the various agents and their functions and a full description of the collaboration mechanism. Through a prototype system test with DEM data and 3D model data, the advantages of the Multi-Agent based Networked Virtual Reality system in terms of data loading time, user response time and scene construction time, etc., are verified. First, we introduce the characteristics of Networked Virtual Reality and of the Multi-Agent technique in Section 1. Then we give the architecture design of Networked Virtual Reality based on Multi-Agent in Section 2. The Section 2 content includes the rules of task division, the multi-agent architecture designed to implement Networked Virtual Reality, and the functions of the agents. Section 3 shows the prototype implementation according to the design. Finally, Section 4 discusses the benefits of using Multi-Agent to implement geo-visualization of Networked Virtual Reality.

  7. A single chip pulse processor for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Hilsenrath, F.; Bakke, J.C.; Voss, H.D.

    1985-01-01

    A high performance digital pulse processor, integrated into a single gate array microcircuit, has been developed for spaceflight applications. The new approach takes advantage of the latest CMOS high speed A/D flash converters and low-power gated logic arrays. The pulse processor measures pulse height, pulse area and the required timing information (e.g. multi-detector coincidence and pulse pile-up detection). The pulse processor features a high throughput rate (e.g. 0.5 MHz for 2 usec Gaussian pulses) and improved differential linearity (e.g. ±0.2 LSB for a ±1 LSB A/D). Because of the parallel digital architecture of the device, the interface is microprocessor bus compatible. A satellite flight application of this module is presented for use in the X-ray imager and high energy particle spectrometers of the PEM experiment on the Upper Atmospheric Research Satellite.
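
    A minimal sketch (not the flight design) of how pulse height and area can be derived from flash-ADC samples, with a crude pile-up flag; the baseline, threshold and function name are illustrative assumptions only.

        # Illustrative sketch: pulse height, area and pile-up flag from ADC samples.
        import numpy as np

        def process_pulse(samples, baseline=0.0, threshold=10.0):
            """Return (height, area, pile_up) for one digitized pulse window."""
            s = np.asarray(samples, dtype=float) - baseline
            above = s > threshold                      # samples belonging to a pulse
            height = s.max() if above.any() else 0.0   # pulse height = peak amplitude
            area = s[above].sum()                      # pulse area = summed amplitude
            rising = np.flatnonzero(np.diff(above.astype(int)) == 1)
            pile_up = len(rising) > 1                  # second pulse inside the window
            return height, area, pile_up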

  8. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. A software kernel implementation on a single processor is increasingly unable to satisfy those constraints....... This paper presents a multi-core architecture incorporating a hardware kernel on FPGAs, intended for high-performance applications in the control engineering domain. First, the hardware kernel is investigated on the basis of a component-based real-time kernel HARTEX (Hard Real-Time Executive for Control Systems...

  9. Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Fengguang [Univ. of Tennessee, Knoxville, TN (United States); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-06-01

    We present a new methodology for utilizing all CPU cores and all GPUs on a heterogeneous multicore and multi-GPU system to support matrix computations efficiently. Our approach is able to achieve the objectives of a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our main idea is to treat the heterogeneous system as a distributed-memory machine, and to use a heterogeneous 1-D block cyclic distribution to allocate data to the host system and GPUs to minimize communication. We have designed heterogeneous algorithms with two different tile sizes (one for CPU cores and the other for GPUs) to cope with processor heterogeneity. We propose an auto-tuning method to determine the best tile sizes to attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our experiments on a compute node with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs demonstrate good weak scalability, strong scalability, load balance, and efficiency of our approach.
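
    A minimal sketch of the heterogeneous 1-D block-cyclic idea described above: column blocks are dealt out cyclically to the CPU host and the GPUs, with a smaller tile width for the host. The tile widths and device count below are assumed placeholders, not the paper's auto-tuned values.

        # Sketch: heterogeneous 1-D block-cyclic column distribution (illustrative sizes).
        def block_cyclic_layout(n_cols, cpu_tile=64, gpu_tile=256, n_gpus=3):
            devices = ["cpu"] + [f"gpu{g}" for g in range(n_gpus)]
            widths = [cpu_tile] + [gpu_tile] * n_gpus
            layout, col, d = [], 0, 0
            while col < n_cols:
                w = min(widths[d], n_cols - col)
                layout.append((col, col + w, devices[d]))    # [start, end) -> owner
                col += w
                d = (d + 1) % len(devices)                   # cycle over host and GPUs
            return layout

        # Example: block_cyclic_layout(1024) assigns 64-column tiles to the CPU and
        # 256-column tiles to each GPU in a repeating cycle.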

  10. Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements

    Science.gov (United States)

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  11. MULTI-TEMPORAL AND MULTI-SENSOR IMAGE MATCHING BASED ON LOCAL FREQUENCY INFORMATION

    Directory of Open Access Journals (Sweden)

    X. Liu

    2012-08-01

    Full Text Available Image matching is often one of the first tasks in many photogrammetry and remote sensing applications. This paper presents an efficient approach to automated multi-temporal and multi-sensor image matching based on local frequency information. Two new independent image representations, Local Average Phase (LAP) and Local Weighted Amplitude (LWA), are presented to emphasize the common scene information, while suppressing the non-common illumination and sensor-dependent information. In order to get the two representations, local frequency information is first obtained from a Log-Gabor wavelet transformation, which is similar to that of the human visual system; then the outputs of the odd and even symmetric filters are used to construct the LAP and LWA. The LAP and LWA emphasize the phase and amplitude information, respectively. As these two representations are both derivative-free and threshold-free, they are robust to noise and can keep as much of the image detail as possible. A new Compositional Similarity Measure (CSM) is also presented to combine the LAP and LWA with the same weight for measuring the similarity of multi-temporal and multi-sensor images. The CSM can make the LAP and LWA compensate for each other and can make full use of the amplitude and phase of local frequency information. In many image matching applications, the template is usually selected without consideration of its matching robustness and accuracy. In order to overcome this problem, a local best matching point detection is presented to detect the best matching template. In the detection method, we employ self-similarity analysis to identify the template with the highest matching robustness and accuracy. Experimental results using some real images and simulation images demonstrate that the presented approach is effective for matching image pairs with significant scene and illumination changes and that it has advantages over other state-of-the-art approaches, which include: the
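
    A hedged sketch of the underlying quantities: per scale, the even and odd (quadrature) log-Gabor responses give a local amplitude and a local phase, which can then be combined across scales. The plain averaging below is only illustrative; the paper's exact LAP/LWA definitions and weighting may differ.

        # Illustrative only: local phase/amplitude from even/odd filter outputs per scale.
        import numpy as np

        def local_phase_amplitude(even_responses, odd_responses):
            """Inputs: lists of 2-D arrays, one (even, odd) pair per scale."""
            phases = [np.arctan2(o, e) for e, o in zip(even_responses, odd_responses)]
            amplitudes = [np.hypot(e, o) for e, o in zip(even_responses, odd_responses)]
            lap = np.mean(phases, axis=0)        # stand-in for Local Average Phase
            lwa = np.mean(amplitudes, axis=0)    # stand-in for Local Weighted Amplitude
            return lap, lwa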

  12. Proximity and physical navigation in collaborative work with a multi-touch wall-display

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Rønne; Hornbæk, Kasper

    2012-01-01

    Multi-touch, wall-sized displays afford new forms of collaboration. Yet, most data on collaboration with multi-touch displays come from tabletop settings, where users often sit and where space is a limited resource. We study how two-person groups navigate in relation to a 2.8 m × 1.2 m multi-touch display with 24.8 megapixels and to each other when solving a sensemaking task on a document collection. The results show that users physically navigate to shift fluently among different parts of the display and between parallel and joint group work....

  13. Carrier-interleaved orthogonal multi-electrode multi-carrier resistivity-measurement tool

    International Nuclear Information System (INIS)

    Cai, Yu; Sha, Shuang

    2016-01-01

    This paper proposes a new carrier-interleaved orthogonal multi-electrode multi-carrier resistivity-measurement tool used in a cylindrical borehole environment during oil-based mud drilling processes. The new tool is an orthogonal frequency division multiplexing access-based contactless multi-measurand detection tool. The tool can measure formation resistivity at different azimuthal angles and elevational depths. It can measure many more measurands simultaneously in a specified bandwidth than the legacy frequency division multiplexing multi-measurand tool without a channel-select filter, while avoiding inter-carrier interference. The paper also shows that formation resistivity is not sensitive to frequency in certain frequency bands. The average resistivity collected from N subcarriers can increase the measurement signal-to-noise ratio (SNR) by N times, given no amplitude clipping in the current-injection electrode. If the clipping limit is taken into account, with the phase rotation of each single carrier, the amplitude peak-to-average ratio can be reduced by 3 times, and the SNR can achieve a 9/N times gain over the single-carrier system. The carrier-interleaving technique is also introduced to counter the carrier frequency offset (CFO) effect, where the CFO causes inter-pad interference. A qualitative analysis and simulations demonstrate that block-interleaving performs better than tone-interleaving when coping with a large CFO. The theoretical analysis also suggests that increasing the subcarrier number can increase the measurement speed or enhance elevational resolution without sacrificing receiver performance. The complex orthogonal multi-pad multi-carrier resistivity logging tool, in which all subcarriers are complex signals, can provide a larger available subcarrier pool than other types of transceivers. (paper)
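
    The N-fold SNR gain quoted above is consistent with simple averaging under the assumption of independent, zero-mean noise of equal variance on each subcarrier, since the noise variance of the mean drops by a factor of N:

        \hat{R} \;=\; \frac{1}{N}\sum_{k=1}^{N} R_k, \qquad R_k = R + n_k,\quad
        \mathrm{E}[n_k]=0,\ \mathrm{Var}[n_k]=\sigma^2\ \text{(independent)}
        \;\;\Longrightarrow\;\; \mathrm{Var}[\hat{R}] = \frac{\sigma^2}{N},

    so the power SNR of the averaged estimate is N times that of a single subcarrier.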

  14. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to the modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem-solving methods. Chapter 1 introduces multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies such as random systems, non-Archimedean analysis and category theory, and specific techniques such as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. The categorical formulation of integrative closure offers the general PSM framework, which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  15. visPIG--a web tool for producing multi-region, multi-track, multi-scale plots of genetic data.

    Directory of Open Access Journals (Sweden)

    Matthew Scales

    Full Text Available We present VISual Plotting Interface for Genetics (visPIG; http://vispig.icr.ac.uk), a web application to produce multi-track, multi-scale, multi-region plots of genetic data. visPIG has been designed to allow users not well versed with mathematical software packages and/or programming languages such as R, Matlab®, Python, etc., to integrate data from multiple sources for interpretation and to easily create publication-ready figures. While web tools such as the UCSC Genome Browser or the WashU Epigenome Browser allow custom data uploads, such tools are primarily designed for data exploration. This is also true for the desktop-run Integrative Genomics Viewer (IGV). Other locally run data visualisation software such as Circos require significant computer skills of the user. The visPIG web application is a menu-based interface that allows users to upload custom data tracks and set track-specific parameters. Figures can be downloaded as PDF or PNG files. For sensitive data, the underlying R code can also be downloaded and run locally. visPIG is multi-track: it can display many different data types (e.g., association, functional annotation, intensity, interaction, heat map data, ...). It also allows annotation of genes and other custom features in the plotted region(s). Data tracks can be plotted individually or on a single figure. visPIG is multi-region: it supports plotting multiple regions, be they kilo- or megabases apart or even on different chromosomes. Finally, visPIG is multi-scale: a sub-region of particular interest can be 'zoomed' in. We describe the various features of visPIG and illustrate its utility with examples. visPIG is freely available through http://vispig.icr.ac.uk under a GNU General Public License (GPLv3).

  16. Interconnected Levels of Multi-Stage Marketing

    DEFF Research Database (Denmark)

    Vedel, Mette; Geersbro, Jens; Ritter, Thomas

    2012-01-01

    different levels of multi-stage marketing and illustrates these stages with a case study. In addition, a triadic perspective is introduced as an analytical tool for multi-stage marketing research. The results from the case study indicate that multi-stage marketing exists on different levels. Thus, managers...... must not only decide in general on the merits of multi-stage marketing for their firm, but must also decide on which level they will engage in multi-stage marketing. The triadic perspective enables a rich and multi-dimensional understanding of how different business relationships influence each other...... in a multi-stage marketing context. This understanding assists managers in assessing and balancing different aspects of multi- stage marketing. The triadic perspective also offers avenues for further research....

  17. Realization of multi-parameter and multi-state in fault tree computer-aided building software

    International Nuclear Information System (INIS)

    Guo Xiaoli; Tong Jiejuan; Xue Dazhi

    2004-01-01

    More than one parameter and more than one failed state of a parameter are often involved in building a fault tree, so it is necessary for fault tree computer-aided building software to deal with multi-parameter and multi-state cases. The Fault Tree Expert System (FTES) has the target of aiding the FT-building work for hydraulic systems. This paper expatiates on how to realize multi-parameter and multi-state handling in FTES, with a focus on the Knowledge Base and the Illation Engine. (author)

  18. The Multi-Functional Implement: A tool to jump-start development

    OpenAIRE

    Moore, Keith M.

    2013-01-01

    Metadata only record. This article describes the advantages of the Multi-Functional Implement, a tool that can be used for a variety of farm tasks in the context of conservation agriculture. CCRA-8 (Technology Networks for Sustainable Innovation)

  19. Fast reconstruction of multi-strange hyperons in the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Vassiliev, Iouri [GSI, Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The main goal of the CBM experiment is to study the behaviour of nuclear matter at very high baryonic density, in which the transition to a deconfined and chirally restored phase is expected to happen. One of the promising signatures of this new state is the enhanced production of multi-strange particles; therefore the reconstruction of multi-strange hyperons is essential for understanding the heavy-ion collision dynamics. Another experimental challenge of the CBM experiment is the online selection of open charm particles via the displaced vertex of the hadronic decay, charmonium and low-mass vector mesons in the environment of a heavy-ion collision. This task requires fast and efficient track reconstruction algorithms, a primary vertex finder and a particle finder. Results of feasibility studies of the multi-strange hyperons in the CBM experiment are presented.

  20. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  1. Multi-beam synchronous measurement based on PSD phase detection using frequency-domain multiplexing

    Science.gov (United States)

    Duan, Ying; Qin, Lan; Xue, Lian; Xi, Feng; Mao, Jiubing

    2013-10-01

    According to the principle of centroid measurement, position-sensitive detectors (PSDs) are commonly used for micro-displacement detection. However, the single-beam detection method cannot satisfy tasks such as multi-dimension position measurement, three-dimensional vision reconstruction, and robot precision positioning, which require synchronous measurement of multiple light beams. Consequently, we designed a PSD phase detection method using frequency-domain multiplexing (FDM) for synchronous detection of multiple modulated light beams. Compared to the previous PSD amplitude detection method, the phase detection method using FDM has the advantages of a simplified measuring system, low cost, high resistance to light interference, and improved resolution. The feasibility of multi-beam synchronous measurement based on PSD phase detection using FDM was validated by multi-beam measuring experiments. The maximum non-linearity error of the multi-beam synchronous measurement is 6.62%.
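
    A hedged sketch of the frequency-domain multiplexing idea: each beam is modulated at its own carrier, the two PSD electrode signals are demodulated per carrier (lock-in style), and the position follows from the normalized difference of the recovered amplitudes. This shows only the FDM separation; the paper's phase-detection variant derives position from phase rather than amplitude, and all names and parameters here are assumptions.

        # Illustrative FDM demodulation for a 1-D PSD; parameters are placeholders.
        import numpy as np

        def fdm_psd_positions(i1, i2, fs, carriers, half_length=1.0):
            """i1, i2: electrode current samples; carriers: one frequency per beam (Hz)."""
            i1, i2 = np.asarray(i1, float), np.asarray(i2, float)
            t = np.arange(len(i1)) / fs
            positions = {}
            for f in carriers:
                ref_c, ref_s = np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)
                # I/Q averaging acts as a narrow-band (lock-in) filter at carrier f.
                a1 = np.hypot(np.mean(i1 * ref_c), np.mean(i1 * ref_s))
                a2 = np.hypot(np.mean(i2 * ref_c), np.mean(i2 * ref_s))
                positions[f] = half_length * (a2 - a1) / (a2 + a1)   # centroid position
            return positions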

  2. Investment Portfolio Formation Using Multi-criteria evaluation Method MULTIMOORA

    Directory of Open Access Journals (Sweden)

    Vilius Vaišvilas

    2017-06-01

    Full Text Available Information that has to be analyzed by investors is complicated and can be interpreted differently by different people, which is why choosing what should be added to an investment portfolio is a complicated task. The complexity grows substantially when there are more alternatives to choose from. A multi-criteria evaluation method can be used to choose the best alternatives. The multi-criteria evaluation method MULTIMOORA is not subjective because there is no need to decide the relative weight of any given variable that is evaluated. MULTIMOORA consists of the formation of a ratio system, the application of the multi-criteria evaluation method, and investment evaluation and ranking. The purpose of this article is to apply the multi-criteria evaluation method MULTIMOORA to the formation and management of an investment portfolio from stocks of Baltic stock market companies. Methods used in the analysis for the article: analysis of scientific literature, statistical analysis, organization and comparison of data, idealization, and calculations of MULTIMOORA.
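
    A minimal sketch of the ratio-system part of MULTIMOORA (the reference-point and full multiplicative parts are omitted): each criterion column is vector-normalized, benefit criteria are added and cost criteria subtracted, and alternatives are ranked by the resulting score.

        # Sketch of the MOORA ratio system used inside MULTIMOORA.
        import numpy as np

        def moora_ratio_system(X, benefit_mask):
            """X: alternatives x criteria matrix; benefit_mask: True where higher is better."""
            norm = X / np.sqrt((X ** 2).sum(axis=0))       # vector-normalize each criterion
            signs = np.where(benefit_mask, 1.0, -1.0)
            scores = (norm * signs).sum(axis=1)            # benefit minus cost contributions
            return scores, np.argsort(-scores)             # scores and ranking (best first)

        # Example with assumed stock criteria (return = benefit, volatility = cost):
        # moora_ratio_system(np.array([[0.12, 0.30], [0.08, 0.18]]), np.array([True, False]))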

  3. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    Energy Technology Data Exchange (ETDEWEB)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  4. Multi-objective optimization for generating a weighted multi-model ensemble

    Science.gov (United States)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic
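
    A hedged sketch of the trade-off problem described above: with several error metrics per model, one simple (but not the paper's) strategy is to keep only the Pareto-optimal (non-dominated) models and give them inverse-error weights, while dominated models receive zero weight.

        # Illustrative only: Pareto filtering plus inverse-error ensemble weights.
        import numpy as np

        def pareto_ensemble_weights(errors):
            """errors: models x metrics array, every metric to be minimized."""
            n = errors.shape[0]
            dominated = np.zeros(n, dtype=bool)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i]):
                        dominated[i] = True      # model i is strictly worse than model j
                        break
            w = np.where(dominated, 0.0, 1.0 / (errors.sum(axis=1) + 1e-12))
            return w / w.sum()                   # normalized weights for the ensemble mean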

  5. Emotion regulation and conflict transformation in multi-team systems

    NARCIS (Netherlands)

    Curseu, P.L.; Meeus, M.T.H.

    2014-01-01

    Purpose: The aim of this paper is to test the moderating role of emotion regulation in the transformation of both task and process conflict into relationship conflict. Design/methodology/approach: A field study of multi-team systems, in which (94) respondents are engaged in interpersonal and

  6. Multi-component and multi-array TEM detection in karst tunnels

    International Nuclear Information System (INIS)

    Sun, Huaifeng; Li, Shucai; Su, Maoxin; Xue, Yiguo; Li, Xiu; Qi, Zhipeng

    2012-01-01

    Emerging applications of transient electromagnetic methods (TEM) in tunnelling require higher resolution on the distributions and shapes of low resistivity bodies, such as karst water and karst pipes, using multi-component and multi-array receivers. However, there are no apparent resistivity definitions for both vertical and horizontal components with offsets inside the loop. Although the raw field can show the differences of the earth electric structure, it is not straightforward. Apparent resistivity is very convenient and easy for engineers. We have developed a method for multi-component and multi-array TEM which can be applied in tunnelling and defined the expressions of apparent resistivity. This method takes advantage of the difference in resolution among components. A homogeneous half-space model and four typical three-layered models are used to test the effectiveness of the new definition. A field case history is carried out and analysed to demonstrate the viability of this technique. The results suggest that it is feasible to use the technique in tunnelling, especially for identifying the spatial distribution of karst water and karst pipes. (paper)

  7. A New Multi-Sensor Track Fusion Architecture for Multi-Sensor Information Integration

    National Research Council Canada - National Science Library

    Jean, Buddy H; Younker, John; Hung, Chih-Cheng

    2004-01-01

    .... This new technology will integrate multi-sensor information and extract integrated multi-sensor information to detect, track and identify multiple targets at any time, in any place under all weather conditions...

  8. Multi-processor system for real-time flow estimation in medical ultrasound imaging

    DEFF Research Database (Denmark)

    Stetson, Paul F.; Jensen, Jesper Lomborg; Antonius, Peter

    1997-01-01

    the processed data. The generous bandwidth of the links makes it easy to balance the computational load among the processors. In order to manage the shared system memory and to make use of the parallel processing capabilities of the system, a real-time multitasking kernel has been developed. The kernel uses...

  9. MULTI AGENT-BASED ENVIRONMENTAL LANDSCAPE (MABEL) - AN ARTIFICIAL INTELLIGENCE SIMULATION MODEL: SOME EARLY ASSESSMENTS

    OpenAIRE

    Alexandridis, Konstantinos T.; Pijanowski, Bryan C.

    2002-01-01

    The Multi Agent-Based Environmental Landscape model (MABEL) introduces a Distributed Artificial Intelligence (DAI) systemic methodology to simulate land use and transformation changes over time and space. Computational agents represent abstract relations among geographic, environmental, human and socio-economic variables, with respect to land transformation pattern changes. A multi-agent environment is developed providing task-nonspecific problem-solving abilities, flexibility on achieving g...

  10. A multi-solver quasi-Newton method for the partitioned simulation of fluid-structure interaction

    International Nuclear Information System (INIS)

    Degroote, J; Annerel, S; Vierendeels, J

    2010-01-01

    In partitioned fluid-structure interaction simulations, the flow equations and the structural equations are solved separately. Consequently, the stresses and displacements on both sides of the fluid-structure interface are not automatically in equilibrium. Coupling techniques like Aitken relaxation and the Interface Block Quasi-Newton method with approximate Jacobians from Least-Squares models (IBQN-LS) enforce this equilibrium, even with black-box solvers. However, all existing coupling techniques use only one flow solver and one structural solver. To benefit from the large number of multi-core processors in modern clusters, a new Multi-Solver Interface Block Quasi-Newton (MS-IBQN-LS) algorithm has been developed. This algorithm uses more than one flow solver and structural solver, each running in parallel on a number of cores. One-dimensional and three-dimensional numerical experiments demonstrate that the run time of a simulation decreases as the number of solvers increases, albeit at a slower pace. Hence, the presented multi-solver algorithm accelerates fluid-structure interaction calculations by increasing the number of solvers, especially when the run time does not decrease further if more cores are used per solver.
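
    For reference, a hedged sketch of the simpler dynamic Aitken relaxation mentioned above (not the MS-IBQN-LS algorithm itself): the interface update is relaxed with a factor adapted from successive residuals. Here fluid_then_structure stands in for one black-box pass through both solvers and is a placeholder, not a real API.

        # Illustrative dynamic Aitken under-relaxation for a partitioned FSI iteration.
        import numpy as np

        def aitken_coupling(x, fluid_then_structure, omega=0.5, tol=1e-8, max_iter=50):
            r_prev = None
            for _ in range(max_iter):
                x_new = fluid_then_structure(x)     # structural displacement from fluid load
                r = x_new - x                       # interface residual
                if np.linalg.norm(r) < tol:
                    return x_new
                if r_prev is not None:
                    dr = r - r_prev
                    omega = -omega * (r_prev @ dr) / (dr @ dr)   # adapted relaxation factor
                x = x + omega * r                   # relaxed interface update
                r_prev = r
            return x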

  11. Investigations of Orchestra Auralizations Using the Multi-Channel Multi-Source Auralization Technique

    DEFF Research Database (Denmark)

    Vigeant, Michelle; Wang, Lily M.; Rindel, Jens Holger

    2008-01-01

    a multi-channel multi-source auralization technique, involving individual five-channel anechoic recordings of each instrumental part of two symphonies. In the first study, these auralizations were subjectively compared to orchestra auralizations made using (a) a single omni-directional source, (b) a surface source, and (c) the single-channel multi-source method. Results show that the multi-source auralizations were rated to be more realistic than the surface source ones and to have larger source width than the single omni-directional source auralizations. No significant differences were found between......Room acoustics computer modeling is a tool for generating impulse responses and auralizations from modeled spaces. The auralizations are commonly made from a single-channel anechoic recording of solo instruments. For this investigation, auralizations of an entire orchestra were created using...

  12. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  13. Analysis of a yearly multi-round, multi-period, multi-product transmission rights auction

    International Nuclear Information System (INIS)

    Ziogos, N.P.; Bakirtzis, A.G.

    2008-01-01

    A yearly multi-round, multi-period, multi-product transmission rights (TR) auction issuing both point-to-point Financial Transmission Rights (FTRs) and Flow-Gate Rights (FGRs) is studied in this paper. In each round the TR market participants (buyers or sellers) submit their bid or offer prices based on past energy market performance. A Locational Marginal Pricing (LMP) based energy market is assumed. The TR market participants' bid or offer prices reflect their expectation of the average annual LMP differences between withdrawal and injection points for FTRs and of the transmission link capacity prices for FGRs. The TR auction is performed in four rounds; in each round 25% of the entire system capability is awarded. TRs that are awarded in one round are modeled as fixed injections in subsequent rounds. Market participants that have acquired TRs in one round can sell them in subsequent rounds. A market participant can submit bids or offers for on-peak, off-peak or 24-h TRs. A three-area, nine-bus test system with six TR market participants is used for the analysis of the TR auction. (author)

  14. Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Multidisciplinary design and optimization (MDO) tools developed to perform multi-disciplinary analysis based on low fidelity computation methods have been used in...

  15. Multi-head Watson-Crick automata

    OpenAIRE

    Chatterjee, Kingshuk; Ray, Kumar Sankar

    2015-01-01

    Inspired by multi-head finite automata and Watson-Crick automata, in this paper we introduce a new structure, namely multi-head Watson-Crick automata, where we replace the single tape of the multi-head finite automaton by a DNA double strand. The content of the second tape is determined using a complementarity relation similar to the Watson-Crick complementarity relation. We establish the superiority of our model over multi-head finite automata and also show that both the deterministic and non-determinis...

  16. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  17. Re-weighted Discriminatively Embedded K-Means for Multi-view Clustering.

    Science.gov (United States)

    Xu, Jinglin; Han, Junwei; Nie, Feiping; Li, Xuelong

    2017-02-08

    In recent years, more and more multi-view data have been widely used in many real-world applications. This kind of data (such as image data) is high dimensional and obtained from different feature extractors, which represent distinct perspectives of the data. How to cluster such data efficiently is a challenge. In this paper, we propose a novel multi-view clustering framework, called Re-weighted Discriminatively Embedded K-Means (RDEKM), for this task. The proposed method is a multi-view least-absolute residual model which induces robustness, efficiently mitigates the influence of outliers and realizes dimension reduction during multi-view clustering. Specifically, the proposed model is an unsupervised optimization scheme which utilizes Iterative Re-weighted Least Squares to solve the least-absolute residual and adaptively controls the distribution of multiple weights in a re-weighted manner based only on its own low-dimensional subspaces and a common clustering indicator matrix. Furthermore, theoretical analysis (including optimality and convergence analysis) and the optimization algorithm are also presented. Compared to several state-of-the-art multi-view clustering methods, the proposed method substantially improves the accuracy of the clustering results on widely used benchmark datasets, which demonstrates the superiority of the proposed work.
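
    A hedged sketch of the Iteratively Re-weighted Least Squares mechanism cited above, shown for a generic linear least-absolute (L1) residual rather than the full RDEKM clustering objective: each pass solves a weighted least-squares problem whose weights are the inverse absolute residuals of the previous pass.

        # Illustrative IRLS for an L1 residual on a generic linear model A x ~ b.
        import numpy as np

        def irls_l1(A, b, n_iter=30, eps=1e-6):
            x = np.linalg.lstsq(A, b, rcond=None)[0]            # ordinary LS start
            for _ in range(n_iter):
                w = 1.0 / np.maximum(np.abs(b - A @ x), eps)    # re-weight by 1/|residual|
                sw = np.sqrt(w)
                x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
            return x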

  18. Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements

    Science.gov (United States)

    1997-10-31

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  19. PPM-based System for Guided Waves Communication Through Corrosion Resistant Multi-wire Cables

    Science.gov (United States)

    Trane, G.; Mijarez, R.; Guevara, R.; Pascacio, D.

    Novel wireless communication channels are a necessity in applications surrounded by harsh environments, for instance down-hole oil reservoirs. Traditional radio frequency (RF) communication schemes are not capable of transmitting signals through metal enclosures surrounded by corrosive gases and liquids. As an alternative to RF, a pulse position modulation (PPM) guided waves communication system has been developed and evaluated using a corrosion resistant 4H18 multi-wire cable, commonly used to descend electronic gauges in down-hole oil applications, as the communication medium. The system consists of a transmitter and a receiver that each utilize a PZT crystal, for electrical/mechanical coupling, attached to one end of the multi-wire cable. The modulator is based on a microcontroller, which transmits 60 kHz guided wave pulses, and the demodulator is based on a commercial digital signal processor (DSP) module that performs real-time DSP algorithms. Experimental results are presented, which were obtained using a 1 m corrosion resistant 4H18 multi-wire cable, commonly used with down-hole electronic gauges in the oil sector. Although there was significant dispersion and multiple mode excitation of the transmitted guided wave energy pulses, the results show that data rates on the order of 500 bits per second are readily available employing PPM and simple communication techniques.
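
    A hedged sketch of the pulse position modulation scheme described above: each symbol selects which slot of a frame carries the guided-wave burst. The slot count and slot duration below are illustrative, not the instrument's actual timing.

        # Illustrative PPM symbol <-> burst-time mapping (timings are placeholders).
        def ppm_encode(symbols, slots_per_frame=4, slot_us=500.0):
            """Return burst start times in microseconds for a stream of PPM symbols."""
            frame_us = slots_per_frame * slot_us
            return [k * frame_us + s * slot_us for k, s in enumerate(symbols)]

        def ppm_decode(times, slots_per_frame=4, slot_us=500.0):
            frame_us = slots_per_frame * slot_us
            return [int((t % frame_us) // slot_us) for t in times]

        # Example: ppm_decode(ppm_encode([0, 3, 1])) == [0, 3, 1]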

  20. High-Level Design for Ultra-Fast Software Defined Radio Prototyping on Multi-Processors Heterogeneous Platforms

    OpenAIRE

    Moy , Christophe; Raulet , Mickaël

    2010-01-01

    International audience; The design of Software Defined Radio (SDR) equipment (terminals, base stations, etc.) is still very challenging. We propose here a design methodology for ultra-fast prototyping on heterogeneous platforms made of GPPs (General Purpose Processors), DSPs (Digital Signal Processors) and FPGAs (Field Programmable Gate Arrays). Relying on a component-based approach, the methodology mainly aims at automating as much as possible the design from an algorithmic validation to a mul...

  1. Multi-filter spectrophotometry simulations

    Science.gov (United States)

    Callaghan, Kim A. S.; Gibson, Brad K.; Hickson, Paul

    1993-01-01

    To complement both the multi-filter observations of quasar environments described in these proceedings, as well as the proposed UBC 2.7 m Liquid Mirror Telescope (LMT) redshift survey, we have initiated a program of simulated multi-filter spectrophotometry. The goal of this work, still very much in progress, is a better quantitative assessment of the multiband technique as a viable mechanism for obtaining useful redshift and morphological class information from large scale multi-filter surveys.

  2. Case Study of Multi-Unit Risk: Multi-Unit Station Black-Out

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kyemin; Jang, Seung-cheol [KAERI, Daejeon (Korea, Republic of); Heo, Gyunyoung [Kyung Hee University, Yongin (Korea, Republic of)

    2015-05-15

    After the Fukushima Daiichi accident, the importance of and public concern for Multi-Unit Risk (MUR) or Probabilistic Safety Assessment (PSA) have increased. Most nuclear power plant sites in the world have more than two units. These sites have been facing the problems of MUR or accidents such as Fukushima. In the case of South Korea, there are generally more than four units on the same site, and even more than ten units are also expected. In other words, sites in South Korea have also been facing the same problems. Considering the number of units on the same site, the potential of these problems may be larger than in other countries. The purpose of this paper is to perform a case study based on another paper submitted to the conference. MUR depends on various site features such as design, shared systems/structures, layout, environmental conditions, and so on. Considering various dependencies, we assessed a Multi-Unit Station Black-Out (MSBO) accident based on the Hanul Unit 3 and 4 model. In this paper, a case study for multi-unit risk or PSA has been performed. Our result is incomplete for assessing the total multi-unit risk because of two challenging issues. First, the economic impact has not been evaluated to estimate multi-unit risk. Second, large uncertainties are included in our result because of various assumptions. These issues must be resolved in the future.

  3. Case Study of Multi-Unit Risk: Multi-Unit Station Black-Out

    International Nuclear Information System (INIS)

    Oh, Kyemin; Jang, Seung-cheol; Heo, Gyunyoung

    2015-01-01

    After the Fukushima Daiichi accident, the importance of and public concern for Multi-Unit Risk (MUR) or Probabilistic Safety Assessment (PSA) have increased. Most nuclear power plant sites in the world have more than two units. These sites have been facing the problems of MUR or accidents such as Fukushima. In the case of South Korea, there are generally more than four units on the same site, and even more than ten units are also expected. In other words, sites in South Korea have also been facing the same problems. Considering the number of units on the same site, the potential of these problems may be larger than in other countries. The purpose of this paper is to perform a case study based on another paper submitted to the conference. MUR depends on various site features such as design, shared systems/structures, layout, environmental conditions, and so on. Considering various dependencies, we assessed a Multi-Unit Station Black-Out (MSBO) accident based on the Hanul Unit 3 and 4 model. In this paper, a case study for multi-unit risk or PSA has been performed. Our result is incomplete for assessing the total multi-unit risk because of two challenging issues. First, the economic impact has not been evaluated to estimate multi-unit risk. Second, large uncertainties are included in our result because of various assumptions. These issues must be resolved in the future.

  4. Autonomous transmission power adaptation for multi-radio multi-channel wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-09-01

    Full Text Available Multi-Radio Multi-Channel (MRMC) systems are key to power control problems in WMNs. Previous studies have emphasized through- put maximization in such systems as the main design challenge and transmission power control treated as a secondary issue...

  5. Autonomous transmission power adaptation for multi-radio multi-channel wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2008-09-01

    Full Text Available Multi-Radio Multi-Channel (MRMC) systems are key to power control problems in WMNs. Previous studies have emphasized throughput maximization in such systems as the main design challenge and transmission power control treated as a secondary issue...

  6. Design of High-Precision Infrared Multi-Touch Screen Based on the EFM32

    Directory of Open Access Journals (Sweden)

    Zhong XIAOLING

    2014-07-01

    Full Text Available Due to the low accuracy of traditional infrared multi-touch screens, it is difficult to ascertain the touch point. This paper puts forward a design scheme for a high-precision infrared multi-touch screen based on the ARM Cortex-M3 based EFM32 processor. By scanning only the tracked area after an initial full scan at power-up, the design greatly improves scanning efficiency and response speed. Based on the differences in infrared characteristics, a data-fitting algorithm is put forward: the relationship between the covered area and the sampling value is fitted to a curve, the differential characteristic curve of the infrared sampling values is obtained, and a table of sampling-value differences is established, finally giving the precise location of the touch point. Practice has shown that the accuracy of the infrared touch screen can reach 0.5 mm. The design uses a standard USB port to connect to a PC and can also be widely used in various terminals.

  7. Teamwork in Multi-Agent Systems A Formal Approach

    CERN Document Server

    Dunin-Keplicz, Barbara Maria

    2010-01-01

    What makes teamwork tick? Cooperation matters, in daily life and in complex applications. After all, many tasks need more than a single agent to be effectively performed. Therefore, teamwork rules! Teams are social groups of agents dedicated to the fulfilment of particular persistent tasks. In modern multiagent environments, heterogeneous teams often consist of autonomous software agents, various types of robots and human beings. Teamwork in Multi-agent Systems: A Formal Approach explains teamwork rules in terms of agents' attitudes and their complex interplay. It provides the first comprehe

  8. Prediction of Quadcopter State through Multi-Microphone Side-Channel Fusion

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Garg, Kashish; Kim, Munsung; Li, Jonathan; Volk, Anja; Franchetti, Franz

    Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more tasks become autonomous. We present a multi-microphone machine learning fusion approach to accurately predict complex states of a quadcopter drone in flight from the sound it makes using audio content

  9. CGLXTouch: A multi-user multi-touch approach for ultra-high-resolution collaborative workspaces

    KAUST Repository

    Ponto, Kevin

    2011-06-01

    This paper presents an approach for empowering collaborative workspaces through ultra-high resolution tiled display environments concurrently interfaced with multiple multi-touch devices. Multi-touch table devices are supported along with portable multi-touch tablet and phone devices, which can be added to and removed from the system on the fly. Events from these devices are tagged with a device identifier and are synchronized with the distributed display environment, enabling multi-user support. As many portable devices are not equipped to render content directly, a remotely rendered scene is streamed in. The presented approach scales for large numbers of devices, providing access to a multitude of hands-on techniques for collaborative data analysis. © 2011 Elsevier B.V. All rights reserved.

  10. Multi-Band Multi-Tone Tunable Millimeter-Wave Frequency Synthesizer For Satellite Beacon Transmitter

    Science.gov (United States)

    Simons, Rainee N.; Wintucky, Edwin G.

    2016-01-01

    This paper presents the design and test results of a multi-band multi-tone tunable millimeter-wave frequency synthesizer, based on a solid-state frequency comb generator. The intended application of the synthesizer is in a satellite beacon transmitter for radio wave propagation studies at K-band (18 to 26.5 GHz), Q-band (37 to 42 GHz), and E-band (71 to 76 GHz). In addition, the architecture for a compact beacon transmitter, which includes the multi-tone synthesizer, polarizer, horn antenna, and power/control electronics, has been investigated for a notional space-to-ground radio wave propagation experiment payload on a small satellite. The above studies would enable the design of robust high throughput multi-Gbps data rate future space-to-ground satellite communication links.

  11. An Evolutionary Approach for Bilevel Multi-objective Problems

    Science.gov (United States)

    Deb, Kalyanmoy; Sinha, Ankur

    Evolutionary multi-objective optimization (EMO) algorithms have been extensively applied to find multiple near Pareto-optimal solutions over the past 15 years or so. However, EMO algorithms for solving bilevel multi-objective optimization problems have not received adequate attention yet. These problems appear in many applications in practice and involve two levels, each comprising multiple conflicting objectives. These problems require every feasible upper-level solution to satisfy optimality of a lower-level optimization problem, thereby making them difficult to solve. In this paper, we discuss a recently proposed bilevel EMO procedure and show its working principle on a couple of test problems and on a business decision-making problem. This paper should motivate other EMO researchers to engage more in this important optimization task of practical relevance.
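
    A generic statement of a bilevel multi-objective problem of the kind discussed above (notation is generic, not taken from the paper): the upper level optimizes its own vector of objectives only over lower-level vectors that are themselves optimal for the lower-level problem.

        \begin{aligned}
        \min_{x_u,\;x_l}\ & F(x_u, x_l) = \bigl(F_1(x_u, x_l), \dots, F_M(x_u, x_l)\bigr)\\
        \text{s.t.}\ & x_l \in \operatorname*{argmin}_{x_l}
            \bigl\{\, f(x_u, x_l) = (f_1, \dots, f_m) \;:\; g(x_u, x_l) \le 0 \,\bigr\},\\
        & G(x_u, x_l) \le 0 .
        \end{aligned}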

  12. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    Science.gov (United States)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while minimizing the total cost and maximizing overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.

  13. MULTI-PHOTON PHOSPHOR FEASIBILITY RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    R. Graham; W. Chow

    2003-05-01

    Development of multi-photon phosphor materials for discharge lamps represents a goal that would achieve up to a doubling of discharge (fluorescent) lamp efficacy. This report reviews the existing literature on multi-photon phosphors, identifies obstacles in developing such phosphors, and recommends directions for future research to address these obstacles. To critically examine issues involved in developing a multi-photon phosphor, the project brought together a team of experts from universities, national laboratories, and an industrial lamp manufacturer. Results and findings are organized into three categories: (1) Multi-Photon Systems and Processes, (2) Chemistry and Materials Issues, and (3) Concepts and Models. Multi-Photon Systems and Processes: This category focuses on how to use our current understanding of multi-photon phosphor systems to design new phosphor systems for application in fluorescent lamps. The quickest way to develop multi-photon lamp phosphors lies in finding sensitizer ions for Gd{sup 3+} and identifying activator ions to red shift the blue emission from Pr{sup 3+} due to the {sup 1}S{sub 0} {yields} {sup 1}I{sub 6} transition associated with the first cascading step. Success in either of these developments would lead to more efficient fluorescent lamps. Chemistry and Materials Issues: The most promising multi-photon phosphors are found in fluoride hosts. However, stability of fluorides in environments typically found in fluorescent lamps needs to be greatly improved. Experimental investigation of fluorides in actual lamp environments needs to be undertaken while working on oxide and oxyfluoride alternative systems for backup. Concepts and Models: Successful design of a multi-photon phosphor system based on cascading transitions of Gd{sup 3+} and Pr{sup 3+} depends critically on how the former can be sensitized and the latter can sensitize an activator ion. Methods to predict energy level diagrams and Judd-Ofelt parameters of multi

  14. Multi-time, multi-scale correlation functions in turbulence and in turbulent models

    NARCIS (Netherlands)

    Biferale, L.; Boffetta, G.; Celani, A.; Toschi, F.

    1999-01-01

    A multifractal-like representation for multi-time, multi-scale velocity correlation in turbulence and dynamical turbulent models is proposed. The importance of subleading contributions to time correlations is highlighted. The fulfillment of the dynamical constraints due to the equations of motion is

  15. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

    To solve diffusion problems on cement media, two scales must be taken into account: a fine scale, which describes the micrometer-wide microstructures present in the media, and a work scale, which is usually a few meters long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to assess both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can run multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the more time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we manage to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author)
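
    For orientation, the standard periodic-homogenization formulas that such two-scale computations typically target (generic notation, not necessarily the thesis' exact formulation): a cell problem is solved on the microstructure Y for each direction, and its correctors yield the effective diffusivity used at the work scale.

        % Cell problem on the periodic cell Y, one corrector chi_j per direction e_j:
        \nabla_y \cdot \bigl( D(y)\,(\nabla_y \chi_j + e_j) \bigr) = 0 \ \text{in } Y,
        \qquad \chi_j \ \text{Y-periodic}.
        % Homogenized (effective) diffusivity:
        D^{\mathrm{eff}}_{ij} = \frac{1}{|Y|} \int_Y D(y)\,\bigl( \delta_{ij} + \partial_{y_i}\chi_j \bigr)\, dy .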

  16. THE INTEGRATION OF EDUCATION IN MULTI-RACIAL AND MULTI-CULTURAL SOCIETY

    Directory of Open Access Journals (Sweden)

    Chamisah Chamisah

    2017-07-01

    Full Text Available This study aims to identify the reasons why education in a multi-racial and multi-cultural society demands integration. The study, which focuses on Malaysia, a country typified by three major ethnic groups, namely Malays, Chinese and Indians, found that, firstly, the integration of the curriculum in creating a holistic education is vital for the society to create competitive human capital with values such as trustworthiness, dedication, creativity, civic awareness and many more. Secondly, an integrated curriculum emphasizes the equity and equality of education for all. Through interacting with individuals from a range of religious and ethnic backgrounds, people will learn to understand, accept and embrace differences. Through sharing experiences and aspirations, a common national identity and ultimately unity can be achieved. Therefore, integrating the curriculum means not only integrating Islamic concepts into the education system but also integrating all the concepts that make one comprehensive curriculum. Thirdly, it is important to integrate the curriculum with a value-laden perspective to encourage solidarity and harmony. Keywords: Multi-racial; Multi-cultural society; Education; Malaysia

  17. First observation of multi-pulse X-ray train via multi-collision laser Compton scattering

    International Nuclear Information System (INIS)

    Kuroda, R.; Toyokawa, H.; Yasumoto, M.; Ikeura-Sekiguchi, H.; Koike, M.; Yamada, K.; Yanagida, T.; Nakajyo, T.; Sakai, F.

    2009-01-01

    A compact hard X-ray source via laser Compton scattering (LCS) has been developed for biological and medical applications at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan. Multi-collision LCS has been investigated in order to enhance the X-ray yield. The first observation of a multi-pulse X-ray train with 6 pulses via multi-collision LCS has been successfully demonstrated between a multi-bunch electron train with 6 bunches and a multi-pulse Ti:Sa laser train with 6 pulses. The 32 MeV electron train was generated from a Cs2Te photocathode rf gun with a multi-pulse UV laser and the S-band linac. The Ti:Sa laser train was obtained with chirped pulse amplification (CPA) including the modified regenerative amplifier. The X-ray train with 6 pulses with 12.6 ns spacing was observed with a micro-channel plate (MCP). The maximum energy of the X-rays is analytically estimated to be about 24 keV, and the total number of generated photons was calculated to be about 1.8x10^6 photons/train.
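
    The quoted ~24 keV maximum is consistent with the usual head-on Compton back-scattering estimate, assuming an 800 nm (about 1.55 eV) Ti:Sapphire photon; the laser wavelength is an assumption here, not stated above.

        E_{x,\max} \simeq 4\gamma^{2} E_{L}, \qquad
        \gamma \approx \frac{32\ \mathrm{MeV}}{0.511\ \mathrm{MeV}} \approx 62.6, \qquad
        E_{x,\max} \approx 4 \times 62.6^{2} \times 1.55\ \mathrm{eV} \approx 24\ \mathrm{keV}.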

  18. Multi-objective design of PV-wind-diesel-hydrogen-battery systems

    Energy Technology Data Exchange (ETDEWEB)

    Dufo-Lopez, Rodolfo; Bernal-Agustin, Jose L. [Department of Electrical Engineering, University of Zaragoza, Calle Maria de Luna 3, 50018-Zaragoza (Spain)

    2008-12-15

    This paper presents, for the first time, a triple multi-objective design of isolated hybrid systems minimizing, simultaneously, the total cost throughout the useful life of the installation, pollutant emissions (CO{sub 2}) and unmet load. For this task, a multi-objective evolutionary algorithm (MOEA) and a genetic algorithm (GA) have been used in order to find the best combination of components of the hybrid system and control strategies. As an example of application, a complex PV-wind-diesel-hydrogen-battery system has been designed, obtaining a set of possible solutions (Pareto Set). The results achieved demonstrate the practical utility of the developed design method. (author)

  19. Developing a Multi-Dimensional Evaluation Framework for Faculty Teaching and Service Performance

    Science.gov (United States)

    Baker, Diane F.; Neely, Walter P.; Prenshaw, Penelope J.; Taylor, Patrick A.

    2015-01-01

    A task force was created in a small, AACSB-accredited business school to develop a more comprehensive set of standards for faculty performance. The task force relied heavily on faculty input to identify and describe key dimensions that capture effective teaching and service performance. The result is a multi-dimensional framework that will be used…

  20. Multi-Threaded Algorithms for General purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  1. Range based power control for multi-radio multi-channel wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-08-01

    Full Text Available Multi-Radio Multi-Channel (MRMC) systems are key to power control problems in Wireless Mesh Networks (WMNs). In this paper, the researchers present a range-based dynamic power control for MRMC WMNs. First, the WMN is represented as a set of disjoint Unified...

  2. Multi-threaded software framework development for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; Baines, John; Bold, Tomasz; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant co...
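
    As a rough, language-agnostic illustration of the three migration levels mentioned above (un-migrated thread-unsafe code, thread-safe copyable code, and reentrant code) in a task-parallel setting, the sketch below runs toy algorithms under a thread pool; the class and function names are hypothetical and do not come from AthenaMT:

```python
# Toy illustration of three thread-safety levels for algorithmic code in a task-parallel
# framework. All names are hypothetical; this is not AthenaMT code.
import threading
from concurrent.futures import ThreadPoolExecutor

class ReentrantAlg:
    """Stateless: a single shared instance can run on any thread concurrently."""
    def execute(self, event):
        return event * 2                      # touches no mutable members

class CopyableAlg:
    """Keeps per-call scratch state, so each worker thread gets its own clone."""
    def __init__(self):
        self.scratch = []
    def execute(self, event):
        self.scratch = [event]                # safe: this instance is thread-local
        return sum(self.scratch) * 2

class LegacyAlg:
    """Un-migrated, thread-unsafe code: every call is serialised behind a lock."""
    _lock = threading.Lock()
    counter = 0
    def execute(self, event):
        with LegacyAlg._lock:
            LegacyAlg.counter += 1
            return event * 2

def run(alg_factory, events, shared=False, workers=4):
    shared_alg = alg_factory() if shared else None
    local = threading.local()                 # one clone per worker when not shared
    def work(event):
        if shared:
            return shared_alg.execute(event)
        if not hasattr(local, "alg"):
            local.alg = alg_factory()
        return local.alg.execute(event)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, events))

print(run(ReentrantAlg, range(8), shared=True))    # one shared, reentrant instance
print(run(CopyableAlg, range(8)))                  # cloned per worker thread
print(run(LegacyAlg, range(8), shared=True))       # shared but internally locked
```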

  3. Multi-threaded Software Framework Development for the ATLAS Experiment

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration; Baines, John; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant c...

  4. Distributed consensus with visual perception in multi-robot systems

    CERN Document Server

    Montijano, Eduardo

    2015-01-01

    This monograph introduces novel responses to the different problems that arise when multiple robots need to execute a task in cooperation, each robot in the team having a monocular camera as its primary input sensor. Its central proposition is that a consistent perception of the world is crucial for the sound development of any multi-robot application. The text focuses on the high-level problem of cooperative perception by a multi-robot system: depending on what each robot sees and on its current situation, it will need to communicate with its teammates whenever possible, sharing what it has found and staying updated by them in turn. However, in any realistic scenario, distributed solutions to this problem are not trivial and need to be addressed from as many angles as possible. Distributed Consensus with Visual Perception in Multi-Robot Systems covers a variety of related topics such as: distributed consensus algorithms; data association and robustne...
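
    As a minimal sketch of the kind of distributed consensus algorithm listed above (not taken from the monograph), the following implements the classic discrete-time average-consensus update on a made-up four-robot ring graph:

```python
# Classic average-consensus iteration: each agent nudges its estimate towards those of its
# neighbours. The graph, initial estimates and step size are placeholders for illustration.
import numpy as np

def consensus_step(x, neighbors, eps=0.2):
    """x[i] <- x[i] + eps * sum_j (x[j] - x[i]) over the neighbours of i."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
    return x_new

# 4 robots on a ring; each starts with a different local estimate (e.g. a landmark position).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([1.0, 5.0, 9.0, 3.0])
for _ in range(50):
    x = consensus_step(x, neighbors)
print(x)   # all entries converge towards the average, 4.5
```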

  5. A Coupled k-Nearest Neighbor Algorithm for Multi-Label Classification

    Science.gov (United States)

    2015-05-22

    classification, an image may contain several concepts simultaneously, such as beach, sunset and kangaroo. Such tasks are usually denoted as multi-label...informatics, a gene can belong to both metabolism and transcription classes; and in music categorization, a song may be labeled as Mozart and sad. In the
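
    To make the task setting concrete, here is a minimal plain multi-label k-nearest-neighbour vote on made-up data; this is only the baseline idea, not the coupled similarity method proposed in the cited work:

```python
# Baseline multi-label kNN: assign every label carried by at least half of the k nearest
# neighbours. Features, labels and k are hypothetical illustration data.
import numpy as np

def ml_knn_predict(X_train, Y_train, x, k=3):
    """X_train: (n, d) features; Y_train: (n, L) binary label matrix; x: (d,) query."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]
    votes = Y_train[nn].mean(axis=0)          # fraction of neighbours carrying each label
    return (votes >= 0.5).astype(int)

# Tiny hypothetical data: 2-D features, 3 labels (say "beach", "sunset", "kangaroo").
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
Y = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
print(ml_knn_predict(X, Y, np.array([0.15, 0.15])))   # label vector voted by the nearest cluster
```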

  6. Randomized benchmarking of single- and multi-qubit control in liquid-state NMR quantum information processing

    International Nuclear Information System (INIS)

    Ryan, C A; Laforest, M; Laflamme, R

    2009-01-01

    Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task for both comparing different devices and assessing a device's prospects with regards to achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al (2008 Phys. Rev. A 77 012307). We report an error per randomized π/2 pulse of 1.3±0.1x10^-4 with a single-qubit QIP and show an experimentally relevant error model where the randomized benchmarking gives a signature fidelity decay which is not possible to interpret as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report an average error rate for one- and two-qubit gates of 4.7±0.3x10^-3 for a three-qubit QIP. We estimate that these error rates are still not decoherence limited and thus can be improved with modifications to the control hardware and software.
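
    For reference, the standard randomized-benchmarking decay model from the cited protocol literature, to which per-gate error numbers of this kind are usually fitted; this is the generic model, not the authors' specific fit:

```latex
% Standard randomized-benchmarking fidelity decay and average error per gate.
% m: number of randomized gates, p: depolarizing parameter, d = 2^n for n qubits;
% A and B absorb state-preparation and measurement errors.
F(m) = A + B\,p^{m}, \qquad r = \frac{(d-1)(1-p)}{d}
```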

  7. MIMO wireless networks channels, techniques and standards for multi-antenna, multi-user and multi-cell systems

    CERN Document Server

    Clerckx, Bruno

    2013-01-01

    This book is unique in presenting channels, techniques and standards for the next generation of MIMO wireless networks. Through a unified framework, it emphasizes how propagation mechanisms impact the system performance under realistic power constraints. Combining a solid mathematical analysis with a physical and intuitive approach to space-time signal processing, the book progressively derives innovative designs for space-time coding and precoding as well as multi-user and multi-cell techniques, taking into consideration that MIMO channels are often far from ideal. Reflecting developments

  8. Iterative equalization for OFDM systems over wideband Multi-Scale Multi-Lag channels

    NARCIS (Netherlands)

    Xu, T.; Tang, Z.; Remis, R.; Leus, G.

    2012-01-01

    OFDM suffers from inter-carrier interference (ICI) when the channel is time varying. This article seeks to quantify the amount of interference resulting from wideband OFDM channels, which are assumed to follow the multi-scale multi-lag (MSML) model. The MSML channel model results in full channel

  9. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    Science.gov (United States)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

    The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
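
    A minimal sketch of steps 2 and 3 above (correlation network construction and threshold-based edge deletion) on random placeholder data; the series and the threshold are hypothetical, not the coal-company data analysed in the paper:

```python
# Build a fully connected correlation network from fluctuation series, then prune its edges
# with a threshold and read off a simple network indicator (node degree).
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 6))           # 250 days x 6 hypothetical stocks
corr = np.corrcoef(returns, rowvar=False)     # 6 x 6 correlation matrix (fully connected graph)

threshold = 0.1                               # placeholder threshold
adjacency = (np.abs(corr) >= threshold) & ~np.eye(len(corr), dtype=bool)

degrees = adjacency.sum(axis=1)               # a simple per-node network indicator
print(adjacency.astype(int))
print("degrees:", degrees)
```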

  10. Multi-user data acquisition environment

    International Nuclear Information System (INIS)

    Storch, N.A.

    1983-01-01

    The typical data acquisition environment involves data collection and monitoring by a single user. However, in order to support experiments on the Mars facility at Lawrence Livermore National Laboratory, we have had to create a multi-user data acquisition environment where any user can control the data acquisition and several users can monitor and analyze data being collected in real time. This paper describes how we accomplished this on an HP A600 computer. It focuses on the overall system description and user communication with the tasks within the system. Our current implementation is one phase of a long-term software development project.

  11. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain meso-scale problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is well suited to parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis developed to determine the influence of climatic loading on the current state of the bridge.
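
    A minimal sketch of the processor-farming pattern on made-up data: each macroscopic integration point defines an independent RVE problem, the RVE solves are farmed out to a worker pool, and the resulting effective parameters are gathered back on the master. The solve_rve() function here is a placeholder, not the real coupled hygro-thermo-mechanical solver:

```python
# Processor farming: independent meso-scale (RVE) problems distributed across a worker pool,
# returning effective parameters for the macro-scale. All values are placeholders.
from multiprocessing import Pool

def solve_rve(integration_point):
    """Pretend meso-scale solve: return an 'effective conductivity' for this point."""
    x, y, z = integration_point
    return 1.5 + 0.01 * (x + y + z)           # placeholder constitutive response

if __name__ == "__main__":
    macro_points = [(i, j, 0.0) for i in range(4) for j in range(4)]  # 16 integration points
    with Pool(processes=4) as pool:
        effective_params = pool.map(solve_rve, macro_points)          # farmed to workers
    print(effective_params)                    # fed back into the macro-scale assembly
```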

  12. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, G. Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Browne, Jolyon [Advanced Research & Applications Corporation, Sunnyvale, CA (United States)

    2018-01-22

    The development of an Image Matrix Processor (IMP) was proposed to provide an economical means of performing rapid ray-tracing on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  13. Non-convex multi-objective optimization

    CERN Document Server

    Pardalos, Panos M; Žilinskas, Julius

    2017-01-01

    Recent results on non-convex multi-objective optimization problems and methods are presented in this book, with particular attention to expensive black-box objective functions. Multi-objective optimization methods help designers, engineers, and researchers make decisions on appropriate trade-offs between various conflicting goals. A variety of deterministic and stochastic multi-objective optimization methods are developed in this book. Beginning with basic concepts and a review of non-convex single-objective optimization problems, the book moves on to cover multi-objective branch and bound algorithms, worst-case optimal algorithms (for Lipschitz functions and bi-objective problems), statistical-model-based algorithms, and a probabilistic branch and bound approach. Detailed descriptions of new algorithms for non-convex multi-objective optimization, their theoretical substantiation, and examples for practical applications to the cell formation problem in manufacturing engineering, the process design in...

  14. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
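
    As an illustration of the radial-distortion correction step described above (not the paper's FPGA implementation), the sketch below remaps pixels with a simple polynomial radial model; the distortion coefficients are hypothetical and a real system would use calibrated values:

```python
# Fish-eye (radial) distortion correction by inverse pixel remapping with a polynomial model.
# Coefficients k1, k2 and the stand-in VGA frame are placeholders for illustration only.
import numpy as np

def undistort(img, k1=-0.25, k2=0.05):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    x = (xs - cx) / cx                      # normalised coordinates in the output image
    y = (ys - cy) / cy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2      # sample the distorted input at a scaled radius
    src_x = np.clip(x * scale * cx + cx, 0, w - 1).astype(int)
    src_y = np.clip(y * scale * cy + cy, 0, h - 1).astype(int)
    return img[src_y, src_x]

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # stand-in VGA frame
corrected = undistort(frame)
print(corrected.shape)
```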

  15. Multi-headed comparatives in Portuguese

    Directory of Open Access Journals (Sweden)

    Rui Marques

    2005-06-01

    Full Text Available This paper aims at offering a global picture of the subtype of comparative constructions known as ‘multi-headed comparatives’ (from the fact that they exhibit more than one comparative operator in semantic interdependence). As a prerequisite to the fulfilment of this goal, an attempt will be made to clarify the scope of the notion ‘comparative construction’ and to draw a general typology of such constructions. The boundaries of the notion ‘comparative construction’ are defined by contrasting a “genuine” class of comparative constructions with others that hold some syntactic or semantic resemblance to them. Different typologies will be taken into consideration. As for multi-headed comparatives, even though different examples of these constructions have been identified in the scarce literature on the matter, the discussion on their syntactic patterns and meaning is still embryonic. This paper suggests that the expressive power of these comparatives, which seem to provide a particular strategy of information compression, is higher than has been assumed. Four sub-kinds of multi-headed comparatives are identified, based on meaning differences, namely: multi-headed comparatives with a distributive reading, multi-headed comparatives with a cumulative reading, multi-headed comparatives with a comparison of ‘ratios’ reading, and multi-headed comparatives with a comparison of differences reading. Although some classic English examples are used, the object language will predominantly be Portuguese.

  16. Mainstreaming Multi-Risk Approaches into Policy

    Directory of Open Access Journals (Sweden)

    Anna Scolobig

    2017-12-01

    Full Text Available Multi-risk environments are characterized by domino effects that often amplify the overall risk. These include chains of hazardous events and increasing vulnerability, among other types of correlations within the risk process. The recently developed methods for multi-hazard and risk assessment integrate interactions between different risks by using harmonized procedures based on common metrics. While the products of these assessments, such as multi-hazard and -risk indexes, maps, cascade scenarios, or warning systems, provide innovative and effective information, they also pose specific challenges to policy makers and practitioners due to their novel cross-disciplinary aspects. In this paper we discuss the institutional barriers to the adoption of multi-risk approaches, summarizing the results of the fieldwork conducted in Italy and Guadeloupe and of workshops with disaster risk reduction practitioners from eleven European countries. Results show the need for a clear identification of responsibilities for the implementation of multi-risk approaches, as institutional frameworks for risk reduction remain to this day primarily single-risk centered. Authorities are rarely officially responsible for the management of domino effects between e.g., tsunamis and industrial accidents, earthquakes and landslides, floods and electricity network failures. Other barriers to the implementation of multi-risk approaches include the limited measures to reduce exposure at the household level, inadequate financial capacities at the local level and limited public-private partnerships, especially in the case of interactions between natural and industrial risks. Adapting the scale of institutions to that of multi-risk environments remains a major challenge to better mainstream multi-risk approaches into policy. To address it, we propose a multi-risk governance framework, which includes the phases of observation, social and institutional context analysis, generation of

  17. Multi-chamber and multi-layer thiol-ene microchip for cell culture

    DEFF Research Database (Denmark)

    Tan, H. Y.; Hemmingsen, Mette; Lafleur, Josiane P.

    2014-01-01

    We present a multi-layer and multi-chamber microfluidic chip fabricated using two different thiol-ene mixtures. Sandwiched between the thiol-ene chip layers is a commercially available membrane whose morphology has been altered with coatings of thiol-ene mixtures. Experiments have been conducted ... with the microchip and shown that the fabricated microchip is suitable for long term cell culture...

  18. Methods to Load Balance a GCR Pressure Solver Using a Stencil Framework on Multi- and Many-Core Architectures

    Directory of Open Access Journals (Sweden)

    Milosz Ciznicki

    2015-01-01

    Full Text Available The recent advent of novel multi- and many-core architectures forces application programmers to deal with hardware-specific implementation details and to be familiar with software optimisation techniques to benefit from new high-performance computing machines. Extra care must be taken for communication-intensive algorithms, which may be a bottleneck for the forthcoming era of exascale computing. This paper aims to present a high-level stencil framework implemented for the EULerian or LAGrangian model (EULAG) that efficiently utilises multi- and many-core architectures. Only an efficient usage of both many-core processors (CPUs) and graphics processing units (GPUs) with a flexible data decomposition method can lead to the maximum performance that scales the communication-intensive Generalized Conjugate Residual (GCR) elliptic solver with preconditioner.
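
    A minimal sketch of the load-balancing idea (splitting work between CPU and GPU in proportion to measured throughput); the throughput ratio is hypothetical, not an EULAG/GCR measurement:

```python
# Static load balancing: give each device a share of the domain proportional to its measured
# throughput so CPU and GPU finish a stencil sweep at roughly the same time.
def split_domain(n_cells, throughput_cpu, throughput_gpu):
    """Return (cpu_cells, gpu_cells) so each device gets work proportional to its speed."""
    total = throughput_cpu + throughput_gpu
    gpu_cells = round(n_cells * throughput_gpu / total)
    return n_cells - gpu_cells, gpu_cells

# e.g. assume the GPU sustains ~4x the stencil updates per second of the CPU socket
cpu_part, gpu_part = split_domain(n_cells=1_000_000, throughput_cpu=1.0, throughput_gpu=4.0)
print(cpu_part, gpu_part)   # -> 200000 800000
```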

  19. Psychophysical testing of visual prosthetic devices: a call to establish a multi-national joint task force

    Science.gov (United States)

    Rizzo, Joseph F., III; Ayton, Lauren N.

    2014-04-01

    Recent advances in the field of visual prostheses, as showcased in this special feature of Journal of Neural Engineering, have led to promising results from clinical trials of a number of devices. However, as noted by these groups, there are many challenges involved in assessing the vision of people with profound vision loss. As such, it is important that there is consistency in the methodology and reporting standards for clinical trials of visual prostheses and, indeed, the broader vision restoration research field. Two visual prosthesis research groups, the Boston Retinal Implant Project (BRIP) and Bionic Vision Australia (BVA), have agreed to work cooperatively to establish a multi-national Joint Task Force. The aim of this Task Force will be to develop a consensus statement to guide the methods used to conduct and report psychophysical and clinical results of humans who receive visual prosthetic devices. The overarching goal is to ensure maximum benefit to the implant recipients, not only in the outcomes of the visual prosthesis itself, but also in enabling them to obtain accurate information about this research with ease. The aspiration to develop a Joint Task Force was first promulgated at the inaugural 'The Eye and the Chip' meeting in September 2000. This meeting was established to promote the development of the visual prosthetic field by applying the principles of inclusiveness, openness, and collegiality among the growing body of researchers in this field. These same principles underlie the intent of this Joint Task Force to enhance the quality of psychophysical research within our community. Despite prior efforts, a critical mass of interested parties could not congeal. Renewed interest in developing joint guidelines has developed recently because of a growing awareness of the challenges of obtaining reliable measurements of visual function in patients who are severely visually impaired (in whom testing is inherently noisy), and of the importance of

  20. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

    With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such workloads, mainly due to the very low spatial and temporal locality of the irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, and built on top of three pillars that we believe are fundamental to support these workloads. First, we offer transparent hardware support for Partitioned Global Address Space (PGAS) to provide a large globally-shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the necessary latency tolerance required by accessing global memory, which potentially resides in a remote node. Finally, we devise hardware support for inter-thread synchronization on the whole global address space. We first model the performance using an analytical model that takes into account the main architecture and application characteristics. We describe the hardware design of the proposed custom architectural building blocks that provide support for the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture's performance scalability for different configurations of the whole system.
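
    A rough latency-hiding estimate of why fine-grained multi-threading helps here (the numbers are hypothetical, not measurements from the FPGA prototype): with remote-access latency L and t units of useful work between accesses, each core needs on the order of L/t ready threads to stay busy.

```latex
% Hypothetical numbers: 1000 ns remote PGAS access latency, 10 ns of work between accesses.
N_{\mathrm{threads}} \gtrsim \frac{L}{t} \approx \frac{1000\ \mathrm{ns}}{10\ \mathrm{ns}} = 100
```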