WorldWideScience

Sample records for machine architecture lma

  1. A computer architecture for intelligent machines

    Science.gov (United States)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  2. Machine-to-machine communications architectures, technology, standards, and applications

    CERN Document Server

    Misic, Vojislav B

    2014-01-01

    With the number of machine-to-machine (M2M)-enabled devices projected to reach 20 to 50 billion by 2020, there is a critical need to understand the demands imposed by such systems. Machine-to-Machine Communications: Architectures, Technology, Standards, and Applications offers a rigorous treatment of the many facets of M2M communication, including its integration with current technology. Presenting the work of a different group of international experts in each chapter, the book begins by supplying an overview of M2M technology. It considers proposed standards, cutting-edge applications, architectures, and traffic modeling and includes case studies that highlight the differences between traditional and M2M communications technology. The book also: details a practical scheme for forward error correction code design; investigates the effectiveness of the IEEE 802.15.4 low-data-rate wireless personal area network standard for use in M2M communications; identifies algorithms that will ensure functionality, performance, reliability, ...

  3. Comparison of LMA ProSeal™ with LMA Classic™ in Anaesthetised Paralysed Children

    Directory of Open Access Journals (Sweden)

    Pravesh Kanthed

    2008-01-01

    The classic laryngeal mask airway (cLMA), though popular in anaesthesia practice, provides a low oropharyngeal seal pressure, and there are concerns with its use during positive pressure ventilation for fear of gastric distension with subsequent gastric regurgitation and pulmonary aspiration. The ProSeal laryngeal mask airway (PLMA) is a modified LMA with a larger, wedge-shaped cuff and a drain tube. This modification improves the seal around the glottis compared to the cLMA, and its drain tube prevents gastric distension and offers protection against aspiration when properly placed. We compared the PLMA and cLMA in 100 anaesthetized, paralysed children, with 50 patients in each group, with respect to ease of insertion, oropharyngeal seal pressure and pharyngolaryngeal morbidity. Gastric tube insertion was also assessed for the PLMA. The ease of insertion and the number of attempts at insertion were comparable in the two groups, while the oropharyngeal seal pressure was significantly higher in the PLMA group (P < 0.001). The pharyngolaryngeal morbidity was comparable in both groups. There was no incidence of regurgitation or aspiration in either group. The PLMA offered high reliability of gastric tube placement and significantly increased oropharyngeal seal pressure over the cLMA. This might have an important implication for the use of this device for positive pressure ventilation in children.

  4. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...
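The reversibility requirement described above can be illustrated with a toy instruction set (hypothetical opcodes, not the paper's formalized ISA): if every instruction has an exact inverse, a whole program can be undone by running the inverted instruction list in reverse order.

```python
# Toy reversible instruction set: each opcode has an exact inverse,
# so inverting the program and reversing its order restores the state.

def step(regs, instr):
    op, r, k = instr
    regs = dict(regs)
    if op == "ADDI":        # regs[r] += k  (inverse: SUBI)
        regs[r] = (regs[r] + k) % 2**32
    elif op == "SUBI":      # regs[r] -= k  (inverse: ADDI)
        regs[r] = (regs[r] - k) % 2**32
    elif op == "XORI":      # regs[r] ^= k  (self-inverse)
        regs[r] ^= k
    else:
        raise ValueError(op)
    return regs

INVERSE = {"ADDI": "SUBI", "SUBI": "ADDI", "XORI": "XORI"}

def invert(program):
    # Invert each instruction and reverse the execution order.
    return [(INVERSE[op], r, k) for (op, r, k) in reversed(program)]

def run(regs, program):
    for instr in program:
        regs = step(regs, instr)
    return regs

prog = [("ADDI", "r1", 7), ("XORI", "r1", 0b1010), ("SUBI", "r2", 3)]
start = {"r1": 0, "r2": 10}
end = run(start, prog)
restored = run(end, invert(prog))
assert restored == start   # no information was destroyed
```

Note that an irreversible instruction such as `CLEAR r` (set register to zero) could not appear here: it destroys information, so no inverse instruction can recover the prior state.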

  5. Migration of supervisory machine control architectures

    NARCIS (Netherlands)

    Graaf, B.; Weber, S.; Deursen, van A.; Nord, R.; Medvidovic, N.; Krikhaar, R.; Stafford, J.; Bosch, J.

    2005-01-01

    In this position paper, we discuss a first step towards an approach for the migration of supervisory machine control (SMC) architectures. This approach is based on the identification of SMC concerns and the definition of corresponding transformation rules.

  6. Laryngeal mask airway (LMA) artefact resulting in MRI misdiagnosis

    International Nuclear Information System (INIS)

    Schieble, Thomas; Patel, Anuradha; Davidson, Melissa

    2008-01-01

    We report a 7-year-old child who underwent brain MRI for a known seizure disorder. The technique used for general anesthesia included inhalation induction followed by placement of a laryngeal mask airway (LMA) for airway maintenance. Because the reviewing radiologist was unfamiliar with the use of an LMA during anesthesia, and because the attending anesthesiologist did not communicate his technique to the radiologist, an MRI misdiagnosis was reported because of artefact created by the in situ LMA. As a result of this misdiagnosis the child was subjected to unnecessary subsequent testing to rule out a reported anatomic abnormality induced by the LMA. Our case illustrates the need for coordination of patient care among hospital services. (orig.)

  7. Laryngeal mask airway (LMA) artefact resulting in MRI misdiagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Schieble, Thomas [University of Medicine and Dentistry of New Jersey, Department of Anesthesiology, New Jersey Medical School, Newark, NJ (United States); Maimonides Medical Center, Department of Anesthesiology, Brooklyn, NY (United States); Patel, Anuradha; Davidson, Melissa [University of Medicine and Dentistry of New Jersey, Department of Anesthesiology, New Jersey Medical School, Newark, NJ (United States)

    2008-03-15

    We report a 7-year-old child who underwent brain MRI for a known seizure disorder. The technique used for general anesthesia included inhalation induction followed by placement of a laryngeal mask airway (LMA) for airway maintenance. Because the reviewing radiologist was unfamiliar with the use of an LMA during anesthesia, and because the attending anesthesiologist did not communicate his technique to the radiologist, an MRI misdiagnosis was reported because of artefact created by the in situ LMA. As a result of this misdiagnosis the child was subjected to unnecessary subsequent testing to rule out a reported anatomic abnormality induced by the LMA. Our case illustrates the need for coordination of patient care among hospital services. (orig.)

  8. Amphiphilic HPMA-LMA copolymers increase the transport of Rhodamine 123 across a BBB model without harming its barrier integrity.

    Science.gov (United States)

    Hemmelmann, Mirjam; Metz, Verena V; Koynov, Kaloian; Blank, Kerstin; Postina, Rolf; Zentel, Rudolf

    2012-10-28

    The successful non-invasive treatment of diseases associated with the central nervous system (CNS) is generally limited by poor brain permeability of various developed drugs. The blood-brain barrier (BBB) prevents the passage of therapeutics to their site of action. Polymeric drug delivery systems are promising solutions to effectively transport drugs into the brain. We recently showed that amphiphilic random copolymers based on the hydrophilic p(N-(2-hydroxypropyl)-methacrylamide), pHPMA, possessing randomly distributed hydrophobic p(laurylmethacrylate), pLMA, are able to mediate delivery of domperidone into the brain of mice in vivo. To gain further insight into structure-property relations, a library of carefully designed polymers based on p(HPMA) and p(LMA) was synthesized and tested applying an in vitro BBB model which consisted of human brain microvascular endothelial cells (HBMEC). Our model drug Rhodamine 123 (Rh123) exhibits, like domperidone, a low brain permeability since both substances are recognized by efflux transporters at the BBB. Transport studies investigating the impact of the polymer architecture in relation to the content of hydrophobic LMA revealed that random p(HPMA)-co-p(LMA) having 10 mol% LMA is the most promising system. The copolymer significantly increased the permeability of Rh123 across the HBMEC monolayer whereas transcytosis of the polymer was very low. Further investigations on the mechanism of transport showed that integrity and barrier function of the BBB model were not harmed by the polymer. According to our results, p(HPMA)-co-p(LMA) copolymers are a promising delivery system for neurological therapeutics and their application might open alternative treatment strategies. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and external microcontroller has lower latency and consumes less power compared to the single FPGA chip with hardware modules and a soft-core processor.

  10. Architecture Without Explicit Locks for Logic Simulation on SIMD Machines

    OpenAIRE

    Cockshott, W. Paul; Chimeh, Mozhgan Kabiri

    2016-01-01

    The presentation describes an architecture for logic simulation that takes advantage of the features of multi-core SIMD architectures. It uses neither explicit locks nor queues, relying instead on oblivious simulation. Data structures are targeted to efficient SIMD and multi-core cache operation. We demonstrate high levels of parallelisation on Xeon Phi and AMD multi-core machines. Performance on a Xeon Phi is comparable to or better than on a 1000-core Blue Gene machine.
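The oblivious strategy can be sketched in miniature (an assumed illustration, not the authors' code): every gate is evaluated on every cycle in a fixed order, writing into a fresh value array, so no locks or event queues are needed and the uniform per-gate work maps naturally onto SIMD lanes.

```python
# Oblivious gate-level simulation sketch: all gates are evaluated every
# cycle regardless of activity, double-buffered to avoid in-place races.

AND, OR, NOT = 0, 1, 2

def simulate(num_inputs, gates, input_vec, cycles=1):
    # Net numbering: nets 0..num_inputs-1 are inputs; gate i drives
    # net num_inputs + i. Each gate is (op, a, b); NOT ignores b.
    n = num_inputs + len(gates)
    vals = [0] * n
    vals[:num_inputs] = input_vec
    for _ in range(cycles):
        new = list(vals)
        for i, (op, a, b) in enumerate(gates):   # oblivious: every gate, every cycle
            if op == AND:
                new[num_inputs + i] = vals[a] & vals[b]
            elif op == OR:
                new[num_inputs + i] = vals[a] | vals[b]
            else:
                new[num_inputs + i] = vals[a] ^ 1
        vals = new                                # double buffer: no locks needed
    return vals

# XOR built from AND/OR/NOT: xor(a, b) = (a | b) & ~(a & b); depth 3,
# so 3 cycles are enough for values to propagate to the output net 5.
XOR_GATES = [
    (AND, 0, 1),   # net 2 = a AND b
    (NOT, 2, 2),   # net 3 = NOT net 2
    (OR, 0, 1),    # net 4 = a OR b
    (AND, 4, 3),   # net 5 = net 4 AND net 3
]
for a, b, want in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    assert simulate(2, XOR_GATES, [a, b], cycles=3)[5] == want
```

Because each cycle's inner loop touches every gate with identical control flow, the loop vectorises and partitions across cores without synchronisation beyond a per-cycle barrier, which is the property the presentation exploits.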

  11. Software architecture for time-constrained machine vision applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to named topics, routes messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
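The topic-based publish/subscribe routing described above can be sketched as follows (class and topic names are illustrative, not the paper's actual API): publishers post messages to topics, and the broker delivers each message only to the subscribers registered for that topic.

```python
# Minimal topic-based publish/subscribe broker: messages published to a
# topic reach only the callbacks subscribed to that topic.

from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Topic-based filtering: only matching subscribers see the message.
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
frames, alarms = [], []
broker.subscribe("camera/frame", frames.append)   # e.g. a visualization module
broker.subscribe("detector/jam", alarms.append)   # e.g. an alarm module

broker.publish("camera/frame", {"id": 1, "pixels": "..."})
broker.publish("detector/jam", {"id": 1, "jam": True})
broker.publish("camera/frame", {"id": 2, "pixels": "..."})

assert len(frames) == 2 and len(alarms) == 1
```

Decoupling modules through topics like this is what lets acquisition, processing, and visualization modules be recombined per application without the modules knowing about each other directly.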

  12. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    International Nuclear Information System (INIS)

    Guo, Q; Shao, J; Ruiz, V

    2005-01-01

    This paper investigates detection of architectural distortion in mammographic images using a support vector machine. The Hausdorff dimension is used to characterise the texture feature of mammographic images. The support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared with Radial Basis Function neural networks, the SVM produced more accurate classification results in distinguishing architectural distortion from normal breast parenchyma.
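A fractal (Hausdorff) dimension of the kind used here as a texture feature is commonly approximated by box counting; the sketch below shows a generic estimator for a binary image (the paper's exact estimator may differ).

```python
# Box-counting estimate of fractal dimension: count occupied boxes N(s)
# at several box sizes s, then fit the slope of log N(s) vs log(1/s).

import math

def box_count(image, box):
    # Count boxes of side `box` containing at least one foreground pixel.
    rows, cols = len(image), len(image[0])
    count = 0
    for r in range(0, rows, box):
        for c in range(0, cols, box):
            if any(image[i][j]
                   for i in range(r, min(r + box, rows))
                   for j in range(c, min(c + box, cols))):
                count += 1
    return count

def fractal_dimension(image, sizes=(1, 2, 4, 8)):
    # Least-squares slope of log N(s) against log(1/s).
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(image, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity checks: a filled square is 2-dimensional, a single line is 1-D.
filled = [[1] * 16 for _ in range(16)]
line = [[1] * 16] + [[0] * 16 for _ in range(15)]
assert abs(fractal_dimension(filled) - 2.0) < 1e-6
assert abs(fractal_dimension(line) - 1.0) < 1e-6
```

The resulting scalar (or a vector of such features over image patches) would then be fed to the SVM classifier as input.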

  13. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Q [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom); Shao, J [Department of Electronics, University of Kent at Canterbury, Kent CT2 7NT (United Kingdom); Ruiz, V [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom)

    2005-01-01

    This paper investigates detection of architectural distortion in mammographic images using support vector machine. Hausdorff dimension is used to characterise the texture feature of mammographic images. Support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared to the Radial Basis Function neural networks, SVM produced more accurate classification results in distinguishing architectural distortion abnormality from normal breast parenchyma.

  14. Implications of Structured Programming for Machine Architecture

    NARCIS (Netherlands)

    Tanenbaum, A.S.

    1978-01-01

    Based on an empirical study of more than 10,000 lines of program text written in a GOTO-less language, a machine architecture specifically designed for structured programs is proposed. Since assignment, CALL, RETURN, and IF statements together account for 93 percent of all executable statements,

  15. Flexible software architecture for user-interface and machine control in laboratory automation.

    Science.gov (United States)

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  16. Comparisons of GLM and LMA Observations

    Science.gov (United States)

    Thomas, R. J.; Krehbiel, P. R.; Rison, W.; Stanley, M. A.; Attanasio, A.

    2017-12-01

    Observations from 3-dimensional VHF lightning mapping arrays (LMAs) provide a valuable basis for evaluating the spatial accuracy and detection efficiencies of observations from the recently launched, optical-based Geostationary Lightning Mapper (GLM). In this presentation, we describe results of comparing the LMA and GLM observations. First, the observations are compared spatially and temporally at the individual event (pixel) level for sets of individual discharges. For LMA networks in Florida, Colorado, and Oklahoma, the GLM observations are well correlated in time with LMA observations but are systematically offset by one to two pixels (~10 to 15 or 20 km) in a southwesterly direction from the actual lightning activity. The graphical comparisons show a similar location uncertainty depending on the altitude at which the scattered light is emitted from the parent cloud, due to being observed at slant ranges. Detection efficiencies (DEs) can be accurately determined graphically for intervals where individual flashes in a storm are resolved in time, and DEs and false alarm rates can be automated using flash-sorting algorithms for overall and/or larger storms. This can be done as a function of flash size and duration, and generally shows high detection rates for larger flashes. Preliminary results during the May 1, 2017 ER-2 overflight of Colorado storms indicate decreased detection efficiency if the storm is obscured by an overlying cloud layer.

  17. Software architecture standard for simulation virtual machine, version 2.0

    Science.gov (United States)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  18. EXPANDED ROLE OF LMA IN MINOR OBSTETRIC PROCEDURE: A PROSPECTIVE STUDY

    Directory of Open Access Journals (Sweden)

    Kousalya

    2015-05-01

    INTRODUCTION: The Laryngeal Mask Airway (LMA) has been used extensively to provide a safe airway in spontaneously breathing patients who are not at risk from aspiration of gastric contents. The increased risk of aspiration in the obstetric population was initially considered a relative contraindication to LMA use, but the LMA proved to be safe in this subgroup; in fact, significantly decreased tidal volume was noted during IPPV, with a decreased risk of aspiration. METHOD: This is a prospective study performed in Niloufer Hospital for Children & Women from June 2011 to January 2014, over a period of 30 months. We studied the ease of insertion of a single-use ILMA and associated complications in 35 ASA 1 obstetric patients. RESULTS: The mean age of the patients was 27.4 years. The mean BMI was 28.4 kg/m2. 21 patients were admitted for cerclage (60.0%), 5 for Bartholin's abscess (14.28%), 6 for manual removal of the placenta (17.14%), and 3 for evacuation of a vesicular mole (8.57%). The duration of anesthesia ranged from 20 to 40 min, with a mean duration of 19 minutes. The first-time insertion rate was 88.57%; 31 out of 35 patients had the LMA inserted on the first attempt, and 4 patients needed reinsertion. There were no failed insertions. None of the patients had aspiration or other complications associated with the LMA. CONCLUSION: We conclude that the LMA is effective and safe in carefully selected ASA 1 pregnant patients in the hands of an experienced anesthesiologist.

  19. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and their corresponding interconnection weights. This problem has been widely studied in many research works, but the proposed solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides high generalization capability and a unique solution for the architecture design. Moreover, the selected final network retains only those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
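The ELM idea underlying the abstract can be sketched in a toy form (a generic illustration, not the paper's architecture-design procedure): hidden-layer weights are chosen randomly and kept fixed, so only the output weights need to be computed, analytically, by solving a linear system.

```python
# Toy Extreme Learning Machine: random fixed hidden layer, output weights
# obtained by solving a linear system (here a square, exactly determined one).

import math, random

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting; solves A x = b.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, hidden, seed=0):
    rng = random.Random(seed)
    # Random, never-trained hidden weights (one extra entry per row = bias).
    W = [[rng.uniform(-1, 1) for _ in range(len(X[0]) + 1)] for _ in range(hidden)]
    def hidden_out(x):
        return [math.tanh(sum(w * v for w, v in zip(row[:-1], x)) + row[-1])
                for row in W]
    H = [hidden_out(x) for x in X]   # N x hidden; square when hidden == N
    beta = gauss_solve(H, y)         # output weights: one linear solve, no epochs
    return lambda x: sum(b * h for b, h in zip(beta, hidden_out(x)))

# With as many hidden units as samples, ELM interpolates the training set.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                     # XOR
model = elm_train(X, y, hidden=len(X))
assert all(abs(model(x) - t) < 0.1 for x, t in zip(X, y))
```

In practice ELM uses far fewer hidden units than samples and a pseudoinverse (least-squares) solve; the square system here simply keeps the sketch free of external libraries.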

  20. Logical Evaluation of Consciousness: For Incorporating Consciousness into Machine Architecture

    OpenAIRE

    Padhy, C. N.; Panda, R. R.

    2010-01-01

    Machine Consciousness is the study of consciousness from a biological, philosophical, mathematical and physical perspective, and of designing a model that can fit into a programmable system architecture. The prime objective of the study is to make the system architecture behave consciously, as a biological model does. The present work has developed a feasible definition of consciousness that characterizes consciousness with four parameters, i.e., parasitic, symbiotic, self-referral and reproduction. ...

  1. Q-switching and efficient harmonic generation from a single-mode LMA photonic bandgap rod fiber laser

    DEFF Research Database (Denmark)

    Laurila, Marko; Saby, Julien; Alkeskjold, Thomas T.

    2011-01-01

    We demonstrate a Single-Mode (SM) Large-Mode-Area (LMA) ytterbium-doped PCF rod fiber laser with stable and close to diffraction-limited beam quality at 110 W output power. Distributed-Mode-Filtering (DMF) elements integrated in the cladding of the rod fiber provide a robust spatial mode with a Mode-Field-Diameter (MFD) of 59 μm. We further demonstrate high-pulse-energy Second-Harmonic Generation (SHG) and Third-Harmonic Generation (THG) using a simple Q-switched single-stage rod fiber laser cavity architecture, reaching pulse energies up to 1 mJ at 515 nm and 0.5 mJ at 343 nm. (C) 2011 Optical...

  2. From scientific instrument to industrial machine : Coping with architectural stress in embedded systems

    NARCIS (Netherlands)

    Doornbos, R.; Loo, S. van

    2012-01-01

    Architectural stress is the inability of a system design to respond to new market demands. It is an important yet often concealed issue in high tech systems. In From scientific instrument to industrial machine, we look at the phenomenon of architectural stress in embedded systems in the context of a

  3. A COMPARISON OF CLINICAL PERFORMANCE OF I-GEL WITH PROSEAL LMA IN PATIENTS UNDERGOING MASTECTOMY

    Directory of Open Access Journals (Sweden)

    Basheer Padinhare Madathil

    2016-04-01

    AIM: To assess the ease of insertion of the I-gel and ProSeal LMA and the incidence of postoperative complications. Study design: a prospective randomised controlled trial comparing the clinical performance of the I-gel and ProSeal LMA. METHODS: After induction and good muscle relaxation, the LMA or I-gel was introduced as per randomised computer allocation. After insertion, a nasogastric tube was inserted through the gastric channel. Parameters monitored were heart rate, NIBP, SpO2 and ETCO2 at 1 and 5 minutes after insertion of the device and thereafter every 5 minutes until the end of surgery. In case of failure, the airway was secured with an endotracheal tube. Ease of gastric tube insertion was noted at the end of surgery, and postoperative complications were recorded, including blood staining of the device and injury to the lips, teeth and tongue. The incidence of sore throat 24 hours after surgery was also noted. Statistical analysis was done with SPSS software. RESULTS: Age, height, weight and BMI were comparable in both groups, as were airway characteristics. Ease of introduction was the same for both groups, but the time taken was much less for the I-gel group. Insertion of the gastric tube was much easier in the I-gel group. Blood staining of the device was more frequent in the ProSeal LMA group. There was no injury to any of the structures mentioned above. Postoperative sore throat was more common in the ProSeal LMA group. CONCLUSION: From our study, we conclude that the airway can be secured much faster with the I-gel than with the ProSeal LMA. Postoperative sore throat was much less frequent with the I-gel. The two devices were comparable in the number of insertion attempts and in gastric tube introduction, and trauma to airway structures was minimal with both.

  4. Sao Paulo Lightning Mapping Array (SP-LMA): Deployment and Plans

    Science.gov (United States)

    Bailey, J. C.; Carey, L. D.; Blakeslee, R. J.; Albrecht, R.; Morales, C. A.; Pinto, O., Jr.

    2011-01-01

    An 8-10 station Lightning Mapping Array (LMA) network is being deployed in the vicinity of Sao Paulo to create the SP-LMA for total lightning measurements in association with the international CHUVA [Cloud processes of tHe main precipitation systems in Brazil: A contribUtion to cloud resolVing modeling and to the GPM (GlobAl Precipitation Measurement)] field campaign. Besides supporting CHUVA science/mission objectives and the Sao Luiz do Paraitinga intensive operation period (IOP) in December 2011-January 2012, the SP-LMA will support the generation of unique proxy data for the Geostationary Lightning Mapper (GLM) and Advanced Baseline Imager (ABI), both sensors on the NOAA Geostationary Operational Environmental Satellite-R (GOES-R), presently under development and scheduled for a 2015 launch. The proxy data will be used to develop and validate operational algorithms so that they will be ready for use on day 1 following the launch of GOES-R. A preliminary survey of potential sites in the vicinity of Sao Paulo was conducted in December 2009 and January 2010, followed up by a detailed survey in July 2010, with initial network deployment scheduled for October 2010. However, due to a delay in the Sao Luiz do Paraitinga IOP, the SP-LMA will now be installed in July 2011 and operated for one year. Spacing between stations is on the order of 15-30 km, with the network "diameter" being on the order of 30-40 km, which provides good 3-D lightning mapping 150 km from the network center. Optionally, 1-3 additional stations may be deployed in the vicinity of Sao Jose dos Campos.

  5. Linear relations between leaf mass per area (LMA) and seasonal climate discovered through Linear Manifold Clustering (LMC)

    Science.gov (United States)

    Kiang, N. Y.; Haralick, R. M.; Diky, A.; Kattge, J.; Su, X.

    2016-12-01

    Leaf mass per area (LMA) is a critical variable in plant carbon allocation, correlates with leaf activity traits (photosynthetic activity, respiration), and is a controller of litterfall mass and hence carbon substrate for soil biogeochemistry. Recent advances in understanding the leaf economics spectrum (LES) show that LMA has a strong correlation with leaf life span, a trait that reflects ecological strategy, whereas physiological traits that control leaf activity scale with each other when mass-normalized (Osnas et al., 2013). These functional relations help reduce the number of independent variables in quantifying leaf traits. However, LMA is an independent variable that remains a challenge to specify in dynamic global vegetation models (DGVMs), when vegetation types are classified into a limited number of plant functional types (PFTs) without clear mechanistic drivers for LMA. LMA can range orders of magnitude across plant species, as well as vary within a single plant, both vertically and seasonally. As climate relations in combination with alternative ecological strategies have yet to be well identified for LMA, we have assembled 22,000 records of LMA spanning 0.004 - 33 mg/m2 from the numerous contributors to the TRY database (Kattge et al., 2011), with observations distributed over several climate zones and plant functional categories (growth form, leaf type, phenology). We present linear relations between LMA and climate variables, including seasonal temperature, precipitation, and radiation, as derived through Linear Manifold Clustering (LMC). LMC is a stochastic search technique for identifying linear dependencies between variables in high dimensional space. We identify a set of parsimonious classes of LMA-climate groups based on a metric of minimum description to identify structure in the data set, akin to data compression. 
The relations in each group are compared to Köppen-Geiger climate classes, with some groups revealing continuous linear relations

  6. The North Alabama Lightning Mapping Array (LMA): A Network Overview

    Science.gov (United States)

    Blakeslee, R. J.; Bailey, J.; Buechler, D.; Goodman, S. J.; McCaul, E. W., Jr.; Hall, J.

    2005-01-01

    The North Alabama Lightning Mapping Array (LMA) is a 3-D VHF regional lightning detection system that provides on-orbit algorithm validation and instrument performance assessments for the NASA Lightning Imaging Sensor, as well as information on storm kinematics and updraft evolution that offers the potential to improve severe storm warning lead time by up to 50% and decrease the false alarm rate for non-tornado-producing storms. In support of this latter function, the LMA serves as a principal component of a severe weather test bed to infuse new science and technology into the short-term forecasting of severe and hazardous weather, principally within nearby National Weather Service forecast offices. The LMA, which became operational in November 2001, consists of VHF receivers deployed across northern Alabama and a base station located at the National Space Science and Technology Center (NSSTC) on the campus of the University of Alabama in Huntsville. The LMA system locates the sources of impulsive VHF radio signals from lightning by accurately measuring the time that the signals arrive at the different receiving stations. Each station records the magnitude and time of the peak lightning radiation signal in successive 80 μs intervals within a local unused television channel (channel 5, 76-82 MHz in our case). Typically hundreds of sources per flash can be reconstructed, which in turn produces accurate 3-dimensional lightning image maps. The links in the network have an effective data throughput rate ranging from 600 kbits s-1 to 1.5 Mbits s-1. This presentation provides an overview of the North Alabama network, the data processing (both real-time and post-processing) and network statistics.

  7. Taxonomy and remote sensing of leaf mass per area (LMA) in humid tropical forests

    Science.gov (United States)

    Gregory P. Asner; Roberta E. Martin; Raul Tupayachi; Ruth Emerson; Paola Martinez; Felipe Sinca; George V.N. Powell; S. Joseph Wright; Ariel E. Lugo

    2011-01-01

    Leaf mass per area (LMA) is a trait of central importance to plant physiology and ecosystem function, but LMA patterns in the upper canopies of humid tropical forests have proved elusive due to tall species and high diversity. We collected top-of-canopy leaf samples from 2873 individuals in 57 sites spread across the Neotropics, Australasia, and Caribbean and Pacific...

  8. Negative pressure pulmonary oedema following use of ProSeal LMA

    Directory of Open Access Journals (Sweden)

    Richa Jain

    2013-01-01

    Negative pressure pulmonary oedema (NPPO) is a life-threatening condition caused by upper airway obstruction in a spontaneously breathing patient. Upper airway obstruction caused by the classic laryngeal mask airway (cLMA) and the ProSeal laryngeal mask airway (PLMA) has been reported, and NPPO has also been reported following the use of the cLMA. A search of the literature did not find NPPO following the use of the PLMA. We encountered a female patient, scheduled for incision and drainage of an abscess, who had signs of airway obstruction following PLMA insertion. Multiple attempts to obtain a patent airway were unsuccessful. The PLMA was replaced with an endotracheal tube, following which pink frothy secretions appeared in the breathing circuit. The patient was managed successfully with ICU care.

  9. Resonantly cladding-pumped Yb-free Er-doped LMA fiber laser with record high power and efficiency.

    Science.gov (United States)

    Zhang, Jun; Fromzel, Viktor; Dubinskii, Mark

    2011-03-14

    We report the results of our power-scaling experiments with a resonantly cladding-pumped, Er-doped, eye-safe large mode area (LMA) fiber laser. Using commercial off-the-shelf LMA fiber, we achieved over 88 W of continuous-wave (CW) single-transverse-mode power at ~1590 nm while pumping at 1532.5 nm. The maximum observed optical-to-optical efficiency was 69%. This result represents, to the best of our knowledge, the highest power reported from a resonantly pumped Yb-free Er-doped LMA fiber laser, as well as the highest efficiency ever reported for any cladding-pumped Er-doped laser, either Yb-co-doped or Yb-free.

  10. The Laryngeal Mask Airway (LMA) as an alternative to airway ...

    African Journals Online (AJOL)

    Adele

    patient dental procedures in the MR patient and in individuals ... The parameters assessed during the pilot study included ease of LMA insertion and its seal, inspiratory pressures with ... mouth opposite to that used for the surgical procedure.

  11. From scientific instrument to industrial machine coping with architectural stress in embedded systems

    CERN Document Server

    Doornbos, Richard

    2012-01-01

    Architectural stress is the inability of a system design to respond to new market demands. It is an important yet often concealed issue in high tech systems. In From scientific instrument to industrial machine, we look at the phenomenon of architectural stress in embedded systems in the context of a transmission electron microscope system built by FEI Company. Traditionally, transmission electron microscopes are manually operated scientific instruments, but they also have enormous potential for use in industrial applications. However, this new market has quite different characteristics. There are strong demands for cost-effective analysis, accurate and precise measurements, and ease-of-use. These demands can be translated into new system qualities, e.g. reliability, predictability and high throughput, as well as new functions, e.g. automation of electron microscopic analyses, automated focusing and positioning functions. From scientific instrument to industrial machine takes a pragmatic approach to the proble...

  12. Improving LMA predictions with non standard interactions

    CERN Document Server

    Das, C R

    2010-01-01

    It has been known for some time that the well-established LMA solution to the observed solar neutrino deficit fails to predict a flat energy spectrum for SuperKamiokande, as opposed to what the data indicate. It also leads to a Chlorine rate which appears to be too high compared to the data. We investigate a possible solution to these inconsistencies with non-standard neutrino interactions, assuming that they come as extra contributions to the ...

  13. The Laryngeal Mask Airway (LMA) as an alternative to airway ...

    African Journals Online (AJOL)

    Background: To evaluate the possibility of airway management using a laryngeal mask airway (LMA) during dental procedures on mentally retarded (MR) patients and patients with genetic diseases. Design: A prospective pilot study. Setting: University Hospital. Methods: A pilot study was designed to induce general ...

  14. Sao Paulo Lightning Mapping Array (SP-LMA): Deployment, Operation and Initial Data Analysis

    Science.gov (United States)

    Blakeslee, R.; Bailey, J. C.; Carey, L. D.; Rudlosky, S.; Goodman, S. J.; Albrecht, R.; Morales, C. A.; Anseimo, E. M.; Pinto, O.

    2012-01-01

    An 8-10 station Lightning Mapping Array (LMA) network is being deployed in the vicinity of Sao Paulo to create the SP-LMA for total lightning measurements in association with the international CHUVA [Cloud processes of the main precipitation systems in Brazil: A contribution to cloud resolving modeling and to the GPM (Global Precipitation Measurement)] field campaign. Besides supporting CHUVA science/mission objectives and the Sao Luiz do Paraitinga intensive operation period (IOP) in November-December 2011, the SP-LMA will support the generation of unique proxy data for the Geostationary Lightning Mapper (GLM) and Advanced Baseline Imager (ABI), both sensors on the NOAA Geostationary Operational Environmental Satellite-R (GOES-R), presently under development and scheduled for a 2015 launch. The proxy data will be used to develop and validate operational algorithms so that they will be ready for use on "day 1" following the launch of GOES-R. A preliminary survey of potential sites in the vicinity of Sao Paulo was conducted in December 2009 and January 2010, followed up by a detailed survey in July 2010, with initial network deployment scheduled for October 2010. However, due to a delay in the Sao Luiz do Paraitinga IOP, the SP-LMA will now be installed in July 2011 and operated for one year. Spacing between stations is on the order of 15-30 km, with the network "diameter" being on the order of 30-40 km, which provides good 3-D lightning mapping out to 150 km from the network center. Optionally, 1-3 additional stations may be deployed in the vicinity of Sao Jose dos Campos.

  15. The Use of Open Source Software for Open Architecture System on CNC Milling Machine

    Directory of Open Access Journals (Sweden)

    Dalmasius Ganjar Subagio

    2012-03-01

    Computer numerical control (CNC) milling machine systems cannot be separated from the software required to satisfy the Open Architecture capabilities of portability, extendability, interoperability, and scalability. When the prescribed service period of a CNC milling machine has passed and the manufacturer decides to discontinue it, the user will have problems maintaining the performance of the machine. This paper aims to show that the use of open source software (OSS) is a way to maintain machine performance. With OSS, users no longer depend on the software built by the manufacturer, because OSS is open and can be developed independently. In this paper, USBCNC V.3.42 is used as an alternative OSS. Testing shows that the machined workpiece matches the desired pattern and that machines using OSS perform comparably to machines running the manufacturer's software.

  16. Photographic and LMA observations of a blue starter over a New Mexico thunderstorm

    Science.gov (United States)

    Edens, H. E.; Krehbiel, P. R.; Rison, W.; Hunyady, S. J.

    2010-12-01

    On the evening of August 3, 2010, we photographed a blue starter over an electrically active storm complex about 120 km to the WNW of Langmuir Laboratory in central New Mexico. The event occurred close to a broad overshooting top at an altitude of 15 km above MSL. It was also observed visually and detected by the Lightning Mapping Array (LMA) deployed around the mountaintop observatory. The blue starter appears as a white-blue leader channel propagating away from the storm top, not straight upward but at a large angle from vertical, slightly curving upward and transitioning to an increasingly diffuse blue glow. In addition to this leader, a more diffuse glow of blue light from one or two additional leaders is seen in the background. The curved channel of the main leader, and the fact that it did not propagate along a straight path upward, indicates that a relatively strong local electric field near the storm top existed that dictated leader propagation and direction rather than the large-scale storm electric field. The visible part of the starter is estimated to have developed to about 1 km above the storm top. From the LMA data we infer that the blue starter was a screening-layer discharge that initiated between upper positive charge and a negatively charged screening layer. A negative leader appears to initiate at 15 km altitude and propagates downward for 2 to 3 km, after which scattered and ill-defined activity occurred in the cloud between 10 and 15 km altitude. This indicates that the visible part of the blue starter emanating out of the storm top, which was photographed but not detected by the LMA, was positive breakdown. The event lasted for 100 ms in the LMA data. The storm in which the starter occurred was producing predominantly intracloud (IC) flashes at a rate of about 20 per minute. The starter itself occurred independently of other discharges in the storm, about 4 seconds after a normal-polarity IC flash. About 5 minutes after the first blue starter, a ...

  17. Sao Paulo Lightning Mapping Array (SP-LMA): Network Assessment and Analyses for Intercomparison Studies and GOES-R Proxy Activities

    Science.gov (United States)

    Bailey, J. C.; Blakeslee, R. J.; Carey, L. D.; Goodman, S. J.; Rudlosky, S. D.; Albrecht, R.; Morales, C. A.; Anselmo, E. M.; Neves, J. R.; Buechler, D. E.

    2014-01-01

    A 12-station Lightning Mapping Array (LMA) network was deployed during October 2011 in the vicinity of Sao Paulo, Brazil (SP-LMA) to contribute total lightning measurements to an international field campaign [CHUVA - Cloud processes of tHe main precipitation systems in Brazil: A contribUtion to cloud resolVing modeling and to the GPM (GlobAl Precipitation Measurement)]. The SP-LMA was operational from November 2011 through March 2012 during the Vale do Paraiba campaign. Sensor spacing was on the order of 15-30 km, with a network diameter on the order of 40-50 km. The SP-LMA provides good 3-D lightning mapping out to 150 km from the network center, with 2-D coverage considerably farther. In addition to supporting CHUVA science/mission objectives, the SP-LMA is supporting the generation of unique proxy data for the Geostationary Lightning Mapper (GLM) and Advanced Baseline Imager (ABI), on NOAA's Geostationary Operational Environmental Satellite-R (GOES-R: scheduled for a 2015 launch). These proxy data will be used to develop and validate operational algorithms so that they will be ready for use on "day 1" following the GOES-R launch. As the CHUVA Vale do Paraiba campaign opportunity was formulated, a broad community-based interest developed in a comprehensive Lightning Location System (LLS) intercomparison and assessment study, leading to the participation and/or deployment of eight other ground-based networks and the space-based Lightning Imaging Sensor (LIS). The SP-LMA data are being intercompared with lightning observations from the other deployed lightning networks to advance our understanding of the capabilities/contributions of each of these networks toward GLM proxy and validation activities. This paper addresses the network assessment, including noise reduction criteria, detection efficiency estimates, and statistical and climatological (both temporal and spatial) analyses for intercomparison studies and GOES-R proxy activities.

  18. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    Directory of Open Access Journals (Sweden)

    Héctor Herrero

    2017-05-01

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques.
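The execution model described above, named states with handlers and transitions between reusable modules, can be sketched minimally as follows. The state names and task logic are hypothetical, not taken from the paper:

```python
class StateMachine:
    """Minimal state-machine task executor: each handler does its work
    and returns the name of the next state (or None to stop)."""
    def __init__(self):
        self.handlers = {}

    def add_state(self, name, handler):
        self.handlers[name] = handler

    def run(self, start, ctx):
        trace, state = [], start
        while state is not None:
            trace.append(state)
            state = self.handlers[state](ctx)
        return trace

# Hypothetical dual-arm task modules, reusable across applications.
def pick(ctx):
    return "place" if ctx["gripper_ok"] else "recover"

def place(ctx):
    return None  # task complete

def recover(ctx):
    return None  # hand off to operator

sm = StateMachine()
for name, handler in [("pick", pick), ("place", place), ("recover", recover)]:
    sm.add_state(name, handler)

print(sm.run("pick", {"gripper_ok": True}))   # → ['pick', 'place']
print(sm.run("pick", {"gripper_ok": False}))  # → ['pick', 'recover']
```

Because each handler only knows its own work and its exit transitions, modules like `pick` can be recombined into new processes without touching the executor, which is the reusability argument the abstract makes.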

  19. Dialogue management in a home machine environment : linguistic components over an agent architecture

    OpenAIRE

    Quesada Moreno, José Francisco; García, Federico; Sena Pichardo, María Esther; Bernal Bermejo, José Ángel; Amores Carredano, José Gabriel de

    2001-01-01

    This paper presents the main characteristics of an Agent-based Architecture for the design and implementation of a Spoken Dialogue System. From a theoretical point of view, the system is based on the Information State Update approach, in particular, the system aims at the management of Natural Command Language Dialogue Moves in a Home Machine Environment. Specifically, the paper is focused on the Natural Language Understanding and Dialogue Management Agents...

  20. Systemic Architecture

    DEFF Research Database (Denmark)

    Poletto, Marco; Pasquero, Claudia

    ... bottom-up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto-gardens ... and the coding of proto-interfaces. These prototypes of machinic architecture materialize as synthetic hybrids embedded with biological life (proto-gardens), computational power, behavioural responsiveness (cyber-gardens), spatial articulation (coMachines and fibrous structures), remote sensing (FUNclouds ...

  1. Feature recognition and detection for ancient architecture based on machine vision

    Science.gov (United States)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

    Ancient architecture has very high historical and artistic value. Ancient buildings carry a wide variety of textures and decorative paintings, which contain a great deal of historical meaning. Research on and statistics about these compositional and decorative features therefore play an important role in subsequent studies. Until recently, however, such statistics were compiled mainly by hand, which consumes a lot of labor and time and is inefficient. At present, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of images of the surface textures of ancient building components are classified manually to form a sample set. Then a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.
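The final verification step amounts to measuring the trained detector's precision on labelled test images. A minimal sketch of that metric (the component class labels below are invented, not from the paper):

```python
def precision(y_true, y_pred, positive):
    """Precision for one class: of everything the detector labelled
    `positive`, what fraction was actually that class?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical ground-truth vs. detector output for five component images.
y_true = ["dougong", "fresco", "dougong", "fresco", "dougong"]
y_pred = ["dougong", "dougong", "dougong", "fresco", "fresco"]

print(precision(y_true, y_pred, "dougong"))  # 2 true positives / 3 predictions
```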

  2. Endotracheal Intubation in Patients with Unstable Cervical Spine Using LMA-Fastrach and Gum Elastic Bougie

    International Nuclear Information System (INIS)

    Khan, M. U.

    2014-01-01

    Objective: To evaluate the success of an alternative technique of endotracheal (ET) intubation in patients with unstable cervical spine with a Philadelphia collar around the neck. Study Design: Case series. Place and Duration of Study: The Department of Anaesthesia, College of Medicine, King Saud University, Riyadh, Saudi Arabia, from June 2009 to June 2012. Methodology: Adult patients of either gender with unstable cervical spine wearing a Philadelphia collar, electively scheduled for cervical spine decompression and fixation of more than one level, were included. Those with anticipated difficult intubation, limited mouth opening, or BMI > 27 kg/m2 were excluded. After induction of anaesthesia the FT-LMA was inserted. Correct position of the FT-LMA was confirmed, then the soft straight end of a gum elastic bougie was passed through the FT-LMA into the trachea. The FT-LMA was removed over the bougie. A reinforced silicone ET tube was railroaded over the bougie. The bougie was pulled out and the position of the ET tube was confirmed with ETCO2, chest movement and auscultation on bag ventilation. The ease of insertion of the FT-LMA and of ET intubation, and the maximum time taken for successful intubation, were noted. Results: Twenty-six patients were studied, with a mean age of 59.3 ± 2.93 years and an M:F ratio of 7:3. The mean time taken from the insertion of the gum elastic bougie to ET intubation was 38.9 ± 1.20 seconds. ET intubation succeeded on the first attempt in 88.4% of patients and on the second attempt in 7.6%. Intubation failed in one patient. The mean ease of insertion of the FT-LMA and of ET intubation in all patients was 46.7 ± 2.59 and 46.5 ± 2.66, respectively, on a VAS (0-100). No complication was noted in any patient. Conclusion: This technique is safe and reliable for achieving adequate ventilation and intubation in patients with unstable cervical spine with a Philadelphia collar in place. (author)

  3. Randomized comparison of the i-gel™, the LMA Supreme™, and the Laryngeal Tube Suction-D using clinical and fibreoptic assessments in elective patients

    Directory of Open Access Journals (Sweden)

    Russo Sebastian G

    2012-08-01

    Background: The i-gel™, LMA-Supreme (LMA-S) and Laryngeal Tube Suction-D (LTS-D) are single-use supraglottic airway devices with an inbuilt drainage channel. We compared them with regard to their position in situ as well as clinical performance data during elective surgery. Methods: Prospective, randomized, comparative study of three groups of 40 elective surgical patients each. Speed of insertion and success rates, leak pressures (LP) at different cuff pressures, dynamic airway compliance, and signs of postoperative airway morbidity were recorded. Fibreoptic evaluation was used to determine the devices' position in situ. Results: Leak pressures were similar (i-gel™ 25.9, LMA-S 27.1, LTS-D 24.0 cmH2O; the latter two at 60 cmH2O cuff pressure), as were insertion times (i-gel™ 10, LMA-S 11, LTS-D 14 sec). LP of the LMA-S was higher than that of the LTS-D at lower cuff pressures (p < 0.05). Airway morbidity was more pronounced with the LTS-D (p < 0.01). Conclusion: All devices were suitable for ventilating the patients' lungs during elective surgery. Trial registration: German Clinical Trial Register DRKS00000760

  4. Troubleshooting ProSeal LMA

    Directory of Open Access Journals (Sweden)

    Bimla Sharma

    2009-01-01

    Supraglottic devices have changed the face of airway management. These devices have contributed greatly to airway management, especially in the difficult airway scenario, significantly decreasing pharyngolaryngeal morbidity. There is a plethora of these devices, which has been well matched by their wide acceptance in clinical practice. The ProSeal laryngeal mask airway (PLMA) is one such frequently used device, employed for spontaneous as well as controlled ventilation. However, the use of the PLMA may at times be associated with certain problems. Some of the problems related to its use are unique, while others are akin to those of the classic laryngeal mask airway (cLMA). Expertise is needed for its safe and judicious use, correct placement, and recognition and management of its various malpositions and complications. The present article describes the tests employed to properly confirm placement, to assess the ventilatory and drain tube functions of the mask, the diagnosis of various malpositions, and the management of these aspects. All these areas have been highlighted under the heading of troubleshooting the PLMA. Many problems can be solved by proper patient and procedure selection, maintaining adequate depth of anaesthesia, and diagnosis and management of malpositions. Proper fixation of the device and monitoring cuff pressure intraoperatively may bring down the incidence of airway morbidity.

  5. The borders of the Cold War in East Asia through the eyes of historians / Jaanika Erne

    Index Scriptorium Estoniae

    Erne, Jaanika, 1967-

    2016-01-01

    Book review: The Cold War in East Asia 1945–1991. T. Hasegawa (Ed.). Washington D.C.: Woodrow Wilson Center Press 2011; Stanford: Stanford University Press 2011, viii + 340 pp. On the political events of the Cold War era.

  6. Photonic lantern adaptive spatial mode control in LMA fiber amplifiers.

    Science.gov (United States)

    Montoya, Juan; Aleshire, Chris; Hwang, Christopher; Fontaine, Nicolas K; Velázquez-Benítez, Amado; Martz, Dale H; Fan, T Y; Ripin, Dan

    2016-02-22

    We demonstrate adaptive-spatial mode control (ASMC) in few-moded double-clad large mode area (LMA) fiber amplifiers by using an all-fiber-based photonic lantern. Three single-mode fiber inputs are used to adaptively inject the appropriate superposition of input modes in a multimode gain fiber to achieve the desired mode at the output. By actively adjusting the relative phase of the single-mode inputs, near-unity coherent combination resulting in a single fundamental mode at the output is achieved.
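The active phase adjustment described above can be mimicked with a toy coordinate-dither loop that maximizes the coherently combined power of three unit-amplitude inputs. This is a hill-climbing stand-in for the actual control hardware, with invented step sizes and starting phases:

```python
import cmath

def combined_power(phases, amps):
    """Power of the coherent sum of fields with given amplitudes/phases."""
    field = sum(a * cmath.exp(1j * p) for a, p in zip(amps, phases))
    return abs(field) ** 2

def optimize(phases, amps, sweeps=200, delta=0.05):
    """Greedy dither: nudge each input phase by ±delta and keep any
    change that raises the combined power (stand-in for the adaptive
    phase control loop described in the abstract)."""
    phases = list(phases)
    for _ in range(sweeps):
        for i in range(len(phases)):
            base = combined_power(phases, amps)
            for sign in (+1, -1):
                trial = phases[:]
                trial[i] += sign * delta
                if combined_power(trial, amps) > base:
                    phases = trial
                    break
    return phases

amps = [1.0, 1.0, 1.0]          # three single-mode inputs, equal amplitude
start = [0.0, 1.3, -2.1]        # arbitrary initial phase errors
tuned = optimize(start, amps)
print(combined_power(tuned, amps))  # approaches 9.0, the coherent sum of 3 unit fields
```

The maximum power 9.0 corresponds to near-unity coherent combination into a single output mode; any residual phase error shows up directly as lost power.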

  7. Estonia and Russia in the vanguard of stoking a new cold war / Herbert Vainu

    Index Scriptorium Estoniae

    Vainu, Herbert, 1929-2011

    2008-01-01

    Estonian and Russian leaders' playing on national sentiment raises international tension, which could lead to the return of a cold war. Among other things, the author criticises President Toomas Hendrik Ilves's political activity and his appearance at the Finno-Ugric World Congress held in Khanty-Mansiysk. The President of the Republic was on a working visit to Russia on 27-30 June 2008.

  8. Predicting the academic success of architecture students by pre-enrolment requirement: using machine-learning techniques

    Directory of Open Access Journals (Sweden)

    Ralph Olusola Aluko

    2016-12-01

    In recent years, there has been an increase in the number of applicants seeking admission into architecture programmes. As expected, prior academic performance (also referred to as pre-enrolment requirement) is a major factor considered during the process of selecting applicants. In the present study, machine learning models were used to predict the academic success of architecture students based on information provided in prior academic performance. Two modeling techniques, namely k-nearest neighbour (k-NN) and linear discriminant analysis, were applied in the study. It was found that k-NN outperforms the linear discriminant analysis model in terms of accuracy. In addition, grades obtained in mathematics (at ordinary-level examinations) had a significant impact on the academic success of undergraduate architecture students. This paper makes a modest contribution to the ongoing discussion on the relationship between prior academic performance and academic success of undergraduate students by evaluating this proposition. One of the issues that emerges from these findings is that prior academic performance can be used as a predictor of academic success in undergraduate architecture programmes. Overall, the developed k-NN model can serve as a valuable tool during the process of selecting new intakes into undergraduate architecture programmes in Nigeria.
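A k-NN predictor of the kind the study applies can be sketched in a few lines. The grade features and labels below are invented toy data, not the study's dataset:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbour classification: majority label among the k
    training points closest to the query in feature space.
    train is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical (maths grade, overall pre-enrolment score) -> outcome.
train = [((85, 78), "pass"), ((90, 82), "pass"), ((88, 75), "pass"),
         ((45, 50), "fail"), ((52, 48), "fail"), ((40, 55), "fail")]

print(knn_predict(train, (86, 80)))  # → pass
print(knn_predict(train, (48, 51)))  # → fail
```

In practice the features would be standardised and k chosen by cross-validation; the point here is only the shape of the method the study compares against linear discriminant analysis.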

  9. Machines and Metaphors

    Directory of Open Access Journals (Sweden)

    Ángel Martínez García-Posada

    2016-10-01

    The book La ley del reloj. Arquitectura, máquinas y cultura moderna (Cátedra, Madrid, 2016) registers the useful paradox of the analogy between architecture and technique. Its author, the architect Eduardo Prieto, also a philosopher, professor and writer, acknowledges the obvious distance from machines to buildings, so great that it can only be bridged by strange comparisons, since architecture does not move, nor are machines habitable. Nevertheless, throughout the book, starting from the origin of the metaphor of the machine, with clarity in his essay and enlightening erudition, he points out with certainty some connections of great interest, drawing throughout history a beautiful cartography of the fruitful encounter between the organic and the mechanical.

  10. Improving LMA predictions with non-standard interactions: neutrino decay in solar matter?

    CERN Document Server

    Das, C R

    2010-01-01

    It has been known for some time that the well-established LMA solution to the observed solar neutrino deficit fails to predict a flat energy spectrum for SuperKamiokande, as opposed to what the data indicate. It also leads to a Chlorine rate which appears to be too high compared to the data. We investigate a possible solution to these inconsistencies with non-standard neutrino interactions, assuming that they come as extra contributions to the ...

  11. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. This requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative, with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application, independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine-learning-based optimization techniques to address ...

  12. Negative Pressure Pulmonary Edema Following use of Laryngeal Mask Airway (LMA

    Directory of Open Access Journals (Sweden)

    Yesim Bayraktar

    2013-06-01

    Negative pressure pulmonary edema (NPPE) following upper airway obstruction is a non-cardiogenic pulmonary edema. The leading cause in the etiology of NPPE is laryngospasm developing after intubation or extubation, while other causes are epiglottitis, croup, hiccups, foreign body aspiration, pharyngeal hematoma and oropharyngeal tumors. Late diagnosis and treatment cause high morbidity and mortality. Protection of the airway and maintenance of arterial oxygenation are life saving. In this article we report a case of negative pressure pulmonary edema, resolved successfully after treatment, following use of a laryngeal mask airway (LMA).

  13. Open architecture CNC system

    Energy Technology Data Exchange (ETDEWEB)

    Tal, J. [Galil Motion Control Inc., Sunnyvale, CA (United States); Lopez, A.; Edwards, J.M. [Los Alamos National Lab., NM (United States)

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that the technology is accessible and can be readily implemented into an open-architecture machine tool controller. The benefit to the user is greater controller flexibility at an economically achievable cost. PC-based motion as well as non-motion features provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass-produced, permitting competitive procurement and incorporation. Open-architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern with traditional CNC systems has been operator training time. Training time can be greatly minimized by making use of Windows environment features.

  14. Balance in machine architecture: Bandwidth on board and offboard, integer/control speed and flops versus memory

    International Nuclear Information System (INIS)

    Fischler, M.

    1992-04-01

    The issues to be addressed here are those of ''balance'' in machine architecture. By this, we mean how much emphasis must be placed on various aspects of the system to maximize its usefulness for physics. There are three components that contribute to the utility of a system: How the machine can be used, how big a problem can be attacked, and what the effective capabilities (power) of the hardware are like. The effective power issue is a matter of evaluating the impact of design decisions trading off architectural features such as memory bandwidth and interprocessor communication capabilities. What is studied is the effect these machine parameters have on how quickly the system can solve desired problems. There is a reasonable method for studying this: One selects a few representative algorithms and computes the impact of changing memory bandwidths, and so forth. The only room for controversy here is in the selection of representative problems. The issue of how big a problem can be attacked boils down to a balance of memory size versus power. Although this is a balance issue it is very different than the effective power situation, because no firm answer can be given at this time. The power to memory ratio is highly problem dependent, and optimizing it requires several pieces of physics input, including: how big a lattice is needed for interesting results; what sort of algorithms are best to use; and how many sweeps are needed to get valid results. We seem to be at the threshold of learning things about these issues, but for now, the memory size issue will necessarily be addressed in terms of best guesses, rules of thumb, and researchers' opinions
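One standard way to quantify the bandwidth-versus-flops trade-off discussed here is a roofline-style estimate: attainable performance is the lesser of peak compute and memory bandwidth times arithmetic intensity. The machine numbers below are hypothetical, chosen only to illustrate the balance question:

```python
def attainable_flops(peak_flops, mem_bw, intensity):
    """Roofline-style bound: a kernel performing `intensity` flops per
    byte of memory traffic is capped either by the compute peak or by
    how fast memory can feed it."""
    return min(peak_flops, mem_bw * intensity)

# Hypothetical machine: 10 GFLOP/s peak, 4 GB/s memory bandwidth.
PEAK, BW = 10e9, 4e9
for ai in (0.5, 2.5, 8.0):  # arithmetic intensity in flops/byte
    bound = "memory" if BW * ai < PEAK else "compute"
    print(f"{ai:4.1f} flops/byte -> {attainable_flops(PEAK, BW, ai)/1e9:.1f} GFLOP/s ({bound}-bound)")
```

The crossover intensity, peak_flops / mem_bw (2.5 flops/byte here), is exactly the "balance" figure of the abstract: kernels below it waste the floating-point units, kernels above it waste the memory system.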

  15. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  16. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
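
    The two-stage LIF/OLI split can be sketched as follows; the per-pixel statistics, tolerance, and toy images are illustrative assumptions, not the paper's actual learning method:

```python
# Minimal sketch of the two-stage SMV idea: a Learning Inspection
# Features (LIF) stage estimates per-pixel statistics from known-good
# samples, and an On-Line Inspection (OLI) stage flags pixels that
# deviate beyond a threshold. Data and threshold are illustrative.

def learn_features(good_samples):
    """LIF: per-pixel mean over a set of known-good images (lists of floats)."""
    n = len(good_samples)
    return [sum(px) / n for px in zip(*good_samples)]

def inspect(image, reference, tol=0.2):
    """OLI: return indices of pixels deviating from the learned reference."""
    return [i for i, (p, r) in enumerate(zip(image, reference))
            if abs(p - r) > tol]

good = [[1.0, 0.0, 1.0], [0.9, 0.1, 1.0]]
ref = learn_features(good)
defects = inspect([1.0, 0.9, 1.0], ref)
print(defects)  # pixel 1 deviates strongly from the learned reference
```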

  17. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the nCUBE, Intel, and Caltech/JPL hypercubes and the Meiko Computing Surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  18. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  19. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    OpenAIRE

    Ming Cheng; Le Sun; Giuseppe Buja; Lihua Song

    2015-01-01

    The paper presents a number of advanced solutions for electric machines and machine-based systems for the powertrain of electric vehicles (EVs). Two types of systems are considered, namely the drive systems designated for EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator perm...

  20. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
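
    The core computation the hardware engines parallelize can be sketched in plain Python; the weights, biases and toy data below are illustrative, and one multiply-accumulate per (visible, hidden) pair corresponds to one "connection update":

```python
# Sketch of the core RBM computation mapped to the FPGA engines:
# given visible units v, weights W and hidden biases b, compute
# p(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * W[i][j]), then sample.
# Weights and data are illustrative, not from the paper.

import math
import random

def hidden_probs(v, W, b):
    """Conditional activation probability of each hidden unit given v."""
    return [1.0 / (1.0 + math.exp(-(b[j] + sum(v[i] * W[i][j]
            for i in range(len(v)))))) for j in range(len(b))]

def sample(probs, rng):
    """Draw binary hidden states from their activation probabilities."""
    return [1 if rng.random() < p else 0 for p in probs]

rng = random.Random(0)
v = [1, 0, 1]
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]]  # 3 visible x 2 hidden
b = [0.0, 0.0]
p_h = hidden_probs(v, W, b)
h = sample(p_h, rng)
print(p_h, h)
```

    The FPGA framework's speedup comes from evaluating many such multiply-accumulates in parallel rather than sequentially as above.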

  1. Tracheal intubation in patients with cervical spine immobilization: a comparison of the Airwayscope, LMA CTrach, and the Macintosh laryngoscopes.

    LENUS (Irish Health Repository)

    Malik, M A

    2009-05-01

    The purpose of this study was to evaluate the effectiveness of the Pentax AWS, and the LMA CTrach, in comparison with the Macintosh laryngoscope, when performing tracheal intubation in patients with neck immobilization using manual in-line axial cervical spine stabilization.

  2. Machine performance assessment and enhancement for a hexapod machine

    Energy Technology Data Exchange (ETDEWEB)

    Mou, J.I. [Arizona State Univ., Tempe, AZ (United States); King, C. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems Center

    1998-03-19

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  3. Connection machine: a computer architecture based on cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Hillis, W D

    1984-01-01

    This paper describes the connection machine, a programmable computer based on cellular automata. The essential idea behind the connection machine is that a regular locally-connected cellular array can be made to behave as if the processing cells are connected into any desired topology. When the topology of the machine is chosen to match the topology of the application program, the result is a fast, powerful computing engine. The connection machine was originally designed to implement knowledge retrieval operations in artificial intelligence programs, but the hardware and the programming techniques are apparently applicable to a much larger class of problems. A machine with 100000 processing cells is currently being constructed. 27 references.

  4. Architecture for interlock systems: reliability analysis with regard to safety and availability

    International Nuclear Information System (INIS)

    Wagner, S.; Apollonio, A.; Schmidt, R.; Zerlauth, M.; Vergara-Fernandez, A.

    2012-01-01

    For particle accelerators like the LHC and other large experimental physics facilities like ITER, machine protection relies on complex interlock systems. In the design of interlock loops for the signal exchange in machine protection systems, the choice of the hardware architecture impacts machine safety and availability. The reliable performance of a machine stop (leaving the machine in a safe state) in case of an emergency is an inherent requirement. The constraints in terms of machine availability, on the other hand, may differ from one facility to another. Spurious machine stops, which lower machine availability, may to a certain extent be tolerated in facilities where they do not cause undue equipment wear-out. In order to compare various interlock loop architectures in terms of safety and availability, the occurrence frequencies of related scenarios have been calculated in a reliability analysis using a generic analytical model. This paper presents the results and illustrates the potential of the analysis method for supporting the choice of interlock system architectures. The results show the advantages of a 2oo3 architecture (3 redundant lines with 2-out-of-3 voting) over the 6 architectures under consideration for systems with high requirements in both safety and availability.
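
    The safety/availability trade-off between voting architectures can be sketched with simple combinatorics; the per-line failure probabilities below are illustrative, not values from the paper's reliability model:

```python
# Sketch comparing interlock-loop architectures by two failure modes:
# failing to stop the machine (unsafe) vs stopping it spuriously.
# p is the per-line probability of failing dangerously (stuck "OK"),
# q the per-line probability of a spurious trip; values are illustrative.

from math import comb

def k_out_of_n(n, k, p):
    """Probability that at least k of n independent lines fail (prob p each)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

p, q = 1e-4, 1e-2
# 1oo1: a single line is unsafe if it fails, spurious if it trips.
unsafe_1oo1, spurious_1oo1 = p, q
# 2oo3 voting: unsafe only if >= 2 lines fail dangerously,
# spurious only if >= 2 lines trip spuriously.
unsafe_2oo3 = k_out_of_n(3, 2, p)
spurious_2oo3 = k_out_of_n(3, 2, q)
print(unsafe_2oo3 < unsafe_1oo1, spurious_2oo3 < spurious_1oo1)  # True True
```

    Under these assumptions 2-out-of-3 voting improves both figures at once, which is why it stands out among architectures that trade one requirement against the other.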

  5. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  6. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    International Nuclear Information System (INIS)

    Larsen, R

    2008-01-01

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  7. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Cheng

    2015-09-01

    The paper presents a number of advanced solutions for electric machines and machine-based systems for the powertrain of electric vehicles (EVs). Two types of systems are considered, namely the drive systems designated for EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator permanent magnet (stator-PM) motor, a hybrid-excitation motor, a flux memory motor and a redundant motor structure. Then, it illustrates advanced electric drive systems, such as the magnetic-geared in-wheel drive and the integrated starter generator (ISG). Finally, three machine-based implementations of the power split devices are expounded, built up around the dual-rotor PM machine, the dual-stator PM brushless machine and the magnetic-geared dual-rotor machine. As a conclusion, the development trends in the field of electric machines and machine-based systems for EVs are summarized.

  8. Machine Protection

    International Nuclear Information System (INIS)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012

  9. Machine Protection

    Energy Technology Data Exchange (ETDEWEB)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg [European Organization for Nuclear Research, Geneva (Switzerland)

    2012-07-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  10. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  11. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment called System 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic VLSI architectures to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to System 5G will use the 5G network architecture protocol. Users will access System 5G using standardized 5G personal computers and 5G personal logic programming stations, very highly intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  12. Reconfigurable support vector machine classifier with approximate computing

    NARCIS (Netherlands)

    van Leussen, M.J.; Huisken, J.; Wang, L.; Jiao, H.; De Gyvez, J.P.

    2017-01-01

    Support Vector Machine (SVM) is one of the most popular machine learning algorithms. An energy-efficient SVM classifier is proposed in this paper, where approximate computing is utilized to reduce energy consumption and silicon area. A hardware architecture with reconfigurable kernels and
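
    As a rough sketch of what such a classifier evaluates in hardware, the kernelized SVM decision function can be written as follows; the RBF kernel, support vectors and coefficients are illustrative assumptions (approximate computing would further quantize these multiplications):

```python
# Sketch of the SVM decision function a hardware classifier evaluates:
# f(x) = sign(sum_i alpha_i * y_i * K(x_i, x) + b), here with an RBF
# kernel. Support vectors and coefficients are illustrative.

import math

def rbf(a, b, gamma=1.0):
    """Radial basis function kernel exp(-gamma * ||a - b||^2)."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def svm_decide(x, support_vectors, alpha_y, bias, gamma=1.0):
    """Return the predicted class label (+1 or -1) for input x."""
    s = sum(ay * rbf(sv, x, gamma) for sv, ay in zip(support_vectors, alpha_y))
    return 1 if s + bias >= 0 else -1

svs = [[0.0, 0.0], [1.0, 1.0]]
alpha_y = [-1.0, 1.0]   # alpha_i * y_i for each support vector
label = svm_decide([0.9, 0.9], svs, alpha_y, bias=0.0)
print(label)  # a point near (1, 1) falls on the positive side
```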

  13. Functional language and data flow architectures

    Science.gov (United States)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.
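
    Of the sequencing models mentioned, the demand-driven one can be sketched compactly: an operand is evaluated only when a consumer demands it, and at most once. The thunk mechanism below is an illustrative Python analogue, not the article's notation:

```python
# Sketch of the demand-driven (lazy) sequencing model: a suspended
# computation is forced only when demanded, and the result is memoized.
# Functions and values are illustrative.

class Thunk:
    """A suspended computation, forced at most once on demand."""
    def __init__(self, fn):
        self.fn, self.done, self.value = fn, False, None
    def force(self):
        if not self.done:
            self.value, self.done = self.fn(), True
        return self.value

evaluated = []
def expensive(name, result):
    def run():
        evaluated.append(name)   # record which operands were demanded
        return result
    return Thunk(run)

a = expensive("a", 2)
b = expensive("b", 3)
# Only the branch that is actually demanded gets evaluated:
chosen = a if True else b
print(chosen.force(), evaluated)  # prints: 2 ['a'] (b was never computed)
```

    A data-flow model is the dual picture: operations fire as soon as their inputs arrive, rather than waiting for a demand.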

  14. A physical implementation of the Turing machine accessed through Web

    Directory of Open Access Journals (Sweden)

    Marijo Maracic

    2008-11-01

    A Turing machine has an important role in education in the field of computer science, as it is a milestone in courses related to automata theory, theory of computation and computer architecture. Its value is also recognized in the Computing Curricula proposed by the Association for Computing Machinery (ACM) and the IEEE Computer Society. In this paper we present a physical implementation of the Turing machine accessed through the Web. To enable remote access to the Turing machine, an implementation of the client-server architecture is built. The web interface is described in detail and illustrations of remote programming, initialization and the computation of the Turing machine are given. Advantages of such an approach and the expected benefits of using a remotely accessible physical implementation of the Turing machine as an educational tool in the teaching process are discussed.
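
    The kind of machine such a physical implementation realizes can be sketched in a few lines; the transition table below (a binary-string inverter) is an illustrative program, not one from the paper:

```python
# Minimal Turing machine simulator: a transition table maps
# (state, symbol) to (new state, written symbol, head move).
# The sample program inverts a binary string; it is illustrative.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", invert))  # 0100
```

    Remote programming of the physical machine amounts to uploading such a transition table and an initial tape, then stepping the hardware.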

  15. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    Science.gov (United States)

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and Many Integrated Core (MIC) architectures all... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will... architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  16. Architecture is always in the middle…

    Directory of Open Access Journals (Sweden)

    Tim Gough

    2015-12-01

    This essay proposes an ontology of architecture that takes its lead from the bread and butter of architecture: a flat ontology opposed to Cartesianism in the sense that no differentiation between realms (body/mind, high/low) is accepted. The work of Spinoza and Deleuze is referred to in order to flesh out such an ontology, whose aim is to destroy the very desire for architecture and architectural theory to even pose the question about the difference between bread-and-butter architecture and high architecture. Architecture is shown to be of the nature of an assemblage, of a machine or a haecceity (to use Deleuze and Guattari's phrase), and the implications of this in relation to the question of composition and reception are outlined.

  17. Nanorobot architecture for medical target identification

    International Nuclear Information System (INIS)

    Cavalcanti, Adriano; Shirinzadeh, Bijan; Freita, Robert A Jr; Hogg, Tad

    2008-01-01

    This work presents an innovative approach to the development of nanorobots with sensors for medicine. The nanorobots operate in a virtual environment comparing random, thermal and chemical control techniques. The nanorobot architecture model has nanobioelectronics as the basis for manufacturing integrated system devices with embedded nanobiosensors and actuators, which facilitates its application for medical target identification and drug delivery. The nanorobot interaction with the described workspace shows how time actuation is improved based on sensor capabilities. Therefore, our work addresses the control and the architecture design for developing practical molecular machines. Advances in nanotechnology are enabling the manufacture of nanosensors and actuators through nanobioelectronics and biologically inspired devices. Analysis of integrated system modeling is an important aspect of supporting nanotechnology in the fast development towards one of the most challenging new fields of science: molecular machines. The use of 3D simulation can provide interactive tools for addressing nanorobot choices in sensing, hardware architecture design, manufacturing approaches, and control methodology investigation

  18. Nanorobot architecture for medical target identification

    Energy Technology Data Exchange (ETDEWEB)

    Cavalcanti, Adriano [CAN Center for Automation in Nanobiotech, Melbourne VIC 3168 (Australia); Shirinzadeh, Bijan [Robotics and Mechatronics Research Laboratory, Department of Mechanical Engineering, Monash University, Clayton, Melbourne VIC 3800 (Australia); Freita, Robert A Jr [Institute for Molecular Manufacturing, Pilot Hill, CA 95664 (United States); Hogg, Tad [Hewlett-Packard Laboratories, Palo Alto, CA 94304 (United States)

    2008-01-09

    This work presents an innovative approach to the development of nanorobots with sensors for medicine. The nanorobots operate in a virtual environment comparing random, thermal and chemical control techniques. The nanorobot architecture model has nanobioelectronics as the basis for manufacturing integrated system devices with embedded nanobiosensors and actuators, which facilitates its application for medical target identification and drug delivery. The nanorobot interaction with the described workspace shows how time actuation is improved based on sensor capabilities. Therefore, our work addresses the control and the architecture design for developing practical molecular machines. Advances in nanotechnology are enabling the manufacture of nanosensors and actuators through nanobioelectronics and biologically inspired devices. Analysis of integrated system modeling is an important aspect of supporting nanotechnology in the fast development towards one of the most challenging new fields of science: molecular machines. The use of 3D simulation can provide interactive tools for addressing nanorobot choices in sensing, hardware architecture design, manufacturing approaches, and control methodology investigation.

  19. Modular reconfigurable machines incorporating modular open architecture control

    CSIR Research Space (South Africa)

    Padayachee, J

    2008-01-01

    ... degrees of freedom on a single platform. A corresponding modular Open Architecture Control (OAC) system is presented. OAC overcomes the inflexibility of fixed proprietary automation, ensuring that MRMs provide the reconfigurability and extensibility...

  20. A minimal architecture for joint action

    DEFF Research Database (Denmark)

    Vesper, Cordula; Butterfill, Stephen; Knoblich, Günther

    2010-01-01

    What kinds of processes and representations make joint action possible? In this paper we suggest a minimal architecture for joint action that focuses on representations, action monitoring and action prediction processes, as well as ways of simplifying coordination. The architecture spells out minimal requirements for an individual agent to engage in a joint action. We discuss existing evidence in support of the architecture as well as open questions that remain to be empirically addressed. In addition, we suggest possible interfaces between the minimal architecture and other approaches to joint action. The minimal architecture has implications for theorizing about the emergence of joint action, for human-machine interaction, and for understanding how coordination can be facilitated by exploiting relations between multiple agents' actions and between actions and the environment.

  1. Randomised Comparison of the AMBU AuraOnce Laryngeal Mask and the LMA Unique Laryngeal Mask Airway in Spontaneously Breathing Adults

    OpenAIRE

    Williams, Daryl Lindsay; Zeng, James M.; Alexander, Karl D.; Andrews, David T.

    2012-01-01

    We conducted a randomised single-blind controlled trial comparing the LMA-Unique (LMAU) and the AMBU AuraOnce (AMBU) disposable laryngeal mask in spontaneously breathing adult patients undergoing general anaesthesia. Eighty-two adult patients (ASA status I–IV) were randomly allocated to receive the LMAU or AMBU and were blinded to device selection. Patients received a standardised anaesthetic and all airway devices were inserted by trained anaesthetists. Size selection was guided by manufactur...

  2. Virtual Things for Machine Learning Applications

    OpenAIRE

    Bovet , Gérôme; Ridi , Antonio; Hennebert , Jean

    2014-01-01

    Internet-of-Things (IoT) devices, especially sensors, are producing large quantities of data that can be used for gathering knowledge. In this field, machine learning technologies are increasingly used to build versatile data-driven models. In this paper, we present a novel architecture able to execute machine learning algorithms within the sensor network, presenting advantages in terms of privacy and data transfer efficiency. We first argue that some classes of ...

  3. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...
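
    The dataflow-graph execution model can be sketched in miniature; the graph format and evaluator below are illustrative, not TensorFlow's actual API:

```python
# Miniature dataflow-graph evaluator: nodes are operations, edges carry
# values, and a node fires once all of its inputs are available (here
# realized by demand-driven recursion with a result cache).

import operator

def run_graph(graph, feeds, target):
    """graph: name -> (op, input names); feeds: name -> input value."""
    cache = dict(feeds)
    def evaluate(name):
        if name not in cache:
            op, inputs = graph[name]
            cache[name] = op(*(evaluate(i) for i in inputs))
        return cache[name]
    return evaluate(target)

graph = {
    "sum":  (operator.add, ("x", "y")),
    "prod": (operator.mul, ("sum", "x")),
}
result = run_graph(graph, feeds={"x": 2, "y": 3}, target="prod")
print(result)  # (2 + 3) * 2 = 10
```

    Partitioning such a graph across machines and devices, while keeping the same semantics, is the scaling problem the TensorFlow paper addresses.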

  4. Architecture and program structures for a special purpose finite element computer

    Energy Technology Data Exchange (ETDEWEB)

    Norrie, D.H.; Norrie, C.W.

    1983-01-01

    The development of very large scale integration (VLSI) has made special-purpose computers economically possible. With such a machine, the loss of flexibility compared with a general-purpose computer can be offset by the increased speed which can be obtained by tailoring the architecture to the particular problem or class of problems. The first kind of special-purpose machine has its architecture modelled on the physical structure of the problem, and the second kind has its design tailored to the computational algorithm used. The parallel finite element machine (PARFEM) being designed at the University of Calgary for the solution of finite element problems is of the second kind. Its conceptual design is described and progress to date outlined. 14 references.

  5. Machine Protection: Availability for Particle Accelerators

    CERN Document Server

    Apollonio, Andrea; Schmidt, Ruediger

    2015-03-16

    Machine availability is a key indicator for the performance of the next generation of particle accelerators. Availability requirements need to be carefully considered during the design phase to achieve challenging objectives in different fields, e.g. particle physics and materials science. For existing and future high-power facilities, such as ESS (European Spallation Source) and HL-LHC (High-Luminosity LHC), operation with unprecedented beam power requires highly dependable Machine Protection Systems (MPS) to avoid any damage-induced downtime. Due to the high complexity of accelerator systems, finding the optimal balance between equipment safety and accelerator availability is challenging. The MPS architecture, as well as the choice of electronic components, has a large influence on the achievable level of availability. In this thesis novel methods to address the availability of accelerators and their protection systems are presented. Examples of studies related to dependable MPS architectures are given i...
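
    The basic availability arithmetic underlying such studies can be sketched as follows; the MTBF/MTTR figures are illustrative, not ESS or HL-LHC values, and failures are assumed independent:

```python
# Sketch of steady-state availability arithmetic:
# A = MTBF / (MTBF + MTTR), with redundant subsystems combined by
# assuming independent failures. Numbers are illustrative.

def availability(mtbf_hours, mttr_hours):
    """Fraction of time a single unit is up in steady state."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(a, n=2):
    """Availability of n independent parallel units (any one suffices)."""
    return 1 - (1 - a) ** n

single = availability(mtbf_hours=1000, mttr_hours=10)
dual = redundant(single, n=2)
print(round(single, 4), round(dual, 6))  # 0.9901 0.999902
```

    The design tension the thesis studies appears already here: adding redundancy raises availability but also adds components that can themselves fail unsafely or trip spuriously.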

  6. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics include network models of object recognition in human vision, the self-organization of functional architecture in t

  7. A new LAN concept for LEP machine networks

    CERN Document Server

    Guerrero, L E

    1995-01-01

    LEP networks, implemented in 1987, are based on two Token-ring backbones using TDM as the transmission medium. The general topology is based on routers and on a distributed backbone. To avoid the instabilities introduced by the TDM and all the conversion layers it has been decided to upgrade the LEP machine network and to evaluate a new concept for the overall network topology. The new concept will also fulfil the basic requirements for the future LHC network. The new approach relies on a large infrastructure which connects all the eight underground pits of LEP with single-mode fibres from the Prevessin control room (PCR). From the bottom of the pits, the two adjacent alcoves will be cabled with multi-mode fibres. FDDI has been selected as the MAC protocol. This new concept is based on switching and routing between the PCR and the eight pits. In each pit a hub will switch between the FDDI LMA backbone and the local Ethernet segments. Two of these segments will reach the alcoves by means of a 10Base-F link. In...

  8. Machine en Theater. Ontwerpconcepten van winkelgebouwen

    NARCIS (Netherlands)

    Kooijman, D.C.

    1999-01-01

    Machine and Theater, Design Concepts for Shop Buildings is a richly illustrated study of the architectural and urban development of retail buildings, focusing on six essential shop types: the passage and the department store in particular in Germany and France in the nineteenth century; supermarkets

  9. Laban Movement Analysis towards Behavior Patterns

    Science.gov (United States)

    Santos, Luís; Dias, Jorge

    This work presents a study on the use of Laban Movement Analysis (LMA) as a robust tool to describe basic human behavior patterns, to be applied in human-machine interaction. LMA is a language used to describe and annotate dance movements and is divided into components [1]: Body, Space, Shape and Effort. Although its general framework is widely used in physical and mental therapy [2], it has found little application in the engineering domain. Rett J. [3] proposed to implement LMA using Bayesian Networks. However, LMA component models have not yet been fully implemented. A study on how to approach behavior using LMA is presented. Behavior is a complex feature and movement chain, but we believe that most basic behavior primitives can be discretized into simple features. By correctly identifying Laban parameters and movements, the authors feel that good patterns can be found within a specific set of basic behavior semantics.

  10. SCADA Architecture for Natural Gas plant

    Directory of Open Access Journals (Sweden)

    Turc Traian

    2009-12-01

    Full Text Available The paper describes the Natural Gas Plant SCADA architecture. The main purpose of a SCADA system is remote monitoring and control of an industrial plant. The SCADA hardware architecture is based on a multi-drop system allowing connection of a large number of different field devices. The SCADA server gathers data from the gas plant and stores the data in a MySQL database. The SCADA server is connected to SCADA client applications and offers an intuitive and user-friendly HMI. The main benefit of using SCADA is real-time display of the gas plant state. The main contribution of the authors consists in designing a SCADA architecture based on a multi-drop system and a Human Machine Interface.

  11. Advanced customization in architectural design and construction

    CERN Document Server

    Naboni, Roberto

    2015-01-01

    This book presents the state of the art in advanced customization within the sector of architectural design and construction, explaining important new technologies that are boosting design, product and process innovation and identifying the challenges to be confronted as we move toward a mass customization construction industry. Advanced machinery and software integration are discussed, as well as an overview of the manufacturing techniques offered through digital methods that are acquiring particular significance within the field of digital architecture. CNC machining, Robotic Fabrication, and Additive Manufacturing processes are all clearly explained, highlighting their ability to produce personalized architectural forms and unique construction components. Cutting-edge case studies in digitally fabricated architectural realizations are described and, looking towards the future, a new model of 100% customized architecture for design and construction is presented. The book is an excellent guide to the profoun...

  12. Proposed hardware architectures of particle filter for object tracking

    Science.gov (United States)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified with the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
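
    The piecewise-linear weight function described above can be sketched in software; the segment count, range, and the squared-innovation likelihood below are illustrative assumptions, not the authors' exact hardware coefficients:

```python
import numpy as np

def exp_piecewise(x, n_segments=8):
    """Piecewise-linear approximation of exp(x) on [-8, 0]: the kind of
    low-complexity substitute for the Gaussian likelihood's exponential
    that hardware particle filters use in the weight-computation step."""
    knots = np.linspace(-8.0, 0.0, n_segments + 1)
    x = np.clip(np.asarray(x, dtype=float), knots[0], knots[-1])
    # Linear interpolation between exact values of exp at the knots
    return np.interp(x, knots, np.exp(knots))

# Hypothetical weight step: squared innovations -> normalised weights
innovations = np.array([0.1, 0.5, 1.0, 2.0])
w = exp_piecewise(-0.5 * innovations**2)
w /= w.sum()  # normalise before resampling
```

    With 8 segments the worst-case error against the true exponential stays below about 0.08, which is why such tables are cheap in hardware yet do not noticeably degrade resampling.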

  13. Lessons from 2011 - Machine protection

    International Nuclear Information System (INIS)

    Zerlauth, M.; Schmidt, R.; Wenninger, J.

    2012-01-01

    The present architecture of the machine protection system is recalled and the performance of the associated systems during the 2011 run is briefly summarized. The LHC Machine Protection and Equipment Systems have been working extremely well during the 2011 run. Ever more failures are captured before effects on the particle beams are seen (i.e. no beam losses or orbit changes are observed). An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  14. Experience with a clustered parallel reduction machine

    NARCIS (Netherlands)

    Beemster, M.; Hartel, Pieter H.; Hertzberger, L.O.; Hofman, R.F.H.; Langendoen, K.G.; Li, L.L.; Milikowski, R.; Vree, W.G.; Barendregt, H.P.; Mulder, J.C.

    A clustered architecture has been designed to exploit divide and conquer parallelism in functional programs. The programming methodology developed for the machine is based on explicit annotations and program transformations. It has been successfully applied to a number of algorithms resulting in a

  15. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1992-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  16. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)

  17. A Machine Learning Concept for DTN Routing

    Science.gov (United States)

    Dudukovich, Rachel; Hylton, Alan; Papachristou, Christos

    2017-01-01

    This paper discusses the concept and architecture of a machine learning based router for delay tolerant space networks. The techniques of reinforcement learning and Bayesian learning are used to supplement the routing decisions of the popular Contact Graph Routing algorithm. An introduction to the concepts of Contact Graph Routing, Q-routing and Naive Bayes classification are given. The development of an architecture for a cross-layer feedback framework for DTN (Delay-Tolerant Networking) protocols is discussed. Finally, initial simulation setup and results are given.
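
    The Q-routing idea referenced above can be sketched as a single table update: a node revises its estimated delivery time to a destination via a neighbour, using that neighbour's own best remaining-time estimate. The node names, delays, and learning rate below are invented for illustration:

```python
def q_routing_update(Q, x, d, y, wait, transmit, alpha=0.5):
    """One Q-routing update at node x for destination d after forwarding
    a packet to neighbour y. Q[n][d][m] is node n's estimated delivery
    time to d via neighbour m; wait/transmit are the delays just observed."""
    # Neighbour y's current best estimate of the remaining time to d
    t_remaining = min(Q[y][d].values()) if Q[y][d] else 0.0
    old = Q[x][d][y]
    Q[x][d][y] = old + alpha * (wait + transmit + t_remaining - old)
    return Q[x][d][y]

# Tiny example: node "A" forwards to "B", destination "C"
Q = {
    "A": {"C": {"B": 10.0}},
    "B": {"C": {"C": 2.0}},   # B believes C is 2.0 time units away
}
new_estimate = q_routing_update(Q, "A", "C", "B", wait=1.0, transmit=0.5)
# old 10.0 moves halfway toward the observed 1.0 + 0.5 + 2.0 = 3.5
```

    In a DTN setting, as the paper suggests, such learned estimates would supplement rather than replace the contact-graph schedule.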

  18. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which then execute on top of an existing software portability platform such as Java. The result is a considerably reduced implementation effort.

  19. HiMoP: A three-component architecture to create more human-acceptable social-assistive robots : Motivational architecture for assistive robots.

    Science.gov (United States)

    Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco

    2018-05-01

    Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, particularly, a motivational-oriented cognitive architecture to generate more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.

  20. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
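
    A toy software analogue of such a counting-only decoder can illustrate the idea: output units accumulate signed spike counts and fire on a threshold, with no multiplications. The channel count, wiring signs, and threshold below are invented for illustration, not taken from the paper:

```python
def decode_window(spike_counts, weights, threshold):
    """Counting-style integrate-and-fire decoder: each output unit sums
    signed per-channel spike counts from one time window and 'fires'
    when its membrane counter reaches a threshold. Only counting and
    comparisons are used, mirroring the arithmetic-free architecture."""
    outputs = []
    for unit_weights in weights:      # one row of +1/0/-1 signs per unit
        membrane = 0
        for count, sign in zip(spike_counts, unit_weights):
            if sign > 0:
                membrane += count     # excitatory connection
            elif sign < 0:
                membrane -= count     # inhibitory connection
        outputs.append(1 if membrane >= threshold else 0)
    return outputs

# Three recording channels, two output units (hypothetical wiring)
decision = decode_window([3, 1, 0], [[1, -1, 0], [0, 1, 1]], threshold=2)
# unit 0: 3 - 1 = 2 >= 2 fires; unit 1: 1 + 0 = 1 < 2 stays silent
```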

  1. LHCb experience with running jobs in virtual machines

    CERN Document Server

    McNab, A; Luzzi, C

    2015-01-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites mana...

  2. Implementing an Intrusion Detection System in the Mysea Architecture

    National Research Council Canada - National Science Library

    Tenhunen, Thomas

    2008-01-01

    The objective of this thesis is to design an intrusion detection system (IDS) architecture that permits administrators operating on MYSEA client machines to conveniently view and analyze IDS alerts from the single level networks...

  3. Bioinspired Architecture Selection for Multitask Learning

    Directory of Open Access Journals (Sweden)

    Andrés Bueno-Crespo

    2017-06-01

    Full Text Available Faced with a new concept to learn, our brain does not work in isolation. It uses all previously learned knowledge. In addition, the brain is able to set aside the knowledge that does not benefit us, and to use what is actually useful. In machine learning, we do not usually benefit from the knowledge of other learned tasks. However, there is a methodology called Multitask Learning (MTL, which is based on the idea that learning a task along with other related tasks produces a transfer of information between them, which can be advantageous for learning the first one. This paper presents a new method to completely design MTL architectures, by including the selection of the most helpful subtasks for the learning of the main task, and the optimal network connections. In this sense, the proposed method realizes a complete design of the MTL schemes. The method is simple and uses the advantages of the Extreme Learning Machine to automatically design a MTL machine, eliminating those factors that hinder, or do not benefit, the learning process of the main task. This architecture is unique and it is obtained without trial-and-error methodologies that increase the computational complexity. The results obtained over several real problems show the good performance of the networks designed with this method.
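
    The Extreme Learning Machine underlying the method above has a very small core: hidden-layer weights are drawn at random and frozen, and only the output weights are solved in closed form by least squares. A minimal sketch, with a toy 1-D regression target chosen purely for illustration:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random, fixed hidden weights; output
    weights obtained in one shot via linear least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # solve output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1-D toy function
X = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)
```

    Because training reduces to one least-squares solve, architecture search of the kind the paper proposes stays computationally cheap.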

  4. Cognitive Architectures and Autonomy: A Comparative Review

    Science.gov (United States)

    Thórisson, Kristinn; Helgasson, Helgi

    2012-05-01

    One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as of yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.

  5. Light-operated machines based on threaded molecular structures.

    Science.gov (United States)

    Credi, Alberto; Silvi, Serena; Venturi, Margherita

    2014-01-01

    Rotaxanes and related species represent the most common implementation of the concept of artificial molecular machines, because the supramolecular nature of the interactions between the components and their interlocked architecture allow a precise control on the position and movement of the molecular units. The use of light to power artificial molecular machines is particularly valuable because it can play the dual role of "writing" and "reading" the system. Moreover, light-driven machines can operate without accumulation of waste products, and photons are the ideal inputs to enable autonomous operation mechanisms. In appropriately designed molecular machines, light can be used to control not only the stability of the system, which affects the relative position of the molecular components but also the kinetics of the mechanical processes, thereby enabling control on the direction of the movements. This step forward is necessary in order to make a leap from molecular machines to molecular motors.

  6. Machine learning analysis of binaural rowing sounds

    DEFF Research Database (Denmark)

    Johard, Leonard; Ruffaldi, Emanuele; Hoffmann, Pablo F.

    2011-01-01

    Techniques for machine hearing are increasing their potential due to new application domains. In this work we are addressing the analysis of rowing sounds in natural context for the purpose of supporting a training system based on virtual environments. This paper presents the acquisition methodology and the evaluation of different machine learning techniques for classifying rowing-sound data. We see that a combination of principal component analysis and shallow networks performs equally well as deep architectures, while being much faster to train.

  7. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • The extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (e.g., Wavelet Decomposition/Wavelet Packet Decomposition/Empirical Mode Decomposition/Fast Ensemble Empirical Mode Decomposition) and Extreme Learning Machines. The originality of the study is to investigate the improvement brought to the Extreme Learning Machines by those mainstream signal decomposing algorithms in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the method of Extreme Learning Machines is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms have better performance than the single Extreme Learning Machines; (3) in the comparisons of the decomposing algorithms in the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition in all the step predictions, respectively; and (4) the proposed algorithms are effective in accurate wind speed prediction.
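
    The decompose-then-forecast pattern behind these hybrid models can be sketched without any wavelet library: split the series into components, forecast each component separately, and sum. The moving-average split and the naive per-component predictor below are stand-ins for the paper's decomposition algorithms and Extreme Learning Machines; the synthetic "wind" series is invented for illustration:

```python
import numpy as np

def two_band_decompose(x, k=5):
    """Split a series into a smooth component (moving average) and a
    residual; a dependency-free stand-in for the wavelet/EMD
    decompositions used in the hybrid architecture."""
    trend = np.convolve(x, np.ones(k) / k, mode="same")
    return trend, x - trend

def naive_component_forecast(c, lag=3):
    """Placeholder one-step forecast (mean of recent values); the paper
    fits an Extreme Learning Machine to each component instead."""
    return float(np.mean(c[-lag:]))

rng = np.random.default_rng(1)
wind = np.abs(np.sin(np.linspace(0, 6, 120)) * 8 + rng.normal(0, 0.5, 120))
trend, residual = two_band_decompose(wind)
# Forecast each component, then recombine into the series forecast
forecast = naive_component_forecast(trend) + naive_component_forecast(residual)
```

    The key property is that the components sum back to the original series, so forecasting them separately loses no information while letting each model face a simpler signal.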

  8. Second Generation Dutch Pulsar Machine - PuMa-II

    NARCIS (Netherlands)

    Karuppusamy, Ramesh; Stappers, Ben; Slump, Cornelis H.; van der Klis, Michiel

    2004-01-01

    The Second Generation Pulsar Machine (PuMa-II) is under development for the Westerbork Synthesis Radio Telescope. This is a summary of the system design and architecture. We show that state-of-the-art pulsar research is possible with commercially available hardware components. This approach

  9. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models....

  10. Software design of the hybrid robot machine for ITER vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ming, E-mail: Ming.Li@lut.fi [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Yang, Guangyou [School of Mechanical Engineering, Hubei University of Technology, Wuhan (China)

    2013-10-15

    A specific software design is elaborated in this paper for the hybrid robot machine used for the ITER vacuum vessel (VV) assembly and maintenance. In order to provide the multi-machining function as well as the complicated, flexible and customizable GUI design satisfying the non-standardized VV assembly process on the one hand, and on the other hand to guarantee the stringent machining precision in the real-time motion control of the robot machine, a client–server-control software architecture is proposed, which separates the user interaction, data communication and robot control implementation into different software layers. Correspondingly, three particular application protocols upon TCP/IP are designed to transmit the data, commands and status between the client and the server so as to deal with the abundant data streaming in the software. In order not to be affected by the graphical user interface (GUI) modification process in future experiments in the VV assembly working field, the real-time control system is realized as a stand-alone module in the architecture to guarantee the control performance of the robot machine. After completing the software development, a milling operation was tested on the robot machine, and the result demonstrates that both the specific GUI operability and the real-time motion control performance can be guaranteed adequately in the software design.

  11. Software design of the hybrid robot machine for ITER vacuum vessel assembly and maintenance

    International Nuclear Information System (INIS)

    Li, Ming; Wu, Huapeng; Handroos, Heikki; Yang, Guangyou

    2013-01-01

    A specific software design is elaborated in this paper for the hybrid robot machine used for the ITER vacuum vessel (VV) assembly and maintenance. In order to provide the multi-machining function as well as the complicated, flexible and customizable GUI design satisfying the non-standardized VV assembly process on the one hand, and on the other hand to guarantee the stringent machining precision in the real-time motion control of the robot machine, a client–server-control software architecture is proposed, which separates the user interaction, data communication and robot control implementation into different software layers. Correspondingly, three particular application protocols upon TCP/IP are designed to transmit the data, commands and status between the client and the server so as to deal with the abundant data streaming in the software. In order not to be affected by the graphical user interface (GUI) modification process in future experiments in the VV assembly working field, the real-time control system is realized as a stand-alone module in the architecture to guarantee the control performance of the robot machine. After completing the software development, a milling operation was tested on the robot machine, and the result demonstrates that both the specific GUI operability and the real-time motion control performance can be guaranteed adequately in the software design.

  12. 25th Annual International Symposium on Field-Programmable Custom Computing Machines

    CERN Document Server

    The IEEE Symposium on Field-Programmable Custom Computing Machines is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware. Over the past two decades, FCCM has been the place to present papers on architectures, tools, and programming models for field-programmable custom computing machines as well as applications that use such systems.

  13. Understanding and modelling man-machine interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1996-01-01

    This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of man-machine interaction (MMI), which stem from the contribution of three expertises in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (orig.)

  14. Understanding and modelling Man-Machine Interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1991-01-01

    This paper gives an overview of the current state of the art in man-machine systems interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of Man-Machine Interaction (MMI), which stem from the contribution of three expertises in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (author)

  15. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Architecture Knowledge for Evaluating Scalable Databases. Author: Nurgaliev... [Report documentation (SF-298) fields and a feature-comparison table are omitted here; the table covered supported query languages (Scala, Erlang, JavaScript), cursor-based queries (supported / not supported), JOIN queries (supported / not supported), and complex data types (lists, maps, sets).] ...is therefore needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  16. CHRONOS architecture: Experiences with an open-source services-oriented architecture for geoinformatics

    Science.gov (United States)

    Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.

    2009-01-01

    CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources available from the open-source and geoinformatic communities, like the GeoSciML schema and the simple knowledge organization system (SKOS), into the services-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe on-going efforts to use geospatially enhanced data syndication and informally including semantic information by embedding it directly into the XHTML Document Object Model (DOM). XHTML DOM allows machine-discoverable descriptive data such as licensing and citation information to be incorporated directly into data sets retrieved by users. ?? 2008 Elsevier Ltd. All rights reserved.

  17. WATERLOOP V2/64: A highly parallel machine for numerical computation

    Science.gov (United States)

    Ostlund, Neil S.

    1985-07-01

    Current technological trends suggest that the high-performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple-processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.

  18. LHCb experience with running jobs in virtual machines

    Science.gov (United States)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch-queue-based sites. We describe our operational experiences in running production on VM-based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  19. Layered Architectures for Quantum Computers and Quantum Repeaters

    Science.gov (United States)

    Jones, Nathan C.

    This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.

  20. WWER NPPs fuel handling machine control system

    International Nuclear Information System (INIS)

    Mini, G.; Rossi, G.; Barabino, M.; Casalini, M.

    2001-01-01

    In order to increase the safety level of the fuel handling machine on WWER NPPs, Ansaldo Nucleare was asked to design and supply a new Control System. Two FHM Control System units have already been supplied for Temelin NPP, and further supplies are in progress for the Atommash company, which is in charge of supplying FHMs for NPPs located in Russia, Ukraine and China. The Fuel Handling Machine (FHM) Control System is an integrated system capable of complete management of nuclear fuel assemblies. The computer-based system takes into account all the operational safety interlocks, so that it is able to prevent incorrect and dangerous manoeuvres in the case of operator error. Control system design criteria, hardware and software architecture, and quality assurance control are in accordance with the most recent international requirements and standards, in particular for electromagnetic disturbance immunity and seismic compatibility. The hardware architecture of the control system is based on the ABB INFI 90 system. The microprocessor-based ABB INFI 90 system incorporates and improves upon many of the time-proven control capabilities of the Bailey Network 90, validated over 14,000 installations world-wide. The control system supports all of the machine's previously designed sensors and devices, notably the Russian-designed angular position measurement sensors named 'selsyn'. Nevertheless, it is fully compatible with the most recent sensors and devices currently available on the market (e.g. multiturn absolute encoders). All control logic components were developed using the standard INFI 90 Engineering Work Station, interconnecting blocks extracted from an extensive SAMA library using a graphical (CAD) approach, allowing easier intelligibility, more flexibility, and updated, coherent documentation. The data acquisition system and the Man-Machine Interface are implemented by ABB in co-operation with Ansaldo.
The flexible and powerful software structure

  1. VVER NPPs fuel handling machine control system

    International Nuclear Information System (INIS)

    Mini, G.; Rossi, G.; Barabino, M.; Casalini, M.

    2002-01-01

    In order to increase the safety level of the fuel handling machine on WWER NPPs, Ansaldo Nucleare was asked to design and supply a new Control System. Two Fuel Handling Machine (FHM) Control System units have already been supplied for Temelin NPP, and further supplies are in progress for the Atommash company, which is in charge of supplying FHMs for NPPs located in Russia, Ukraine and China. The computer-based system takes into account all the operational safety interlocks, so that it is able to prevent incorrect and dangerous manoeuvres in the case of operator error. Control system design criteria, hardware and software architecture, and quality assurance control are in accordance with the most recent international requirements and standards, in particular for electromagnetic disturbance immunity and seismic compatibility. The hardware architecture of the control system is based on the ABB INFI 90 system. The microprocessor-based ABB INFI 90 system incorporates and improves upon many of the time-proven control capabilities of the Bailey Network 90, validated over 14,000 installations world-wide. The control system supports all of the machine's previously designed sensors and devices, notably the Russian-designed angular position measurement sensors named 'selsyn'. Nevertheless, it is fully compatible with the most recent sensors and devices currently available on the market (e.g. multiturn absolute encoders). All control logic was developed using the standard INFI 90 Engineering Work Station, interconnecting blocks extracted from an extensive SAMA library using a graphical (CAD) approach, allowing easier intelligibility, more flexibility, and updated, coherent documentation. The data acquisition system and the Man-Machine Interface are implemented by ABB in co-operation with Ansaldo. The flexible and powerful software structure of 1090 Work-stations (APMS - Advanced Plant Monitoring System, or Tenore NT) has been successfully used to interface the

  2. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
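
    The ODE view of forward propagation can be sketched in a few lines of NumPy. The parameters below are random placeholders, not the paper's trained networks; the antisymmetric layer matrix K = W - W.T is one of the stabilization strategies discussed in this line of work, since its purely imaginary eigenvalues keep the forward dynamics from exploding or vanishing.

    ```python
    import numpy as np

    # Sketch of ODE-inspired forward propagation (assumed parameters):
    # y_{k+1} = y_k + h * tanh(K_k y_k), i.e. an explicit Euler discretization
    # of a nonlinear dynamical system, with an antisymmetric K_k per layer.
    rng = np.random.default_rng(0)

    def forward(y0, depth=50, h=0.1, dim=4):
        y = y0
        for _ in range(depth):
            W = rng.standard_normal((dim, dim))
            K = W - W.T                  # antisymmetric layer matrix
            y = y + h * np.tanh(K @ y)   # one explicit Euler step
        return y

    y0 = np.ones(4)
    yT = forward(y0)
    print(np.linalg.norm(yT))  # stays bounded for moderate h and depth
    ```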

  3. Languages, compilers and run-time environments for distributed memory machines

    CERN Document Server

    Saltz, J

    1992-01-01

    Papers presented within this volume cover a wide range of topics related to programming distributed memory machines. Distributed memory architectures, although having the potential to supply the very high levels of performance required to support future computing needs, present awkward programming problems. The major issue is to design methods which enable compilers to generate efficient distributed memory programs from relatively machine-independent program specifications. This book is the compilation of papers describing a wide range of research efforts aimed at easing the task of programming

  4. A Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response.

    Science.gov (United States)

    Gonzalez, Enrique; Peña, Raul; Avila, Alfonso; Vargas-Rosales, Cesar; Munoz-Rodriguez, David

    2017-01-01

    The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed articles, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios, and that energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and overcoming traditional technologies.

  5. A Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response

    Directory of Open Access Journals (Sweden)

    Enrique Gonzalez

    2017-01-01

    Full Text Available The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed articles, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios, and that energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and overcoming traditional technologies.

  6. Machine learning methods for planning

    CERN Document Server

    Minton, Steven

    1993-01-01

    Machine Learning Methods for Planning provides information pertinent to learning methods for planning and scheduling. This book covers a wide variety of learning methods and learning architectures, including analogical, case-based, decision-tree, explanation-based, and reinforcement learning.Organized into 15 chapters, this book begins with an overview of planning and scheduling and describes some representative learning systems that have been developed for these tasks. This text then describes a learning apprentice for calendar management. Other chapters consider the problem of temporal credi

  7. An Efficient Reconfigurable Architecture for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Satish S. Bhairannawar

    2016-01-01

    Full Text Available Fingerprint identification is an efficient biometric technique to authenticate human beings in real-time Big Data Analytics. In this paper, we propose an efficient Finite State Machine (FSM) based reconfigurable architecture for fingerprint recognition. The fingerprint image is resized, and the Compound Linear Binary Pattern (CLBP) is applied to the fingerprint, followed by a histogram to obtain histogram CLBP features. Discrete Wavelet Transform (DWT) Level 2 features are obtained by the same methodology. The novel matching score of CLBP is computed using the histogram CLBP features of the test image and of the fingerprint images in the database. Similarly, the DWT matching score is computed using the DWT features of the test image and of the fingerprint images in the database. Further, the matching scores of CLBP and DWT are fused with an arithmetic equation using an improvement factor. The performance parameters such as TSR (Total Success Rate), FAR (False Acceptance Rate), and FRR (False Rejection Rate) are computed using the fusion scores with a correlation matching technique for the FVC2004 DB3 database. The proposed fusion-based VLSI architecture is synthesized on a Virtex xc5vlx30T-3 FPGA board using a Finite State Machine, resulting in optimized parameters.
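
    The score-fusion step can be sketched as follows. The histogram-intersection similarity and the weight alpha below are assumptions for illustration; the paper's exact improvement-factor equation is not reproduced here.

    ```python
    import numpy as np

    # Hedged sketch of fusing two matching scores (CLBP-based and DWT-based)
    # into one decision score. Alpha is an invented weight, not the paper's
    # improvement factor.
    def hist_score(test_feat, db_feat):
        # similarity between two normalised histograms (intersection kernel)
        return np.minimum(test_feat, db_feat).sum()

    def fused_score(clbp_test, clbp_db, dwt_test, dwt_db, alpha=0.6):
        s_clbp = hist_score(clbp_test, clbp_db)
        s_dwt = hist_score(dwt_test, dwt_db)
        return alpha * s_clbp + (1 - alpha) * s_dwt  # weighted arithmetic fusion

    h = np.array([0.25, 0.25, 0.25, 0.25])
    print(fused_score(h, h, h, h))  # identical features give the maximum score 1.0
    ```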

  8. Mobile virtual synchronous machine for vehicle-to-grid applications

    Energy Technology Data Exchange (ETDEWEB)

    Pelczar, Christopher

    2012-03-20

    The Mobile Virtual Synchronous Machine (VISMA) is a power electronics device for Vehicle to Grid (V2G) applications which behaves like an electromechanical synchronous machine and offers the same beneficial properties to the power network, increasing the inertia in the system, stabilizing the grid voltage, and providing a short-circuit current in case of grid faults. The VISMA performs a real-time simulation of a synchronous machine and calculates the phase currents that an electromagnetic synchronous machine would produce under the same local grid conditions. An inverter with a current controller feeds the currents calculated by the VISMA into the grid. In this dissertation, the requirements for a machine model suitable for the Mobile VISMA are set, and a mathematical model suitable for use in the VISMA algorithm is found and tested in a custom-designed simulation environment prior to implementation on the Mobile VISMA hardware. A new hardware architecture for the Mobile VISMA based on microcontroller and FPGA technologies is presented, and experimental hardware is designed, implemented, and tested. The new architecture is designed in such a way that allows reducing the size and cost of the VISMA, making it suitable for installation in an electric vehicle. A simulation model of the inverter hardware and hysteresis current controller is created, and the simulations are verified with various experiments. The verified model is then used to design a new type of PWM-based current controller for the Mobile VISMA. The performance of the hysteresis- and PWM-based current controllers is evaluated and compared for different operational modes of the VISMA and configurations of the inverter hardware. Finally, the behavior of the VISMA during power network faults is examined. A desired behavior of the VISMA during network faults is defined, and experiments are performed which verify that the VISMA, inverter hardware, and current controllers are capable of supporting this
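
    The machine model at the heart of such a device can be illustrated with a minimal swing-equation step (all parameter values below are assumptions, not the VISMA's actual settings): the simulated rotor state is what determines the phase currents the inverter must inject into the grid.

    ```python
    import math

    # Illustrative swing-equation step for a virtual synchronous machine:
    # J * dw/dt = T_mech - T_el - D * (w - w_grid).
    # The resulting rotor speed and angle would drive the phase-current
    # calculation fed to the inverter's current controller.
    def step(omega, theta, t_mech, t_el, dt=1e-3,
             J=0.5, D=2.0, omega_grid=2 * math.pi * 50):
        domega = (t_mech - t_el - D * (omega - omega_grid)) / J
        omega += domega * dt
        theta = (theta + omega * dt) % (2 * math.pi)
        return omega, theta

    omega, theta = 2 * math.pi * 50, 0.0
    for _ in range(1000):                 # 1 s of simulated time
        omega, theta = step(omega, theta, t_mech=1.0, t_el=0.8)
    print(round(omega - 2 * math.pi * 50, 3))  # settles near (1.0 - 0.8) / D = 0.1
    ```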

  9. Machine translation with minimal reliance on parallel resources

    CERN Document Server

    Tambouratzis, George; Sofianopoulos, Sokratis

    2017-01-01

    This book provides a unified view on a new methodology for Machine Translation (MT). This methodology extracts information from widely available resources (extensive monolingual corpora) while only assuming the existence of a very limited parallel corpus, thus having a unique starting point to Statistical Machine Translation (SMT). In this book, a detailed presentation of the methodology principles and system architecture is followed by a series of experiments, where the proposed system is compared to other MT systems using a set of established metrics including BLEU, NIST, Meteor and TER. Additionally, a free-to-use code is available, that allows the creation of new MT systems. The volume is addressed to both language professionals and researchers. Prerequisites for the readers are very limited and include a basic understanding of the machine translation as well as of the basic tools of natural language processing.

  10. Towards Horizontal Architecture for Autonomic M2M Service Networks

    Directory of Open Access Journals (Sweden)

    Juhani Latvakoski

    2014-05-01

    Full Text Available Today, an increasing number of industrial application cases rely on the Machine to Machine (M2M) services exposed from physical devices. Such M2M services enable the interaction of the physical world with the core processes of company information systems. However, there are grand challenges related to complexity and “vertical silos” limiting the M2M market scale and interoperability. It is expected that a horizontal approach to the system architecture is required for solving these challenges. Therefore, a set of architectural principles and key enablers for the horizontal architecture have been specified in this work. A selected set of key enablers, namely an autonomic M2M manager, M2M service capabilities, an M2M messaging system, M2M gateways towards energy-constrained M2M asset devices, and the creation of trust to enable end-to-end security for M2M applications, have been developed. The developed key enablers have been evaluated separately in different scenarios dealing with smart metering, car sharing and electric bike experiments. The evaluation results show that the provided architectural principles and developed key enablers establish a solid ground for future research and seem to enable communication between objects and applications which were not initially designed to communicate together. The next step in this research is to create a combined experimental system to evaluate system interoperability and performance in more detail.

  11. Machine protection: availability for particle accelerators

    International Nuclear Information System (INIS)

    Apollonio, A.

    2015-01-01

    Machine availability is a key indicator for the performance of the next generation of particle accelerators. Availability requirements need to be carefully considered during the design phase to achieve challenging objectives in different fields, e.g. particle physics and material science. For existing and future high-power facilities, such as ESS (European Spallation Source) and HL-LHC (High-Luminosity LHC), operation with unprecedented beam power requires highly dependable Machine Protection Systems (MPS) to avoid any damage-induced downtime. Due to the high complexity of accelerator systems, finding the optimal balance between equipment safety and accelerator availability is challenging. The MPS architecture, as well as the choice of electronic components, has a large influence on the achievable level of availability. In this thesis, novel methods to address the availability of accelerators and their protection systems are presented. Examples of studies related to dependable MPS architectures are given, both for linear accelerators (Linac4, ESS) and circular particle colliders (LHC and HL-LHC). A study of suitable architectures for interlock systems of future availability-critical facilities is presented. Different methods have been applied to assess the anticipated levels of accelerator availability. The thesis presents the prediction of the performance (integrated luminosity for a particle collider) of the LHC and future LHC upgrades, based on a Monte Carlo model that allows reproducing a realistic timeline of LHC operation. This model not only accounts for the contribution of the MPS, but extends to all systems relevant for LHC operation. Results are extrapolated to LHC Run 2, Run 3 and HL-LHC to derive individual system requirements, based on the target integrated luminosity. (author)
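
    The Monte Carlo idea behind such availability predictions can be sketched with a toy fault/repair model. The failure and repair rates below are invented for illustration; the thesis model covers many systems and realistic operational phases.

    ```python
    import random

    # Toy Monte Carlo of machine availability: alternate exponentially
    # distributed running periods (mean MTBF) and repair periods (mean MTTR),
    # then average uptime fraction over many simulated timelines.
    random.seed(1)

    def simulate(hours=5000, mtbf=100.0, mttr=5.0):
        up, t = 0.0, 0.0
        while t < hours:
            run = random.expovariate(1.0 / mtbf)     # time to next fault
            repair = random.expovariate(1.0 / mttr)  # downtime for recovery
            up += min(run, hours - t)
            t += run + repair
        return up / hours

    avail = sum(simulate() for _ in range(200)) / 200
    print(round(avail, 2))  # close to the analytic MTBF/(MTBF+MTTR) = 100/105
    ```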

  12. Terra Harvest software architecture

    Science.gov (United States)

    Humeniuk, Dave; Klawon, Kevin

    2012-06-01

    Under the Terra Harvest Program, the DIA has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future UGS System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n'-play contributions that include controllers, various peripherals, such as sensors, cameras, etc., and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute, is developing the Terra Harvest Open Source Environment (THOSE), a Java Virtual Machine (JVM) running on an embedded Linux Operating System. The Use Cases on which the software is developed support the full range of UGS operational scenarios such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor-based evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the design decisions for some of the key software components. Development process for THOSE is discussed as well.

  13. Baseline Architecture of ITER Control System

    Science.gov (United States)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers both the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  14. Computer Security Primer: Systems Architecture, Special Ontology and Cloud Virtual Machines

    Science.gov (United States)

    Waguespack, Leslie J.

    2014-01-01

    With the increasing proliferation of multitasking and Internet-connected devices, security has reemerged as a fundamental design concern in information systems. The shift of IS curricula toward a largely organizational perspective of security leaves little room for focus on its foundation in systems architecture, the computational underpinnings of…

  15. Adaption of commercial off the shelf modules for reconfigurable machine tool design

    CSIR Research Space (South Africa)

    Mpofu, K

    2008-01-01

    Full Text Available University of Ljubljana (Slovenia) Machine Design Approach. Butala and Sluga [4] view the architecture of the machine tool as a system structure which is reflected in its configuration and which impacts the system's performance. The interfaces... process movements. This approach was also implemented in a computer-aided planning system; they clarify the need to have the features to be implemented embedded in the collective drives that constitute it. This resulted in an adaption...

  16. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application-independent), and the low overhead of the resulting C++ code (over hand-crafted C code), supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution to parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.

  17. Machine en Theater. Ontwerpconcepten van winkelgebouwen

    OpenAIRE

    Kooijman, D.C.

    1999-01-01

    Machine and Theater, Design Concepts for Shop Buildings is a richly illustrated study of the architectural and urban development of retail buildings, focusing on six essential shop types: the passage and the department store in particular in Germany and France in the nineteenth century; supermarkets and malls and their relation to the suburbanisation and the emerging car use; and the peripheral retail park and location-free virtual store as the most recent developments. On the basis of a larg...

  18. GREAT: a web portal for Genome Regulatory Architecture Tools.

    Science.gov (United States)

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout, defined as the respective positioning of co-functional genes, and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in interactive web graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
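
    A minimal sketch of regular-pattern detection on gene positions, on synthetic data; GREAT's actual statistics and genome handling are far richer than this.

    ```python
    import numpy as np

    # Synthetic example: six co-functional genes laid out with a regular
    # 100-bp spacing. For perfectly regular data, the greatest common divisor
    # of all pairwise distances recovers the underlying period.
    positions = np.array([10, 110, 210, 310, 410, 510])

    d = np.abs(positions[:, None] - positions[None, :])
    dists = d[np.triu_indices_from(d, k=1)]  # pairwise gene-to-gene distances

    period = np.gcd.reduce(dists)
    print(period)  # 100 for this perfectly regular layout
    ```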

  19. The LHC Collimator Controls Architecture - Design and beam tests

    CERN Document Server

    Redaelli, S; Gander, P; Jonker, M; Lamont, M; Losito, R; Masi, A; Sobczak, M

    2007-01-01

    The LHC collimation system will require simultaneous management by the LHC control system of more than 500 jaw positioning mechanisms in order to ensure the required beam cleaning and machine protection performance in all machine phases, from injection at 450 GeV to collision at 7 TeV. Each jaw position is a critical parameter for machine safety. In this paper, the architecture of the LHC collimator controls is presented. The basic design for the accurate control of the LHC collimators and the interfaces to the other components of the LHC Software Application and control infrastructures are described. The full controls system has been tested in a real accelerator environment in the CERN SPS during beam tests with a full-scale collimator prototype. The results and the lessons learned are presented.

  20. Ecological Design of Cooperative Human-Machine Interfaces for Safety of Intelligent Transport Systems

    Directory of Open Access Journals (Sweden)

    Orekhov Aleksandr

    2016-01-01

    Full Text Available The paper describes research results in the domain of cooperative intelligent transport systems. The requirements for human-machine interfaces, considering the safety of intelligent transport systems (ITS), are analyzed. Profiling of the requirements for a cooperative human-machine interface (CHMI) for such systems, including requirements for usability and safety, is based on a set of standards for ITSs. An approach to and a design technique for cooperative human-machine interfaces for ITSs are suggested. The architecture of a cloud-based CHMI for intelligent transport systems has been developed. The prototype of the software system CHMI4ITS is described.

  1. An investigative study towards constructing anthropocentric Man-Machine System design evaluation methodology

    International Nuclear Information System (INIS)

    Yoshikawa, H.; Gofuku, A.; Itoh, T.; Sasaki, K.

    1992-01-01

    A methodological investigation has been conducted for evaluating the reliability of man-machine interaction in the total Man-Machine System (MMS) from the viewpoint of safety maintenance in emergency situations of a nuclear power plant. The basic considerations in our study are: (i) what are the MMS design data to be evaluated, (ii) how should those MMS design data be treated, and (iii) how can the introduction effects of various operator support tools be evaluated. The methods of both qualitative and quantitative MMS design evaluation are summarized in this paper, with the system architecture based on man-machine interaction simulation and the related cognitive human error factor analysis. (author)

  2. Specification, Design, and Analysis of Advanced HUMS Architectures

    Science.gov (United States)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. 
Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aid the

  3. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    Science.gov (United States)

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237
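
    The cost/deadline trade-off that such scheduling heuristics navigate can be sketched in a few lines. The greedy rule, VM types and all names below are illustrative assumptions, not the four algorithms proposed in the paper:

```python
# Hypothetical sketch of one cost-aware scheduling heuristic: place each
# incoming workflow request on the cheapest VM type whose estimated
# runtime still meets the request's deadline.
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    speed: float           # relative execution speed (work units per hour)
    price_per_hour: float

@dataclass
class Request:
    workload: float        # abstract work units
    deadline_hours: float

def schedule(request, vm_types):
    """Return the cheapest VM type that meets the deadline, or None."""
    feasible = []
    for vm in vm_types:
        runtime = request.workload / vm.speed
        if runtime <= request.deadline_hours:
            feasible.append((runtime * vm.price_per_hour, vm))
    if not feasible:
        return None  # no VM type can meet the deadline
    return min(feasible, key=lambda pair: pair[0])[1]

vms = [VMType("small", 1.0, 0.10), VMType("large", 4.0, 0.50)]
print(schedule(Request(workload=8.0, deadline_hours=10.0), vms).name)  # small: slower but cheaper, still on time
print(schedule(Request(workload=8.0, deadline_hours=3.0), vms).name)   # large: only it meets the deadline
```

    A price/performance-oriented variant would rank by cost-per-unit-speedup instead of raw cost; the paper compares such targets experimentally.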

  4. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    Science.gov (United States)

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.

  5. Machines for lattice gauge theory

    International Nuclear Information System (INIS)

    Mackenzie, P.B.

    1989-05-01

    The most promising approach to the solution of the theory of strong interactions is large scale numerical simulation using the techniques of lattice gauge theory. At the present time, computing requirements for convincing calculations of the properties of hadrons exceed the capabilities of even the most powerful commercial supercomputers. This has led to the development of massively parallel computers dedicated to lattice gauge theory. This talk will discuss the computing requirements behind these machines, and general features of the components and architectures of the half dozen major projects now in existence. 20 refs., 1 fig

  6. Behavioral simulation of a nuclear power plant operator crew for human-machine system design

    International Nuclear Information System (INIS)

    Furuta, K.; Shimada, T.; Kondo, S.

    1999-01-01

    This article proposes an architecture of behavioral simulation of an operator crew in a nuclear power plant including group processes and interactions between the operators and their working environment. An operator model was constructed based on the conceptual human information processor and then substantiated as a knowledge-based system with multiple sets of knowledge base and blackboard, each of which represents an individual operator. From a trade-off between reality and practicality, we adopted an architecture of simulation that consists of the operator, plant and environment models in order to consider operator-environment interactions. The simulation system developed on this framework and called OCCS was tested using a scenario of BWR plant operation. The case study showed that operator-environment interactions have significant effects on operator crew performance and that they should be considered properly for simulating behavior of human-machine systems. The proposed architecture contributed to more realistic simulation in comparison with an experimental result, and a good prospect has been obtained that computer simulation of an operator crew is feasible and useful for human-machine system design. (orig.)

  7. Computational capabilities of multilayer committee machines

    Energy Technology Data Exchange (ETDEWEB)

    Neirotti, J P [NCRG, Aston University, Birmingham (United Kingdom); Franco, L, E-mail: j.p.neirotti@aston.ac.u [Depto. de Lenguajes y Ciencias de la Computacion, Universidad de Malaga (Spain)

    2010-11-05

    We obtained an analytical expression for the computational complexity of many-layered committee machines with a finite number of hidden layers (L < ∞) using the generalization complexity measure introduced by Franco et al (2006) IEEE Trans. Neural Netw. 17 578. Although our result is valid in the large-size limit and for an overlap synaptic matrix that is ultrametric, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.
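
    The committee-machine architecture analyzed above can be illustrated with a toy example (the weights, input encoding and function names are assumptions for illustration, not the paper's model): hidden perceptrons vote, and the majority sign realizes Boolean functions, such as XOR, that no single perceptron can compute.

```python
# Toy committee machine: each row of W is one hidden perceptron; the
# machine outputs the majority vote of the hidden units' signs.
def committee(x, W):
    votes = [1 if sum(w * v for w, v in zip(row, x)) > 0 else -1 for row in W]
    return 1 if sum(votes) > 0 else -1

# Inputs are encoded as x = [1, x1, x2]; the leading 1 supplies a bias.
# Rows implement OR, NAND and a constant -1 unit; their majority vote
# realizes XOR over the +/-1 inputs.
W = [[1, 1, 1],     # OR
     [1, -1, -1],   # NAND
     [-1, 0, 0]]    # always -1

for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x1, x2, "->", committee([1, x1, x2], W))  # prints the XOR truth table
```

    The generalization complexity measure in the record quantifies, roughly, how hard such target functions are for an architecture of a given depth to reproduce.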

  8. Rio: a dynamic self-healing services architecture using Jini networking technology

    Science.gov (United States)

    Clarke, James B.

    2002-06-01

    Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. In order for this to happen, the application is usually taken down, installed and started with all systems in sync and knowing about each other. Static environments such as these are extremely difficult to set up, deploy and administer.

  9. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    Science.gov (United States)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide the necessary levels of capability (processing power, system availability and redundancy) while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, the SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception, that has already happened in several cases!). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully-redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the

  10. An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.

    Science.gov (United States)

    Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif

    2017-06-23

    Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy-efficiency without compromising on the fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service and monitors the activities of a person in daily life and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method has 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions.
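
    The tiered idea, cheap thresholding stages that gate an expensive classifier, can be sketched as follows. The thresholds, window layout and the stand-in classifier are illustrative assumptions, not uSurvive's actual parameters:

```python
# Sketch of a 3-tier fall detector: tiers 1 and 2 are cheap threshold
# checks on accelerometer windows; the costly tier-3 classifier runs
# only when both cheap tiers fire, saving energy on most windows.
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def tier1_impact(window, threshold=25.0):
    # Tier 1: does any sample exceed an impact threshold (m/s^2)?
    return any(magnitude(s) > threshold for s in window)

def tier2_stillness(window, tolerance=1.5):
    # Tier 2: is the device near-still (close to 1 g) at the window's end,
    # as expected when the person is lying down after a fall?
    return all(abs(magnitude(s) - 9.81) < tolerance for s in window[-3:])

def tier3_classifier(window):
    # Stand-in for a trained machine learning model on extracted features.
    return max(magnitude(s) for s in window) > 30.0

def detect_fall(window):
    return tier1_impact(window) and tier2_stillness(window) and tier3_classifier(window)

fall_window = [(0, 0, 9.8)] * 3 + [(0, 0, 35.0)] + [(0, 0, 9.8)] * 3
walk_window = [(0, 0, 11.0)] * 7
print(detect_fall(fall_window))  # True
print(detect_fall(walk_window))  # False: tier 1 rejects it before tier 3 runs
```

    Because Python's `and` short-circuits, the expensive tier never executes for the vast majority of everyday-activity windows, which is the source of the energy savings the record reports.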

  11. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
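
    The simple point-relaxation smoother that makes such algorithms parallel-friendly can be sketched for a 1-D model problem. This weighted-Jacobi sweep is an illustrative sketch under assumed names, not the paper's implementation:

```python
# One weighted-Jacobi point-relaxation sweep for the 1-D Poisson problem
# -u'' = f with zero boundary values on a uniform grid of spacing h.
# Every interior point updates independently from the OLD iterate, which
# is why point relaxation maps cleanly onto distributed-memory machines.
def jacobi_sweep(u, f, h, weight=2.0 / 3.0):
    new = u[:]
    for i in range(1, len(u) - 1):
        jacobi = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        new[i] = (1 - weight) * u[i] + weight * jacobi
    return new

# Smooth a homogeneous problem (f = 0) from a rough initial guess.
n, h = 9, 1.0 / 8.0
u = [0.0] + [1.0] * (n - 2) + [0.0]
f = [0.0] * n
for _ in range(200):
    u = jacobi_sweep(u, f, h)
print(max(abs(v) for v in u))  # error decays toward zero
```

    On anisotropic problems this pointwise smoother alone stalls; the MSG approach in the record recovers robustness by combining it with multiple semicoarsened grids rather than switching to line or plane relaxation.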

  12. Methods and Software Architecture for Activity Recognition from Position Data

    DEFF Research Database (Denmark)

    Godsk, Torben

    This thesis describes my studies on the subject of recognizing cow activities from satellite based position data. The studies comprise methods and software architecture for activity recognition from position data, applied to cow activity recognition. The development of methods and software....... The results of these calculations are applied to a given standard machine learning algorithm, and the activity, performed by the cow as the measurements were recorded, is recognized. The software architecture integrates these methods and ensures flexible activity recognition. For instance, it is flexible...... in relation to the use of different sensors modalities and/or within different domains. In addition, the methods and their integration with the software architecture ensures both robust and accurate activity recognition. Utilized, it enables me to classify the five activities robustly and with high success...

  13. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. 
Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that
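
    The executable-state-machine mechanism described above, transitions guarded by conditions over multiple global variables, can be sketched as follows. The structure and names are illustrative assumptions, not VML syntax:

```python
# Sketch of an executable state machine: transitions are guarded by
# predicates over several global variables, evaluated on each cycle
# (the "multi-variable waiting" idea).
class StateMachine:
    def __init__(self, initial, transitions):
        # transitions: list of (from_state, predicate, to_state)
        self.state = initial
        self.transitions = transitions

    def step(self, globals_):
        """Take the first enabled transition; return True if one fired."""
        for src, predicate, dst in self.transitions:
            if self.state == src and predicate(globals_):
                self.state = dst
                return True
        return False

# A transition fires only when a compound condition over multiple
# telemetry variables becomes true (hypothetical variable names).
sm = StateMachine(
    "IDLE",
    [("IDLE", lambda g: g["power"] > 28.0 and g["temp"] < 40.0, "ARMED"),
     ("ARMED", lambda g: g["cmd"] == "GO", "ACTIVE")],
)

sm.step({"power": 27.0, "temp": 35.0, "cmd": ""})    # stays IDLE: condition not met
sm.step({"power": 30.0, "temp": 35.0, "cmd": ""})    # IDLE -> ARMED
sm.step({"power": 30.0, "temp": 35.0, "cmd": "GO"})  # ARMED -> ACTIVE
print(sm.state)  # ACTIVE
```

    Coordinating several such machines via shared signal conditions, as the record describes, amounts to evaluating each signal's distributed conditions across all machines before arming it.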

  14. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  15. Engineering artificial machines from designable DNA materials for biomedical applications.

    Science.gov (United States)

    Qi, Hao; Huang, Guoyou; Han, Yulong; Zhang, Xiaohui; Li, Yuhui; Pingguan-Murphy, Belinda; Lu, Tian Jian; Xu, Feng; Wang, Lin

    2015-06-01

    Deoxyribonucleic acid (DNA) emerges as building bricks for the fabrication of nanostructure with complete artificial architecture and geometry. The amazing ability of DNA in building two- and three-dimensional structures raises the possibility of developing smart nanomachines with versatile controllability for various applications. Here, we overviewed the recent progresses in engineering DNA machines for specific bioengineering and biomedical applications.

  16. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The author first designs a computational structure which is well suited for a wide range of vision tasks and then develops parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis he demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  17. Supervisory Control System Architecture for Advanced Small Modular Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M [ORNL; Cole, Daniel L [University of Pittsburgh; Fugate, David L [ORNL; Kisner, Roger A [ORNL; Melin, Alexander M [ORNL; Muhlheim, Michael David [ORNL; Rao, Nageswara S [ORNL; Wood, Richard Thomas [ORNL

    2013-08-01

    This technical report was generated as a product of the Supervisory Control for Multi-Modular SMR Plants project within the Instrumentation, Control and Human-Machine Interface technology area under the Advanced Small Modular Reactor (SMR) Research and Development Program of the U.S. Department of Energy. The report documents the definition of strategies, functional elements, and the structural architecture of a supervisory control system for multi-modular advanced SMR (AdvSMR) plants. This research activity advances the state-of-the art by incorporating decision making into the supervisory control system architectural layers through the introduction of a tiered-plant system approach. The report provides a brief history of hierarchical functional architectures and the current state-of-the-art, describes a reference AdvSMR to show the dependencies between systems, presents a hierarchical structure for supervisory control, indicates the importance of understanding trip setpoints, applies a new theoretic approach for comparing architectures, identifies cyber security controls that should be addressed early in system design, and describes ongoing work to develop system requirements and hardware/software configurations.

  18. ISA-97 Compliant Architecture Testbed (ICAT) Project

    Science.gov (United States)

    1992-03-30

    by the System Integration Directorate of the USAISEC, August 29, 1992. The report discusses the refinement of the ISA-97 Compliant Architecture Model...browser and iconic representations of system objects and resources. When the user is interacting with an application which has multiple components, it is...computer communications, it is not uncommon for large information systems to be shared by users on multiple machines. The trend towards the desktop

  19. Architecture of the Vibrio cholerae toxin-coregulated pilus machine revealed by electron cryotomography

    DEFF Research Database (Denmark)

    Chang, Yi Wei; Kjær, Andreas; Ortega, Davi R.

    2017-01-01

    ,2. T4aP are more widespread and are involved in cell motility 3, DNA transfer 4, host predation 5 and electron transfer 6. T4bP are less prevalent and are mainly found in enteropathogenic bacteria, where they play key roles in host colonization 7. Following similar work on T4aP machines 8,9, here we...... sequence homology to components of the previously analysed Myxococcus xanthus T4aP machine (T4aPM), we find that their structures are nevertheless remarkably similar. Based on homologies with components of the M. xanthus T4aPM and additional reconstructions of TCPM mutants in which the non...

  20. Peer-to-peer architectures for exascale computing : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these

  1. Big Data, Internet of Things and Cloud Convergence--An Architecture for Secure E-Health Applications.

    Science.gov (United States)

    Suciu, George; Suciu, Victor; Martian, Alexandru; Craciunescu, Razvan; Vulpe, Alexandru; Marcu, Ioana; Halunga, Simona; Fratu, Octavian

    2015-11-01

    Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.

  2. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more...... sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear....... The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures

  3. CHANGING PARADIGMS IN SPACE THEORIES: Recapturing 20th Century Architectural History

    Directory of Open Access Journals (Sweden)

    Gül Kaçmaz Erk

    2013-03-01

    Full Text Available The concept of space entered architectural history as late as 1893. Studies in art opened up the discussion, and it has been studied in various ways in architecture ever since. This article aims to instigate an additional reading to architectural history, one that is not supported by “isms” but based on space theories in the 20th century. Objectives of the article are to bring the concept of space and its changing paradigms to the attention of architectural researchers, to introduce a conceptual framework to classify and clarify theories of space, and to enrich the discussions on the 20th century architecture through theories that are beyond styles. The introduction of space in architecture will revolve around subject-object relationships, three-dimensionality and senses. Modern space will be discussed through concepts such as empathy, perception, abstraction, and geometry. A scientific approach will follow to study the concept of place through environment, event, behavior, and design methods. Finally, the research will look at contemporary approaches related to digitally supported space via concepts like reality-virtuality, mediated experience, and relationship with machines.

  4. AHaH Computing–From Metastable Switches to Attractors to Machine Learning

    Science.gov (United States)

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315

  5. AHaH computing-from metastable switches to attractors to machine learning.

    Directory of Open Access Journals (Sweden)

    Michael Alexander Nugent

    Full Text Available Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.

  6. Modelling of human-machine interaction in equipment design of manufacturing cells

    Science.gov (United States)

    Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming

    2017-08-01

    This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.

  7. The development of an open architecture control system for CBN high speed grinding

    OpenAIRE

    Silva, E. Jannone da; Biffi, M.; Oliveira, J. F. G. de

    2004-01-01

    The aim of this project is the development of an open architecture controlling (OAC) system to be applied in the high speed grinding process using CBN tools. Besides other features, the system will allow a new monitoring and controlling strategy, by the adoption of open architecture CNC combined with multi-sensors, a PC and third-party software. The OAC system will be implemented in a high speed CBN grinding machine, which is being developed in a partnership between the University of São Paul...

  8. Central system of Interlock of ITER, high integrity architecture

    International Nuclear Information System (INIS)

    Prieto, I.; Martinez, G.; Lopez, C.

    2014-01-01

    The CIS (Central Interlock System), along with the CODAC system and CSS (Central Safety System), forms the central I&C systems of ITER. The CIS is responsible for implementing the core protection functions (Central Interlock Functions) through different plant systems within the overall strategy of investment protection for ITER. IBERDROLA provides engineering support to define and develop the CIS control architecture according to the stringent requirements of integrity, availability and response time. For functions with response times on the order of half a second, high-availability industrial-range PLCs were selected. However, due to the nature of the machine itself, certain functions must be able to act in under a millisecond, so an architecture based on FPGAs (Field Programmable Gate Arrays) capable of meeting the requirements had to be developed. In this article the CIS architecture is described, as well as the process for the development and validation of the selected platforms. (Author)

  9. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    The high performance processing (HPP) is an innovative architecture which targets high performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully under the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed under the TSMC 16 nm FinFET Compact (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
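The two quoted figures imply a chip power draw, which is easy to check. A small sanity-check calculation (our arithmetic, assuming peak performance and efficiency are quoted for the same workload and precision):

```python
# Back-of-envelope check of the reported 32-core HPP figures.
peak_tflops = 4.3            # reported peak performance, TFLOPS
efficiency_gflops_w = 89.5   # reported power efficiency, GFLOPS/W

# Implied power at peak: 4300 GFLOPS / 89.5 GFLOPS/W ≈ 48 W
implied_power_w = (peak_tflops * 1000) / efficiency_gflops_w
print(f"Implied chip power at peak: {implied_power_w:.1f} W")
```

So the reported numbers are self-consistent with a roughly 48 W part at peak.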

  10. Towards freeform curved blazed gratings using diamond machining

    Science.gov (United States)

    Bourgenot, C.; Robertson, D. J.; Stelter, D.; Eikenberry, S.

    2016-07-01

    Concave blazed gratings greatly simplify the architecture of spectrographs by reducing the number of optical components. The production of these gratings using diamond machining offers practically no limits in the design of the grating substrate shape, with the possibility of making large-sag freeform surfaces, unlike the alternative and traditional method of holography and ion etching. In this paper, we report on the technological challenges and progress in the making of these curved blazed gratings using an ultra-high-precision 5-axis Moore Nanotech machine. We describe their implementation in an integral field unit prototype called IGIS (Integrated Grating Imaging Spectrograph), where freeform curved gratings are used as pupil mirrors. The goal is to develop the technologies for the production of the next generation of low-cost, compact, high-performance integral field unit spectrometers.

  11. Bio-inspired adaptive feedback error learning architecture for motor control.

    Science.gov (United States)

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).
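The feedback-error-learning idea underlying this architecture can be shown on a deliberately tiny example. This is our own one-parameter sketch, not the paper's LWPR-plus-cerebellum model: the output of a low-gain feedback controller serves as the teaching signal for an adaptive feedforward term, so the feedback contribution shrinks as the feedforward model absorbs the (unknown) plant dynamics.

```python
# Minimal feedback-error-learning loop on a 1-D "force = mass * acceleration"
# plant. All gains and the task are illustrative assumptions.
mass = 2.0                 # plant parameter, unknown to the controller
kp = 0.5                   # deliberately low feedback gain (compliant control)
eta = 0.1                  # learning rate
w = 0.0                    # adaptive feedforward weight (ideally converges to mass)

early_fb = late_fb = None
for step in range(200):
    target_acc = 1.0                       # desired acceleration (constant task)
    u_ff = w * target_acc                  # feedforward command
    u_fb = kp * (target_acc - u_ff / mass) # feedback acts on the residual error
    w += eta * u_fb * target_acc           # feedback command as teaching signal
    if step == 0:
        early_fb = abs(u_fb)
    late_fb = abs(u_fb)

print(early_fb, late_fb)  # the feedback share of the command shrinks over time
```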

  12. Design of a real-time open architecture controller for a reconfigurable machine tool

    CSIR Research Space (South Africa)

    Masekamela, I

    2008-11-01

    Full Text Available The paper presents the design and the development of a real-time, open architecture controller that is used for control of reconfigurable manufacturing tools (RMTs) in reconfigurable manufacturing systems (RMS). The controller that is presented can...

  13. ADAPTING HYBRID MACHINE TRANSLATION TECHNIQUES FOR CROSS-LANGUAGE TEXT RETRIEVAL SYSTEM

    Directory of Open Access Journals (Sweden)

    P. ISWARYA

    2017-03-01

    Full Text Available This research work aims at developing a Tamil-to-English cross-language text retrieval system using a hybrid machine translation approach. The hybrid machine translation system is a combination of rule-based and statistical approaches. In an existing word-by-word translation system there are many issues, among them ambiguity, out-of-vocabulary words, word inflections, and improper sentence structure. To handle these issues, the proposed architecture is designed in such a way that it contains an improved part-of-speech tagger, a machine-learning-based morphological analyser, a collocation-based word sense disambiguation procedure, a semantic dictionary, tense markers with gerund-ending rules, and a two-pass transliteration algorithm. From the experimental results it is clear that the proposed Tamil-query-based translation system achieves significantly better translation quality than the existing system, reaching 95.88% of monolingual performance.

  14. An expert system for vibration based diagnostics of rotating machines

    International Nuclear Information System (INIS)

    Korteniemi, A.

    1990-01-01

    Changes in the mechanical condition of rotating machinery can very often be observed as changes in its vibration. This paper presents an expert system for vibration-based diagnosis of rotating machines by describing the architecture of the developed prototype system. The importance of modelling the problem-solving knowledge as well as the domain knowledge is emphasized by presenting the knowledge at several levels.

  15. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building’s facade is the main architectural characteristic… they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look like and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One…

  16. A Focus on Triazolium as a Multipurpose Molecular Station for pH-Sensitive Interlocked Crown-Ether-Based Molecular Machines.

    Science.gov (United States)

    Coutrot, Frédéric

    2015-10-01

    The control of motion of one element with respect to others in an interlocked architecture allows for different co-conformational states of a molecule. This can result in variations of physical or chemical properties. The increase of knowledge in the field of molecular interactions led to the design, the synthesis, and the study of various systems of molecular machinery in a wide range of interlocked architectures. In this field, the discovery of new molecular stations for macrocycles is an attractive way to conceive original molecular machines. In the very recent past, the triazolium moiety proved to interact with crown ethers in interlocked molecules, so that it could be used as an ideal molecular station. It also served as a molecular barrier in order to lock interlaced structures or to compartmentalize interlocked molecular machines. This review describes the recently reported examples of pH-sensitive triazolium-containing molecular machines and their peculiar features.

  17. The importance of layout and configuration data for flexibility during commissionning and operation of the LHC machine protection systems

    CERN Document Server

    Mariethoz, Julien; Le Roux, Pascal; Bernard, Frederic; Harrison, Robert; Zerlauth, Markus

    2006-01-01

    Due to the large stored energies in both magnets and particle beams, the Large Hadron Collider (LHC) requires a large inventory of machine protection systems, e.g. powering interlock systems, based on a series of distributed industrial controllers for the protection of the more than 10'000 normal and superconducting magnets. Such systems are required to be at the same time fast, reliable and secure, but also flexible and configurable, to allow for automated commissioning, remote monitoring and optimization during later operation. Based on the generic hardware architecture of the LHC machine protection systems presented at EPAC 2002 [2] and ICALEPCS 2003, the use of configuration data for protection systems in view of the required reliability and safety is discussed. To achieve the very high level of reliability, it is required to use a coherent description of the layout of the accelerator components and of the associated machine protection architecture and their logical interconnections. Mechanisms to guarant...

  18. Kinetic Digitally-Driven Architectural Structures as ‘Marginal’ Objects – a Conceptual Framework

    Directory of Open Access Journals (Sweden)

    Sokratis Yiannoudes

    2014-07-01

    Full Text Available Although the most important reasons for designing digitally-driven kinetic architectural structures seem to be practical ones, namely functional flexibility and adaptation to changing conditions and needs, this paper argues that there is possibly an additional socio-cultural aspect driving their design and construction. Through this argument, the paper attempts to debate their status and question their concepts and practices. Looking at the design explorations and discourses of real or visionary technologically-augmented architecture since the 1960s, one cannot fail to notice the use of biological metaphors and concepts to describe them – an attempt to ‘naturalise’ them which culminates today in the conception of kinetic structures and intelligent environments as literally ‘alive’. Examining these attitudes in contemporary examples, the paper demonstrates that digitally-driven kinetic structures can be conceived as artificial ‘living’ machines that undermine the boundary between the natural and the artificial. It argues that by ‘humanising’ these structures, attributing biological characteristics such as self-initiated motion, intelligence and reactivity, their designers are ‘trying’ to subvert and blur the human-machine(-architecture) discontinuity. The argument is developed by building a conceptual framework which is based on evidence from the social studies of science and technology, in particular their critique of modern nature-culture and human-machine distinctions, as well as the history and theory of artificial life, which discuss the cultural significance and sociology of ‘living’ objects. In particular, the paper looks into the techno-scientific discourses and practices which, since the 18th century, have been exploring the creation of ‘marginal’ objects, i.e. seemingly alive objects made to challenge the nature-artifice boundary.

  19. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand… knowledge that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can… correlation between the study of existing architectures and the training of competences to design for present-day realities.

  20. The Effect of Employees' Job Insecurity on Their Turnover Intentions: A Study in Five-Star Hotels in the Alanya Region

    OpenAIRE

    KARACAOĞLU, Korhan

    2018-01-01

    Job insecurity refers to situations arising from any legal or non-legal organizational change that threatens the continuity of one's current job, creating in the employee an anxiety about losing that job grounded in a sense of uncertainty. Turnover intention, one of the consequences of job insecurity in the literature, is defined as an employee's thought of wanting to quit his or her job in the near future. In this study, the interaction between job insecurity and turnover intention ... Alanya ...

  1. Executable Architecture of Net Enabled Operations: State Machine of Federated Nodes

    Science.gov (United States)

    2009-11-01

    ...(verbal descriptions from operators) of the current Command and Control (C2) practices into model form. In theory these should be Standard Operating Procedures (SOPs) that execute as a thread from start to finish. Since a large quantity of data will be required to ensure that the model reflects the true processes, the authors recommend that the state machine...

  2. Housing Value Forecasting Based on Machine Learning Methods

    OpenAIRE

    Mu, Jingyi; Wu, Fang; Zhang, Aihua

    2014-01-01

    In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether developing...

  3. A software architecture for adaptive modular sensing systems.

    Science.gov (United States)

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  4. Other programmatic agencies in the metropolis: a machinic approach to urban reterritorialization processes

    Directory of Open Access Journals (Sweden)

    Igor Guatelli

    2013-06-01

    Full Text Available What if the strength of the architectural object were associated with program and spatial strategies engendered at the service of “habitability” and future sociabilities rather than with the building of monumental architectural gadgets and optical events in the landscape? Based on the Deleuzean machinic phylum (from the philosopher Gilles Deleuze) as well as concepts associated with it such as “bonding” and “agency,” using the Lacanian approach (from the psychiatrist Jacques Lacan) to the gadget concept and the Derridian concept (from the philosopher Jacques Derrida) of “supplement,” this article discusses a shift of the most current senses and representations of contemporary urban architectural design, historically associated with the notable (meaning the wish to be noticed) formal and composite materialization of the artistic object at the service of programmed sociabilities, towards another conceptualization. The building of architectural supports from residual programmatic and spatial agencies (according to Deleuze, the possibility of producing other wishes, far from the dominant capitalist logic, lies in residues, in the residual flows produced by capital itself) emerges as a critical path to the categorical imperative of the generalizing global logic. It is a logic based on non-territorial landscapes and centered on investments in the composite view and intentional spatial and programmatic imprisonments in familiar formulae originating from domesticated and standardized prêt-à-utiliser thinking. To think about other architectural spatial and programmatic agencies originating from residues and flows that simultaneously rise from and escape the global logic is to bet on the chance of non-programmed sociabilities taking place. Ceasing to think about architecture as a formal object in its artistic and paradigmatic dimension would mean to conceive it as an urban syntagmatic machine of [de]constructive power.

  5. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of a fixed size is obtained using the center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
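The extreme learning machine component named above is a general technique that can be sketched in a few lines. This is a generic ELM classifier head on synthetic data (not the paper's MFC-CNN-ELM): hidden-layer weights are drawn randomly and frozen, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "feature vectors" standing in for CNN features: two separable blobs.
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]                      # one-hot targets

n_hidden = 64
W = rng.normal(size=(10, n_hidden))   # random input->hidden weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                # hidden activations
beta = np.linalg.pinv(H) @ T          # output weights in closed form, no backprop

pred = np.argmax(H @ beta, axis=1)
train_acc = float((pred == y).mean())
print(train_acc)
```

The appeal of the ELM step is that training reduces to one least-squares solve, which is why it is often grafted onto a feature extractor such as a CNN.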

  6. Discussion paper for a highly parallel array processor-based machine

    International Nuclear Information System (INIS)

    Hagstrom, R.; Bolotin, G.; Dawson, J.

    1984-01-01

    The architectural plan for a quickly realizable implementation of a highly parallel special-purpose computer system with peak performance in the range of 6 billion floating point operations per second is discussed. The architecture is suited to lattice gauge theory computations of fundamental physics interest and may be applicable to a range of other numerically intensive computational problems. The plan is quickly realizable because it employs a maximum of commercially available hardware subsystems and because the architecture is software-transparent to the individual processors, allowing straightforward re-use of whatever commercially available operating systems and support software are suitable to run on the commercially produced processors. A tiny prototype instrument, designed along this architecture, has already operated. A few elementary examples of programs which can run efficiently are presented. The large machine which the authors propose to build would be based upon a highly competent array processor, the ST-100 Array Processor, and specific design possibilities are discussed. The first step toward realizing this plan practically is to install a single ST-100 to allow algorithm development to proceed while a demonstration unit is built using two of the ST-100 Array Processors

  7. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 & 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  8. System Center 2012 R2 Virtual Machine Manager cookbook

    CERN Document Server

    Cardoso, Edvaldo Alessandro

    2014-01-01

    This book is a step-by-step guide packed with recipes that cover architecture design and planning. The book is also full of deployment tips, techniques, and solutions. If you are a solutions architect, technical consultant, administrator, or any other virtualization enthusiast who needs to use Microsoft System Center Virtual Machine Manager in a real-world environment, then this is the book for you. We assume that you have previous experience with Windows 2012 R2 and Hyper-V.

  9. A computer architecture for the implementation of SDL

    Energy Technology Data Exchange (ETDEWEB)

    Crutcher, L A

    1989-01-01

    Finite State Machines (FSMs) are a part of well-established automata theory. The FSM model is useful in all stages of system design, from abstract specification to implementation in hardware. The FSM model has been studied as a technique in software design, and the implementation of this type of software considered. The Specification and Description Language (SDL) has been considered in detail as an example of this approach. The complexity of systems designed using SDL warrants their implementation through a programmed computer. A benchmark for the implementation of SDL has been established and the performance of SDL on three particular computer architectures investigated. Performance is judged according to this benchmark and also the ease of implementation, which is related to the confidence of a correct implementation. The implementation on 68000s and transputers is considered as representative of established and state-of-the-art microprocessors respectively. A third architecture that uses a processor that has been proposed specifically for the implementation of SDL is considered as a high-level custom architecture. Analysis and measurements of the benchmark on each architecture indicates that the execution time of SDL decreases by an order of magnitude from the 68000 to the transputer to the custom architecture. The ease of implementation is also greater when the execution time is reduced. A study of some real applications of SDL indicates that the benchmark figures are reflected in user-oriented measures of performance such as data throughput and response time. A high-level architecture such as the one proposed here for SDL can provide benefits in terms of execution time and correctness.
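The FSM model this abstract builds on can be captured with a transition table keyed by (state, event) pairs. The states and events below are invented for illustration, loosely modeled on the kind of connection-handling process SDL is typically used to specify:

```python
# Minimal finite state machine: a transition table keyed by (state, event).
TRANSITIONS = {
    ("idle", "connect_req"): "connecting",
    ("connecting", "connect_ack"): "connected",
    ("connecting", "timeout"): "idle",
    ("connected", "disconnect"): "idle",
}

def run(events, state="idle"):
    """Feed a sequence of events through the FSM. An event with no entry in
    the table leaves the state unchanged, mirroring SDL's implicit-transition
    (discard) behaviour for unexpected signals."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
    return state

final = run(["connect_req", "connect_ack", "disconnect"])
print(final)
```

A programmed implementation like this is essentially what the three benchmarked architectures execute; the performance differences come from how the table lookup and signal queueing are realized on each processor.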

  10. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  11. The FAIR timing master: a discussion of performance requirements and architectures for a high-precision timing system

    International Nuclear Information System (INIS)

    Kreider, M.

    2012-01-01

    Production chains in a particle accelerator are complex structures with many inter-dependencies and multiple paths to consider. This ranges from system initialization and synchronization of numerous machines to interlock handling and appropriate contingency measures like beam dump scenarios. The FAIR facility will employ White-Rabbit, a time based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, especially devised for this purpose. Using the thread model of an OS or other high level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of said requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelization in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal will be to determine the best instruction set for modeling any given production chain and devising a suitable architecture to execute these models. (authors)

  12. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.

    Science.gov (United States)

    Mutasa, Simukayi; Chang, Peter D; Ruzal-Shapiro, Carrie; Ayyala, Rama

    2018-02-05

    Bone age assessment (BAA) is a commonly performed diagnostic study in pediatric radiology to assess skeletal maturity. The most commonly utilized method for assessment of BAA is the Greulich and Pyle method (Pediatr Radiol 46.9:1269-1274, 2016; Arch Dis Child 81.2:172-173, 1999) atlas. The evaluation of BAA can be a tedious and time-consuming process for the radiologist. As such, several computer-assisted detection/diagnosis (CAD) methods have been proposed for automation of BAA. Classical CAD tools have traditionally relied on hard-coded algorithmic features for BAA which suffer from a variety of drawbacks. Recently, the advent and proliferation of convolutional neural networks (CNNs) has shown promise in a variety of medical imaging applications. There have been at least two published applications of using deep learning for evaluation of bone age (Med Image Anal 36:41-51, 2017; JDI 1-5, 2017). However, current implementations are limited by a combination of both architecture design and relatively small datasets. The purpose of this study is to demonstrate the benefits of a customized neural network algorithm carefully calibrated to the evaluation of bone age utilizing a relatively large institutional dataset. In doing so, this study will aim to show that advanced architectures can be successfully trained from scratch in the medical imaging domain and can generate results that outperform any existing proposed algorithm. The training data consisted of 10,289 images of different skeletal age examinations, 8909 from the hospital Picture Archiving and Communication System at our institution and 1383 from the public Digital Hand Atlas Database. The data was separated into four cohorts, one each for male and female children above the age of 8, and one each for male and female children below the age of 10. The testing set consisted of 20 radiographs of each 1-year-age cohort from 0 to 1 years to 14-15+ years, half male and half female. The testing set included left

  13. Robotic fabrication in architecture, art, and design

    CERN Document Server

    Braumann, Johannes

    2013-01-01

    Architects, artists, and designers have been fascinated by robots for many decades, from Villemard’s utopian vision of an architect building a house with robotic labor in 1910, to the design of buildings that are robots themselves, such as Archigram’s Walking City. Today, they are again approaching the topic of robotic fabrication but this time employing a different strategy: instead of utopian proposals like Archigram’s or the highly specialized robots that were used by Japan’s construction industry in the 1990s, the current focus of architectural robotics is on industrial robots. These robotic arms have six degrees of freedom and are widely used in industry, especially for automotive production lines. What makes robotic arms so interesting for the creative industry is their multi-functionality: instead of having to develop specialized machines, a multifunctional robot arm can be equipped with a wide range of end-effectors, similar to a human hand using various tools. Therefore, architectural researc...

  14. Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.

    Science.gov (United States)

    Jeong, Doo Seok; Hwang, Cheol Seong

    2018-04-18

Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNNs are virtually built on contemporary data centers of the von Neumann architecture, a physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. In particular, resistance-based nonvolatile random access memory (NVRAM) lends itself to a handy and efficient implementation of the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material and device points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNNs in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
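The analog MAC operation the abstract refers to can be sketched digitally: a resistive crossbar stores a conductance in each cell, and driving the rows with voltages produces, by Kirchhoff's current law, column currents that are exactly the dot products of a matrix-vector multiply. The conductance and voltage values below are illustrative, not from the article.

```python
# Sketch of the multiply-accumulate (MAC) that a resistive NVRAM crossbar
# performs in the analog domain. Each cell stores a conductance G[i][j];
# row voltages V[i] produce a column current I[j] = sum_i G[i][j] * V[i],
# i.e. one dot product per column in a single step.

def crossbar_mac(G, V):
    """Column currents of a conductance matrix G driven by row voltages V."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# A 3x2 crossbar: conductances in siemens, voltages in volts (illustrative).
G = [[0.5, 0.1],
     [0.2, 0.4],
     [0.3, 0.3]]
V = [1.0, 0.5, 2.0]

print(crossbar_mac(G, V))  # column currents [1.2, 0.9] (up to round-off)
```

In a real device the "sum" happens physically on the column wire, which is what makes the operation fast and energy-efficient compared with a von Neumann MAC loop.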

  15. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today's markets. The authors address traditional machining topics such as single- and multiple-point cutting processes; grinding; component accuracy and metrology; shear stress in cutting; cutting temperature and analysis; and chatter. They also address non-traditional machining, such as electrical discharge machining, electrochemical machining, and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  16. Face Recognition in Humans and Machines

    Science.gov (United States)

    O'Toole, Alice; Tistarelli, Massimo

The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  17. Architectures Toward Reusable Science Data Systems

    Science.gov (United States)

    Moses, John

    2015-01-01

Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments, and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observation research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation, and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, and data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  18. INFIBRA: machine vision inspection of acrylic fiber production

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  19. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  20. An architecture for robotic system integration

    International Nuclear Information System (INIS)

    Butler, P.L.; Reister, D.B.; Gourley, C.S.; Thayer, S.M.

    1993-01-01

An architecture has been developed to provide an object-oriented framework for the integration of multiple robotic subsystems into a single integrated system. By using an object-oriented approach, all subsystems can interface with each other, and still be able to be customized for specific subsystem interface needs. The object-oriented framework allows the communications between subsystems to be hidden from the interface specification itself. Thus, system designers can concentrate on what the subsystems are to do, not how to communicate. This system has been developed for the Environmental Restoration and Waste Management Decontamination and Decommissioning Project at Oak Ridge National Laboratory. In this system, multiple subsystems are defined to separate the functional units of the integrated system. For example, a Human-Machine Interface (HMI) subsystem handles the high-level machine coordination and subsystem status display. The HMI also provides status-logging facilities and safety facilities for use by the remaining subsystems. Other subsystems have been developed to provide specific functionality, and many of these can be reused by other projects.

  1. A Software Architecture for Adaptive Modular Sensing Systems

    Directory of Open Access Journals (Sweden)

    Andrew C. Lyle

    2010-08-01

    Full Text Available By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  2. Architecture independent environment for developing engineering software on MIMD computers

    Science.gov (United States)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  3. Metrics of brain network architecture capture the impact of disease in children with epilepsy

    Directory of Open Access Journals (Sweden)

    Michael J. Paldino

    2017-01-01

    Conclusions: We observed that a machine learning algorithm accurately predicted epilepsy duration based on global metrics of network architecture derived from resting state fMRI. These findings suggest that network metrics have the potential to form the basis for statistical models that translate quantitative imaging data into patient-level markers of cognitive deterioration.

  4. An Operating Environment for the Jellybean Machine

    Science.gov (United States)

    1988-05-01


  5. Improvement of Shade Resilience in Photovoltaic Modules Using Buck Converters in a Smart Module Architecture

    Directory of Open Access Journals (Sweden)

    S. Zahra Mirbagheri Golroodbari

    2018-01-01

Full Text Available Partial shading has a nonlinear effect on the performance of photovoltaic (PV) modules. Different methods of optimizing energy harvesting under partial shading conditions have been suggested to mitigate this issue. In this paper, a smart PV module architecture is proposed to improve shade resilience in a PV module consisting of 60 silicon solar cells, compensating for the current drops caused by partial shading. The architecture consists of groups of series-connected solar cells, each in parallel with a DC-DC buck converter. The number of cell groups is optimized with respect to cell and converter specifications using a least-squares support vector machine method. A generic model is developed to simulate the behavior of the smart architecture under different shading patterns, using high-time-resolution irradiance data. In this research the shading patterns are a combination of random and pole shadows. To investigate the shade resilience, results for the smart architecture are compared with an ideal module, and also with ordinary series- and parallel-connected architectures. Although the annual yield of the smart architecture is 79.5% of the yield of an ideal module, we show that the smart architecture outperforms a standard series-connected module by 47%, and a parallel architecture by 13.4%.

  6. Comparative life cycle assessment of disposable and reusable laryngeal mask airways.

    Science.gov (United States)

    Eckelman, Matthew; Mosher, Margo; Gonzalez, Andres; Sherman, Jodi

    2012-05-01

    Growing awareness of the negative impacts from the practice of health care on the environment and public health calls for the routine inclusion of life cycle criteria into the decision-making process of device selection. Here we present a life cycle assessment of 2 laryngeal mask airways (LMAs), a one-time-use disposable Unique™ LMA and a 40-time-use reusable Classic™ LMA. In life cycle assessment, the basis of comparison is called the "functional unit." For this report, the functional unit of the disposable and reusable LMAs was taken to be maintenance of airway patency by 40 disposable LMAs or 40 uses of 1 reusable LMA. This was a cradle-to-grave study that included inputs and outputs for the manufacture, transport, use, and waste phases of the LMAs. The environmental impacts of the 2 LMAs were estimated using SimaPro life cycle assessment software and the Building for Environmental and Economic Sustainability impact assessment method. Sensitivity and simple life cycle cost analyses were conducted to aid in interpretation of the results. The reusable LMA was found to have a more favorable environmental profile than the disposable LMA as used at Yale New Haven Hospital. The most important sources of impacts for the disposable LMA were the production of polymers, packaging, and waste management, whereas for the reusable LMA, washing and sterilization dominated for most impact categories. The differences in environmental impacts between these devices strongly favor reusable devices. These benefits must be weighed against concerns regarding transmission of infection. Health care facilities can decrease their environmental impacts by using reusable LMAs, to a lesser extent by selecting disposable LMA models that are not made of certain plastics, and by ordering in bulk from local distributors. Certain practices would further reduce the environmental impacts of reusable LMAs, such as increasing the number of devices autoclaved in a single cycle to 10 (-25% GHG
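The functional-unit accounting at the core of this comparison can be made concrete with a short sketch: 40 single-use devices versus one reusable device over 40 uses. All impact figures below are hypothetical placeholders, not the paper's results; only the structure of the comparison follows the abstract (per-device manufacture, packaging, and waste for the disposable; one-time manufacture plus per-use washing/sterilization for the reusable).

```python
# Sketch of a life-cycle comparison over the study's functional unit:
# maintenance of airway patency by 40 disposable LMAs vs. 40 uses of one
# reusable LMA. Numbers are hypothetical (kg CO2-eq), for illustration only.

USES = 40  # functional unit: 40 airway maintenances

# Hypothetical per-device life-cycle stages for the disposable LMA.
disposable = {"polymers": 0.30, "packaging": 0.10, "transport": 0.05, "waste": 0.08}

# Hypothetical reusable LMA: one manufacture, plus washing/sterilization
# impacts incurred on every use (the dominant stage per the abstract).
reusable_manufacture = 2.0
per_use_sterilization = 0.12

disposable_total = USES * sum(disposable.values())
reusable_total = reusable_manufacture + USES * per_use_sterilization

print(f"disposable: {disposable_total:.2f} kg CO2-eq per functional unit")
print(f"reusable:   {reusable_total:.2f} kg CO2-eq per functional unit")
```

With these placeholder numbers the reusable device wins; the study's actual conclusion rests on measured inventories, including the autoclave load-size sensitivity noted at the end of the abstract.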

  7. Generation of a Multicomponent Library of Disulfide Donor-Acceptor Architectures Using Dynamic Combinatorial Chemistry.

    Science.gov (United States)

    Drożdż, Wojciech; Kołodziejski, Michał; Markiewicz, Grzegorz; Jenczak, Anna; Stefankiewicz, Artur R

    2015-07-17

    We describe here the generation of new donor-acceptor disulfide architectures obtained in aqueous solution at physiological pH. The application of a dynamic combinatorial chemistry approach allowed us to generate a large number of new disulfide macrocyclic architectures together with a new type of [2]catenanes consisting of four distinct components. Up to fifteen types of structurally-distinct dynamic architectures have been generated through one-pot disulfide exchange reactions between four thiol-functionalized aqueous components. The distribution of disulfide products formed was found to be strongly dependent on the structural features of the thiol components employed. This work not only constitutes a success in the synthesis of topologically- and morphologically-complex targets, but it may also open new horizons for the use of this methodology in the construction of molecular machines.

  8. A comparison of neural network architectures for the prediction of MRR in EDM

    Science.gov (United States)

    Jena, A. R.; Das, Raja

    2017-11-01

The aim of the research work is to predict the material removal rate (MRR) of a work-piece in electrical discharge machining (EDM). Here, an effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters for the architecture are discharge current (Ip), pulse duration (Ton), and duty cycle (τ), taken into consideration to obtain the material removal rate of the work-piece as output. It has been observed that the radial basis function neural network is comparatively faster than the back-propagation neural network, but the back-propagation neural network produces more realistic values. Therefore the BPN may be considered the better process in this architecture for consistent prediction, saving the time and money of conducting experiments.
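An RBFN of the kind compared here can be sketched compactly: Gaussian basis functions centered on the training points and output weights obtained by solving the interpolation system, which is what makes RBFN training fast relative to iterative back-propagation. The (Ip, Ton, duty-cycle) settings and MRR targets below are synthetic stand-ins, not the paper's EDM data.

```python
# Sketch of a radial basis function network: Gaussian bases centered on the
# training inputs, output weights from a single linear solve (no iterative
# training). Inputs mirror the abstract's parameters; targets are synthetic.

import math

def gauss(x, c, sigma=1.0):
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbfn_fit(X, y, sigma=1.0):
    Phi = [[gauss(x, c, sigma) for c in X] for x in X]
    return solve(Phi, y)

def rbfn_predict(x, X, w, sigma=1.0):
    return sum(wi * gauss(x, c, sigma) for wi, c in zip(w, X))

# Synthetic (Ip, Ton, duty-cycle) settings with made-up MRR targets.
X = [(2.0, 10.0, 0.4), (4.0, 20.0, 0.5), (6.0, 30.0, 0.6), (8.0, 40.0, 0.7)]
y = [1.1, 2.3, 3.0, 3.4]

w = rbfn_fit(X, y, sigma=10.0)
print(rbfn_predict(X[0], X, w, sigma=10.0))  # reproduces y[0] = 1.1 up to round-off
```

Because the Gaussian kernel matrix is positive definite for distinct inputs, this exact-interpolation variant always has a solution; a practical RBFN would use fewer centers than samples and regularize.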

  9. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

Full Text Available The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjustment. Using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall response motion equation. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the response vibration for a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis line measurement methods. The displacement response can be used for further measurement dynamical error analysis and compensation. The overall response motion equation can be applied to predict body motion under abnormal mechanical conditions and provide theoretical guidance for machine failure diagnosis.
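The transfer-matrix idea behind the paper's model can be illustrated on the smallest possible chain: the state (displacement, force) is carried through each element by a 2x2 matrix, and natural frequencies are the trial frequencies at which the boundary condition of the assembled chain is satisfied. A single spring-mass oscillator is used here so the answer is known in closed form; the actual machine model chains many disk/shaft/spring elements (and the paper uses the Riccati variant for numerical stability), but the propagation-and-root-search structure is the same.

```python
# Minimal transfer-matrix sketch: propagate the state [x, F] through
# elements, then scan frequency for a sign change in the boundary residual.
# One spring (stiffness k, fixed base) carrying one mass m: w_n = sqrt(k/m).

import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spring(k):   # massless spring: x gains F/k across it, F unchanged
    return [[1.0, 1.0 / k], [0.0, 1.0]]

def mass(m, w):  # harmonic motion at w: force drops by m*w^2*x across a mass
    return [[1.0, 0.0], [-m * w * w, 1.0]]

def boundary_residual(w, k, m):
    # Fixed base (x = 0, unit internal force) -> free tip (force must vanish).
    T = mat_mul(mass(m, w), spring(k))
    x0, F0 = 0.0, 1.0
    return T[1][0] * x0 + T[1][1] * F0   # force at the free tip

def natural_frequency(k, m, lo=0.1, hi=100.0, steps=10**5):
    prev = boundary_residual(lo, k, m)
    for i in range(1, steps + 1):
        w = lo + (hi - lo) * i / steps
        cur = boundary_residual(w, k, m)
        if prev * cur <= 0:              # sign change brackets a root
            return w
        prev = cur
    return None

k, m = 400.0, 1.0
print(natural_frequency(k, m))  # close to sqrt(k/m) = 20 rad/s
```

For a multi-support kiln body the product of many element matrices replaces the two-factor product above, and each sign change of the boundary residual yields one natural frequency.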

  10. Gesture-controlled interfaces for self-service machines and other applications

    Science.gov (United States)

    Cohen, Charles J. (Inventor); Beach, Glenn (Inventor); Cavell, Brook (Inventor); Foulk, Gene (Inventor); Jacobus, Charles J. (Inventor); Obermark, Jay (Inventor); Paul, George (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control of devices, including self-service machines.
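The "linear-in-parameters" property the patent relies on can be shown in a few lines: once the oscillation frequency is fixed, a motion gesture y(t) = a*sin(wt) + b*cos(wt) is linear in (a, b), so an ordinary least-squares solve recovers the parameters that would seed each predictor bin. The sample data below are synthetic.

```python
# Sketch of the least-squares gesture-parameter fit: the oscillatory model
# y = a*sin(w t) + b*cos(w t) is linear in (a, b) for known w, so the
# parameters follow from the 2x2 normal equations. Data are synthetic.

import math

def fit_oscillation(ts, ys, w):
    """Least squares for y = a*sin(w t) + b*cos(w t), via normal equations."""
    s = [math.sin(w * t) for t in ts]
    c = [math.cos(w * t) for t in ts]
    ss = sum(v * v for v in s)
    cc = sum(v * v for v in c)
    sc = sum(v1 * v2 for v1, v2 in zip(s, c))
    sy = sum(v * y for v, y in zip(s, ys))
    cy = sum(v * y for v, y in zip(c, ys))
    det = ss * cc - sc * sc
    a = (cc * sy - sc * cy) / det
    b = (ss * cy - sc * sy) / det
    return a, b

# Synthetic gesture samples generated with a = 2.0, b = -0.5 at w = 3 rad/s.
w = 3.0
ts = [i * 0.05 for i in range(60)]
ys = [2.0 * math.sin(w * t) - 0.5 * math.cos(w * t) for t in ts]

a, b = fit_oscillation(ts, ys, w)
print(round(a, 3), round(b, 3))  # recovers 2.0 and -0.5
```

In the full system this fit runs once per predictor bin (one bin per candidate gesture), and recognition picks the bin whose fitted model best predicts the observed feature trajectory.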

  11. Open Architecture Data System for NASA Langley Combined Loads Test System

    Science.gov (United States)

    Lightfoot, Michael C.; Ambur, Damodar R.

    1998-01-01

The Combined Loads Test System (COLTS) is a new structures test complex that is being developed at NASA Langley Research Center (LaRC) to test large curved panels and cylindrical shell structures. These structural components are representative of aircraft fuselage sections of subsonic and supersonic transport aircraft and cryogenic tank structures of reusable launch vehicles. Test structures are subjected to combined loading conditions that simulate realistic flight load conditions. The facility consists of two pressure-box test machines and one combined loads test machine. Each test machine possesses a unique set of requirements for research data acquisition and real-time data display. Given the complex nature of the mechanical and thermal loads to be applied to the various research test articles, each data system has been designed with connectivity attributes that support both data acquisition and data management functions. This paper addresses the research-driven data acquisition requirements for each test machine and demonstrates how an open architecture data system design not only meets those needs but provides robust data sharing between data systems, including the various control systems which apply spectra of mechanical and thermal loading profiles.

  12. Performance analysis of IMS based LTE and WIMAX integration architectures

    Directory of Open Access Journals (Sweden)

    A. Bagubali

    2016-12-01

Full Text Available In the current networking field many research efforts concern the integration of different wireless technologies, with the aim of providing uninterrupted connectivity to the user anywhere, with the high data rates that increased demand requires. Meanwhile, the number of objects connected by wireless interfaces, such as smart devices, industrial machines, and smart homes, is dramatically increasing due to the evolution of cloud computing and Internet of Things technology. This paper begins with the challenges involved in such integrations and then explains the role of different couplings and different architectures. It also presents further improvements to LTE and WiMAX integration architectures to provide seamless vertical handover, flexible quality of service for voice, video, and multimedia services over IP networks, and mobility management with the help of IMS networks. Various parameters such as handover delay, signalling cost, and packet loss are evaluated, and the performance of the interworking architecture is analysed from the simulation results. Finally, it concludes that the cross-layer scenario is better than the non-cross-layer scenario.

  13. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...

  14. Evaluating LMA and CLAMP: Using information criteria to choose a model for estimating elevation

    Science.gov (United States)

    Miller, I.; Green, W.; Zaitchik, B.; Brandon, M.; Hickey, L.

    2005-12-01

The morphology of leaves and the composition of a flora respond strongly to the moisture and temperature of their environment. Elevation and latitude correlate, at first order, with these atmospheric parameters. An obvious modern example of this relationship between leaf morphology and environment is the tree line, where boreal forests give way to arctic (high latitude) or alpine (high elevation) tundra. Several quantitative methods, all of which rely on uniformitarianism, have been developed to estimate paleoelevation using fossil leaf morphology. These include 1) univariate leaf-margin analysis (LMA), which estimates mean annual temperature (MAT) by the positive linear correlation between MAT and P, the proportion of entire (smooth) margined to non-entire (toothed) margined woody dicot angiosperm leaves within a flora, and 2) the Climate Leaf Analysis Multivariate Program (CLAMP), which uses Canonical Correspondence Analysis (CCA) to estimate MAT, moist enthalpy, and other atmospheric parameters using 31 explanatory leaf characters from woody dicot angiosperms. Given a difference in leaf-estimated MAT or moist enthalpy between contemporaneous, synlatitudinal fossil floras, one at sea level and the other at an unknown paleoelevation, paleoelevation may be estimated. These methods have been widely applied to orogenic settings and concentrate particularly on the Western US. We introduce the use of information criteria to compare different models for estimating elevation and show how the additional complexity of the CLAMP analytical methodology does not necessarily improve on the elevation estimates produced by simpler regression models. In addition, we discuss the signal-to-noise ratio in the data, give confidence intervals for detecting elevations, and address the problem of spatial autocorrelation and irregular sampling in the data.
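The LMA-based paleoelevation arithmetic described above can be sketched in a few lines. The regression coefficients, margin proportions, and lapse rate below are illustrative assumptions (the linear MAT-vs-P form follows the abstract, but the specific numbers are not from this paper): a linear fit converts the proportion P of entire-margined taxa to MAT, and the MAT contrast between a sea-level flora and a coeval upland flora, divided by a terrestrial lapse rate, yields an elevation estimate.

```python
# Sketch of leaf-margin analysis (LMA) paleoelevation arithmetic.
# Coefficients, lapse rate, and P values are illustrative assumptions.

def lma_mat(p_entire, intercept=1.14, slope=30.6):
    """Mean annual temperature (deg C) from the proportion of entire-margined taxa."""
    return intercept + slope * p_entire

def paleoelevation_km(p_sea_level, p_upland, lapse_rate=5.9):
    """Elevation (km) from the MAT contrast, for a lapse rate in deg C/km."""
    d_mat = lma_mat(p_sea_level) - lma_mat(p_upland)
    return d_mat / lapse_rate

# Hypothetical contemporaneous, synlatitudinal floras: 70% entire margins
# at the coast, 35% at the upland site.
print(f"{paleoelevation_km(0.70, 0.35):.2f} km")  # about 1.8 km
```

The model-selection question the abstract raises is whether CLAMP's 31-character multivariate machinery earns its extra parameters over a one-variable regression like this, which is exactly what information criteria are designed to adjudicate.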

  15. Giro form reading machine

    Science.gov (United States)

    Minh Ha, Thien; Niggeler, Dieter; Bunke, Horst; Clarinval, Jose

    1995-08-01

Although giro forms are used by many people in daily life for money remittance in Switzerland, the processing of these forms at banks and post offices is only partly automated. We describe an ongoing project for building an automatic system that is able to recognize various items printed or written on a giro form. The system comprises three main components, namely, an automatic form feeder, a camera system, and a computer. These components are connected in such a way that the system is able to process a batch of forms without any human interaction. We present two real applications of our system in the field of payment services, which require the reading of both machine-printed and handwritten information that may appear on a giro form. One particular feature of giro forms is their flexible layout, i.e., information items are located differently from one form to another, thus requiring an additional analysis step to localize them before recognition. A commercial optical character recognition software package is used for recognition of machine-printed information, whereas handwritten information is read by our own algorithms, the details of which are presented. The system is implemented using a client/server architecture providing a high degree of flexibility for change. Preliminary results are reported supporting our claim that the system is usable in practice.

  16. Housing Value Forecasting Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Jingyi Mu

    2014-01-01

    Full Text Available In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can decide whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and these algorithms are compared according to the predicted results. Experiments show that although the data set exhibits serious nonlinearity, the SVM and LSSVM methods are superior to PLS in dealing with this nonlinearity. The global optimal solution can be found and the best forecasting effect achieved by SVM because it solves a quadratic programming problem. The different computational efficiencies of the algorithms are also compared according to their computing times.
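The nonlinearity point can be illustrated with a toy fit (the data below are synthetic, not the Boston housing set): a purely linear least-squares model cannot capture a quadratic target, while adding a nonlinear feature recovers it exactly.

```python
# Toy illustration of why a purely linear fit underperforms on
# nonlinear data, while a nonlinear feature recovers the target.
# The data are synthetic; this is not the Boston housing dataset.

def fit_linear(xs, ys):
    """Least-squares slope for y ~ a*x (no intercept, centered data)."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def sse(xs, ys, predict):
    return sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))

xs = [-2.0, -1.0, 0.5, 1.0, 2.0]
ys = [x * x for x in xs]                  # purely nonlinear target

a = fit_linear(xs, ys)                    # linear model: nearly flat
err_linear = sse(xs, ys, lambda x: a * x)

b = fit_linear([x * x for x in xs], ys)   # add a squared feature
err_quad = sse(xs, ys, lambda x: b * x * x)

print(err_quad < err_linear)              # → True
```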

  17. Enhanced risk management by an emerging multi-agent architecture

    Science.gov (United States)

    Lin, Sin-Jin; Hsu, Ming-Fu

    2014-07-01

    Classification in imbalanced datasets has attracted much attention from researchers in the field of machine learning. Most existing techniques tend not to perform well on minority-class instances when the dataset is highly skewed, because they focus on minimising the forecasting error without considering the relative distribution of each class. This investigation proposes an emerging multi-agent architecture, grounded in cooperative learning, to solve the class-imbalanced classification problem. Additionally, this study delves into the otherwise opaque nature of the multi-agent architecture and expresses comprehensible rules for auditors. The results from this study indicate that the presented model performs satisfactorily in risk management and is able to tackle a highly class-imbalanced dataset comparatively well. Furthermore, the knowledge-visualisation process, supported by real examples, can assist both internal and external auditors who must allocate limited detection resources; they can take the rules as roadmaps to modify the auditing programme.
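The core problem the abstract describes, minimising overall error on a skewed dataset, can be seen in a few lines: a classifier that always predicts the majority class scores high accuracy yet never detects a minority instance. The counts below are invented for illustration:

```python
# Why minimising overall error misleads on skewed classes: a majority-
# class predictor looks accurate but never detects the minority class.
# The confusion-matrix counts are invented for illustration.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def balanced_accuracy(tp, tn, fp, fn):
    recall_pos = tp / (tp + fn)       # minority-class recall
    recall_neg = tn / (tn + fp)       # majority-class recall
    return (recall_pos + recall_neg) / 2

# 990 majority / 10 minority; classifier always predicts majority
tp, tn, fp, fn = 0, 990, 0, 10
print(accuracy(tp, tn, fp, fn))            # → 0.99: looks excellent
print(balanced_accuracy(tp, tn, fp, fn))   # → 0.5: reveals the failure
```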

  18. SASAgent: an agent based architecture for search, retrieval and composition of scientific models.

    Science.gov (United States)

    Felipe Mendes, Luiz; Silva, Laryssa; Matos, Ely; Braga, Regina; Campos, Fernanda

    2011-07-01

    Scientific computing is a multidisciplinary field that goes beyond using the computer as a machine on which researchers write simple texts and presentations or store the analyses and results of their experiments. Because of the huge hardware/software resources invested in experiments and simulations, this new approach to scientific computing currently adopted by research groups is well represented by e-Science. This work proposes a new architecture based on intelligent agents to search, retrieve, and compose simulation models generated in the context of research projects in the biological domain. The SASAgent architecture is described as multi-tier, comprising three main modules, in which the CelO ontology, represented mainly by the semantic knowledge base, satisfies the requirements posed by e-Science projects. Preliminary results suggest that the proposed architecture is promising for meeting the requirements found in e-Science projects, particularly in the biological domain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. The mathematics of the modernist villa architectural analysis using space syntax and isovists

    CERN Document Server

    Ostwald, Michael J

    2018-01-01

    This book presents the first detailed mathematical analysis of the social, cognitive and experiential properties of Modernist domestic architecture. The Modern Movement in architecture, which came to prominence during the first half of the twentieth century, may have been famous for its functional forms and machine-made aesthetic, but it also sought to challenge the way people inhabit, understand and experience space. Ludwig Mies van der Rohe’s buildings were not only minimalist and transparent, they were designed to subvert traditional social hierarchies. Frank Lloyd Wright’s organic Modernism not only attempted to negotiate a more responsive relationship between nature and architecture, but also shape the way people experience space. Richard Neutra’s Californian Modernism is traditionally celebrated for its sleek, geometric forms, but his intention was to use design to support a heightened understanding of context. Glenn Murcutt’s pristine pavilions, seemingly the epitome of regional Modernism, actu...

  20. Preference learning for cognitive modeling: a case study on entertainment preferences

    DEFF Research Database (Denmark)

    Yannakakis, Georgios; Maragoudakis, Manolis; Hallam, John

    2009-01-01

    Learning from preferences, which provide a means for expressing a subject's desires, constitutes an important topic in machine learning research. This paper presents a comparative study of four alternative instance preference learning algorithms (both linear and nonlinear). The case study investigated is learning to predict the expressed entertainment preferences of children when playing physical games, built on their personalized playing features (entertainment modeling). Two of the approaches are derived from the literature, the large-margin algorithm (LMA) and preference learning with Gaussian processes, while the remaining two are custom-designed approaches for the problem under investigation: meta-LMA and neuroevolution. Preference learning techniques are combined with feature set selection methods, permitting the construction of effective preference models, given suitable individual...

  1. Fire and collapse, Faculty of Architecture building, Delft University of Technology: Data collection and preliminary analyses

    NARCIS (Netherlands)

    Meacham, B.; Park, H.; Engelhardt, M.; Kirk, A.; Kodur, V.; Straalen, IJ.J.; Maljaars, J.; Weeren, K. van; Feijter, R. de; Both, K.

    2010-01-01

    On the morning of May 13, 2008, a fire that started in a coffee vending machine on the 6th floor of the 13-story Faculty of Architecture Building at the Delft University of Technology (TUD), Delft, the Netherlands, quickly developed into an extreme loading event. Although all building occupants

  2. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.
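At its core, slicing of this kind is a reachability computation over a dependency graph: keep every architectural element the criterion transitively depends on. A minimal sketch with a hypothetical component graph (the paper's actual slicer works on dynamic execution traces of Java systems):

```python
# Minimal sketch of architectural slicing as dependency reachability.
# Element names and the dependency graph are hypothetical.

from collections import deque

def architectural_slice(depends_on, criterion):
    """depends_on: dict mapping element -> list of elements it uses.
    Returns the set of elements transitively reachable from criterion."""
    keep, queue = set(criterion), deque(criterion)
    while queue:
        elem = queue.popleft()
        for dep in depends_on.get(elem, []):
            if dep not in keep:
                keep.add(dep)
                queue.append(dep)
    return keep

deps = {
    "WebUI":        ["OrderService"],
    "OrderService": ["Database", "AuditLog"],
    "ReportJob":    ["Database"],
    "AuditLog":     [],
    "Database":     [],
}
print(sorted(architectural_slice(deps, ["OrderService"])))
# → ['AuditLog', 'Database', 'OrderService']
```

Elements not reachable from the criterion (here `WebUI` and `ReportJob`) are dropped, which is what makes the resulting prototype lightweight.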

  3. Man/machine interface for a nuclear cask remote handling control station: system design requirements

    International Nuclear Information System (INIS)

    Clarke, M.M.; Kreifeldt, J.G.; Draper, J.V.

    1984-01-01

    Design requirements are presented for a control station of a proposed semi-automated facility for remote handling of nuclear waste casks. Requirements cover the functional and operational man/machine interface: controls, displays, software format, station architecture, and work environment. In addition, some input is given to the design of remote sensing systems in the cask handling areas. 18 references, 9 figures, 12 tables

  4. Machine learning and pattern recognition from surface molecular architectures.

    Science.gov (United States)

    Maksov, Artem; Ziatdinov, Maxim; Fujii, Shintaro; Sumpter, Bobby; Kalinin, Sergei

    The ability to utilize molecular assemblies as data storage devices requires the capability to identify individual molecular states on a scale of thousands of molecules. We present a novel method of applying machine learning techniques for the extraction of positional and rotational information from ultra-high-vacuum scanning tunneling microscopy (STM) images and apply it to a self-assembled monolayer of π-bowl sumanene molecules on gold. From density functional theory (DFT) simulations, we assume the existence of distinct polar and multiple azimuthal rotational states. We use DFT-generated templates in conjunction with a Markov Chain Monte Carlo (MCMC) sampler and noise modeling to create synthetic images representative of our model. We extract the positional information of each molecule and use nearest-neighbor criteria to construct a graph input to a Markov Random Field (MRF) model to identify polar rotational states. We train a convolutional neural network (CNN) on a synthetic dataset and combine it with the MRF model to classify molecules based on their azimuthal rotational state. We demonstrate the effectiveness of this approach compared to other methods. Finally, we apply our approach to experimental images and achieve complete extraction of rotational class information. This research was sponsored by the Division of Materials Sciences and Engineering, Office of Science, Basic Energy Sciences, US DOE.
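The MRF labelling step can be sketched as follows: each molecule carries per-class template scores (the unary term) and prefers to agree with its graph neighbours (the pairwise term). Iterated conditional modes (ICM) is one simple optimiser for such a model; the scores, neighbour graph, and smoothness weight below are all invented, and the paper's actual inference may differ:

```python
# Sketch of MRF labelling over a nearest-neighbour graph, optimised
# with iterated conditional modes (ICM). Unary scores, the neighbour
# graph, and the smoothness weight are invented for illustration.

def icm(unary, neighbors, smoothness=0.5, iters=10):
    # Start from the per-molecule best template match
    labels = [max(range(len(u)), key=lambda c: u[c]) for u in unary]
    for _ in range(iters):
        for i, u in enumerate(unary):
            def score(c):
                agree = sum(1 for j in neighbors[i] if labels[j] == c)
                return u[c] + smoothness * agree
            labels[i] = max(range(len(u)), key=score)
    return labels

# Three molecules in a chain; the middle one has a noisy unary score
unary = [[2.0, 0.0], [0.4, 0.6], [2.0, 0.0]]
neighbors = [[1], [0, 2], [1]]
print(icm(unary, neighbors))   # → [0, 0, 0]: smoothness flips the noisy label
```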

  5. Emerging opportunities in enterprise integration with open architecture computer numerical controls

    Science.gov (United States)

    Hudson, Christopher A.

    1997-01-01

    The shift to open-architecture machine tool computer numerical controls is providing new opportunities for metalworking-oriented manufacturers to streamline the entire 'art to part' process. Production cycle times, accuracy, consistency, predictability, and process reliability are just some of the factors that can be improved, leading to better manufactured products at lower cost. Open-architecture controllers allow manufacturers to apply general-purpose software and hardware tools where previous approaches relied on proprietary and unique hardware and software. This includes DNC, SCADA, CAD, and CAM, where the increasing use of general-purpose components is leading to lower-cost systems that are also more reliable and robust than past proprietary approaches. In addition, a number of new opportunities exist that in the past were impractical due to cost or performance constraints.

  6. Evaluation of existing and proposed computer architectures for future ground-based systems

    Science.gov (United States)

    Schulbach, C.

    1985-01-01

    Parallel processing architectures and techniques used in current supercomputers are described, and projections are made of future advances. Presently, the von Neumann sequential processing pattern has been accelerated by having separate I/O processors, interleaved memories, wide memories, independent functional units, and pipelining. Recent supercomputers have featured single-instruction, multiple-data-stream architectures, which have different processors for performing various operations (vector or pipeline processors). Multiple-instruction, multiple-data-stream machines have also been developed. Data flow techniques, wherein program instructions are activated only when data are available, are expected to play a large role in future supercomputers, along with increased parallel processor arrays. The enhanced operational speeds are essential for adequately treating data from future spacecraft remote sensing instruments such as the Thematic Mapper.

  7. SUSTAINABLE ARCHITECTURE : WHAT ARCHITECTURE STUDENTS THINK

    OpenAIRE

    SATWIKO, PRASASTO

    2013-01-01

    Sustainable architecture has become a hot issue lately as the impacts of climate change become more intense. Architecture education has responded by integrating knowledge of sustainable design into its curriculum. However, in real life, new buildings keep coming with designs that completely ignore sustainable principles. This paper discusses the results of two national competitions on sustainable architecture targeted at architecture students (conducted in 2012 and 2013). The results a...

  8. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  9. [A new machinability test machine and the machinability of composite resins for core built-up].

    Science.gov (United States)

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was devised. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage fixing the test specimen, and so on. Machinability was evaluated as the change in volume after grinding with a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinabilities of these composite resins increased with increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width.

  10. Evaluation of an IP Fabric network architecture for CERN's data center

    CERN Document Server

    AUTHOR|(CDS)2156318; Barceló Ordinas, José M.

    CERN has a large-scale data center with over 11500 servers used to analyze massive amounts of data acquired from the physics experiments and to provide IT services to workers. Its current network architecture is based on the classic three-tier design and it uses both IPv4 and IPv6. Between the access and aggregation layers the traffic is switched in Layer 2, while between aggregation and core it is routed using dual-stack OSPF. A new architecture is needed to increase redundancy and to provide virtual machine mobility and traffic isolation. The state-of-the-art architecture IP Fabric with EVPN is evaluated as a possible solution. The evaluation comprises a study of different features and options, including BGP table scalability and autonomous system number distributions. The proposed solution contains eBGP as the routing protocol, a route control policy, fast convergence mechanisms and an EVPN overlay with iBGP routing and VXLAN encapsulation. The solution is tested in the lab with the network equipment curre...

  11. Unsupervised process monitoring and fault diagnosis with machine learning methods

    CERN Document Server

    Aldrich, Chris

    2013-01-01

    This unique text/reference describes in detail the latest advances in unsupervised process monitoring and fault diagnosis with machine learning methods. Abundant case studies throughout the text demonstrate the efficacy of each method in real-world settings. The broad coverage examines such cutting-edge topics as the use of information theory to enhance unsupervised learning in tree-based methods, the extension of kernel methods to multiple kernel learning for feature extraction from data, and the incremental training of multilayer perceptrons to construct deep architectures for enhanced data

  12. Modeling Architectural Patterns Using Architectural Primitives

    NARCIS (Netherlands)

    Zdun, Uwe; Avgeriou, Paris

    2005-01-01

    Architectural patterns are a key point in architectural documentation. Regrettably, there is poor support for modeling architectural patterns, because the pattern elements are not directly matched by elements in modeling languages, and, at the same time, patterns support an inherent variability that

  13. Artificial Neural Networks as an Architectural Design Tool-Generating New Detail Forms Based On the Roman Corinthian Order Capital

    Science.gov (United States)

    Radziszewski, Kacper

    2017-10-01

    The following paper presents the results of research in the field of machine learning, investigating the scope of application of artificial neural network algorithms as a tool in architectural design. The computational experiment used the backward propagation of errors method to train an artificial neural network on the geometry of the details of the Roman Corinthian order capital. During the experiment, a combination of five local geometry parameters used as the input training data set gave the best results: theta, phi, and rho in a spherical coordinate system based on the capital's volume centroid, followed by the Z value of the Cartesian coordinate system and the distance from vertical planes created from the capital's symmetry. Additionally, an optimal count and structure of the artificial neural network's hidden layers was found, giving errors below 0.2% for the input parameters mentioned above. Once successfully trained, the artificial network was able to mimic the detail composition on any other geometry type given. Despite calculating the transformed geometry locally and separately for each of thousands of surface points, the system could create visually attractive, diverse, and complex patterns. The designed tool, based on the supervised learning method of machine learning, makes it possible to generate new architectural forms free of the bounds of the designer's imagination. Implementing the infinitely broad computational methods of machine learning, or artificial intelligence in general, could not only accelerate and simplify the design process but also give an opportunity to explore never-before-seen, unpredictable forms for everyday architectural practice.
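The five per-point input features described above can be sketched as follows. The centroid, symmetry plane, and sample point are invented for illustration, and the study's exact feature definitions may differ:

```python
# Sketch of the five per-point input features described above:
# spherical coordinates (theta, phi, rho) about the capital's volume
# centroid, the Cartesian z value, and the distance to a vertical
# symmetry plane through the centroid. All inputs here are invented.

import math

def point_features(p, centroid, plane_normal_xy):
    dx, dy, dz = (p[i] - centroid[i] for i in range(3))
    rho = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.atan2(dy, dx)                  # azimuth
    phi = math.acos(dz / rho) if rho else 0.0   # polar angle
    nx, ny = plane_normal_xy
    plane_dist = abs(dx * nx + dy * ny)         # vertical plane through centroid
    return theta, phi, rho, p[2], plane_dist

feats = point_features((1.0, 1.0, 2.0), (0.0, 0.0, 0.0), (1.0, 0.0))
print([round(f, 3) for f in feats])
```

A network trained on such per-point features can then be evaluated point by point on any new surface, which is how the trained model transfers the detail composition to other geometry.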

  14. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  15. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    Science.gov (United States)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to avionic systems' state-machine control logic, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, the first of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The second main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture. In the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), to automatically generate hardware description language (HDL) code from state
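The illegal-state check behind the multiple-input XNOR gate reduces to "exactly one state register is set". Since the article gives no HDL, here is that condition as a Python sketch:

```python
# Illegal-state detection for a one-hot state machine: a state vector
# is legal only if exactly one register bit is 1. An SEU that sets a
# second bit, or clears the active bit, is caught immediately.

def is_legal_one_hot(state_bits):
    """True iff exactly one bit of the state vector is 1."""
    return sum(state_bits) == 1

print(is_legal_one_hot([0, 0, 1, 0]))   # → True: legal state
print(is_legal_one_hot([0, 1, 1, 0]))   # → False: SEU set a second bit
print(is_legal_one_hot([0, 0, 0, 0]))   # → False: SEU cleared the active bit
```

In the FPGA the same condition is evaluated by lookup tables with pipeline registers so the check keeps pace with the fast one-hot state logic.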

  16. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications, and it offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
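The SAD core of the machine can be illustrated on a single scanline: for each left-image window, pick the disparity whose right-image window minimises the sum of absolute differences. The tiny scanlines below are invented; the hardware of course does this per pixel over full frames with a 2D window:

```python
# Core of SAD stereo matching, reduced to one scanline: find the
# disparity whose right-image window best matches the left window.
# The scanlines are invented toy data.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, x, window=3, max_disp=4):
    ref = left[x:x + window]
    scores = {}
    for d in range(max_disp + 1):
        if x - d >= 0:
            scores[d] = sad(ref, right[x - d:x - d + window])
    return min(scores, key=scores.get)

left  = [10, 10, 50, 90, 50, 10, 10, 10]
right = [50, 90, 50, 10, 10, 10, 10, 10]  # same edge, shifted by 2
print(best_disparity(left, right, 2))     # → 2
```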

  17. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    Full Text Available High Performance Fortran (HPF) is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved when executing HPF code on parallel architectures.
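The data-distribution step such a compiler performs can be illustrated for HPF's BLOCK distribution, where an array is split into contiguous blocks, one per processor. This sketch only mirrors what a directive such as `!HPF$ DISTRIBUTE A(BLOCK)` implies; the compiler's actual implementation is far more involved:

```python
# Sketch of HPF-style BLOCK data distribution: array elements are
# split into contiguous blocks, one per processor. This is only an
# illustration of the ownership rule, not compiler code.

import math

def block_owner(i, n, p):
    """Processor owning element i of an n-element array over p procs."""
    block = math.ceil(n / p)
    return i // block

def local_elements(rank, n, p):
    """Indices owned by a given processor rank."""
    block = math.ceil(n / p)
    return list(range(rank * block, min((rank + 1) * block, n)))

# 10 elements over 4 processors: blocks of ceil(10/4) = 3
print([block_owner(i, 10, 4) for i in range(10)])
# → [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print(local_elements(3, 10, 4))   # → [9]
```

The "owner computes" rule then assigns each assignment statement to the processor owning the left-hand-side element, and communication is generated for any right-hand-side elements owned elsewhere.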

  18. Flexible human machine interface for process diagnostics

    International Nuclear Information System (INIS)

    Reifman, J.; Graham, G.E.; Wei, T.Y.C.; Brown, K.R.; Chin, R.Y.

    1996-01-01

    A flexible human machine interface to design and display graphical and textual process diagnostic information is presented. The system operates on different computer hardware platforms, including PCs under MS Windows and UNIX workstations under X-Windows, in a client-server architecture. The interface system is customized for specific process applications in a graphical user interface development environment by overlaying the image of the process piping and instrumentation diagram with display objects that are highlighted in color during diagnostic display. Customization of the system is presented for Commonwealth Edison's Braidwood PWR Chemical and Volume Control System, with transients simulated by a full-scale operator-training simulator and diagnosed by a computer-based system.

  19. Effect of Machining Velocity in Nanoscale Machining Operations

    International Nuclear Information System (INIS)

    Islam, Sumaiya; Khondoker, Noman; Ibrahim, Raafat

    2015-01-01

    The aim of this study is to investigate the generated forces and deformations of single-crystal Cu with (100), (110) and (111) crystallographic orientations in nanoscale machining operations. A nanoindenter equipped with a nanoscratching attachment was used for the machining operations and in-situ observation of a nanoscale groove. As a machining parameter, the machining velocity was varied to measure the normal and cutting forces. At a fixed machining velocity, different levels of normal and cutting forces were generated due to the different crystallographic orientations of the specimens. Moreover, after the machining operation the percentage of elastic recovery was measured, and it was found that both elastic and plastic deformations were responsible for producing a nanoscale groove within the range of machining velocities of 250-1000 nm/s.
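One common way to quantify the elastic recovery mentioned above is the fraction of the groove depth that springs back after the tip passes; the depths below are invented, and the study's own definition may differ in detail:

```python
# Elastic recovery after a nanoscratch: the percentage of the groove
# depth that springs back once the tip passes. The depths are invented
# illustration values.

def elastic_recovery_pct(depth_under_load_nm, residual_depth_nm):
    return 100.0 * (depth_under_load_nm - residual_depth_nm) / depth_under_load_nm

print(elastic_recovery_pct(100.0, 65.0))   # → 35.0
```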

  20. Softwarization of Mobile Network Functions towards Agile and Energy Efficient 5G Architectures: A Survey

    Directory of Open Access Journals (Sweden)

    Dlamini Thembelihle

    2017-01-01

    Full Text Available Future mobile networks (MNs) are required to be flexible with minimal infrastructure complexity, unlike current ones that rely on proprietary network elements to offer their services. Moreover, they are expected to make use of renewable energy to decrease their carbon footprint, and of virtualization technologies for improved adaptability and flexibility, thus resulting in green and self-organized systems. In this article, we discuss the application of software defined networking (SDN) and network function virtualization (NFV) technologies towards softwarization of the mobile network functions, taking into account different architectural proposals. In addition, we elaborate on whether mobile edge computing (MEC), a new architectural concept that uses NFV techniques, can enhance communication in 5G cellular networks, reducing latency due to its proximity deployment. Besides discussing existing techniques, expounding their pros and cons and comparing state-of-the-art architectural proposals, we examine the role of machine learning and data mining tools, analyzing their use within fully SDN- and NFV-enabled mobile systems. Finally, we outline the challenges and open issues related to evolved packet core (EPC) and MEC architectures.

  1. The implementation of common object request broker architecture (CORBA) for controlling robot arm via web

    International Nuclear Information System (INIS)

    Syed Mahamad Zuhdi Amin; Mohd Yazid Idris; Wan Mohd Nasir Wan Kadir

    2001-01-01

    This paper presents the employment of Common Object Request Broker Architecture (CORBA) technology in the implementation of our distributed Arm Robot Controller (ARC). CORBA is an industrial-standard architecture based on a distributed abstract object model, developed by the Object Management Group (OMG). The architecture consists of five components, i.e., the Object Request Broker (ORB), Interface Definition Language (IDL), Dynamic Invocation Interface (DII), Interface Repositories (IR), and Object Adapter (OA). CORBA objects differ from typical programming objects in three ways: they can be executed on any platform, located anywhere on the network, and written in any language that supports an IDL mapping. The implementation of the system uses a 5-degree-of-freedom (DOF) arm robot, the RCS 6.0, with Java as the programming-language mapping to the CORBA IDL. By implementing this architecture, the objects on the server machine can be distributed over the network in order to run the controller. The ultimate goal for our ARC system is to demonstrate concurrent execution of multiple arm robots through multiple instantiations of distributed object components. (Author)

  2. Machine Phase Fullerene Nanotechnology: 1996

    Science.gov (United States)

    Globus, Al; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center, and computation and experiment elsewhere, suggest that a nanotechnology of machine-phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine-phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigation of fullerene nanotechnology, we used molecular dynamics to study the properties of carbon-nanotube-based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated the use of helical electric fields generated by a laser to power fullerene gears once a positive and a negative charge have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional

  3. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Because only small numbers of condition samples are typically available for CNC machine tools, a condition-prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The SVM prediction models are used to predict trends in the working condition of a certain type of CNC worm wheel and gear grinding machine from sequences of vibration-signal data collected during machining. The relationship between different features (eigenvalues) of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal's peak-to-peak value in the surface normal direction is most closely related to the trend of the surface roughness value. In predicting trends of the working condition, the support vector machine achieves higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction than an autoregressive (AR) model and an RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
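    The one-step/multi-step scheme described above can be sketched compactly. This is an illustrative reconstruction, not the authors' code: it substitutes a plain least-squares regressor for the SVR (any regressor with fit/predict, e.g. scikit-learn's `SVR`, could be dropped in), since the sliding-window construction and the recursive feedback for multi-step prediction are the points of interest.

    ```python
    import numpy as np

    def make_windows(series, p):
        # Sliding windows over the signal: X[i] = series[i:i+p], y[i] = series[i+p]
        X = np.array([series[i:i + p] for i in range(len(series) - p)])
        y = series[p:]
        return X, y

    def fit_linear(X, y):
        # Placeholder regressor (least squares with bias); the paper trains an SVR here.
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        return w

    def predict_one(w, window):
        # One-step prediction from the last p observed values.
        return float(np.r_[window, 1.0] @ w)

    def predict_multi(w, window, steps):
        # Multi-step prediction: feed each prediction back as the newest lag.
        window = list(window)
        out = []
        for _ in range(steps):
            yhat = predict_one(w, np.array(window))
            out.append(yhat)
            window = window[1:] + [yhat]
        return out
    ```

    Multi-step accuracy degrades as prediction errors compound through the feedback loop, which is why the paper evaluates the two horizons separately.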

  4. Research on a Novel Parallel Engraving Machine and its Key Technologies

    Directory of Open Access Journals (Sweden)

    Zhang Shi-hui

    2008-11-01

    Full Text Available In order to compensate for the disadvantages of conventional engraving machines and exploit the advantages of parallel mechanisms, a novel parallel engraving machine is presented and some of its key technologies are studied in this paper. Mechanism performance is first analyzed in terms of the first- and second-order influence coefficient matrices, so that mechanism dimensions that are better for all the performance indices of both kinematics and dynamics can be confirmed, breaking through the past restriction of considering only the first-order influence coefficient matrix. This provides the theoretical basis for designing the mechanism dimensions of a novel engraving machine with better performance. In addition, methods for tool path planning and for engraving-force control are also studied. The proposed algorithm for tool path planning on curved surfaces can in theory be applied to arbitrary spatial curved surfaces, and the engraving-force control based on a fuzzy neural network (FNN) adapts well to a changing environment. Research on teleoperation of the parallel engraving machine based on a B/S architecture resolves key problems such as the control mode, the sharing mechanism for multiple users, real-time control of the engraving job, and real-time transmission of video information. Simulation results further show the feasibility and validity of the proposed methods.

  5. Research on a Novel Parallel Engraving Machine and its Key Technologies

    Directory of Open Access Journals (Sweden)

    Kong Ling-fu

    2004-12-01

    Full Text Available In order to compensate for the disadvantages of conventional engraving machines and exploit the advantages of parallel mechanisms, a novel parallel engraving machine is presented and some of its key technologies are studied in this paper. Mechanism performance is first analyzed in terms of the first- and second-order influence coefficient matrices, so that mechanism dimensions that are better for all the performance indices of both kinematics and dynamics can be confirmed, breaking through the past restriction of considering only the first-order influence coefficient matrix. This provides the theoretical basis for designing the mechanism dimensions of a novel engraving machine with better performance. In addition, methods for tool path planning and for engraving-force control are also studied. The proposed algorithm for tool path planning on curved surfaces can in theory be applied to arbitrary spatial curved surfaces, and the engraving-force control based on a fuzzy neural network (FNN) adapts well to a changing environment. Research on teleoperation of the parallel engraving machine based on a B/S architecture resolves key problems such as the control mode, the sharing mechanism for multiple users, real-time control of the engraving job, and real-time transmission of video information. Simulation results further show the feasibility and validity of the proposed methods.

  6. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  7. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    Science.gov (United States)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increasing application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were taken as input neurons, with surface roughness as the output neuron. An ANN architecture of 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72 %. Particle Swarm Optimization (PSO) is then used to optimize the parameters so as to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of MMC machining applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying a desired surface roughness, and it converges well with a minimum number of iterations.
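    The PSO half of such a scheme can be illustrated with a compact, dependency-free sketch. The objective below is a stand-in (here the 2-D sphere function; in the paper's setting it would be machining time penalized by the ANN's roughness prediction for a candidate (N, f, d)), and all names and coefficient values are illustrative defaults, not the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso(f, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
        # Standard global-best PSO: inertia w, cognitive c1, social c2.
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n, len(lo)))       # particle positions
        v = np.zeros_like(x)                        # particle velocities
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()             # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)              # keep particles in bounds
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, float(pval.min())
    ```

    In the integrated method, the trained ANN acts as the fitness evaluator inside `f`, so every swarm iteration queries the surrogate model rather than running a physical cutting trial.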

  8. Architectural communication: Intra and extra activity of architecture

    Directory of Open Access Journals (Sweden)

    Stamatović-Vučković Slavica

    2013-01-01

    Full Text Available Apart from a brief overview of architectural communication viewed from the standpoint of the theory of information and semiotics, this paper contains two forms of dualistically viewed architectural communication. The duality denotation/connotation ("primary" and "secondary" architectural communication) is one of the semiotic postulates taken from Umberto Eco, who viewed architectural communication as a semiotic phenomenon. In addition, architectural communication can be viewed as an intra and an extra activity of architecture, where the overall activity of the edifice performed through its spatial manifestation may be understood as an act of communication. In that respect, the activity may be perceived as the "behavior of architecture", which corresponds to Lefebvre's production of space.

  9. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environmentally Friendly Machining provides an in-depth overview of environmentally friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environmentally Friendly Machining to be a useful volume.

  10. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements such as stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with the central services of CERN's IT department. The paper describes a small-scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  11. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... proportions, to organize the process on site choosing either one-room wall components or several-room wall components – either horizontally or vertically. Combined with the seamless joint, the playing with these possibilities the new industrialized architecture can deliver variations in choice of solutions...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  12. Architectural freedom and industrialized architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to explain that architecture can be thought of as a complex and diverse design through customization, telling exactly the revitalized story about the change to a contemporary sustainable and better performing expression in direct relation to the given context. Through the last couple of years we have...... expression in the specific housing area. It is the aim of this article to expand the different design strategies which architects can use – to give the individual project attitudes and designs with architectural quality. Through the customized component production it is possible to choose different...... for retrofit design. If we add the question of the installations e.g. ventilation to this systematic thinking of building technique we get a diverse and functional architecture, thereby creating a new and clearer story telling about new and smart system based thinking behind architectural expression....

  13. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    Architectural freedom and industrialized architecture. Inge Vestergaard, Associate Professor, Cand. Arch. Aarhus School of Architecture, Denmark Noerreport 20, 8000 Aarhus C Telephone +45 89 36 0000 E-mail inge.vestergaard@aarch.dk Based on the repetitive architecture from the "building boom" 1960...... customization, telling exactly the revitalized story about the change to a contemporary sustainable and better-performing expression in direct relation to the given context. Through the last couple of years we have in Denmark been focusing on a more sustainable and low-energy building technique, which also include...... to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...

  14. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands
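    The contrast between the two architectures can be sketched in-process. The class below is a toy stand-in (all names are hypothetical, and a real deployment would sit behind a network protocol rather than direct method calls): the model stays resident and clients issue small queries, instead of a batch run that reads an input deck and writes a results file.

    ```python
    class ModelServer:
        """Toy query-based modeling server: the (hypothetical) lattice model
        stays resident in memory and answers small client queries, in contrast
        to batch execution with file-formatted input/output."""

        def __init__(self, magnet_strengths):
            self._k = dict(magnet_strengths)   # resident model state

        def query(self, request):
            # Dispatch table standing in for the network request protocol.
            kind, *args = request
            if kind == "get":
                return self._k[args[0]]
            if kind == "set":
                self._k[args[0]] = args[1]     # control-system-style update
                return "ok"
            if kind == "sum":
                return sum(self._k.values())
            raise ValueError(f"unknown request {kind!r}")

    server = ModelServer({"Q1": 0.8, "Q2": -0.5})
    ```

    The design point is that state persists between queries, so a control application pays the model-setup cost once rather than on every invocation, which is the main performance argument the paper evaluates.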

  15. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization on the super-scalar architecture has made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time consuming parts. Early performance results are shown and compared to the same test done on the IBM SP Power 3 and Power 4 machines

  16. Enterprise architecture evaluation using architecture framework and UML stereotypes

    Directory of Open Access Journals (Sweden)

    Narges Shahi

    2014-08-01

    Full Text Available There is an increasing need for enterprise architecture in numerous organizations with complicated systems and various processes, as support grows for information technology and for organizational units whose elements maintain complex relationships. Enterprise architecture is so effective that not using it is regarded as an institutional inability to manage information technology efficiently. The enterprise architecture process generally consists of three phases: strategic programming of information technology, enterprise architecture programming, and enterprise architecture implementation. Each phase must be implemented sequentially, and a single flaw in any phase may result in a flaw in the whole architecture and, consequently, in extra costs and time. If a model is mapped for the issue and evaluated before enterprise architecture implementation in the second phase, possible flaws in the implementation process are prevented. In this study, the processes of enterprise architecture are illustrated through UML diagrams, and the architecture is evaluated in the programming phase by transforming the UML diagrams into Petri nets. The results indicate that the high costs of the implementation phase will be reduced.
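    Transforming the UML diagrams into Petri nets makes the process model executable and checkable before implementation. A minimal token-game simulator (an illustrative sketch, not the authors' tooling; the three-transition net at the bottom is a hypothetical encoding of the three-phase process described above) looks like this:

    ```python
    def enabled(marking, transitions):
        # A transition is enabled when every one of its input places holds a token.
        return [t for t, (ins, outs) in transitions.items()
                if all(marking.get(p, 0) > 0 for p in ins)]

    def fire(marking, transitions, t):
        # Consume one token from each input place, produce one in each output place.
        ins, outs = transitions[t]
        m = dict(marking)
        for p in ins:
            m[p] -= 1
        for p in outs:
            m[p] = m.get(p, 0) + 1
        return m

    def run_to_completion(marking, transitions, limit=100):
        # Fire transitions until none is enabled (a final state or a deadlock).
        for _ in range(limit):
            ts = enabled(marking, transitions)
            if not ts:
                return marking
            marking = fire(marking, transitions, ts[0])
        raise RuntimeError("no quiescent state within limit")

    # Hypothetical encoding of the sequential three-phase EA process.
    net = {
        "strategic_planning": (["start"], ["planned"]),
        "ea_programming":     (["planned"], ["programmed"]),
        "ea_implementation":  (["programmed"], ["done"]),
    }
    ```

    Because the phases are strictly sequential, a flaw such as a missing output place would show up here as a deadlock before the `done` place is ever marked, which is exactly the kind of defect the evaluation phase is meant to catch early.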

  17. Application of parallelized software architecture to an autonomous ground vehicle

    Science.gov (United States)

    Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam

    2011-01-01

    This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made it difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70ms. With all these improvements, Q placed 2nd in the autonomous challenge.

  18. Equivalence of restricted Boltzmann machines and tensor network states

    Science.gov (United States)

    Chen, Jing; Cheng, Song; Xie, Haidong; Wang, Lei; Xiang, Tao

    2018-02-01

    The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. RBM finds wide applications in dimensional reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings, etc. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex data sets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations.
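    The bridge described above rests on the fact that the hidden units of an RBM can be traced out analytically, leaving a product of local factors that contracts like a tensor network. The sketch below (an illustrative check for ±1-valued hidden spins, not the paper's general translation algorithm) verifies that factorization against an explicit sum over hidden configurations:

    ```python
    import itertools
    import numpy as np

    def rbm_unnorm(v, a, b, W):
        # Trace out hidden spins h_j = ±1 analytically:
        #   sum_h exp(a.v + b.h + v.W.h) = exp(a.v) * prod_j 2*cosh(b_j + (W^T v)_j)
        return float(np.exp(a @ v) * np.prod(2.0 * np.cosh(b + W.T @ v)))

    def rbm_brute(v, a, b, W):
        # Explicit sum over all 2^m hidden configurations, for checking.
        total = 0.0
        for h in itertools.product([-1.0, 1.0], repeat=len(b)):
            h = np.array(h)
            total += float(np.exp(a @ v + b @ h + v @ W @ h))
        return total
    ```

    Each `2*cosh(...)` factor is a local tensor attached to the visible units it touches, which is the starting point for rewriting the RBM as a TNS; the exponential cost of the brute-force sum is exactly what the factorized contraction avoids.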

  19. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  20. Lightweight enterprise architectures

    CERN Document Server

    Theuerkorn, Fenix

    2004-01-01

    STATE OF ARCHITECTURE: Architectural Chaos; Relation of Technology and Architecture; The Many Faces of Architecture; The Scope of Enterprise Architecture; The Need for Enterprise Architecture; The History of Architecture; The Current Environment; Standardization Barriers; The Need for Lightweight Architecture in the Enterprise; The Cost of Technology; The Benefits of Enterprise Architecture; The Domains of Architecture; The Gap between Business and IT; Where Does LEA Fit?; LEA's Framework; Frameworks, Methodologies, and Approaches; The Framework of LEA; Types of Methodologies; Types of Approaches; Actual System Environmen

  1. Software architecture 1

    CERN Document Server

    Oussalah , Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  2. Hybrid machining processes perspectives on machining and finishing

    CERN Document Server

    Gupta, Kapil; Laubscher, R F

    2016-01-01

    This book describes various hybrid machining and finishing processes. It gives a critical review of the past work based on them as well as the current trends and research directions. For each hybrid machining process presented, the authors list the method of material removal, machining system, process variables and applications. This book provides a deep understanding of the need, application and mechanism of hybrid machining processes.

  3. Implementation of neural networks on 'Connection Machine'

    International Nuclear Information System (INIS)

    Belmonte, Ghislain

    1990-12-01

    This report is a first approach to the notion of neural networks and their possible applications within the framework of the artificial intelligence activities of the Department of Applied Mathematics of the Limeil-Valenton Research Center. The first part is an introduction to the field of neural networks; the main neural network models are described in this section. The applications of neural networks to classification have mainly been studied because they could help to solve some of the decision-support problems dealt with by the C.E.A. Since neural networks perform a large number of parallel operations, it was logical to use a parallel-architecture computer: the Connection Machine (which uses 16384 processors and is located at E.T.C.A. Arcueil). The second part presents some generalities on parallelism and the Connection Machine, and two implementations of neural networks on the Connection Machine. The first of these implementations concerns one of the algorithms most widely used to train neural networks: the gradient back-propagation algorithm. The second, less common, concerns a network of neurons intended mainly for pattern recognition: the Fukushima Neocognitron. The latter is studied by the C.E.A. of Bruyeres-le-Chatel in order to realize an embedded system (including hardened circuits) for fast pattern recognition [fr]
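    The gradient back-propagation algorithm mentioned above can be sketched for a one-hidden-layer network; this is an illustrative serial version (the report's Connection Machine implementation parallelizes these same operations across processors), with the backward pass verifiable against finite differences:

    ```python
    import numpy as np

    def forward(x, W1, W2):
        h = np.tanh(W1 @ x)          # hidden activations
        y = W2 @ h                   # linear output layer
        return h, y

    def loss_and_grads(x, t, W1, W2):
        # Squared-error loss; gradients obtained by back-propagating the error.
        h, y = forward(x, W1, W2)
        e = y - t                         # output error signal
        loss = 0.5 * float(e @ e)
        gW2 = np.outer(e, h)              # dL/dW2
        dh = (W2.T @ e) * (1.0 - h ** 2)  # propagate error through tanh'
        gW1 = np.outer(dh, x)             # dL/dW1
        return loss, gW1, gW2
    ```

    The matrix-vector products and outer products dominate the cost, which is why the algorithm maps naturally onto a massively parallel SIMD machine.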

  4. Nonlinear machine learning in soft materials engineering and design

    Science.gov (United States)

    Ferguson, Andrew

    The inherently many-body nature of molecular folding and colloidal self-assembly makes it challenging to identify the underlying collective mechanisms and pathways governing system behavior, and has hindered rational design of soft materials with desired structure and function. Fundamentally, there exists a predictive gulf between the architecture and chemistry of individual molecules or colloids and the collective many-body thermodynamics and kinetics. Integrating machine learning techniques with statistical thermodynamics provides a means to bridge this divide and identify emergent folding pathways and self-assembly mechanisms from computer simulations or experimental particle tracking data. We will survey a few of our applications of this framework that illustrate the value of nonlinear machine learning in understanding and engineering soft materials: the non-equilibrium self-assembly of Janus colloids into pinwheels, clusters, and archipelagos; engineering reconfigurable ''digital colloids'' as a novel high-density information storage substrate; probing hierarchically self-assembling conjugated asphaltenes in crude oil; and determining macromolecular folding funnels from measurements of single experimental observables. We close with an outlook on the future of machine learning in soft materials engineering, and share some personal perspectives on working at this disciplinary intersection. We acknowledge support for this work from a National Science Foundation CAREER Award (Grant No. DMR-1350008) and the Donors of the American Chemical Society Petroleum Research Fund (ACS PRF #54240-DNI6).

  5. Indigenous architecture as a context-oriented architecture, a look at ...

    African Journals Online (AJOL)

    What has become problematic as the achievement of international style and globalization of architecture during the time has been the purely technological look at architecture, and the architecture without belonging to a place. In recent decades, the topic of sustainable architecture and reconsidering indigenous architecture ...

  6. Architecture in the Islamic Civilization: Muslim Building or Islamic Architecture

    OpenAIRE

    Yassin, Ayat Ali; Utaberta, Dr. Nangkula

    2012-01-01

    The main problem with theory in the arena of Islamic architecture is that it is affected by Western thought and stereotypes Islamic architecture according to Western ideas; this leads to the breakdown of the foundations of Islamic architecture. It is a myth that Islamic architecture is subject to the influence of foreign architectures. This paper will highlight the dialectical concept of Islamic architecture or Muslim buildings and the areas of recognition in Islamic architec...

  7. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    Science.gov (United States)

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research on, and commercial presence of, high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been used successfully to learn in a wide variety of applications, but its heavy computational demand has considerably limited its practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of a class of artificial neural networks (ANNs), restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating the network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).

  8. VISUALIZATION SKILLS FOR THE NEW ARCHITECTURAL FORMS

    Directory of Open Access Journals (Sweden)

    Khaled Nassar

    2010-07-01

    Full Text Available The practice of architecture is continuously changing, mirroring the paradigm shifts in the world it builds for. With the increasing use of digital technology, we need to ensure that learning and teaching do not drift from the fundamental skill set required of an architect. Architectural problems are unique in nature, requiring volumetric visualization and problem-solving skills, and although many of these skills can be replicated using digital technology, can digital technology replace the cognitive development that occurs through manual problem solving? Over the last three decades we have seen the almost ubiquitous use of computers in design practice and professional studios, with increasingly complex forms being conceived and turned into buildings. This development obviously raises challenging questions of architectural theory and perplexing issues for those concerned with the future of architectural education and its effect on the design process; how this effect can be analyzed, however, remains an open question. Recent research efforts have shown that perception and visualization abilities reflect the quality of a design outcome. Very limited research exists, however, that attempts to understand or document the spatial analysis and visualization abilities of new generations of architects. This paper reports on a novel scalable test that can be used to investigate the processing and synthesis of visual information related to the new kinds of free form encountered in today's architecture. Unlike traditional missing-view and orthographic-projection problems, the proposed test can be used to accurately assess free-form visualization. A number of 2-manifold, very-high-genus surfaces were selected. Physical models of these surfaces are manufactured from a durable thermoplastic material by Fused Deposition Modeling rapid prototyping machines. Students are then asked to position a digital model of the surface to match that of the

  9. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms onto a reconfigurable parallel architecture is presented. Based on characteristics including the type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations onto the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results for a benchmark algorithm are illustrated and discussed.
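    The idea behind neighborhood scheduling, that subtasks which communicate heavily should execute consecutively, can be illustrated with a simple greedy reordering over a pairwise communication-cost matrix. This sketch is only an illustration of the idea under that assumption, not the paper's heuristic:

    ```python
    def chain_cost(order, cost):
        # Total communication cost between consecutively scheduled subtasks.
        return sum(cost[a][b] for a, b in zip(order, order[1:]))

    def neighborhood_order(cost, start=0):
        # Greedy: always schedule next the cheapest not-yet-scheduled neighbor
        # of the most recently scheduled subtask.
        n = len(cost)
        order, left = [start], set(range(n)) - {start}
        while left:
            cur = order[-1]
            nxt = min(left, key=lambda j: cost[cur][j])
            order.append(nxt)
            left.remove(nxt)
        return order
    ```

    On a chain-structured cost matrix the greedy order recovers the chain, whereas an interleaved schedule pays the expensive cross-links; minimizing this cost in general is TSP-like, which is why a heuristic is used.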

  10. Development of PC based data acquisition system for universal test machine

    International Nuclear Information System (INIS)

    Nageswara Rao, T.S.V.R.; Hari Prasad, V.; Satyadev, B.; Banarjee, P.K.

    2010-01-01

    To determine the tensile properties of nuclear fuel tubes and other components, a Universal Test Machine is used in the Material Testing Section of Quality Assurance, NFC. This machine uses a chart recorder to plot the Load vs. Strain graph. The tensile properties of the test material, viz. Ultimate Tensile Strength (UTS), Yield Strength (YS) and Young's Modulus (E), are usually determined by a graphical method using a ruler. To overcome the problems faced due to embargo and the non-availability of spares for the recorder, a PC-based Data Acquisition System (DAS) with the necessary software was developed for automatic calculation of the tensile properties by extracting the linear portion of the tensile test curve, where the tangent and secant moduli coincide, without intervention by the user. This development reduces human error in calculation, makes use of state-of-the-art technology, and reduces the risk of obsolescence by employing a PC-based architecture. (author)
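    The extraction step, finding the region where tangent and secant modulus coincide, can be sketched numerically. This is an illustrative reconstruction under the assumption that the curve starts linear and the disagreement tolerance is a free parameter, not the DAS's actual algorithm:

    ```python
    import numpy as np

    def linear_region(strain, stress, tol=0.02):
        # Secant modulus from the origin and local tangent modulus at each point;
        # the elastic region is where they agree within a relative tolerance.
        secant = stress[1:] / strain[1:]
        tangent = np.gradient(stress, strain)[1:]
        ok = np.abs(tangent - secant) <= tol * secant
        last = int(np.argmin(ok)) if not ok.all() else len(ok)
        # Young's modulus from a straight-line fit over the linear portion.
        E = float(np.polyfit(strain[1:last + 1], stress[1:last + 1], 1)[0])
        return E, last
    ```

    On a synthetic bilinear stress-strain curve this recovers the elastic slope and the index where yielding begins, which is the quantity a user would otherwise read off the chart with a ruler.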

  11. Software architecture analysis tool : software architecture metrics collection

    NARCIS (Netherlands)

    Muskens, J.; Chaudron, M.R.V.; Westgeest, R.

    2002-01-01

    The Software Engineering discipline lacks the ability to evaluate software architectures. Here we describe a tool for software architecture analysis that is based on metrics. Metrics can be used to detect possible problems and bottlenecks in software architectures. Even though metrics do not give a

  12. Splendidly blended: a machine learning set up for CDU control

    Science.gov (United States)

    Utzny, Clemens

    2017-06-01

    While the concepts of machine learning and artificial intelligence continue to grow in importance for internet-related applications, their use for process control within the semiconductor industry is still in its infancy. The branch of mask manufacturing especially challenges machine learning concepts, since the business process intrinsically induces pronounced product variability against a background of small plate numbers. In this paper we present the architectural set-up of a machine learning algorithm which successfully deals with the demands and pitfalls of mask manufacturing. A detailed motivation of this basic set-up is given, followed by an analysis of its statistical properties. The machine learning set-up for mask manufacturing involves two learning steps: an initial step identifies and classifies the basic global CD patterns of a process; these results form the basis for the extraction of an optimized training set via balanced sampling. A second learning step uses this training set to obtain the local as well as global CD relationships induced by the manufacturing process. Using two production-motivated examples, we show how this approach is flexible and powerful enough to deal with the exacting demands of mask manufacturing. In one example we show how dedicated covariates can be used in conjunction with increased spatial resolution of the CD map model to deal with pathological CD effects at the mask boundary. The other example shows how the model set-up enables strategies for dealing with tool-specific CD signature differences. In this case the balanced sampling enables a process control scheme which allows usage of the full tool park within the specified tight tolerance budget. Overall, this paper shows that the current rapid developments of machine learning algorithms can be successfully used within the context of semiconductor manufacturing.
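
The balanced-sampling step between the two learning stages can be illustrated with a small sketch (the record layout and parameters are hypothetical, not the author's production code):

```python
import random
from collections import defaultdict

def balanced_sample(records, key, per_class, seed=0):
    """Draw an equal number of records from every class.

    `records` is any iterable of dicts; `key` names the field holding the
    class label (here: the global CD pattern id from the first learning
    step). Classes with fewer than `per_class` members are oversampled
    with replacement so that rare signatures are not drowned out.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for r in records:
        by_class[r[key]].append(r)
    sample = []
    for cls, members in sorted(by_class.items()):
        if len(members) >= per_class:
            sample.extend(rng.sample(members, per_class))
        else:
            sample.extend(rng.choices(members, k=per_class))
    return sample
```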

  13. Neutron transport on the connection machine

    International Nuclear Information System (INIS)

    Robin, F.

    1991-12-01

    Monte Carlo methods are heavily used at CEA and account for a large part of the total CPU time of industrial codes. In the present work (done within the framework of the Parallel Computing Project of the CEL-V Applied Mathematics Department) we study and implement on the Connection Machine an optimised Monte Carlo algorithm for solving the neutron transport equation. This allows us to investigate the suitability of such an architecture for this kind of problem. This report describes the chosen methodology, the algorithm, and its performance. We found that programming the CM-2 in CM Fortran is relatively easy, and we obtained interesting performance: on a 16k CM-2 it is at the same level as that obtained on one processor of a CRAY X-MP with a well-optimized vector code.
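
The data-parallel style that suits the Connection Machine can be illustrated with a toy Monte Carlo transport kernel in which every particle history advances in lock-step across a NumPy array (a 1-D rod model for illustration only, not the CEA algorithm):

```python
import numpy as np

def slab_transmission(n, sigma_t=1.0, thickness=3.0, absorb=0.5, seed=0):
    """Data-parallel Monte Carlo estimate of transmission through a 1-D slab.

    All `n` particle histories advance simultaneously, with NumPy arrays
    playing the role of the Connection Machine's per-processor data.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                        # positions (in mean free paths)
    mu = np.ones(n)                        # directions (+1 forward)
    alive = np.ones(n, dtype=bool)
    transmitted = np.zeros(n, dtype=bool)
    while alive.any():
        # sample a free flight for every live particle at once
        step = rng.exponential(1.0 / sigma_t, n)
        x = np.where(alive, x + mu * step, x)
        out_front = alive & (x >= thickness)
        out_back = alive & (x < 0.0)
        transmitted |= out_front
        alive &= ~(out_front | out_back)
        # collision: absorbed with probability `absorb`,
        # else the direction flips at random (1-D rod model)
        absorbed = alive & (rng.random(n) < absorb)
        alive &= ~absorbed
        mu = np.where(alive, rng.choice([-1.0, 1.0], n), mu)
    return transmitted.mean()
```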

  14. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, electronics fundamentals, information technology, quality assurance, AutoCAD and FA terms, and important formulas of mechanical engineering.

  15. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework, named the tree state classification machine (TSCM), extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we find that the heuristic rule can improve the security of TSCM-based neural cryptography. TSCM and the heuristic rule can therefore guide the design of a great number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments verify the validity and applicability of our results.
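
The tree parity machine that TSCM generalizes can be sketched in a few lines: two networks exchange their public outputs and apply a Hebbian update only when those outputs agree, which drives their weight matrices toward synchronization (the parameter values here are illustrative, not from the paper):

```python
import numpy as np

def tpm_output(W, X):
    """Output of a tree parity machine: product of the hidden unit signs."""
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1                 # break ties toward -1
    return sigma, int(np.prod(sigma))

def hebbian_update(W, X, sigma, tau, L):
    """Hebbian rule: only hidden units agreeing with the output move."""
    for k in range(W.shape[0]):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + sigma[k] * X[k], -L, L)

# Two TPMs (K hidden units, N inputs each, weights in [-L, L]) exchange
# outputs over a public channel and update only when the outputs agree.
K, N, L = 3, 10, 3
rng = np.random.default_rng(1)
A = rng.integers(-L, L + 1, (K, N))
B = rng.integers(-L, L + 1, (K, N))
for _ in range(2000):
    X = rng.choice([-1, 1], (K, N))        # common public input
    sa, ta = tpm_output(A, X)
    sb, tb = tpm_output(B, X)
    if ta == tb:                           # public agreement -> both update
        hebbian_update(A, X, sa, ta, L)
        hebbian_update(B, X, sb, tb, L)
    if np.array_equal(A, B):
        break
```

With small parameters such as these, the two weight matrices typically synchronize after a few hundred agreeing exchanges; the shared matrix then serves as the secret key material.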

  16. A model for Intelligent Random Access Memory architecture (IRAM) cellular automata algorithms on the Associative String Processing machine (ASTRA)

    CERN Document Server

    Rohrbach, F; Vesztergombi, G

    1997-01-01

    In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and memory-to-processor interface bandwidth. The IRAM initiative could be the answer, by putting the Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reaches a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.
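
A one-dimensional stochastic cellular automaton and its damage-spreading measurement, the kind of benchmark run on ASTRA, can be sketched as follows (a Domany-Kinzel-like toy rule chosen for illustration; the actual rule used on ASTRA may differ):

```python
import random

def step(cells, p, noise):
    """One update of a 1-D stochastic CA on a ring: a site becomes 1 with
    probability p if either neighbour is currently 1."""
    n = len(cells)
    return [1 if (cells[i - 1] or cells[(i + 1) % n]) and noise[i] < p else 0
            for i in range(n)]

def damage_spreading(n=64, p=0.7, steps=50, seed=2):
    """Evolve two replicas differing at one site under IDENTICAL noise and
    return the final Hamming distance (the 'damage')."""
    rng = random.Random(seed)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = list(a)
    b[n // 2] ^= 1                         # initial damage: one flipped site
    for _ in range(steps):
        noise = [rng.random() for _ in range(n)]   # shared by both replicas
        a = step(a, p, noise)
        b = step(b, p, noise)
    return sum(x != y for x, y in zip(a, b))
```

Whether the damage dies out or spreads as p varies is the dynamical phase transition probed in the abstract.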

  17. Deep neural mapping support vector machines.

    Science.gov (United States)

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect could be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained, together with a virtual ordinary output layer, by backpropagation; then the output of its last hidden layer is taken as input to the SVM classifier for further separate training. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called the deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, in contrast to an implicit function induced by a traditional kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Treating the sub-network and the SVM classifier as a whole, the joint training of DNMSVM uses gradient descent to optimize the objective function, with the sub-network layer-wise pre-trained via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, joint training is a new algorithm that gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

    OpenAIRE

    Byun, Chansup; Kepner, Jeremy; Arcand, William; Bestor, David; Bergeron, Bill; Gadepally, Vijay; Houle, Michael; Hubbell, Matthew; Jones, Michael; Klein, Anna; Michaleas, Peter; Milechin, Lauren; Mullen, Julie; Prout, Andrew; Rosa, Antonio

    2017-01-01

    Knights Landing (KNL) is the code name for the second-generation Intel Xeon Phi product family. KNL has generated significant interest in the data analysis and machine learning communities because its new many-core architecture targets both of these workloads. The KNL many-core vector processor design enables it to exploit much higher levels of parallelism. At the Lincoln Laboratory Supercomputing Center (LLSC), the majority of users are running data analysis applications such as MATLAB and O...

  19. Some relations between quantum Turing machines and Turing machines

    OpenAIRE

    Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: their components, their time evolution operator, and their local transition function. The components are related to the components of deterministic Turing machines, the time evolution operator is related to the evolution of reversible Turing machines, and the local transition function is related to the transition function of probabilistic and reversible Turing machines.
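
For contrast with the quantum case, the classical deterministic local transition function δ(state, symbol) → (state', symbol', move) can be exercised with a minimal simulator (illustrative, not from the paper):

```python
def run_tm(delta, state, tape, head=0, accept="halt", max_steps=1000):
    """Minimal deterministic Turing machine: `delta` maps (state, symbol)
    to (new_state, written_symbol, move), with move in {-1, +1}.
    The tape is a dict, blank cells read as '_'."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == accept:
            break
        sym = tape.get(head, "_")
        state, tape[head], move = delta[(state, sym)]
        head += move
    # read back the contiguous written portion of the tape
    lo, hi = min(tape), max(tape)
    return state, "".join(tape.get(i, "_") for i in range(lo, hi + 1))

# Unary increment: scan right over a block of 1s and append one more.
delta = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("halt", "1", +1),
}
```

A quantum Turing machine replaces this single-valued δ with complex amplitudes over (state, symbol, move) triples, subject to the unitarity of the induced time evolution operator.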

  20. Measurements of the LHCb software stack on the ARM architecture

    International Nuclear Information System (INIS)

    Kartik, S Vijay; Couturier, Ben; Clemencic, Marco; Neufeld, Niko

    2014-01-01

    The ARM architecture is a power-efficient design used in most processors in mobile devices around the world today, since they provide reasonable compute performance per watt. The current LHCb software stack is designed (and thus expected) to build and run on machines with the x86/x86_64 architecture. This paper outlines the process of measuring the performance of the LHCb software stack on the ARM architecture – specifically, the ARMv7 architecture on Cortex-A9 processors from NVIDIA and on full-fledged ARM servers with chipsets from Calxeda – and makes comparisons with the performance on x86_64 architectures on the Intel Xeon L5520/X5650 and AMD Opteron 6272. The paper emphasises performance per core with respect to the power drawn by the compute nodes for the given performance – this ensures a fair real-world comparison with much more 'powerful' Intel/AMD processors. The comparisons of these real workloads in the context of LHCb are also complemented with the standard synthetic benchmarks HEPSPEC and Coremark. The pitfalls and solutions for the non-trivial task of porting the source code to build for the ARMv7 instruction set are presented. The specific changes in the build process needed for ARM-specific portions of the software stack are described, to serve as pointers for further attempts taken up by other groups in this direction. Cases where architecture-specific tweaks at the assembler level (both in ROOT and the LHCb software stack) were needed for a successful compile are detailed – these cases are good indicators of where/how the software stack as well as the build system can be made more portable and multi-arch friendly. The experience gained from the tasks described in this paper is intended to i) assist in making an informed choice about ARM-based server solutions as a feasible low-power alternative to the current compute nodes, and ii) revisit the software design and build system for portability and

  1. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    Science.gov (United States)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  2. Carrier Current Line Systems Technologies in M2M Architecture for Wireless Communication

    Directory of Open Access Journals (Sweden)

    Hua-Ching Chen

    2016-01-01

    Full Text Available This paper investigates Carrier Current Line System (CCLS) technologies in a Machine-to-Machine (M2M) architecture applied to mobile station coverage for metro, high-speed railway, and subway, including an analysis of an indoor transition system for public transport. It is based on theoretical and practical engineering principles, providing guidelines and link-budget design formulas that help designers fully control and analyze the single output power of the uplink and downlink between fiber repeaters (FR) and the mobile station as well as the base station. Finally, the results for this leaky cable system are successfully applied to indoor coverage design for a metro rapid transit system: easily installed cellular-over-fiber solutions for WCDMA/LTE access, moving toward a ubiquitous-network, Internet of Things (IoT) telecommunication hierarchy in a real deployment.

  3. Support vector machine in machine condition monitoring and fault diagnosis

    Science.gov (United States)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, the issue of machine condition monitoring and fault diagnosis as part of a maintenance system has become global, due to the potential advantages to be gained from reduced maintenance costs, improved productivity, and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forests, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis tended to develop towards expertise orientation and problem-oriented domains. Finally, the ability to continually adapt and obtain novel ideas for machine condition monitoring and fault diagnosis using SVM will be future work.
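
The core classification task, separating healthy from faulty machine states in a feature space, can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss (the features here are synthetic; real condition monitoring would use measured vibration statistics, and usually a kernel SVM):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Linear SVM via sub-gradient descent on the regularized hinge loss,
    a minimal stand-in for the SVM classifiers surveyed above.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                     # point inside the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

# Toy condition-monitoring data: RMS and kurtosis of a vibration signal;
# healthy machines cluster low, faulty ones (e.g. bearing damage) high.
rng = np.random.default_rng(1)
healthy = rng.normal([0.5, 3.0], 0.2, (50, 2))
faulty = rng.normal([1.5, 6.0], 0.2, (50, 2))
X = np.vstack([healthy, faulty])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```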

  4. A Java-based enterprise system architecture for implementing a continuously supported and entirely Web-based exercise solution.

    Science.gov (United States)

    Wang, Zhihui; Kiryu, Tohru

    2006-04-01

    Since machine-based exercise still uses local facilities, it is affected by time and place. We designed a web-based system architecture based on the Java 2 Enterprise Edition that can accomplish continuously supported machine-based exercise. In this system, exercise programs and machines are loosely coupled and dynamically integrated on the site of exercise via the Internet. We then extended the conventional health promotion model, which contains three types of players (users, exercise trainers, and manufacturers), by adding a new player: exercise program creators. Moreover, we developed a self-describing strategy to accommodate a variety of exercise programs and provide ease of use to users on the web. We illustrate our novel design with examples taken from our feasibility study on a web-based cycle ergometer exercise system. A biosignal-based workload control approach was introduced to ensure that users performed appropriate exercise alone.
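
The biosignal-based workload control mentioned above can be illustrated as a simple closed loop in which the ergometer load is adjusted toward a target heart rate (the gains, limits, and toy physiology model are assumptions for illustration, not the paper's controller):

```python
def update_workload(load, heart_rate, target_hr, gain=0.5,
                    lo=25.0, hi=200.0):
    """One step of a proportional controller: raise the ergometer load
    (watts) when the measured heart rate is below target, lower it when
    above, clamped to the machine's safe load range."""
    load = load + gain * (target_hr - heart_rate)
    return min(hi, max(lo, load))

# Toy closed loop: heart rate responds sluggishly and roughly linearly
# to load (a crude stand-in for real exercise physiology).
hr, load = 70.0, 50.0
for _ in range(400):
    load = update_workload(load, hr, target_hr=120.0)
    hr += 0.05 * (60.0 + 0.6 * load - hr)
```

After the loop the simulated heart rate has settled near the 120 bpm target, with the load held inside the configured limits.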

  5. Analysis of Architecture Pattern Usage in Legacy System Architecture Documentation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    Architecture patterns are an important tool in architectural design. However, while many architecture patterns have been identified, there is little in-depth understanding of their actual use in software architectures. For instance, there is no overview of how many patterns are used per system or

  6. On the Architectural Engineering Competences in Architectural Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2007-01-01

    In 1997 a new education in Architecture & Design at Department of Architecture and Design, Aalborg University was started with 50 students. During the recent years this number has increased to approximately 100 new students each year, i.e. approximately 500 students are following the 3 years...... bachelor (BSc) and the 2 years master (MSc) programme. The first 5 semesters are common for all students followed by 5 semesters with specialization into Architectural Design, Urban Design, Industrial Design or Digital Design. The present paper gives a short summary of the architectural engineering...

  7. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  8. MMI concept and I and C architecture

    International Nuclear Information System (INIS)

    Maillart, H.

    1997-01-01

    The basic design of the I and C for the European pressurized water reactor (EPR) will establish the basis for a preliminary safety assessment and cost and feasibility evaluation. In order to avoid a premature link to a rapidly aging technology, the design aims as far as possible to establish product-independent requirements, open to off-the-shelf equipment and thus benefiting from the latest progress in I and C technology at the moment of plant erection. The field of man-machine interface design serves as an example to explain the approach, and the resulting overall I and C architecture is outlined. The design team, with the active participation of designers and utilities, leads to optimal integration of feedback of experience from running plants and other design projects. (orig.)

  9. Simulations of Quantum Turing Machines by Quantum Multi-Stack Machines

    OpenAIRE

    Qiu, Daowen

    2005-01-01

    As is well known, in classical computation, Turing machines, circuits, multi-stack machines, and multi-counter machines are equivalent; that is, they can simulate each other in polynomial time. In quantum computation, Yao [11] first proved that for any quantum Turing machine $M$, there exists a quantum Boolean circuit $(n,t)$-simulating $M$, where $n$ denotes the length of input strings, and $t$ is the number of move steps before the machine stops. However, the simulations of quantum Turing ma...

  10. Security solutions: strategy and architecture

    Science.gov (United States)

    Seto, Myron W. L.

    2002-04-01

    Producers of banknotes, other documents of value and brand name goods are being presented constantly with new challenges due to the ever increasing sophistication of easily-accessible desktop publishing and color copying machines, which can be used for counterfeiting. Large crime syndicates have also shown that they have the means and the willingness to invest large sums of money to mimic security features. To ensure sufficient and appropriate protection, a coherent security strategy has to be put into place. The feature has to be appropriately geared to fight against the different types of attacks and attackers, and to have the right degree of sophistication or ease of authentication depending upon by whom or where a check is made. Furthermore, the degree of protection can be considerably increased by taking a multi-layered approach and using an open platform architecture. Features can be stratified to encompass overt, semi-covert, covert and forensic features.

  11. How organisation of architecture documentation affects architectural knowledge retrieval

    NARCIS (Netherlands)

    de Graaf, K.A.; Liang, P.; Tang, A.; Vliet, J.C.

    A common approach to software architecture documentation in industry projects is the use of file-based documents. This approach offers a single-dimensional arrangement of the architectural knowledge. Knowledge retrieval from file-based architecture documentation is efficient if the organisation of

  12. Development of an evaluation technique for human-machine interface

    Energy Technology Data Exchange (ETDEWEB)

    Min, Dae Hwan; Koo, Sang Hui; Ahn, Won Yeong; Ryu, Yeong Shin [Korea Univ., Seoul (Korea, Republic of)

    1997-07-15

    The purpose of this study is two-fold: firstly, to establish an evaluation technique for HMI (Human-Machine Interface) in NPPs (Nuclear Power Plants), and secondly, to develop an architecture for a support system which can be used for the evaluation of HMI. In order to establish an evaluation technique, this study conducted a literature review of basic theories from cognitive science and summarized the cognitive characteristics of humans. This study also surveyed evaluation techniques for HMI in general, and reviewed studies on the evaluation of HMI in NPPs. On the basis of this survey, the study established a procedure for the evaluation of HMI in NPPs in Korea and laid a foundation for empirical verification.

  13. Development of an evaluation technique for human-machine interface

    International Nuclear Information System (INIS)

    Min, Dae Hwan; Koo, Sang Hui; Ahn, Won Yeong; Ryu, Yeong Shin

    1997-07-01

    The purpose of this study is two-fold: firstly, to establish an evaluation technique for HMI (Human-Machine Interface) in NPPs (Nuclear Power Plants), and secondly, to develop an architecture for a support system which can be used for the evaluation of HMI. In order to establish an evaluation technique, this study conducted a literature review of basic theories from cognitive science and summarized the cognitive characteristics of humans. This study also surveyed evaluation techniques for HMI in general, and reviewed studies on the evaluation of HMI in NPPs. On the basis of this survey, the study established a procedure for the evaluation of HMI in NPPs in Korea and laid a foundation for empirical verification

  14. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
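
The axis-aligned bounding box (AABB) test used by the CNC module reduces to per-axis interval overlap, as in this sketch (the coordinates are made up for illustration):

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding box test in 3-D. Boxes are given as
    (min_xyz, max_xyz) tuples; two boxes overlap iff their intervals
    overlap on every axis (touching faces count as overlap)."""
    amin, amax = a
    bmin, bmax = b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

# e.g. a tool's swept volume vs. the workpiece blank
tool = ((0.0, 0.0, 10.0), (5.0, 5.0, 20.0))
blank = ((4.0, 4.0, 0.0), (50.0, 50.0, 12.0))
```

Because each test is three comparisons per axis, AABB checks are cheap enough to run every simulation frame, with finer-grained collision tests reserved for box pairs that overlap.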

  15. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  16. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  17. Using the PALS Architecture to Verify a Distributed Topology Control Protocol for Wireless Multi-Hop Networks in the Presence of Node Failures

    Directory of Open Access Journals (Sweden)

    José Meseguer

    2010-09-01

    Full Text Available The PALS architecture reduces distributed, real-time asynchronous system design to the design of a synchronous system under reasonable requirements. Assuming logical synchrony leads to fewer system behaviors and provides a conceptually simpler paradigm for engineering purposes. One of the current limitations of the framework is that from a set of independent "synchronous machines", one must compose the entire synchronous system by hand, which is tedious and error-prone. We use Maude's meta-level to automatically generate a synchronous composition from user-provided component machines and a description of how the machines communicate with each other. We then use the new capabilities to verify the correctness of a distributed topology control protocol for wireless networks in the presence of nodes that may fail.
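
The kind of synchronous composition that the paper automates at Maude's meta-level can be modeled in miniature: every component machine reads the previous round's outputs and steps on the same logical tick (a toy model, not the PALS formalization itself):

```python
def compose_synchronous(machines, wiring, states, steps):
    """Lock-step composition of Moore-style machines.

    Each machine is a function (state, inputs) -> (state', output);
    `wiring[name]` lists the machines whose previous-round outputs feed
    it. All machines step on the same logical tick, as logical synchrony
    assumes."""
    outputs = {name: None for name in machines}
    for _ in range(steps):
        new_states, new_outputs = {}, {}
        for name, machine in machines.items():
            ins = [outputs[src] for src in wiring[name]]
            new_states[name], new_outputs[name] = machine(states[name], ins)
        states, outputs = new_states, new_outputs
    return states, outputs

def adder(state, inputs):
    """Example component: accumulate the sum of last-round inputs."""
    s = state + sum(i or 0 for i in inputs)
    return s, s

machines = {"a": adder, "b": adder}
wiring = {"a": ["b"], "b": ["a"]}       # the two machines feed each other
states, outputs = compose_synchronous(machines, wiring, {"a": 1, "b": 0}, 3)
```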

  18. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  19. Extension of an existing control and monitoring system: architecture 7

    International Nuclear Information System (INIS)

    Soulabaille, Y.

    1991-01-01

    The Tore Supra Tokamak is controlled by Architecture 7. This system comprises three levels: the man-machine system, automatism management, and exchanges with the plant. It nevertheless presents some limitations: its time response of half a second is sufficient to manage 95% of Tore Supra processes, but the remaining 5% require one millisecond. The first aim is the extension of functionality by a fast automaton with a one-microsecond cycle. The fast automaton is applied to the poloidal field; of main concern for fusion experiments, it allows the creation of a plasma current. The second aim is the possibility of using software available on the computer market [fr]

  20. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    Science.gov (United States)

    Daoud, Bassam

Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently in decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice, and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture and in its modes of production. Through a broad study of Modern architectural ideals and heritage, in parallel to methodologies, the thesis examines the future of large-scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  1. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution. The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing architecture.

  2. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

This book is divided into three parts. The first part deals with electric machines, ranging from generators to motors, the motor as the power source of a machine tool, and electrical equipment for machine tools such as main-circuit switches, automatic machines, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part covers wiring diagrams, including the basic electrical circuits of machine tools and the wiring diagrams of machines such as milling machines, planers, and grinding machines. The third part introduces machine fault diagnosis, giving practical solutions according to the diagnosis and the diagnostic method using voltage and resistance measurements with a tester.

  3. Architectural design decisions

    NARCIS (Netherlands)

    Jansen, Antonius Gradus Johannes

    2008-01-01

    A software architecture can be considered as the collection of key decisions concerning the design of the software of a system. Knowledge about this design, i.e. architectural knowledge, is key for understanding a software architecture and thus the software itself. Architectural knowledge is mostly

  4. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  5. Humanizing machines: Anthropomorphization of slot machines increases gambling.

    Science.gov (United States)

    Riva, Paolo; Sacchi, Simona; Brambilla, Marco

    2015-12-01

Do people gamble more on slot machines if they think that they are playing against humanlike minds rather than mathematical algorithms? Research has shown that people have a strong cognitive tendency to imbue humanlike mental states to nonhuman entities (i.e., anthropomorphism). The present research tested whether anthropomorphizing slot machines would increase gambling. Four studies manipulated slot machine anthropomorphization and found that exposing people to an anthropomorphized description of a slot machine increased gambling behavior and reduced gambling outcomes. Such findings emerged using tasks that focused on gambling behavior (Studies 1 to 3) as well as in experimental paradigms that included gambling outcomes (Studies 2 to 4). We found that gambling outcomes decrease because participants primed with the anthropomorphic slot machine gambled more (Study 4). Furthermore, we found that high-arousal positive emotions (e.g., feeling excited) played a role in the effect of anthropomorphism on gambling behavior (Studies 3 and 4). Our research indicates that the psychological process of gambling-machine anthropomorphism can be advantageous for the gaming industry; however, this may come at great expense for gamblers' (and their families') economic resources and psychological well-being. (c) 2015 APA, all rights reserved.

  6. On the impact of approximate computation in an analog DeSTIN architecture.

    Science.gov (United States)

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
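The kind of evaluation the abstract describes can be sketched in miniature: inject error into each multiply of a classifier's inner products and watch system-level accuracy. Everything below (the multiplicative-Gaussian error model, the prototype classifier, the data) is an illustrative assumption, not the paper's actual setup.

```python
import random

random.seed(0)

def noisy_dot(w, x, sigma):
    """Dot product where every multiply carries multiplicative Gaussian
    error, a simple stand-in for a nonideal analog computational element."""
    return sum(wi * xi * random.gauss(1.0, sigma) for wi, xi in zip(w, x))

# Toy 2-class task: classify by similarity to a class prototype.
protos = {0: [1.0, 0.0, 1.0, 0.0], 1: [0.0, 1.0, 0.0, 1.0]}
samples = [([1.0, 0.1, 0.9, 0.0], 0), ([0.1, 1.0, 0.0, 0.9], 1)] * 50

def accuracy(sigma):
    hits = 0
    for x, label in samples:
        scores = {c: noisy_dot(p, x, sigma) for c, p in protos.items()}
        hits += (max(scores, key=scores.get) == label)
    return hits / len(samples)

print(accuracy(0.0), accuracy(0.1))   # exact arithmetic vs. 10% analog error
```

With a comfortable decision margin, moderate per-multiply error barely moves the classification result, which is the graceful-degradation behavior the paper reports for DeSTIN.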

  7. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level

  8. Resident Space Object Characterization and Behavior Understanding via Machine Learning and Ontology-based Bayesian Networks

    Science.gov (United States)

    Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.

    2016-09-01

    In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-Art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the features space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge-base (KB) and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy so that it enables a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
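The "cluster in feature space, then classify" step can be illustrated with a toy stand-in (my own sketch: the 2-D "energy/momentum" features, the two object populations, and plain Lloyd's k-means are all assumptions, not the paper's pipeline):

```python
import math
import random

random.seed(1)

# Fake feature vectors for two RSO populations (illustrative only).
debris = [(random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)) for _ in range(20)]
active = [(random.gauss(1.0, 0.1), random.gauss(1.0, 0.1)) for _ in range(20)]
points = debris + active

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm with fixed initial centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: math.dist(p, centers[j]))
            groups[i].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) for g in groups]
    return centers

centers = kmeans(points, [points[0], points[-1]])
print(centers)  # one center near each population mean
```

Cluster membership in the feature space is what the paper then hands to the classifiers and the ontology-derived Bayesian networks for higher-level reasoning.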

  9. Minimalism in architecture: Architecture as a language of its identity

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2012-01-01

Full Text Available Every architectural work is created on a principle that includes a meaning, and the work is then read as an artifact of that particular meaning. The means by which meaning is primarily built, and is susceptible to transformation, as well as the routing of understanding (the decoding of messages carried by a work of architecture), are the subject of semiotics and communication theories, which have played a significant role for architecture and the architect. Minimalism in architecture, as a paradigm of XXI-century architecture, means searching for the essence located in the irreducible minimum. The inspired use of architectural units (archetypal elements), through the phantasm of simplicity, assumes the primary responsibility for providing the object's identity, because it participates in the formation of the language and therefore in its reading. Volume is formed by a clean language that builds the expression of fluid areas liberated from superfluous needs. This reduced architectural language is appropriate to an age marked by electronic communications.

  10. Space Station data management system architecture

    Science.gov (United States)

    Mallary, William E.; Whitelaw, Virginia A.

    1987-01-01

    Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.

  11. Code-expanded radio access protocol for machine-to-machine communications

    DEFF Research Database (Denmark)

    Thomsen, Henning; Kiilerich Pratas, Nuno; Stefanovic, Cedomir

    2013-01-01

The random access methods used for the support of machine-to-machine communications, also referred to as Machine-Type Communications, in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. We propose an approach that is motivated by the random access method employed in LTE, which significantly increases the amount of contention resources without increasing the system resources, such as contention subframes and preambles. This is accomplished by a logical, rather than physical, extension of the access method, in which users contend with codewords spanning the available system subframes and orthogonal preambles. The amount of available contention resources is thereby drastically increased, enabling the massive support of Machine-Type Communication users that is beyond the reach of current systems.
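The expansion effect can be shown with a toy simulation (my own simplification, not the paper's scheme: it ignores, among other things, the phantom codewords that arise when a receiver must decode overlapping transmissions, and all parameter values are invented):

```python
import random

random.seed(2)

def collision_rate(users, M, S, trials=500):
    """Fraction of users whose contention 'codeword' collides, where a
    codeword is the S-tuple of preambles a user sends over S subframes."""
    collided = 0
    for _ in range(trials):
        picks = [tuple(random.randrange(M) for _ in range(S))
                 for _ in range(users)]
        # a user succeeds only if its full codeword is unique in this frame
        collided += sum(picks.count(p) > 1 for p in picks)
    return collided / (users * trials)

baseline = collision_rate(users=30, M=8, S=1)  # framed ALOHA: 8 resources
expanded = collision_rate(users=30, M=8, S=3)  # code-expanded: 8**3 codewords
print(baseline, expanded)
```

The same S subframes and M preambles yield M**S logical contention resources instead of M, which is the "logical rather than physical" extension the abstract describes.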

  12. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as "hybrid cultural projects," because they intend to combine experience with entertainment, play, and learning. This essay provides a functional framework for these concepts, but increasingly tries to endow the main idea of the cultural project with a spatially aesthetic expression, a shift towards "experience architecture." A great number of these projects typically recycle and reinterpret narratives related to historical buildings and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for how the architectural representation bridges the gap between them.

  13. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  14. Exporting Humanist Architecture

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2016-01-01

The article is a chapter in the catalogue for the Danish exhibition at the 2016 Architecture Biennale in Venice. The catalogue is conceived as an independent book exploring the theme Art of Many - The Right to Space. The chapter is an essay in this anthology tracing and discussing the different values and ethical stands involved in the export of Danish architecture. Abstract: Danish architecture has, in a sense, been driven by an unwritten contract between the architects and the democratic state and its institutions. This contract may be viewed as an ethos, an architectural tradition with inherent aesthetic and moral values. Today, however, Danish architecture is also an export commodity. That raises questions which should be debated as openly as possible. What does it mean for architecture and architects to practice in cultures and under political systems that do not use architecture...

  15. Fragments of Architecture

    DEFF Research Database (Denmark)

    Bang, Jacob Sebastian

    2016-01-01

Topic 3: “Case studies dealing with the artistic and architectural work of architects worldwide, and the ties between specific artistic and architectural projects, methodologies and products”

  16. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages and student activities are provided. Presented in s

  17. Notes about the Palais des Machines of 1889 in Paris: space, structure and ornament

    Directory of Open Access Journals (Sweden)

    Oscar Linares de la Torre

    2018-04-01

Full Text Available The Palais des Machines of the Paris Universal Exposition of 1889, designed by the architect Charles Louis Ferdinand Dutert (1845-1906) and the engineer Victor Contamin (1840-1893), is undoubtedly an icon of 19th-century architecture: its powerful spatiality, its portentous structure and its straightforward tectonics have rightly received high praise from critics and architects since the second half of the 20th century. However, critical tradition and historiography from the end of the last century have frequently offered a biased interpretation of this work, aimed at underlining certain architectonic values and then presenting them as a direct product of the authors' will. The aim of this article is to explain, as a whole and with maximum transparency, how the conjunction between certain circumstantial issues and the will and ability of both authors made possible the construction of one of the most important works of nineteenth-century architecture. To achieve this, the three most celebrated architectural aspects of the building are analysed: the huge scale of the central space, the particular structural system chosen, and the uneven usage of ornament.

  18. Enterprise architecture management

    DEFF Research Database (Denmark)

    Rahimi, Fatemeh; Gøtze, John; Møller, Charles

    2017-01-01

Despite the growing interest in enterprise architecture management, researchers and practitioners lack a shared understanding of its applications in organizations. Building on findings from a literature review and eight case studies, we develop a taxonomy that categorizes applications of enterprise architecture management based on three classes of enterprise architecture scope. Organizations may adopt enterprise architecture management to help form, plan, and implement IT strategies; help plan and implement business strategies; or to further complement the business strategy-formation process. The findings challenge the traditional IT-centric view of enterprise architecture management application and suggest enterprise architecture management as an approach that could support the consistent design and evolution of an organization as a whole.

  20. Les Machines pour le Big Data : Vers une Informatique Quantique et Cognitive.

    OpenAIRE

    Teboul , Bruno; Amri , Taoufik

    2014-01-01

Full text in French. This article is a prospective analysis of the technological mutations that will affect computing and its machines in the near future, in order to meet the great challenges raised by our all-digital society. We believe these mutations will be both "quantum" and "cognitive". We support our analysis by returning to what still underpins our computers today, namely an architecture more than half a century old, which is responsible for the dashed hopes of...

  1. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  2. Machine learning algorithms for the creation of clinical healthcare enterprise systems

    Science.gov (United States)

    Mandal, Indrajit

    2017-10-01

Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above a 95% confidence interval). The study then extends to experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
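The random-subspace idea behind such a hybrid ensemble can be sketched in a few lines (a generic illustration on synthetic data, not the paper's MCS; the base learner, feature counts, and vote rule are all assumptions):

```python
import random

random.seed(3)

def make_data(n=200, d=6):
    """Synthetic two-class data: the first 3 features shift with the class
    label, the remaining d-3 are pure noise."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [random.gauss(label, 0.5) for _ in range(3)] + \
            [random.gauss(0.0, 1.0) for _ in range(d - 3)]
        data.append((x, label))
    return data

def train_stump(data, feats):
    """Base learner: threshold the mean of the selected features at the
    midpoint between the two class means (a one-step 'training')."""
    proj = lambda x: sum(x[f] for f in feats) / len(feats)
    m0 = [proj(x) for x, y in data if y == 0]
    m1 = [proj(x) for x, y in data if y == 1]
    thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2.0
    return lambda x: int(proj(x) > thr)

def random_subspace_ensemble(data, n_models=15, k=2, d=6):
    """Each base model sees a random k-of-d feature subspace; the ensemble
    predicts by majority vote, the core of a multiple classifier system."""
    models = [train_stump(data, random.sample(range(d), k))
              for _ in range(n_models)]
    return lambda x: int(sum(m(x) for m in models) > n_models / 2)

data = make_data()
clf = random_subspace_ensemble(data)
acc = sum(clf(x) == y for x, y in data) / len(data)
print(round(acc, 2))
```

Training each member on a different feature subspace decorrelates their errors, so the majority vote is more robust to noisy features than any single learner, which is the property the abstract exploits for noisy clinical data.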

  3. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

The computing requirements of large-scale scientific computing have always been ahead of what state-of-the-art hardware could supply in the form of the supercomputers of the day, and for any single-processor system the limit to growth in computing power was recognized some years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  4. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  5. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  6. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    Science.gov (United States)

    Solomon, D.; van Dijk, A.

The "2002 ESA Lunar Architecture Workshop" (June 3-16, ESTEC, Noordwijk, NL, and V2_Lab, Rotterdam, NL) is the first-of-its-kind workshop for exploring the design of extra-terrestrial (infra)structures for human exploration of the Moon and Earth-like planets, introducing 'architecture's current line of research' and adopting architectural criteria. The workshop intends to inspire, engage and challenge 30-40 European masters students from the fields of aerospace engineering, civil engineering, architecture, and art to design, validate and build models of (infra)structures for Lunar exploration. The workshop also aims to open up new physical and conceptual terrain for an architectural agenda within the field of space exploration. A sound introduction to the issues, conditions, resources, technologies, and architectural strategies will initiate the workshop participants into the context of lunar architecture scenarios. In my paper and presentation about the development of the ideology behind this workshop, I will comment on the following questions: * Can the contemporary architectural agenda offer solutions that affect the scope of space exploration? It certainly has had an impression on urbanization and colonization of previously sparsely populated parts of Earth. * Does the current line of research in architecture offer any useful strategies for combining scientific interests, commercial opportunity, and public space? What can be learned from 'state of the art' architecture that blends commercial and public programmes within one location? * Should commercial 'colonisation' projects in space be required to provide public space in a location where all humans present are likely to be there in a commercial context? Is the wave in Koolhaas' new Prada flagship store just a gesture to public space, or does this new concept in architecture and shopping evolve the public space?
* What can we learn about designing (infra-) structures on the Moon or any other

  7. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

Asynchronized Synchronous Machines focuses on the theoretical research on asynchronized synchronous (AS) machines, which are "hybrids" of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; the vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and the schematic diagram of an excitation regulator. The possible applications of AS machines and their calculations in certain cases are also discussed. This publication is beneficial for students and indiv

  8. Machine Shop Lathes.

    Science.gov (United States)

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  9. Sensor Architecture and Task Classification for Agricultural Vehicles and Environments

    Directory of Open Access Journals (Sweden)

    Francisco Rovira-Más

    2010-12-01

Full Text Available The long-held wish to endow agricultural vehicles with an increasing degree of autonomy is becoming a reality thanks to two crucial facts: the broad diffusion of global positioning satellite systems and the inexorable progress of computers and electronics. Agricultural vehicles are currently the only self-propelled ground machines commonly integrating commercial automatic navigation systems. Farm equipment manufacturers and satellite-based navigation system providers, in a joint effort, have pushed this technology to unprecedented heights; yet there are many unresolved issues and an unlimited potential still to uncover. The complexity inherent to intelligent vehicles is rooted in the selection and coordination of the optimum sensors, the computer reasoning techniques to process the acquired data, and the resulting control strategies for automatic actuators. The advantageous design of the network of onboard sensors is necessary for the future deployment of advanced agricultural vehicles. This article analyzes a variety of typical environments and situations encountered in agricultural fields, and proposes a sensor architecture especially adapted to cope with them. The proposed strategy groups sensors into four specific subsystems: global localization, feedback control and vehicle pose, non-visual monitoring, and local perception. The designed architecture responds to vital vehicle tasks classified within three layers devoted to safety, operative information, and automatic actuation. The success of this architecture, implemented and tested in various agricultural vehicles over the last decade, rests on its capacity to integrate redundancy and incorporate new technologies in a practical way.
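The four-subsystem, three-layer organization lends itself to a simple mapping. In the sketch below, the subsystem and layer names follow the abstract, but the concrete sensor examples and the subsystem-to-layer wiring are my own illustrative assumptions:

```python
# Hypothetical encoding of the proposed sensor architecture: four sensor
# subsystems (names from the abstract) feeding three task layers.
SUBSYSTEMS = {
    "global_localization":       ["GNSS receiver"],            # assumed sensors
    "feedback_control_and_pose": ["wheel odometry", "IMU"],
    "non_visual_monitoring":     ["fuel/engine sensors"],
    "local_perception":          ["stereo camera", "lidar"],
}

LAYERS = {  # which subsystems each task layer draws on (assumed wiring)
    "safety":                ["local_perception", "non_visual_monitoring"],
    "operative_information": ["global_localization", "non_visual_monitoring"],
    "automatic_actuation":   ["global_localization",
                              "feedback_control_and_pose",
                              "local_perception"],
}

def sensors_for(layer):
    """List every physical sensor a given task layer depends on."""
    return sorted({s for sub in LAYERS[layer] for s in SUBSYSTEMS[sub]})

print(sensors_for("safety"))
```

Keeping the subsystem-to-layer mapping explicit makes the redundancy the authors emphasize easy to audit: a sensor can serve several layers, and a layer can survive the loss of one subsystem if another covers it.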

  10. Data management and communication networks for Man-Machine Interface System in Korea Advanced Liquid MEtal Reactor : its functionality and design requirements

    International Nuclear Information System (INIS)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon

    1998-01-01

The DAta management and Communication NETworks (DACONET), designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of the DACONET are to provide the real-time data transmission and communication paths between MMIS systems, to provide quality data for the protection, monitoring and control of KALIMER, and to log the static and dynamic behavioral data during KALIMER operation. The DACONET is characterized as a distributed real-time system architecture with high performance. Future directions are also discussed, in which advanced technology will be continually applied to the development of the Man-Machine Interface System and communication networks of the KALIMER MMIS

  11. Data management and communication networks for Man-Machine Interface System in Korea Advanced Liquid MEtal Reactor : its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [KAERI, Taejon (Korea, Republic of)

    1998-05-01

    The DAta management and Communication NETworks (DACONET), designed with an advanced design concept as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS), is described. The DACONET provides real-time data transmission and communication paths between MMIS systems, supplies quality data for the protection, monitoring and control of KALIMER, and logs static and dynamic behavioral data during KALIMER operation. The DACONET is characterized as a distributed real-time system architecture with high performance. Future directions, in which advanced technology will continue to be applied to the development of the Man-Machine Interface System and communication networks of the KALIMER MMIS, are also discussed.

  12. Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE)

    Science.gov (United States)

    2005-04-01

    Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE), by Felix Bachmann and Mark Klein, Software Engineering Institute, Pittsburgh, PA 15213-3890; dates covered 2005. Only a fragment of the abstract is recoverable from the report documentation page; it argues that quality requirements and constraints are the most important inputs to architecture design.

  13. Migration of vectorized iterative solvers to distributed memory architectures

    Energy Technology Data Exchange (ETDEWEB)

    Pommerell, C. [AT&T Bell Labs., Murray Hill, NJ (United States); Ruehl, R. [CSCS-ETH, Manno (Switzerland)

    1994-12-31

    Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved: smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this 'natural' parallelism is not easy to exploit in irregularly structured sparse matrices and with good preconditioners. As a result, high-performance implementations of iterative solvers have attracted a lot of interest in recent years. Most efforts are geared to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication), or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or a few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, parallel computers with physically distributed memory and a better price/performance ratio have been offered by vendors as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment. In particular, they are considering migration from traditional vector supercomputers to DMPPs. Application requirements force one to use flexible and portable libraries. They want to extend the portability of iterative solvers rather than reimplementing everything for each new machine, or even for each new architecture.
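    The kernels named above (the sparse matrix-vector product, the inner-product reductions, and the preconditioner application) can be made concrete with a minimal preconditioned conjugate-gradient sketch. The dense NumPy matrix and Jacobi (diagonal) preconditioner below are illustrative assumptions, not the libraries discussed in the abstract.

```python
import numpy as np

def cg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.

    A is assumed symmetric positive definite. The three kernels that
    dominate on parallel machines are marked in the comments: the
    matrix-vector product, the inner-product reductions (global
    synchronization points), and the preconditioner (local work).
    """
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioner: local work
    p = z.copy()
    rz = r @ z                        # inner product: global reduction
    for _ in range(max_iter):
        Ap = A @ p                    # matvec: the dominating operation
        alpha = rz / (p @ Ap)         # inner product: global reduction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    In a distributed-memory implementation the matvec requires only local data exchange with neighbouring processors, while each inner product is a global reduction; that asymmetry is exactly the locality/synchronization trade-off the abstract refers to.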

  14. Architectural freedom and industrialised architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2012-01-01

    to the building physics problems a new industrialized period has started based on lightweight elements basically made of wooden structures, faced with different suitable materials meant for individual expression for the specific housing area. It is the purpose of this article to widen up the different design...... to this systematic thinking of the building technique we get a diverse and functional architecture. Creating a new and clearer storytelling about new and smart system-based thinking behind the architectural expression....

  15. Preemptive Architecture: Explosive Art and Future Architectures in Cursed Urban Zones

    Directory of Open Access Journals (Sweden)

    Stahl Stenslie

    2017-04-01

    Full Text Available This article describes the art and architectural research project Preemptive Architecture that uses artistic strategies and approaches to create bomb-ready architectural structures that act as instruments for the undoing of violence in war. Increasing environmental usability through destruction represents an inverse strategy that reverses common thinking patterns about warfare, art and architecture. Building structures predestined for a constructive destruction becomes a creative act. One of the main motivations behind this paper is to challenge and expand the material thinking as well as the socio-political conditions related to artistic, architectural and design based practices. Article received: December 12, 2016; Article accepted: January 10, 2017; Published online: April 20, 2017. Original scholarly paper. How to cite this article: Stenslie, Stahl, and Magne Wiggen. "Preemptive Architecture: Explosive Art and Future Architectures in Cursed Urban Zones." AM Journal of Art and Media Studies 12 (2017): 29-39. doi: 10.25038/am.v0i12.165

  16. Modeling the Office of Science ten year facilities plan: The PERI Architecture Tiger Team

    International Nuclear Information System (INIS)

    Supinski, Bronis R de; Gamblin, Todd; Schulz, Martin

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism for targeting significant optimization effort at key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  17. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor.
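    The 'transpose' option can be sketched as follows: solve independent tridiagonal systems along x-lines held in-processor, transpose the array so y-lines become contiguous, solve again, and transpose back. The Thomas-algorithm solver and the constant, equal coefficients in both sweeps are illustrative assumptions, not the CM-5 implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal, b the
    diagonal, c the super-diagonal (a[0] and c[-1] unused)."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_like_step(rhs, a, b, c):
    """One x-sweep then one y-sweep of line solves on a square grid,
    using a transpose so each sweep works on contiguous ('in-processor')
    lines, as in the transpose option described in the abstract."""
    tmp = np.array([thomas(a, b, c, row) for row in rhs])      # x-lines
    out = np.array([thomas(a, b, c, row) for row in tmp.T]).T  # y-lines
    return out
```

    On a real distributed machine the transpose is an all-to-all communication, which is the price paid for keeping every line solve purely local.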

  18. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon). In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required, and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that "big" ML systems can benefit greatly from ML-rooted statistical and algorithmic insights, and that ML researchers should therefore not shy away from such systems design, we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area.
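    The interplay of computation and communication raised by the four questions above can be illustrated with a toy synchronous data-parallel SGD loop: each "worker" computes a gradient on its own shard of the data, and averaging the per-worker gradients plays the role of the inter-machine all-reduce. Everything here (the least-squares objective, the sharding, running the workers in one process) is an illustrative assumption, not the systems described in the article.

```python
import numpy as np

def distributed_sgd(X, y, n_workers=4, lr=0.1, epochs=100):
    """Toy synchronous data-parallel SGD for least squares.

    Each worker holds a fixed shard of the rows of X; the only
    'communication' per round is averaging the per-worker gradients,
    mimicking an all-reduce between machines.
    """
    w = np.zeros(X.shape[1])
    shards = np.array_split(np.arange(len(X)), n_workers)
    for _ in range(epochs):
        grads = []
        for idx in shards:                            # local computation per worker
            Xi, yi = X[idx], y[idx]
            grads.append(2 * Xi.T @ (Xi @ w - yi) / len(idx))
        w -= lr * np.mean(grads, axis=0)              # 'communication': all-reduce average
    return w
```

    The design choice this sketch exposes is the one the authors dissect: the synchronous average is statistically simple but forces every worker to wait at a barrier each round, which is why real systems explore bounded-asynchronous alternatives.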

  19. Methods to Load Balance a GCR Pressure Solver Using a Stencil Framework on Multi- and Many-Core Architectures

    Directory of Open Access Journals (Sweden)

    Milosz Ciznicki

    2015-01-01

    Full Text Available The recent advent of novel multi- and many-core architectures forces application programmers to deal with hardware-specific implementation details and to be familiar with software optimisation techniques to benefit from new high-performance computing machines. Extra care must be taken for communication-intensive algorithms, which may be a bottleneck in the forthcoming era of exascale computing. This paper aims to present a high-level stencil framework implemented for the EULerian or LAGrangian model (EULAG) that efficiently utilises multi- and many-core architectures. Only the efficient usage of both multi-core processors (CPUs) and graphics processing units (GPUs), with a flexible data decomposition method, can lead to the maximum performance that scales the communication-intensive Generalized Conjugate Residual (GCR) elliptic solver with preconditioner.
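    A minimal example of the pattern such stencil frameworks abstract away: a 5-point-stencil Jacobi step computed first on the whole domain, then on row-wise subdomains with one-cell halos, where copying the halo rows stands in for the communication between devices. The decomposition below is an illustrative sketch, not EULAG's GCR solver.

```python
import numpy as np

def jacobi_step(u):
    """One 5-point-stencil Jacobi step on the interior of u;
    boundary cells are left unchanged."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

def jacobi_step_decomposed(u, parts=2):
    """The same step computed on row-wise subdomains with 1-cell halos.

    The halo rows included in each local slab stand in for the halo
    exchange between devices in a real domain decomposition; the
    result is bitwise identical to the undecomposed step.
    """
    n = u.shape[0]
    bounds = np.linspace(0, n, parts + 1).astype(int)
    out = u.copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        lo_h, hi_h = max(lo - 1, 0), min(hi + 1, n)   # halo exchange
        local = jacobi_step(u[lo_h:hi_h])             # local stencil update
        inner = lo - lo_h
        out[lo:hi] = local[inner:inner + (hi - lo)]   # write back owned rows
    return out
```

    The point of a framework is to generate the halo bookkeeping in `jacobi_step_decomposed` automatically, so the programmer writes only the stencil body in `jacobi_step`.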

  20. Machining of Machine Elements Made of Polymer Composite Materials

    Science.gov (United States)

    Baurova, N. I.; Makarov, K. A.

    2017-12-01

    The machining of the machine elements that are made of polymer composite materials (PCMs) or are repaired using them is considered. Turning, milling, and drilling are shown to be most widely used among all methods of cutting PCMs. Cutting conditions for the machining of PCMs are presented. The factors that most strongly affect the roughness parameters and the accuracy of cutting PCMs are considered.

  1. Enterprise architecture patterns practical solutions for recurring IT-architecture problems

    CERN Document Server

    Perroud, Thierry

    2013-01-01

    Every enterprise architect faces similar problems when designing and governing the enterprise architecture of a medium to large enterprise. Design patterns are a well-established concept in software engineering, used to define universally applicable solution schemes. By applying this approach to enterprise architectures, recurring problems in the design and implementation of enterprise architectures can be solved over all layers, from the business layer to the application and data layer down to the technology layer. Inversini and Perroud describe patterns at the level of enterprise architecture

  2. MUF architecture /art London

    DEFF Research Database (Denmark)

    Svenningsen Kajita, Heidi

    2009-01-01

    About MUF architecture, including an interview with Liza Fior and Katherine Clarke, partners in muf architecture/art.

  3. VERNACULAR ARCHITECTURE: AN INTRODUCTORY COURSE TO LEARN ARCHITECTURE IN INDIA

    Directory of Open Access Journals (Sweden)

    Miki Desai

    2010-07-01

    Full Text Available “The object in view of both my predecessors in office and by myself has been rather to bring out the reasoning powers of individual students, so that they may understand the inner meaning of the old forms and their original function and may develop and modernize and gradually produce an architecture, Indian in character, but at the same time as suited to present day India as the old styles were to their own times and environment.” (Claude Batley, 1940; Lang, Desai, and Desai, 1997, p. 143). The article introduces the teaching philosophy, content and method of Basic Design I and II for first-year students of architecture at the Faculty of Architecture, Centre for Environmental Planning and Technology (CEPT) University, Ahmedabad, India. It is framed within the Indian perspective of architectural education from the British colonial times. Commencing with important academic literature and biases of the initial colonial period, it quickly traces architectural education in CEPT, the sixteenth school of post-independent India, set up in 1962, discussing the foundation year teaching imparted. The school was Modernist and avant-garde. The author introduced these two courses against the backdrop of the Universalist Modernist credo of architecture and education. In the courses, the primary philosophy behind learning design emerges from the heuristic method. The aim of the first course is seen as infusing interest in the visual world and developing manual skills and dexterity through the dictum of ‘look-feel-reason out-evaluate’ and ‘observe-record-interpret-synthesize-transform-express’. Due to the lack of architectural orientation in Indian schooling, the second course adopts vernacular architecture as a reasonable tool for a novice to understand the triangular relationship of society, architecture and physical context and its impact on design. The students are analytically exposed to the regional variety of architectures logically stemming from the geo

  4. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the rigor of linear algebra as part of the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, eliminate some resonance zones within the spindle speed range of 0 to 6000 min-1, and improve machining accuracy.

  5. Machinability of nickel based alloys using electrical discharge machining process

    Science.gov (United States)

    Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.

    2018-04-01

    High-temperature materials such as nickel-based alloys and austenitic steels are frequently used for manufacturing critical aero-engine turbine components. Literature on conventional and unconventional machining of steel materials has been abundant over the past three decades. However, machining studies on superalloys remain a challenging task due to their inherent properties, and these materials are difficult to cut with conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) process. The investigation consists of preparing a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce the plasma spark for the diffusion process, and machining time is held constant so that the experimental results for both materials can be compared. The influence of process parameters on the tool wear mechanism and material removal is considered in the proposed experimental design. During machining, the tool is prone to discharging more material due to the production of a high-energy plasma spark and the eddy current effect. The surface morphology of the machined surface was observed with a high-resolution FE-SEM; fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured with a profilometer. It is confirmed that there is no deviation and that the precise roundness of the drilled hole is maintained.

  6. Virtual Machine Images Management in Cloud Environments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Nowadays, the demand for scalability in distributed systems has led to a design philosophy in which virtual resources need to be configured in a flexible way to provide services to a large number of users. The configuration and management of such an architecture is challenging (e.g.: 100,000 compute cores on the private cloud together with thousands of cores on external cloud resources). There is the need to process CPU-intensive work whilst ensuring that the resources are shared fairly between different users of the system, and to guarantee that all nodes are up to date with new images containing the latest software configurations. Different types of automated systems can be used to facilitate the orchestration. CERN’s current system, composed of different technologies such as OpenStack, Packer, Puppet, Rundeck and Docker, will be introduced and explained, together with the process used to create new Virtual Machine images at CERN.

  7. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    DEFF Research Database (Denmark)

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allows the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design, such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  8. The achievements of the Z-machine; Les exploits de la Z-machine

    Energy Technology Data Exchange (ETDEWEB)

    Larousserie, D

    2008-03-15

    The ZR-machine, which represents the latest generation of Z-pinch machines, has recently begun preliminary testing before its full commissioning in Albuquerque (USA). During its tests the machine operated well with electrical currents of 26 million amperes, already twice the operating current of the previous Z-machine. In 2006 the Z-machine reached temperatures of 2 billion kelvin, while 100 million kelvin would be sufficient to ignite thermonuclear fusion. The concept of Z-pinch machines was in fact imagined in the fifties, but the technological breakthrough that allowed this recent success and the rebirth of the Z-machine was the replacement of gas by an array of metal wires through which the electrical current flows, vaporizing them and creating an imploding plasma. It is not well understood why Z-pinch machines generate far more radiation than theoretically expected. (A.C.)

  9. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  10. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  11. An overview of the software architecture of the plasma position, current and density realtime controller of the FTU

    International Nuclear Information System (INIS)

    Boncagni, Luca; Vitelli, Riccardo; Carnevale, Daniele; Galperti, Cristian; Artaserse, Giovanni; Pucci, Daniele

    2014-01-01

    Highlights: • We implement the FTU PPCDC system using the MARTe framework. • We describe how it is logically divided and how it works. • We show experimental examples of its operation. - Abstract: Experimental fusion devices require flexible control systems with a modern architecture that allows the controller to be distributed and modular. These requirements are all fulfilled by MARTe, a multi-platform framework for the development of low-latency hard real-time control systems already used successfully on many European machines, so it was decided to adopt it as the basis of the new FTU Plasma Position Current Density Control (PPCDC) system and the other coupled real-time systems. The main rationale for revamping the FTU control system was to use new technologies and to easily test different control solutions. MARTe has proved effective from both points of view, being platform independent and having a modular architecture that completely separates the control algorithms from the rest of the infrastructure. We report on the new controller deployed at FTU. In particular, after a brief introduction to the machine, we illustrate the structure of the feedback system, together with a detailed analysis, and appropriate experimental examples, of the various GAMs (modules) which make up the controller.

  12. An overview of the software architecture of the plasma position, current and density realtime controller of the FTU

    Energy Technology Data Exchange (ETDEWEB)

    Boncagni, Luca, E-mail: luca.boncagni@enea.it [EURATOM - ENEA Fusion Association, Frascati Research Centre, Division of Fusion Physics, Frascati, Rome (Italy); Vitelli, Riccardo; Carnevale, Daniele [Department of Computer Science, Systems and Production, University of Rome Tor Vergata, Rome (Italy); Galperti, Cristian, E-mail: galperti@ifp.cnr.it [EURATOM - ENEA - CNR Fusion Association, CNR-IFP via R. Cozzi 53, 20125 Milan (Italy); Artaserse, Giovanni [EURATOM - ENEA Fusion Association, Frascati Research Centre, Division of Fusion Physics, Frascati, Rome (Italy); Pucci, Daniele [Dipartimento Antonio Ruberti, Università degli Studi di Roma La Sapienza, Rome (Italy)

    2014-03-15

    Highlights: • We implement the FTU PPCDC system using the MARTe framework. • We describe how it is logically divided and how it works. • We show experimental examples of its operation. - Abstract: Experimental fusion devices require flexible control systems with a modern architecture that allows the controller to be distributed and modular. These requirements are all fulfilled by MARTe, a multi-platform framework for the development of low-latency hard real-time control systems already used successfully on many European machines, so it was decided to adopt it as the basis of the new FTU Plasma Position Current Density Control (PPCDC) system and the other coupled real-time systems. The main rationale for revamping the FTU control system was to use new technologies and to easily test different control solutions. MARTe has proved effective from both points of view, being platform independent and having a modular architecture that completely separates the control algorithms from the rest of the infrastructure. We report on the new controller deployed at FTU. In particular, after a brief introduction to the machine, we illustrate the structure of the feedback system, together with a detailed analysis, and appropriate experimental examples, of the various GAMs (modules) which make up the controller.

  13. Preliminary Test of Upgraded Conventional Milling Machine into PC Based CNC Milling Machine

    International Nuclear Information System (INIS)

    Abdul Hafid

    2008-01-01

    A CNC (Computerized Numerical Control) milling machine poses a challenge for innovation in the field of machining. To obtain machining quality equivalent to that of a CNC milling machine, a conventional milling machine was upgraded into a PC-based CNC milling machine. Both mechanical and instrumentation changes were made: a servo drive and proximity sensors were installed to replace manual control. A computer program was constructed to issue instructions to the milling machine; its structure consists of a GUI model and a ladder diagram, built on a programming system called RTX software. The results of the upgrade are the computer program and the CNC instruction jobs. This is a first step, and the work will be continued. With the upgraded milling machine, the user can work more optimally and more safely with respect to accident risk. (author)

  14. Parallel processing algorithms for hydrocodes on a computer with MIMD architecture (DENELCOR's HEP)

    International Nuclear Information System (INIS)

    Hicks, D.L.

    1983-11-01

    In real time simulation/prediction of complex systems such as water-cooled nuclear reactors, if reactor operators had fast simulator/predictors to check the consequences of their operations before implementing them, events such as the incident at Three Mile Island might be avoided. However, existing simulator/predictors such as RELAP run slower than real time on serial computers. It appears that the only way to overcome the barrier to higher computing rates is to use computers with architectures that allow concurrent computations or parallel processing. The computer architecture with the greatest degree of parallelism is labeled Multiple Instruction Stream, Multiple Data Stream (MIMD). An example of a machine of this type is the HEP computer by DENELCOR. It appears that hydrocodes are very well suited for parallelization on the HEP. It is a straightforward exercise to parallelize explicit, one-dimensional Lagrangean hydrocodes in a zone-by-zone parallelization. Similarly, implicit schemes can be parallelized in a zone-by-zone fashion via an a priori, symbolic inversion of the tridiagonal matrix that arises in an implicit scheme. These techniques are extended to Eulerian hydrocodes by using Harlow's rezone technique. The extension from single-phase Eulerian to two-phase Eulerian is straightforward. This step-by-step extension leads to hydrocodes with zone-by-zone parallelization that are capable of two-phase flow simulation. Extensions to two and three spatial dimensions can be achieved by operator splitting. It appears that a zone-by-zone parallelization is the best way to utilize the capabilities of an MIMD machine. 40 references
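    The zone-by-zone parallelization described above can be sketched for one explicit step of a 1D Lagrangian scheme on a staggered grid: every zone (and node) update depends only on its immediate neighbours, so all zones can be updated concurrently; here whole-array NumPy operations stand in for the HEP's parallel instruction streams. The staggered layout and adiabatic ideal-gas closure are illustrative assumptions, not the hydrocodes of the paper.

```python
import numpy as np

def lagrangian_step(x, u, rho, p, dt, gamma=1.4):
    """One explicit 1D Lagrangian hydro step on a staggered grid.

    x and u live on nodes; rho and p live on zones. Each vectorized
    line below is a zone-by-zone (or node-by-node) update with only
    nearest-neighbour dependencies, which is what makes the scheme
    embarrassingly parallel on a MIMD machine. Boundary nodes are
    held fixed.
    """
    m = rho * np.diff(x)                     # zone masses (Lagrangian invariant)
    a = np.zeros_like(u)
    nodal_mass = 0.5 * (m[:-1] + m[1:])      # mass attributed to interior nodes
    a[1:-1] = -(p[1:] - p[:-1]) / nodal_mass # acceleration from adjacent zones
    u = u + dt * a                           # node-by-node velocity update
    x = x + dt * u                           # node-by-node position update
    rho_new = m / np.diff(x)                 # zone-by-zone density update
    p_new = p * (rho_new / rho) ** gamma     # adiabatic closure, zone-by-zone
    return x, u, rho_new, p_new
```

    Because zone masses are carried rather than recomputed, total mass is conserved to round-off, and the absence of any global coupling in a step is exactly the property the zone-by-zone parallelization exploits.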

  15. Architecture and Film

    OpenAIRE

    Mohammad Javaheri, Saharnaz

    2016-01-01

    Film does not exist without architecture. In every movie that has ever been made throughout history, the cinematic image of architecture is embedded within the picture. Throughout my studies and research, I began to see that there is no director who can consciously or unconsciously deny the use of architectural elements in his or her movies. Architecture offers a strong profile to distinguish characters and story. In the early days, films were shot in streets surrounde...

  16. Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.

    2018-02-01

    Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene from MR imaging is of high importance, since ATRX status is a predictor of response and prognosis in brain tumors. In this study, we compare a deep neural network approach based on a residual deep neural network (ResNet) architecture with one based on a classical machine learning approach, and evaluate their ability to predict ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture, pre-trained on ImageNet data, was the best-performing model, achieving an f1 score of 0.91 on a test set of 35 cases for the classification of each slice as no tumor, ATRX mutated, or not mutated. The SVM classifier achieved 0.63 for differentiating the Flair signal abnormality regions of the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and acts as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
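
    The performance figures above are quoted as f1 scores; as a reminder of that metric (this is not the authors' code), a per-class f1 computation in plain Python, with invented labels:

```python
# Per-class f1: harmonic mean of precision and recall for one label.
# Labels ("mut", "wt") are illustrative, not the study's class names.

def f1_score(y_true, y_pred, label):
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

    A macro-averaged multiclass score, as typically reported for slice classification, is simply the mean of the per-class values.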

  17. Machine learning and parallelism in the reconstruction of LHCb and its upgrade

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00260810

    2016-01-01

    The LHCb detector at the LHC is a general-purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in CPU architectures in recent years necessitated the use of vectorization techniques to achieve the desired speed, and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to get rid of fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. Running a fully software-based trigger, a l...

  18. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  19. Architectural geometry

    KAUST Repository

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, Amir; Wallner, Johannes

    2014-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  20. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installing the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating an image reconstruction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that inspects micro machining errors on-machine and automatically generates an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
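
    A hedged sketch of the moving-matching-window idea described above: for each theoretical cutting point, search a small window of the measured contour for the closest actual point, record the offset as the machining error, and shift the NC point the opposite way. The function names, the window size, and the Euclidean similarity measure are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative moving-window matching between a theoretical contour and
# a measured (actual) contour, both given as lists of (x, y) points.

def match_errors(theoretical, actual, window=2):
    errors = []
    for i, (tx, ty) in enumerate(theoretical):
        lo, hi = max(0, i - window), min(len(actual), i + window + 1)
        # most similar actual point inside the moving window
        ax, ay = min(actual[lo:hi],
                     key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)
        errors.append((ax - tx, ay - ty))
    return errors

def correct_nc_points(theoretical, errors):
    # shift each programmed point opposite to its measured error
    return [(tx - ex, ty - ey)
            for (tx, ty), (ex, ey) in zip(theoretical, errors)]
```

    A contour measured 0.1 units to the right of the program would yield errors of (+0.1, 0) and corrected NC points shifted 0.1 units to the left.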

  1. A resource-oriented architecture for a Geospatial Web

    Science.gov (United States)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    In this presentation we discuss some architectural issues in the design of an architecture for a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information, that is, information characterized by a spatial/temporal reference. To this aim, an investigation of the nature of the Web and of the validity of its paradigm for geospatial resources is required. The Web was born in the early 90's to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g. URI, HTTP, HTML, etc.); however, in the last two decades several other technologies and specifications have been introduced in order to extend its capabilities. Most of them (e.g. the SOAP family) actually aimed to transform the Web into a generic Distributed Computing Infrastructure. While these efforts were definitely successful in enabling the adoption of service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g. for e-Government and e-Business applications), they do not fit the original concept of the Web. In the year 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility and low entry barrier of the original Web. On the contrary ...

  2. Elements of Architecture

    DEFF Research Database (Denmark)

    Elements of Architecture explores new ways of engaging architecture in archaeology. It conceives of architecture both as the physical evidence of past societies and as existing beyond the physical environment, considering how people in the past have not just dwelled in buildings but have existed...

  3. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2014-07-01

    Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation, as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture; it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  4. National Machine Guarding Program: Part 1. Machine safeguarding practices in small metal fabrication businesses.

    Science.gov (United States)

    Parker, David L; Yamin, Samuel C; Brosseau, Lisa M; Xi, Min; Gordon, Robert; Most, Ivan G; Stanley, Rodney

    2015-11-01

    Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc.

  5. Non-conventional electrical machines

    CERN Document Server

    Rezzoug, Abderrezak

    2013-01-01

    The developments of electrical machines are due to the convergence of material progress, improved calculation tools, and new feeding sources. Among the many recent machines, the authors have chosen, in this first book, to relate the progress in slow-speed machines, high-speed machines, and superconducting machines. The first part of the book is dedicated to materials and an overview of magnetism, mechanics, and heat transfer.

  6. Parallel algorithms on the ASTRA SIMD machine

    International Nuclear Information System (INIS)

    Odor, G.; Rohrbach, F.; Vesztergombi, G.; Varga, G.; Tatrai, F.

    1996-01-01

    In view of the tremendous computing power jump of modern RISC processors, the interest in parallel computing seems to be thinning out. Why use a complicated system of parallel processors if the problem can be solved by a single powerful micro-chip? It is a general law, however, that exponential growth will always end in some kind of saturation, and then parallelism will again become a hot topic. We try to prepare ourselves for this eventuality. The MPPC project started in 1990 in the heyday of parallelism and produced four ASTRA machines (presented at CHEP'92) with 4k processors (expandable to 16k) based on yesterday's chip technology (chip presented at CHEP'91). These machines now provide excellent test-beds for algorithmic developments in a complete, real environment. We are developing, for example, fast pattern recognition algorithms which could be used in high-energy physics experiments at the LHC (planned to be operational after 2004 at CERN) for triggering and data reduction. The basic feature of our ASP (Associative String Processor) approach is to use extremely simple (thus very cheap) processor elements, but in huge quantities (up to millions of processors) connected together by a very simple string-like communication chain. In this paper we present powerful algorithms based on this architecture, indicating the performance perspectives if the hardware quality reaches present or even future technology levels. (author)
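
    The ASP approach can be caricatured in a few lines. This toy model is an assumption for illustration, not the MPPC hardware: many trivially simple processing elements each hold one datum and execute a broadcast instruction in lockstep, here flagging every element that matches a pattern.

```python
# Toy associative string processor: one datum per processing element,
# a controller broadcasts the same instruction to all elements at once.

class StringProcessor:
    def __init__(self, data):
        self.data = list(data)            # one datum per element
        self.flags = [False] * len(data)  # per-element activity flag

    def broadcast_match(self, predicate):
        # SIMD-style: every element applies the same test in lockstep
        self.flags = [predicate(d) for d in self.data]
        return self.flags

    def flagged(self):
        # read out the data held by the responding elements
        return [d for d, f in zip(self.data, self.flags) if f]
```

    A pattern-recognition trigger would broadcast a hit-pattern test and read back only the responding elements, regardless of how many processors sit on the string.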

  7. PICNIC Architecture.

    Science.gov (United States)

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is founded into the main stream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source.

  8. Concept of a computer network architecture for complete automation of nuclear power plants

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ray, A.

    1990-01-01

    The state of the art in automation of nuclear power plants has been largely limited to computerized data acquisition, monitoring, display, and recording of process signals. Complete automation of nuclear power plants, which would include plant operations, control, and management, fault diagnosis, and system reconfiguration with efficient and reliable man/machine interactions, has been projected as a realistic goal. This paper presents the concept of a computer network architecture that would use a high-speed optical data highway to integrate diverse, interacting, and spatially distributed functions that are essential for a fully automated nuclear power plant

  9. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    Science.gov (United States)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models, but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch, leveraging resist shrinkage patterns. These patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As a wafer goes through various processes, its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely actionable decisions such as rework, scrap, or the feedforward/feedback of predicted information (or information derived from prediction) to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
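
    As a hedged stand-in for the (unspecified) predictive models mentioned above, a one-variable least-squares fit shows the shape of the idea: learn a mapping from an in-line measurement to a downstream electrical readout, then predict the readout days or weeks before it can be measured. Variable names and numbers are invented for illustration.

```python
# Fit y = slope * x + intercept by least squares, e.g. x = in-line
# critical dimension, y = later electrical readout (illustrative).

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx   # (slope, intercept)

def predict(model, x):
    # predicted electrical readout for a new in-line measurement
    slope, intercept = model
    return slope * x + intercept
```

    In practice the models described in the record are multivariate and nonlinear, but the workflow (fit on historical pairs, predict early in the pipeline, act on the prediction) is the same.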

  10. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper tries to evaluate the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include the support vector machine, deep learning neural networks, etc. The inner ring defect and misalignment anomaly sound data measured by a rotating machine mockup test facility are used to verify the above various kinds of algorithms. Although we could not find a remarkable difference in anomaly discrimination performance, some methods give us very interesting eigen patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  11. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    International Nuclear Information System (INIS)

    Kanemoto, Shigeru; Watanabe, Masaya; Yusa, Noritaka

    2014-01-01

    The present paper tries to evaluate the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include the support vector machine, deep learning neural networks, etc. The inner ring defect and misalignment anomaly sound data measured by a rotating machine mockup test facility are used to verify the above various kinds of algorithms. Although we could not find a remarkable difference in anomaly discrimination performance, some methods give us very interesting eigen patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  12. Architectural Theatricality

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen

    environments and a knowledge gap therefore exists in present hospital designs. Consequently, the purpose of this thesis has been to investigate if any research-based knowledge exist supporting the hypothesis that the interior architectural qualities of eating environments influence patient food intake, health...... and well-being, as well as outline a set of basic design principles ‘predicting’ the future interior architectural qualities of patient eating environments. Methodologically the thesis is based on an explorative study employing an abductive approach and hermeneutic-interpretative strategy utilizing tactics...... and food intake, as well as a series of references exist linking the interior architectural qualities of healthcare environments with the health and wellbeing of patients. On the basis of these findings, the thesis presents the concept of Architectural Theatricality as well as a set of design principles...

  13. Architecture of Institution & Home. Architecture as Cultural Medium

    NARCIS (Netherlands)

    Robinson, J.W.

    2004-01-01

    This dissertation addresses how architecture functions as a cultural medium. It does so by by investigating how the architecture of institution and home each construct and support different cultural practices. By studying the design of ordinary settings in terms of how qualitative differences in

  14. On Detailing in Contemporary Architecture

    DEFF Research Database (Denmark)

    Kristensen, Claus; Kirkegaard, Poul Henning

    2010-01-01

    Details in architecture have a significant influence on how architecture is experienced. One can touch the materials and analyse the detailing - thus details give valuable information about the architectural scheme as a whole. The absence of perceptual stimulation like details and materiality...... / tactility can blur the meaning of the architecture and turn it into an empty statement. The present paper will outline detailing in contemporary architecture and discuss the issue with respect to architectural quality. Architectural cases considered sublime pieces of architecture will be presented...

  15. Machine protection for FLASH and the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Froehlich, Lars

    2009-05-15

    The Free-Electron Laser in Hamburg (FLASH) and the future European X-Ray Free-Electron Laser (XFEL) are sources of brilliant extreme-ultraviolet and X-ray radiation pulses. Both facilities are based on superconducting linear accelerators (linacs) that can produce and transport electron beams of high average power. With up to 90 kW or up to 600 kW of power, respectively, these beams hold a serious potential to damage accelerator components. This thesis discusses several passive and active machine protection measures needed to ensure safe operation. At FLASH, dark current from the rf gun electron source has activated several accelerator components to unacceptable radiation levels. Its transport through the linac is investigated with detailed tracking simulations using a parallelized and enhanced version of the tracking code Astra; possible remedies are evaluated. Beam losses can lead to the demagnetization of permanent magnet insertion devices. A number of beam loss scenarios typical for FLASH are investigated with shower simulations. A shielding setup is designed and its efficiency is evaluated. For the design parameters of FLASH, it is concluded that the average relative beam loss in the undulators must be controlled to a level of about 10^-8. FLASH is equipped with an active machine protection system (MPS) comprising more than 80 photomultiplier-based beam loss monitors and several subsystems. The maximum response time to beam losses is less than 4 μs. Setup procedures and calibration algorithms for MPS subsystems and components are introduced and operational problems are addressed. Finally, an architecture for a fully programmable machine protection system for the XFEL is presented. Several options for the topology of this system are reviewed, with the result that an availability goal of at least 0.999 for the MPS is achievable with moderate hardware requirements. (orig.)
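
    The availability goal of 0.999 quoted for the MPS can be related to redundancy with a back-of-the-envelope model (an illustration, not the thesis' analysis): a system of n independent parallel channels is up whenever at least one channel is up.

```python
# Availability of n independent redundant channels in parallel:
# the system fails only if all n channels are down simultaneously.

def parallel_availability(channel_availability, n):
    return 1.0 - (1.0 - channel_availability) ** n
```

    Under this simplified model, two channels of 0.97 availability each already yield 0.9991, which hints at why an availability target of 0.999 can be reachable with moderate hardware.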

  16. Machine protection for FLASH and the European XFEL

    International Nuclear Information System (INIS)

    Froehlich, Lars

    2009-05-01

    The Free-Electron Laser in Hamburg (FLASH) and the future European X-Ray Free-Electron Laser (XFEL) are sources of brilliant extreme-ultraviolet and X-ray radiation pulses. Both facilities are based on superconducting linear accelerators (linacs) that can produce and transport electron beams of high average power. With up to 90 kW or up to 600 kW of power, respectively, these beams hold a serious potential to damage accelerator components. This thesis discusses several passive and active machine protection measures needed to ensure safe operation. At FLASH, dark current from the rf gun electron source has activated several accelerator components to unacceptable radiation levels. Its transport through the linac is investigated with detailed tracking simulations using a parallelized and enhanced version of the tracking code Astra; possible remedies are evaluated. Beam losses can lead to the demagnetization of permanent magnet insertion devices. A number of beam loss scenarios typical for FLASH are investigated with shower simulations. A shielding setup is designed and its efficiency is evaluated. For the design parameters of FLASH, it is concluded that the average relative beam loss in the undulators must be controlled to a level of about 10^-8. FLASH is equipped with an active machine protection system (MPS) comprising more than 80 photomultiplier-based beam loss monitors and several subsystems. The maximum response time to beam losses is less than 4 μs. Setup procedures and calibration algorithms for MPS subsystems and components are introduced and operational problems are addressed. Finally, an architecture for a fully programmable machine protection system for the XFEL is presented. Several options for the topology of this system are reviewed, with the result that an availability goal of at least 0.999 for the MPS is achievable with moderate hardware requirements. (orig.)

  17. Electrical machines & drives

    CERN Document Server

    Hammond, P

    1985-01-01

    Containing approximately 200 problems (100 worked), the text covers a wide range of topics concerning electrical machines, placing particular emphasis upon electrical-machine drive applications. The theory is concisely reviewed and focuses on features common to all machine types. The problems are arranged in order of increasing levels of complexity and discussions of the solutions are included where appropriate to illustrate the engineering implications. This second edition includes an important new chapter on mathematical and computer simulation of machine systems and revised discussions o

  18. DNA-based machines.

    Science.gov (United States)

    Wang, Fuan; Willner, Bilha; Willner, Itamar

    2014-01-01

    The base sequence in nucleic acids encodes substantial structural and functional information into the biopolymer. This encoded information provides the basis for the tailoring and assembly of DNA machines. A DNA machine is defined as a molecular device that exhibits the following fundamental features. (1) It performs a fuel-driven mechanical process that mimics macroscopic machines. (2) The mechanical process requires an energy input, "fuel." (3) The mechanical operation is accompanied by an energy consumption process that leads to "waste products." (4) The cyclic operation of the DNA devices, involves the use of "fuel" and "anti-fuel" ingredients. A variety of DNA-based machines are described, including the construction of "tweezers," "walkers," "robots," "cranes," "transporters," "springs," "gears," and interlocked cyclic DNA structures acting as reconfigurable catenanes, rotaxanes, and rotors. Different "fuels", such as nucleic acid strands, pH (H⁺/OH⁻), metal ions, and light, are used to trigger the mechanical functions of the DNA devices. The operation of the devices in solution and on surfaces is described, and a variety of optical, electrical, and photoelectrochemical methods to follow the operations of the DNA machines are presented. We further address the possible applications of DNA machines and the future perspectives of molecular DNA devices. These include the application of DNA machines as functional structures for the construction of logic gates and computing, for the programmed organization of metallic nanoparticle structures and the control of plasmonic properties, and for controlling chemical transformations by DNA machines. We further discuss the future applications of DNA machines for intracellular sensing, controlling intracellular metabolic pathways, and the use of the functional nanostructures for drug delivery and medical applications.

  19. Digitally-Driven Architecture

    Directory of Open Access Journals (Sweden)

    Henriette Bier

    2010-06-01

    Full Text Available The shift from mechanical to digital forces architects to reposition themselves: Architects generate digital information, which can be used not only in designing and fabricating building components but also in embedding behaviours into buildings. This implies that, similar to the way that industrial design and fabrication with its concepts of standardisation and serial production influenced modernist architecture, digital design and fabrication influences contemporary architecture. While standardisation focused on processes of rationalisation of form, mass-customisation, as a new paradigm that replaces mass-production, addresses non-standard, complex, and flexible designs. Furthermore, knowledge about the designed object can be encoded in digital data pertaining not just to the geometry of a design but also to its physical or other behaviours within an environment. Digitally-driven architecture implies, therefore, not only digitally-designed and fabricated architecture, it also implies architecture – built form – that can be controlled, actuated, and animated by digital means. In this context, this sixth Footprint issue examines the influence of digital means as pragmatic and conceptual instruments for actuating architecture. The focus is not so much on computer-based systems for the development of architectural designs, but on architecture incorporating digital control, sensing, actuating, or other mechanisms that enable buildings to interact with their users and surroundings in real time in the real world through physical or sensory change and variation.

  20. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  1. Architecture and Stages

    DEFF Research Database (Denmark)

    Kiib, Hans

    2009-01-01

    as "experiencescape" - a space between tourism, culture, learning and economy. Strategies related to these challenges involve new architectural concepts and art as ‘engines' for a change. New expressive architecture and old industrial buildings are often combined into hybrid narratives, linking the past...... with the future. But this is not enough. The agenda is to develop architectural spaces, where social interaction and learning are enhanced by art and fun. How can we develop new architectural designs in our inner cities and waterfronts where eventscapes, learning labs and temporal use are merged with everyday...

  2. Architecture in Its Own Shadow

    Directory of Open Access Journals (Sweden)

    Alexander Rappaport

    2016-11-01

    Full Text Available Those who consider themselves architects reject claims that the subject of architectural culture, the profession, and the subject of architectural theory are being destroyed. At the same time, a deep crisis of both theory and practice is obvious. When architectural theorists of the 20th and early 21st centuries turned to subjects external to architecture (sociology, psychology, semiotics, ecology, post-structuralist criticism, etc.), the results, instead of enriching and renewing architectural theory, were just the opposite. A brand new and independent paradigm of architecture is needed. It should contain three parts, each specific in its logical-subject nature: an ontology of architecture, a methodology of architectural thought, and an axiology of architectural thought.

  3. Induction machine handbook

    CERN Document Server

    Boldea, Ion

    2002-01-01

    Often called the workhorse of industry, the advent of power electronics and advances in digital control are transforming the induction motor into the racehorse of industrial motion control. Now, the classic texts on induction machines are nearly three decades old, while more recent books on electric motors lack the necessary depth and detail on induction machines.The Induction Machine Handbook fills industry's long-standing need for a comprehensive treatise embracing the many intricate facets of induction machine analysis and design. Moving gradually from simple to complex and from standard to

  4. Chaotic Boltzmann machines

    Science.gov (United States)

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425
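As a rough illustration of the derandomised-Gibbs-sampling idea, the toy below implements pseudo-billiard units whose internal states drift inside [0, 1] and whose binary states flip at the walls, with the drift speed modulated by the local field. The velocity rule here is an assumption chosen so that units linger in low-energy states; the exact dynamics of Suzuki et al. differ in detail, so treat this as a sketch of the mechanism, not their model.

```python
import math

def simulate_cbm(w, b, steps=20000, dt=0.01):
    """Pseudo-billiard sketch of a chaotic Boltzmann machine.
    Each unit i has an internal state x[i] drifting inside [0, 1]; its
    binary state s[i] flips whenever x[i] hits a wall. The drift speed
    is modulated by the local field z, so units linger in low-energy
    states -- a deterministic stand-in for Gibbs sampling."""
    n = len(b)
    x = [0.1 * (i + 1) / n for i in range(n)]  # spread the initial positions
    s = [0] * n
    visits = [0] * n  # steps spent with s[i] == 1
    for _ in range(steps):
        for i in range(n):
            z = sum(w[i][j] * s[j] for j in range(n)) + b[i]
            # assumed velocity rule: sigmoid-modulated drift toward the wall
            v = (1 - 2 * s[i]) / (1 + math.exp(-(1 - 2 * s[i]) * z))
            x[i] += v * dt
            if x[i] >= 1.0:    # upper wall: switch the unit on
                x[i], s[i] = 1.0, 1
            elif x[i] <= 0.0:  # lower wall: switch the unit off
                x[i], s[i] = 0.0, 0
            visits[i] += s[i]
    return s, x, visits
```

With two uncoupled units biased in opposite directions, the positively biased unit spends most of the run switched on, as a stochastic Gibbs sampler would arrange, yet no random number is ever drawn.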

  5. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives).   General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios. I

  6. JACoW ADAPOS: An architecture for publishing ALICE DCS conditions data

    CERN Document Server

    Lång, John; Bond, Peter; Chochula, Peter; Kurepin, Alexander; Lechman, Mateusz; Pinazza, Ombretta

    2018-01-01

    ALICE Data Point Service (ADAPOS) is a software architecture being developed for the RUN3 period of LHC, as a part of the effort to transmit conditions data from ALICE Detector Control System (DCS) to Event Processing Network (EPN), for distributed processing. The key processes of ADAPOS, Engine and Terminal, run on separate machines, facing different networks. Devices connected to DCS publish their state as DIM services. Engine gets updates to the services, and converts them into a binary stream. Terminal receives it over 0MQ, and maintains an image of the DCS state. It sends copies of the image, at regular intervals, over another 0MQ connection, to a readout process of ALICE Data Acquisition.
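The Engine-to-Terminal flow can be mimicked with a toy binary format. The record layout below (uint32 service id, float64 timestamp, float64 value) and the function names are invented for illustration; ADAPOS's actual wire format and 0MQ transport are not reproduced here.

```python
import struct

# Hypothetical record layout (not the real ADAPOS wire format):
# service id (uint32), timestamp (float64), value (float64), little-endian.
RECORD = struct.Struct("<Idd")

def engine_encode(updates):
    """Engine side: flatten (service_id, timestamp, value) updates
    into one contiguous binary stream."""
    return b"".join(RECORD.pack(sid, ts, val) for sid, ts, val in updates)

def terminal_apply(image, stream):
    """Terminal side: replay a stream onto the state image, keeping
    only the newest value per service."""
    for offset in range(0, len(stream), RECORD.size):
        sid, ts, val = RECORD.unpack_from(stream, offset)
        if sid not in image or image[sid][0] <= ts:
            image[sid] = (ts, val)
    return image
```

The dictionary plays the role of the Terminal's "image of the DCS state": however many updates stream in, downstream consumers only ever see the latest value per device.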

  7. Your Sewing Machine.

    Science.gov (United States)

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  8. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  9. Architectural technology

    DEFF Research Database (Denmark)

    2005-01-01

    The booklet offers an overall introduction to the Institute of Architectural Technology and its projects and activities, and an invitation to the reader to contact the institute or the individual researcher for further information. The research, which takes place at the Institute of Architectural...... Technology at the Royal Danish Academy of Fine Arts, School of Architecture, reflects a spread between strategic, goal-oriented pilot projects, commissioned by a ministry, a fund or a private company, and on the other hand projects which originate from strong personal interests and enthusiasm of individual...

  10. Humanizing Architecture

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2015-01-01

    The article proposes the urban digital gallery as an opportunity to explore the relationship between ‘human’ and ‘technology,’ through the programming of media architecture. It takes a curatorial perspective when proposing an ontological shift from considering media facades as visual spectacles...... agency and a sense of being by way of dematerializing architecture. This is achieved by way of programming the symbolic to provide new emotional realizations and situations of enlightenment in the public audience. This reflects a greater potential to humanize the digital in media architecture....

  11. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  12. Investigation of the Machining Stability of a Milling Machine with Hybrid Guideway Systems

    Directory of Open Access Journals (Sweden)

    Jui-Pin Hung

    2016-03-01

    Full Text Available This study investigated the machining stability of a horizontal milling machine with hybrid guideway systems by the finite element method. To this purpose, we first created a finite element model of the milling machine, introducing contact stiffness defined at the sliding and rolling interfaces, respectively. A model of the motorized built-in spindle was also created and implemented in the whole-machine model. Results of the finite element simulations reveal that linear guides with different preloads greatly affect the dynamic responses and machining stability of the horizontal milling machine. The critical cutting depth predicted at the vibration mode associated with the machine tool structure is about 10 mm and 25 mm in the X and Y directions, respectively, while the cutting depth predicted at the vibration mode associated with the spindle structure is about 6.0 mm. Machining stability can also be increased when the preload of the linear roller guides of the feeding mechanism is raised from a lower to a higher amount.

  13. Introduction to AC machine design

    CERN Document Server

    Lipo, Thomas A

    2018-01-01

    AC electrical machine design is a key skill set for developing competitive electric motors and generators for applications in industry, aerospace, and defense. This book presents a thorough treatment of AC machine design, starting from basic electromagnetic principles and continuing through the various design aspects of an induction machine. Introduction to AC Machine Design includes one chapter each on the design of permanent magnet machines, synchronous machines, and thermal design. It also offers a basic treatment of the use of finite elements to compute the magnetic field within a machine without interfering with the initial comprehension of the core subject matter. Based on the author's notes, as well as after years of classroom instruction, Introduction to AC Machine Design: * Brings to light more advanced principles of machine design--not just the basic principles of AC and DC machine behavior * Introduces electrical machine design to neophytes while also being a resource for experienced designers * ...

  14. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  15. Are there intelligent Turing machines?

    OpenAIRE

    Bátfai, Norbert

    2015-01-01

    This paper introduces a new computing model based on cooperation among Turing machines, called orchestrated machines. Like universal Turing machines, orchestrated machines are also designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of some kind of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...
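The building block being orchestrated is the ordinary deterministic Turing machine. A minimal simulator looks like the sketch below; the encoding and names are our own, and the orchestration layer itself, which is the paper's contribution, is omitted.

```python
def run_tm(transitions, tape, state="q0", accept="halt", max_steps=1000):
    """Minimal deterministic Turing machine.
    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1} and "_" as the blank symbol.
    Returns (final_state, tape_dict), or None on a missing rule / timeout."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return state, cells
        key = (state, cells.get(head, "_"))
        if key not in transitions:
            return None  # machine is stuck
        state, cells[head], move = transitions[key]
        head += move
    return None

# example: a one-state machine that complements a bit string
flip = {("q0", "0"): ("q0", "1", 1),
        ("q0", "1"): ("q0", "0", 1),
        ("q0", "_"): ("halt", "_", 1)}
```

An orchestrated machine, in the paper's sense, would run several such simulators and intervene in their transition lookups to induce collective behavior.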

  16. Architecture Sustainability

    NARCIS (Netherlands)

    Avgeriou, Paris; Stal, Michael; Hilliard, Rich

    2013-01-01

    Software architecture is the foundation of software system development, encompassing a system's architects' and stakeholders' strategic decisions. A special issue of IEEE Software is intended to raise awareness of architecture sustainability issues and increase interest and work in the area. The

  17. Coldness production and heat revalorization: particular machines; Production de froid et revalorisation de la chaleur: machines particulieres

    Energy Technology Data Exchange (ETDEWEB)

    Feidt, M. [Universite Henri Poincare - Nancy-1, 54 - Nancy (France)

    2003-10-01

    The machines presented in this article are not the common reverse-cycle machines. They use systems based on different physical principles, which has consequences for the analysis of their cycles: 1 - permanent-gas machines (thermal separators, pulse tubes, thermoacoustic machines); 2 - phase-change machines (mechanical vapor-compression machines, absorption machines, ejection machines, adsorption machines); 3 - thermoelectric machines (thermoelectric effects, thermodynamic model of a thermoelectric machine). (J.S.)

  18. National machine guarding program: Part 1. Machine safeguarding practices in small metal fabrication businesses

    Science.gov (United States)

    Yamin, Samuel C.; Brosseau, Lisa M.; Xi, Min; Gordon, Robert; Most, Ivan G.; Stanley, Rodney

    2015-01-01

    Background Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. Methods The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Results Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. Conclusions The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. Am. J. Ind. Med. 58:1174–1183, 2015. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc. PMID:26332060

  19. 'Machinic Trajectories': Appropriated Devices as Post-Digital Drawing Machines

    Directory of Open Access Journals (Sweden)

    Andres Wanner

    2014-12-01

    Full Text Available This article presents a series of works called Machinic Trajectories, consisting of domestic devices appropriated as mechanical drawing machines. These are contextualized within the post-digital discourse, which integrates messy analog conditions into the digital realm. The role of eliciting and examining glitches for investigating a technology is pointed out. Glitches are defined as short-lived, unpremeditated aesthetic results of a failure; they are mostly known as digital phenomena, but I argue that the concept is equally applicable to the output of mechanical machines. Three drawing machines will be presented: The Opener, The Mixer and The Ventilator. In analyzing their drawings, emergent patterns consisting of unpremeditated visual artifacts will be identified and connected to irregularities of the specific technologies. Several other artists who work with mechanical and robotic drawing machines are introduced, to situate the presented works and reflections in a larger context of practice and to investigate how glitch concepts are applicable to such mechanical systems. 

  20. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  1. Towards a semantic web layered architecture

    CSIR Research Space (South Africa)

    Gerber, AJ

    2007-02-01

    Full Text Available as an architectural pattern or architectural style [6, 43]. In this section we give a brief description of the concepts software architecture and layered architecture. In addition we provide a summary of a list of criteria for layered architectures identified... ...els caused some architectural recurrences to evolve. These are described as architectural patterns [6] or architectural styles [43]. Examples of the best known architectural patterns include, but are not limited to, the client/server architectural...

  2. Information Integration Architecture Development

    OpenAIRE

    Faulkner, Stéphane; Kolp, Manuel; Nguyen, Duy Thai; Coyette, Adrien; Do, Thanh Tung; 16th International Conference on Software Engineering and Knowledge Engineering

    2004-01-01

    Multi-Agent Systems (MAS) architectures are gaining popularity for building open, distributed, and evolving software required by systems such as information integration applications. Unfortunately, despite considerable work in software architecture during the last decade, few research efforts have aimed at truly defining patterns and languages for designing such multiagent architectures. We propose a modern approach based on organizational structures and architectural description lan...

  3. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work done within the field of architectural criticism and theory which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and also to narrowly interpret it as a mere critique of architectural form,

  4. Self-Improving CNC Milling Machine

    OpenAIRE

    Spilling, Torjus

    2014-01-01

    This thesis is a study of the ability of a CNC milling machine to create parts for itself, and an evaluation of whether or not the machine is able to improve itself by creating new machine parts. This will be explored by using off-the-shelf parts to build an initial machine, using 3D printing/rapid prototyping to create any special parts needed for the initial build. After an initial working machine is completed, the design of the machine parts will be adjusted so that the machine can start p...

  5. Machine Learning.

    Science.gov (United States)

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  6. The constraints satisfaction problem approach in the design of an architectural functional layout

    Science.gov (United States)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice and in urban or graphic design, wherever the allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples (a single-story residential building with two apartments, each having two rooms), the results resemble realistic functional layouts. One example of a practical-size problem, a layout of three apartments with a total of 20 rooms, is demonstrated, where the generated solution can be used as the basis for a realistic architectural blueprint. The discretization of the design space is discussed, followed by the application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations. Next the solutions are classified by a machine learning method (FFN) as 'proper' or 'improper' according to the internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal: the criteria, parameters, and weights can be individually defined by the user, and the search algorithm can be adjusted to a specific problem.
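The backtrack-search stage can be illustrated with a drastically simplified version of the two-apartment example: four rooms placed on a 2x2 cell grid under the constraint that rooms of the same apartment are orthogonally adjacent. The grid encoding and all names here are invented for this sketch; the paper's actual discretization, constraints, and objectives are far richer.

```python
GRID = [(0, 0), (0, 1), (1, 0), (1, 1)]   # cells of a 2x2 grid
ROOMS = ["A1", "A2", "B1", "B2"]          # two apartments, two rooms each

def adjacent(p, q):
    # orthogonal adjacency on the grid (Manhattan distance 1)
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def solve(rooms, cells, same_apartment, placed=None):
    """Depth-first backtracking: place rooms one by one, pruning any
    assignment that separates rooms of the same apartment."""
    placed = {} if placed is None else placed
    if len(placed) == len(rooms):
        return dict(placed)
    room = rooms[len(placed)]
    for cell in cells:
        if cell in placed.values():
            continue  # cell already occupied
        if all(adjacent(cell, placed[r]) for r in placed
               if same_apartment(room, r)):
            placed[room] = cell
            solution = solve(rooms, cells, same_apartment, placed)
            if solution:
                return solution
            del placed[room]  # backtrack
    return None
```

Enumerating all solutions instead of returning the first one would yield the candidate set that the paper then ranks and classifies.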

  7. Machine learning patterns for neuroimaging-genetic studies in the cloud.

    Science.gov (United States)

    Da Mota, Benoit; Tudoran, Radu; Costan, Alexandru; Varoquaux, Gaël; Brasche, Goetz; Conrod, Patricia; Lemaitre, Herve; Paus, Tomas; Rietschel, Marcella; Frouin, Vincent; Poline, Jean-Baptiste; Antoniu, Gabriel; Thirion, Bertrand

    2014-01-01

    Brain imaging is a natural intermediate phenotype to understand the link between genetic information and behavior or brain pathologies risk factors. Massive efforts have been made in the last few years to acquire high-dimensional neuroimaging and genetic data on large cohorts of subjects. The statistical analysis of such data is carried out with increasingly sophisticated techniques and represents a great computational challenge. Fortunately, increasing computational power in distributed architectures can be harnessed, if new neuroinformatics infrastructures are designed and training to use these new tools is provided. Combining a MapReduce framework (TomusBLOB) with machine learning algorithms (Scikit-learn library), we design a scalable analysis tool that can deal with non-parametric statistics on high-dimensional data. End-users describe the statistical procedure to perform and can then test the model on their own computers before running the very same code in the cloud at a larger scale. We illustrate the potential of our approach on real data with an experiment showing how the functional signal in subcortical brain regions can be significantly fit with genome-wide genotypes. This experiment demonstrates the scalability and the reliability of our framework in the cloud with a two-week deployment on hundreds of virtual machines.
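The split the authors exploit (chunk-wise map work combined by a reduce step, testable locally before scaling out) can be sketched with the standard library alone. The actual system pairs the TomusBLOB MapReduce framework with scikit-learn estimators, so the toy statistic and function names below are only a schematic stand-in.

```python
from functools import reduce

def mapper(chunk):
    # per-chunk partial statistics (the "map" stage, one task per worker)
    return (sum(chunk), len(chunk))

def reducer(left, right):
    # merge partial results (the "reduce" stage)
    return (left[0] + right[0], left[1] + right[1])

def distributed_mean(chunks):
    """Mean over all values, computed chunk-wise so each chunk could live
    on a different machine. The same code runs locally or scaled out."""
    total, count = reduce(reducer, map(mapper, chunks))
    return total / count
```

Because the per-chunk and merge steps are pure functions, the "test on your own computer, then run the very same code in the cloud" workflow described above falls out naturally.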

  8. Information architecture. Volume 3: Guidance

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or defacto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  9. Machining of Metal Matrix Composites

    CERN Document Server

    2012-01-01

    Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...

  10. Machine technology: a survey

    International Nuclear Information System (INIS)

    Barbier, M.M.

    1981-01-01

    An attempt was made to find existing machines that have been upgraded and that could be used for large-scale decontamination operations outdoors. Such machines are in the building industry, the mining industry, and the road construction industry. The road construction industry has yielded the machines in this presentation. A review is given of operations that can be done with the machines available

  11. Efficient Architectures for Low Latency and High Throughput Trading Systems on the JVM

    Directory of Open Access Journals (Sweden)

    Alexandru LIXANDRU

    2013-01-01

    Full Text Available The motivation for our research starts from the common belief that the Java platform is not suitable for implementing ultra-high-performance applications. Java is one of the most widely used software development platforms in the world, and it provides the means for rapid development of robust and complex applications that are easy to extend, ensuring short time-to-market of initial deliveries and throughout the lifetime of the system. The Java runtime environment, and especially the Java Virtual Machine on top of which applications are executed, is the principal source of concern regarding its suitability in the electronic trading environment, mainly because of its implicit memory management. In this paper, we identify some of the most common measures that can be taken, both at the Java runtime environment level and at the application architecture level, to help Java applications achieve ultra-high performance. We also propose two efficient architectures for exchange trading systems that allow for ultra-low latencies and high throughput.

  12. Software architecture as a set of architectural design decisions

    NARCIS (Netherlands)

    Jansen, Anton; Bosch, Jan; Nord, R; Medvidovic, N; Krikhaar, R; Khrhaar, R; Stafford, J; Bosch, J

    2006-01-01

    Software architectures have high costs for change, are complex, and erode during evolution. We believe these problems are partially due to knowledge vaporization. Currently, almost all the knowledge and information about the design decisions the architecture is based on are implicitly embedded in

  13. Characteristics of laser assisted machining for silicon nitride ceramic according to machining parameters

    International Nuclear Information System (INIS)

    Kim, Jong Do; Lee, Su Jin; Suh, Jeong

    2011-01-01

    This paper describes laser-assisted machining (LAM), which cuts and removes softened parts by locally heating the ceramic with a laser. Silicon nitride ceramics can be machined with general machining tools as well, because YSiAlON, which makes up these ceramics, softens at about 1,000 °C. In particular, the laser, which concentrates highly dense energy, can locally heat materials and very effectively control the temperature of the heated part of the specimen. Therefore, this paper proposes an efficient machining method for ceramics by deducing the governing factors of laser-assisted machining and understanding its mechanism. Laser power is the machining factor that controls the temperature: as laser power increases, the material deteriorates from the temperature rise and the CBN cutting tool can cut it more easily, but excessive oxidation can negatively affect the quality of the machined surface. As the feed rate and cutting depth increase, the cutting force increases and tool lifespan decreases, but surface oxidation also decreases. In this experiment, the material could be cut to a depth of 3 mm. Based on the results of the experiment, the laser-assisted machining mechanism is clarified.

  14. The architectural design of networks of protein domain architectures.

    Science.gov (United States)

    Hsu, Chia-Hsin; Chen, Chien-Kuo; Hwang, Ming-Jing

    2013-08-23

    Protein domain architectures (PDAs), in which single domains are linked to form multiple-domain proteins, are a major molecular form used by evolution for the diversification of protein functions. However, the design principles of PDAs remain largely uninvestigated. In this study, we constructed networks to connect domain architectures that had grown out of the same single domain for every single domain in the Pfam-A database and found that there are three main distinctive types of these networks, which suggests that evolution can exploit PDAs in three different ways. Further analysis showed that these three different types of PDA networks are each adopted by different types of protein domains, although many networks exhibit the characteristics of more than one of the three types. Our results shed light on nature's blueprint for protein architecture and provide a framework for understanding architectural design from a network perspective.
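A toy version of "architectures grown out of the same single domain" can be built by linking each architecture to the ones that extend it by one appended domain. The tuple encoding and the prefix-extension rule below are simplifying assumptions for illustration, not the paper's construction, which works over the full Pfam-A database.

```python
from itertools import combinations

def pda_network(architectures, seed):
    """Toy PDA network: nodes are architectures containing `seed`;
    an edge joins a pair when the longer one equals the shorter one
    plus exactly one appended domain (a simplifying assumption)."""
    nodes = [a for a in architectures if seed in a]
    edges = [(a, b) for a, b in combinations(nodes, 2)
             if len(b) == len(a) + 1 and b[:len(a)] == a]
    return nodes, edges
```

Classifying the shapes of such growth networks (chains, stars, mixtures) is, loosely, what distinguishes the paper's three network types.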

  15. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance-measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the data for speedup and efficiency were overcome, respectable parallelization speedups could be obtained.
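The classic sort-and-sweep (sweep-and-prune) broad phase that the paper's variant builds on can be sketched along one axis: sort bodies by the lower end of their bounding interval, then sweep in that order, pairing each body only with intervals still "active". This is the textbook algorithm, not the authors' novel variant.

```python
def sweep_and_prune(boxes):
    """1-D sweep-and-prune broad phase. `boxes` is a list of (lo, hi)
    extents along one axis; returns index pairs whose extents overlap."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    pairs, active = [], []
    for i in order:
        lo = boxes[i][0]
        # an interval that ended before `lo` can never overlap box i
        active = [j for j in active if boxes[j][1] >= lo]
        pairs.extend((min(i, j), max(i, j)) for j in active)
        active.append(i)
    return pairs
```

Because near-sorted orderings persist between time steps in a discrete element simulation, the sort is cheap in practice, which is exactly what makes this family of algorithms attractive for such codes.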

  16. Energy-efficient electrical machines by new materials. Superconductivity in large electrical machines

    International Nuclear Information System (INIS)

    Frauenhofer, Joachim; Arndt, Tabea; Grundmann, Joern

    2013-01-01

The implementation of superconducting materials in high-power electrical machines results in significant advantages regarding efficiency, size and dynamic behavior when compared to conventional machines. The application of HTS (high-temperature superconductors) in electrical machines allows significantly higher power densities to be achieved for synchronous machines. In order to gain experience with the new technology, Siemens carried out a series of development projects. A 400 kW model motor for the verification of the concept was followed by a 4000 kVA generator as a high-speed machine, as well as by a low-speed 4000 kW propeller motor with high torque. The 4000 kVA generator is still employed to carry out long-term tests and to check components. Superconducting machines have significantly lower weight and envelope dimensions compared to conventional machines, and for this reason alone they utilize resources better. At the same time, operating losses are slashed to about half and the efficiency increases. Beyond this, they set themselves apart through their special features in operation, such as high overload capability, stiff alternating-load behavior and low noise. HTS machines provide significant advantages where the reduction of footprint, weight and losses, or the improved dynamic behavior, results in significant improvements of the overall system. Propeller motors and generators for ships, offshore plants, wind turbine and hydroelectric plants, and large power stations are just some examples. HTS machines can therefore play a significant role when it comes to efficiently using resources and energy as well as reducing CO2 emissions.

  17. VIRTUAL MACHINES IN EDUCATION – CNC MILLING MACHINE WITH SINUMERIK 840D CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    Ireneusz Zagórski

    2014-11-01

Nowadays the machining process could not be conducted without its inseparable elements: the cutting edge and, frequently, numerically controlled milling machines. Milling and lathe machining centres comprise standard equipment in many companies of the machinery industry, e.g. automotive or aircraft. It is for that reason that tertiary education should account for this rising demand. This entails introducing into the curricula forms which enable the visualisation of machining, of the milling process and of virtual production, as well as the simulation of virtual machining centres. The Siemens Virtual Machine (Virtual Workshop) sets an example of such software, whose high functionality offers a range of learning experiences, such as: learning the design of machine tools, their configuration, basic operating functions, as well as the basics of CNC.

  18. Machine learning based switching model for electricity load forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Shu; Lee, Wei-Jen [Energy Systems Research Center, The University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen, Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan)

    2008-06-15

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma. (author)
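The switching structure of the proposed model (cluster the data by its dynamics, train one regressor per cluster, then route each new input to its cluster's model) can be sketched as follows. The stand-ins are loud assumptions: a crude up/down rule replaces Bayesian clustering by dynamics, and a per-regime mean predictor replaces the SVRs.

```python
# Sketch of the switching architecture only; the clustering rule and the
# per-regime predictor below are deliberate simplifications of BCD and SVR.
def regime(window):
    # crude "dynamics" feature: rising vs falling load over the window
    return "up" if window[-1] >= window[0] else "down"

def fit(windows, targets):
    buckets = {}
    for w, t in zip(windows, targets):
        buckets.setdefault(regime(w), []).append(t)
    # per-regime predictor: the mean of its training targets (SVRs in the paper)
    return {r: sum(ts) / len(ts) for r, ts in buckets.items()}

def predict(models, window):
    # route the input to the model trained on its regime
    return models[regime(window)]

models = fit([[1, 2, 3], [3, 2, 1], [2, 2, 4]], [4, 0, 5])
print(predict(models, [5, 6, 7]))  # 4.5 (mean of the "up" targets 4 and 5)
```

The point is the routing: the non-stationary series is split into regimes so that each predictor only has to fit locally stationary behavior.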

  19. Machine learning based switching model for electricity load forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Fan Shu [Energy Systems Research Center, University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan); Lee, Weijen [Energy Systems Research Center, University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States)], E-mail: wlee@uta.edu

    2008-06-15

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma.

  20. Machine learning based switching model for electricity load forecasting

    International Nuclear Information System (INIS)

    Fan Shu; Chen Luonan; Lee, Weijen

    2008-01-01

In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma.

  1. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    Science.gov (United States)

    hong, Zhou; Wenhua, Lu

    2017-01-01

Augmented reality technology is introduced into the maintenance field to strengthen the information available in real-world scenarios by integrating virtual maintenance-assistance information with the real scene. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed, on the basis of introducing the definition of augmented reality and analyzing the characteristics of augmented reality virtual maintenance. The key techniques involved, such as the standardization and organization of maintenance data, 3D registration, the modeling of maintenance guidance information, and virtual-maintenance man-machine interaction, are elaborated, and solutions are given.

  2. Scheduling of hybrid types of machines with two-machine flowshop as the first type and a single machine as the second type

    Science.gov (United States)

    Hsiao, Ming-Chih; Su, Ling-Huey

    2018-02-01

This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs that minimizes the makespan. The problem is NP-hard, since the two-parallel-machine problem has been proved NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
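A hedged sketch of the setup: the job data below is hypothetical, Johnson's rule sequences the flowshop part (it is optimal for a pure two-machine flowshop), and a bare-bones simulated annealing searches over the job-to-machine-type assignment. None of this reproduces the authors' parameterization.

```python
import random

# Hypothetical data: job i takes (a_i, b_i) on the two-machine flowshop
# or c_i on the single machine.
jobs = [(3, 2, 4), (1, 4, 3), (2, 2, 5), (4, 1, 2)]

def flowshop_makespan(sel):
    # Johnson's rule: jobs with a <= b first by increasing a,
    # then jobs with a > b by decreasing b.
    order = sorted(sel, key=lambda j: (jobs[j][0] > jobs[j][1],
                                       jobs[j][0] if jobs[j][0] <= jobs[j][1]
                                       else -jobs[j][1]))
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]
        t2 = max(t1, t2) + jobs[j][1]
    return t2

def makespan(assign):  # assign[i] True -> flowshop, False -> single machine
    fs = [i for i, a in enumerate(assign) if a]
    sm = sum(jobs[i][2] for i, a in enumerate(assign) if not a)
    return max(flowshop_makespan(fs), sm)

random.seed(0)
cur = [random.random() < 0.5 for _ in jobs]
best, T = list(cur), 10.0
while T > 0.01:
    cand = list(cur)
    cand[random.randrange(len(jobs))] ^= True  # flip one job's machine type
    if makespan(cand) <= makespan(cur) or random.random() < T / 20:
        cur = cand
        if makespan(cur) < makespan(best):
            best = list(cur)
    T *= 0.95
print(makespan(best))
```

The neighborhood move (reassigning one job between the two machine types) plus Johnson's rule inside the objective is the minimal version of the idea; real SA variants would also tune the cooling schedule and acceptance rule.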

  3. Data management and communication networks for man-machine interface system in Korea Advanced LIquid MEtal Reactor : Its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

The DAta management and COmmunication NETworks (DACONET), designed with an advanced design concept as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS), is described. The DACONET provides the real-time data transmission and communication paths between MMIS systems, provides quality data for protection, monitoring and control of KALIMER, and logs the static and dynamic behavioral data during KALIMER operation. The DACONET is characterized as a distributed real-time system architecture with high performance. Future directions, in which advanced technology is continually being applied to the Man-Machine Interface System development of nuclear power plants, will be considered in designing the data management and communication networks of the KALIMER MMIS. 9 refs., 1 fig. (Author)

  4. Data management and communication networks for man-machine interface system in Korea Advanced LIquid MEtal Reactor : Its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

The DAta management and COmmunication NETworks (DACONET), designed with an advanced design concept as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS), is described. The DACONET provides the real-time data transmission and communication paths between MMIS systems, provides quality data for protection, monitoring and control of KALIMER, and logs the static and dynamic behavioral data during KALIMER operation. The DACONET is characterized as a distributed real-time system architecture with high performance. Future directions, in which advanced technology is continually being applied to the Man-Machine Interface System development of nuclear power plants, will be considered in designing the data management and communication networks of the KALIMER MMIS. 9 refs., 1 fig. (Author)

  5. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components

  6. Can You Hear Architecture

    DEFF Research Database (Denmark)

    Ryhl, Camilla

    2016-01-01

Taking its point of departure in an understanding of architectural quality as based on multisensory architecture, the paper aims to discuss the current acoustic discourse in inclusive design and its implications for the integration of inclusive design in architectural discourse and practice, as well as for the understanding of user needs. The paper further points to the need to elaborate and nuance the discourse much more, in order to assure inclusion of the many users living with a hearing impairment or, for other reasons, with a high degree of auditory sensitivity. Using the authors' own research on inclusive design and architectural quality for people with a hearing disability, a newly conducted qualitative evaluation research project in Denmark, and architectural theories on multisensory aspects of architectural experiences, the paper uses examples of existing Nordic building cases to discuss the role...

  7. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  8. Ultraprecision machining. Cho seimitsu kako

    Energy Technology Data Exchange (ETDEWEB)

    Suga, T [The Univ. of Tokyo, Tokyo (Japan). Research Center for Advanced Science and Technology

    1992-10-05

It is said that the image of ultraprecision has improved from 0.1 μm to 0.01 μm within recent years. Ultraprecision machining is a production technology which, together with ultraprecision measuring and ultraprecision control, forms what is called nanotechnology. Accuracy means that the average machined size is close to the required value, i.e., that the systematic deviation errors are small; precision means that the machined sizes scatter very little. The errors of machining are related to both of the above, and ultraprecision means that the combined errors are very small. In present ultraprecision machining, the precision relative to the size of the machined object is said to be on the order of 10^-6. The flatness of silicon wafers is usually less than 0.5 μm. The advent of atomic-scale machining is awaited as the limit of ultraprecision machining; machining that removes and adds atomic units using scanning probe microscopes is expected to actually reach this limit. 2 refs.
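The accuracy/precision distinction drawn in the abstract can be made concrete with a toy measurement set (hypothetical sizes in mm): accuracy is how close the mean machined size sits to the target, precision is how little the sizes scatter around their own mean.

```python
from statistics import mean, pstdev

target = 10.000  # nominal size in mm (hypothetical)
machined = [10.004, 10.005, 10.003, 10.006, 10.004]

accuracy_error = abs(mean(machined) - target)  # systematic deviation
precision = pstdev(machined)                   # repeatability (scatter)

# systematic error ≈ 0.0044 mm, scatter ≈ 0.001 mm
print(accuracy_error, precision)
```

Here the process is precise (tiny scatter) but not accurate (a consistent +0.004 mm bias); ultraprecision in the abstract's sense requires both numbers to be small.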

  9. Vital architecture, slow momentum policy

    DEFF Research Database (Denmark)

    Braae, Ellen Marie

    2010-01-01

A reflection on the relation between Danish landscape architecture policy and the statements made through current landscape architectural projects.

  10. Theory and practice in machining systems

    CERN Document Server

    Ito, Yoshimi

    2017-01-01

This book describes machining technology from a wider perspective by considering it within the machining space. Machining technology is one of the metal removal activities that occur at the machining point within the machining space. The machining space consists of structural configuration entities, e.g., the main spindle, the turret head and attachments such as the chuck and mandrel, and also the form-generating movement of the machine tool itself. The book describes fundamental topics, including the form-generating movement of the machine tool and the important roles of the attachments, before moving on to consider the supply of raw materials into the machining space, the discharge of swarf from it, and then machining technology itself. Building on the latest research findings, “Theory and Practice in Machining Systems” discusses current challenges in machining. Thus, with the inclusion of introductory and advanced topics, the book can be used as a guide and survey of machining technology for students an...

  11. Machine Learning and Parallelism in the Reconstruction of LHCb and its Upgrade

    Science.gov (United States)

    De Cian, Michel

    2016-11-01

The LHCb detector at the LHC is a general-purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high-level trigger. However, the ever-increasing need for higher throughput and the move to parallelism in CPU architectures in recent years have necessitated the use of vectorization techniques to achieve the desired speed, and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to getting rid of fake tracks using fast machine-learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. With a fully software-based trigger, a large gain in reconstruction speed has to be achieved to cope with the 40 MHz bunch-crossing rate. Two possible approaches for techniques exploiting massive parallelization are discussed.

  12. Machine Learning and Parallelism in the Reconstruction of LHCb and its Upgrade

    International Nuclear Information System (INIS)

    Cian, Michel De

    2016-01-01

The LHCb detector at the LHC is a general-purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high-level trigger. However, the ever-increasing need for higher throughput and the move to parallelism in CPU architectures in recent years have necessitated the use of vectorization techniques to achieve the desired speed, and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to getting rid of fake tracks using fast machine-learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. With a fully software-based trigger, a large gain in reconstruction speed has to be achieved to cope with the 40 MHz bunch-crossing rate. Two possible approaches for techniques exploiting massive parallelization are discussed.

  13. The Architecture Improvement Method: cost management and systematic learning about strategic product architectures

    NARCIS (Netherlands)

    de Weerd-Nederhof, Petronella C.; Wouters, Marc; Teuns, Steven J.A.; Hissel, Paul H.

    2007-01-01

    The architecture improvement method (AIM) is a method for multidisciplinary product architecture improvement, addressing uncertainty and complexity and incorporating feedback loops, facilitating trade-off decision making during the architecture creation process. The research reported in this paper

  14. Face machines

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, D.

    1999-06-01

The article surveys the latest equipment available from the world's manufacturers of a range of machines for tunnelling. These are grouped under the headings: excavators; impact hammers; road headers; and shields and tunnel boring machines. Products of thirty manufacturers are referred to. Addresses and fax numbers of companies are supplied. 5 tabs., 13 photos.

  15. The ABC Adaptive Fusion Architecture

    DEFF Research Database (Denmark)

    Bunde-Pedersen, Jonathan; Mogensen, Martin; Bardram, Jakob Eyvind

    2006-01-01

and early implementation of a system capable of adapting to its operating environment, choosing the best-fit combination of the client-server and peer-to-peer architectures. The architecture creates a seamless integration between a centralized hybrid architecture and a decentralized architecture, relying on what...

  16. Architecture humanitarian emergencies

    DEFF Research Database (Denmark)

    Gomez-Guillamon, Maria; Eskemose Andersen, Jørgen; Contreras, Jorge Lobos

    2013-01-01

Introduced by scientific articles concerning architecture and human rights in light of cultures, emergencies, social equality and sustainability, democracy, economy, artistic development and science in architecture. Concluding in a definition of needs for new roles, processes and education of arc..., Architettura di Alghero in Italy, Architecture and Design of Kocaeli University in Turkey, University of Aguascalientes in Mexico, Architectura y Urbanismo of University of Chile and Escuela de Architectura of Universidad Austral in Chile.

  17. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

Minimalism in architecture contains the idea of the minimum as a leading creative principle, to be considered and interpreted here through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observed expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was further enhanced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, has played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, which is one of the basic methods of learning in psychology (separating relevant from irrelevant features, Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present, in the form as well as the function of the basic elements: walls and windows. The case study is the work of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  18. Tattoo machines, needles and utilities.

    Science.gov (United States)

    Rosenkilde, Frank

    2015-01-01

Starting out as a professional tattooist back in 1977 in Copenhagen, Denmark, Frank Rosenkilde has personally experienced the remarkable development of tattoo machines, needles and utilities: all the way from home-made equipment to industrial products of substantially improved quality. Machines can be constructed like the traditional dual-coil and single-coil machines, or can be e-coil, rotary and hybrid machines, with the more convenient and precise rotary machines being the recent trend. This development has resulted in disposable needles and utilities. Newer machines are more easily kept clean and protected with foil to prevent cross-contamination and infections. The machines and the tattooists' knowledge and awareness about prevention of infection have developed hand-in-hand. For decades, Frank Rosenkilde has been collecting tattoo machines. Part of his collection is presented here, supplemented by his personal notes. © 2015 S. Karger AG, Basel.

  19. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    Science.gov (United States)

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy that was comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compare both to see which is preferable. Deep Learning architectures have the potential of inferring more information about the patterns of physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. 
However, accurate prediction of chronic disease risks remains a challenging
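The "problem transformation" route mentioned in the abstract (binary relevance: one binary model per label) can be sketched with a deliberately trivial threshold classifier standing in for the DNN; the single feature, the label sets and the threshold rule are all hypothetical.

```python
# Problem transformation (binary relevance): train one binary model per label.
# A one-feature threshold rule is a loud stand-in for the real classifier.
def train_binary_relevance(X, Y, n_labels):
    models = []
    for k in range(n_labels):
        pos = [x for x, y in zip(X, Y) if k in y]
        neg = [x for x, y in zip(X, Y) if k not in y]
        # hypothetical rule: threshold halfway between the class means
        thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        models.append(thr)
    return models

def predict(models, x):
    # algorithm-adaptation analogue: a single network would emit one sigmoid
    # score per label; here each threshold plays that role
    return {k for k, thr in enumerate(models) if x >= thr}

X = [1.0, 2.0, 6.0, 7.0, 8.0]
Y = [set(), {0}, {0, 1}, {1}, {0, 1}]
models = train_binary_relevance(X, Y, 2)
print(predict(models, 7.5))  # {0, 1}
```

The contrast the abstract draws is between this per-label decomposition and algorithm adaptation, where one model (here, a DNN with one sigmoid output per disease) predicts all labels jointly.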

  20. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distribution of scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  1. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a large dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate the network weights between the output and hidden layers in a single iteration and can thus dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied only to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
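The core ELM step (random, frozen hidden-layer weights; output weights solved in one shot by least squares) can be sketched for a toy 1-D regression. The network size and data are hypothetical, and a hand-rolled 2x2 normal-equation solve stands in for the general Moore-Penrose pseudo-inverse.

```python
import math, random

# Minimal extreme learning machine: the input weights W are random and never
# trained; only the output weights beta are computed, in closed form.
random.seed(1)
W = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2)]  # (w, b)

def hidden(x):  # sigmoid hidden layer with frozen weights
    return [1 / (1 + math.exp(-(w * x + b))) for w, b in W]

X = [0.0, 1.0, 2.0, 3.0]
Y = [2 * x + 1 for x in X]          # toy target: y = 2x + 1
H = [hidden(x) for x in X]

# Solve the normal equations (H^T H) beta = H^T Y for the output weights.
a = sum(h[0] * h[0] for h in H); b_ = sum(h[0] * h[1] for h in H)
c = sum(h[1] * h[1] for h in H)
p = sum(h[0] * y for h, y in zip(H, Y)); q = sum(h[1] * y for h, y in zip(H, Y))
det = a * c - b_ * b_
beta = ((p * c - q * b_) / det, (q * a - p * b_) / det)

pred = [beta[0] * h[0] + beta[1] * h[1] for h in H]
print(max(abs(t - y) for t, y in zip(pred, Y)))  # worst-case fit error
```

With only two hidden units the fit is rough; real ELMs use many hidden units and solve the same least-squares problem with a pseudo-inverse, which is exactly why training takes a single iteration instead of many backpropagation epochs.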

  2. Economics-driven software architecture

    CERN Document Server

    Mistrik, Ivan; Kazman, Rick; Zhang, Yuanyuan

    2014-01-01

    Economics-driven Software Architecture presents a guide for engineers and architects who need to understand the economic impact of architecture design decisions: the long term and strategic viability, cost-effectiveness, and sustainability of applications and systems. Economics-driven software development can increase quality, productivity, and profitability, but comprehensive knowledge is needed to understand the architectural challenges involved in dealing with the development of large, architecturally challenging systems in an economic way. This book covers how to apply economic consider

  3. Design of rotating electrical machines

    CERN Document Server

    Pyrhonen , Juha; Hrabovcova , Valeria

    2013-01-01

    In one complete volume, this essential reference presents an in-depth overview of the theoretical principles and techniques of electrical machine design. This timely new edition offers up-to-date theory and guidelines for the design of electrical machines, taking into account recent advances in permanent magnet machines as well as synchronous reluctance machines. New coverage includes: Brand new material on the ecological impact of the motors, covering the eco-design principles of rotating electrical machinesAn expanded section on the design of permanent magnet synchronous machines, now repo

  4. Do Architectural Design Decisions Improve the Understanding of Software Architecture? Two Controlled Experiments

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Li, Z.

    2014-01-01

    Architectural design decision (ADD) and its design rationale, as a paradigm shift on documenting and enriching architecture design description, is supposed to facilitate the understanding of architecture and the reasoning behind the design rationale, which consequently improves the architecting

  5. An SOA-based architecture framework

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Beisiegel, M.; Hee, van K.M.; König, D.; Stahl, C.

    2007-01-01

    We present a Service-Oriented Architecture (SOA)-based architecture framework. The architecture framework is designed to be close to industry standards, especially to the Service Component Architecture (SCA). The framework is language independent and the building blocks of each system, activities

  6. Rhein-Ruhr architecture

    DEFF Research Database (Denmark)

    2002-01-01

    Catalogue for the exhibition 'Rhein - Ruhr architecture', Meldahls Smedie, 15 March - 28 April 2002. 99 pages.

  7. VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS

    Directory of Open Access Journals (Sweden)

    POPESCU Adrian

    2015-11-01

    Full Text Available This paper presents the 3D virtual model of the numerical control machine Modustar 100 in terms of machine elements. This is a CNC machine of modular construction, with all components allowing assembly in various configurations. The paper focuses on the design, by means of CATIA v5, of the subassemblies specific to the numerically controlled axes, which contain the drive kinematic chains of the translation modules that ensure translation on the X, Y and Z axes. Machine tool development for high-speed and highly precise cutting demands the use of advanced simulation techniques, which is reflected in the total development cost of the machine.

  8. Bionics in architecture

    Directory of Open Access Journals (Sweden)

    Sugár Viktória

    2017-04-01

    Full Text Available The adaptation of the forms and phenomena of nature is not a recent concept. Observation of natural mechanisms has been a primary source of innovation since prehistoric ages, as can be perceived throughout the history of architecture. Currently, this idea is coming to the fore again through sustainable architecture and adaptive design. Investigating natural innovations and the ingenuity of evolution during the 20th century led to the creation of a separate scientific discipline, Bionics. Architecture and Bionics are strongly related to each other, since the act of building is as old as human civilization; moreover, its first formal and structural source was obviously the surrounding environment. The present paper discusses the definition of Bionics and its connection with architecture.

  9. Time-domain prefilter design for enhanced tracking and vibration suppression in machine motion control

    Science.gov (United States)

    Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong

    2018-05-01

    Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
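    The moment-based prefilter idea in this abstract can be illustrated with the classic two-impulse zero-vibration (ZV) input shaper, a simple FIR prefilter whose taps cancel one vibration mode in finite time. This is a generic sketch, not the paper's optimal H2 design: the mode frequency, damping ratio, and all names below are assumed for illustration only.

    ```python
    import math

    # Hypothetical single vibration mode (not from the paper).
    wn = 2 * math.pi * 8.0   # natural frequency, rad/s (assumed 8 Hz)
    zeta = 0.05              # damping ratio (assumed)

    # Two-impulse Zero-Vibration (ZV) shaper used as a stand-in FIR prefilter:
    # its two taps cancel residual vibration of the mode in finite time.
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    Td = 2 * math.pi / (wn * math.sqrt(1 - zeta**2))    # damped period
    taps = [(0.0, 1 / (1 + K)), (Td / 2, K / (1 + K))]  # (time, amplitude)

    # Moment values of the impulse response: the zeroth moment must equal 1
    # (unity DC gain, so no steady-state tracking error); the first moment
    # is the group delay the prefilter introduces into the motion command.
    m0 = sum(a for _, a in taps)
    m1 = sum(t * a for t, a in taps)
    print(f"zeroth moment (DC gain): {m0:.6f}")
    print(f"first moment (delay, s): {m1:.6f}")
    ```

    The paper's contribution is to optimize such taps for minimum H2 norm subject to these moment constraints; the sketch only verifies the constraints themselves.
    
    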

  10. A 100-W grade closed-cycle thermosyphon cooling system used in HTS rotating machines

    Science.gov (United States)

    Felder, Brice; Miki, Motohiro; Tsuzuki, Keita; Shinohara, Nobuyuki; Hayakawa, Hironao; Izumi, Mitsuru

    2012-06-01

    The cooling systems used for rotating High-Temperature Superconducting (HTS) machines need a cooling power high enough to ensure a low temperature during various utilization states. Radiation, torque tube or current leads represent hundreds of watts of invasive heat. The architecture also has to allow the rotation of the refrigerant. In this paper, a free-convection thermosyphon using two Gifford-McMahon (GM) cryocoolers is presented. The cryogen is mainly neon but helium can be added for an increase of the heat transfer coefficient. The design of the heat exchangers was first optimized with FEM thermal analysis. After manufacture, they were assembled for preliminary experiments and the necessity of annealing was studied for the copper parts. A single evaporator was installed to evaluate the thermal properties of such a heat syphon. The maximum bearable static heat load was also investigated, but was not reached even at 150 W of load. Finally, this cooling system was tested in the cooling down of a 100-kW range HTS rotating machine containing 12 Bi-2223 double-pancake coils (DPC).

  11. MITS machine operations

    International Nuclear Information System (INIS)

    Flinchem, J.

    1980-01-01

    This document contains procedures which apply to operations performed on individual P-1c machines in the Machine Interface Test System (MITS) at AiResearch Manufacturing Company's Torrance, California Facility

  12. Architecture and technology of 500 Msample/s feedback systems for control of coupled-bunch instabilities

    International Nuclear Information System (INIS)

    Teytelman, Dmitry

    2000-01-01

    Feedback control of coupled-bunch instabilities presents many challenges. Control bandwidths up to 250 MHz are required to damp all of the unstable coupled-bunch modes in recent accelerators. A digital parallel-processing array with 80 DSPs has been developed to control longitudinal instabilities in the PEP-II/ALS/DAΦNE machines. Here the authors present a description of the architecture as well as the technologies used to implement a 500 Msample/s real-time control system with 2,000 FIR filtering channels. Algorithms for feedback control, data acquisition, and analysis are described and measurements from the ALS are presented

  13. Coordinate measuring machines

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with three exercises of 2 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measuring: 1) Measuring and verification of tolerances on coordinate measuring machines, 2) Traceability and uncertainty during coordinate measurements, 3) Digitalisation and Reverse Engineering. This document contains a short description of each step in the exercise and schemes with room for taking notes of the results.

  14. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  15. Machine Learning and Radiology

    Science.gov (United States)

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  16. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    "Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment." (Applied Mechanics Review) Tribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems. The computer offers today's designer the possibility of greater stringen

  17. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
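    The block-based L1 normalization that underlies HOG-style descriptors can be sketched in a few lines; the histogram values and function names are illustrative, not taken from the VLSI design. Dividing each bin by the sum of magnitudes cancels multiplicative gain changes such as those introduced by camera auto-gain, which is the robustness property the abstract targets.

    ```python
    def l1_normalize(hist, eps=1e-6):
        """L1-norm normalization of one cell histogram (HOG-style)."""
        s = sum(abs(v) for v in hist) + eps  # eps avoids division by zero
        return [v / s for v in hist]

    # Hypothetical 4-bin orientation histogram for one image cell
    # (values invented for illustration).
    cell_hist = [12.0, 3.0, 0.0, 5.0]
    feat = l1_normalize(cell_hist)
    print(feat)       # entries sum to ~1
    print(sum(feat))
    ```

    Scaling every bin by a constant (a global illumination or gain change) leaves the normalized feature essentially unchanged.
    
    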

  18. Quadrilateral Micro-Hole Array Machining on Invar Thin Film: Wet Etching and Electrochemical Fusion Machining

    Directory of Open Access Journals (Sweden)

    Woong-Kirl Choi

    2018-01-01

    Full Text Available Ultra-precision products which contain a micro-hole array have recently shown remarkable demand growth in many fields, especially in the semiconductor and display industries. Photoresist etching and electrochemical machining are widely known as precision methods for machining micro-holes with no residual stress and lower surface roughness on the fabricated products. The Invar shadow masks used for organic light-emitting diodes (OLEDs contain numerous micro-holes and are currently machined by a photoresist etching method. However, this method has several problems, such as uncontrollable hole machining accuracy, non-etched areas, and overcutting. To solve these problems, a machining method that combines photoresist etching and electrochemical machining can be applied. In this study, negative photoresist with a quadrilateral hole array pattern was dry coated onto 30-µm-thick Invar thin film, and then exposure and development were carried out. After that, photoresist single-side wet etching and a fusion method of wet etching-electrochemical machining were used to machine micro-holes on the Invar. The hole machining geometry, surface quality, and overcutting characteristics of the methods were studied. Wet etching and electrochemical fusion machining can improve the accuracy and surface quality. The overcutting phenomenon can also be controlled by the fusion machining. Experimental results show that the proposed method is promising for the fabrication of Invar film shadow masks.

  19. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that, given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  20. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders' software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural...

  1. Consequences of heavy machining vis à vis the machine structure – typical applications

    International Nuclear Information System (INIS)

    Leuch, M

    2011-01-01

    StarragHeckert has built 5-axis machines for heavy-duty milling since the mid-1980s. The STC centres are predominantly utilised in the aerospace industry, especially for milling structural workpieces, casings or impellers made of titanium and steel. StarragHeckert has a history of building machines for high-performance milling. The machining of these components involves high forces, thus separating the wheat from the chaff. Although FEM calculations and multi-body simulations are carried out in the early stages of development, this paper illustrates how the real process stability is determined with modal analysis and cutting trials. The experiment observes chatter stability to identify whether the machine components are adequate for the application or whether the design has to be improved. Machining parameters from industrial applications demonstrate the process stability of StarragHeckert machines for five-axis heavy-duty milling.

  2. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machines, as an engineering tool, have recently been introduced into automation projects in Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines and analyzes the installability, performance and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  3. Improvement of Wear Performance of Nano-Multilayer PVD Coatings under Dry Hard End Milling Conditions Based on Their Architectural Development

    Directory of Open Access Journals (Sweden)

    Shahereen Chowdhury

    2018-02-01

    Full Text Available The TiAlCrSiYN-based family of PVD (physical vapor deposition hard coatings was specially designed for extreme conditions involving the dry ultra-performance machining of hardened tool steels. However, there is a strong potential for further advances in the wear performance of the coatings through improvements in their architecture. A few different coating architectures (monolayer, multilayer, bi-multilayer, bi-multilayer with increased number of alternating nano-layers were studied in relation to cutting-tool life. Comprehensive characterization of the structure and properties of the coatings has been performed using XRD, SEM, TEM, micro-mechanical studies and tool-life evaluation. The wear performance was then related to the ability of the coating layer to exhibit minimal surface damage under operation, which is directly associated with the various micro-mechanical characteristics (such as hardness, elastic modulus and related characteristics; nano-impact; scratch test-based characteristics. The results presented show that a substantial increase in tool life as well as an improvement in the mechanical properties could be achieved through the architectural development of the coatings.

  4. Product Architecture Modularity Strategies

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan

    2003-01-01

    The focus of this paper is to integrate various perspectives on product architecture modularity into a general framework, and also to propose a way to measure the degree of modularization embedded in product architectures. Various trade-offs between modular and integral product architectures and how components and interfaces influence the degree of modularization are considered. In order to gain a better understanding of product architecture modularity as a strategy, a theoretical framework and propositions are drawn from various academic literature sources. Based on the literature review, the following key elements of product architecture are identified: components (standard and new-to-the-firm), interfaces (standardization and specification), degree of coupling, and substitutability. A mathematical function, termed the modularization function, is introduced to measure the degree of modularization...

  5. Iraqi architecture in mogul period

    Directory of Open Access Journals (Sweden)

    Hasan Shatha

    2018-01-01

    Full Text Available Iraqi architecture has passed through many periods up to the present, each with its own architectural style; over time these styles have interacted, creating particular kinds of space forming, spatial relationships, and architectural elements (detailed treatments). The research problem stems from these multiple interacting architectural styles, which blur the general characteristics by which each style can be distinguished. The research studies the architectural style of the Mogul conquest of Baghdad. Its aim is to trace the main characteristics of this architectural style in the Mogul period at the level of form, elements, and treatments. The research relies on a descriptive and analytical survey of all buildings belonging to this period: for each building, the general form, architectural elements, and architectural treatments are analysed, and by repeating this procedure for every building, similarities emerge from which conclusions about the pure characteristics of the period's style can be drawn. The research also identifies dissimilarities among the buildings of the period, which lead to an account of the interaction among styles; from all this, the main characteristics of the architectural style of the Mogul conquest in Baghdad can be clearly drawn.

  6. Generic Machine Learning Pattern for Neuroimaging-Genetic Studies in the Cloud

    Directory of Open Access Journals (Sweden)

    Benoit Da Mota

    2014-04-01

    Full Text Available Brain imaging is a natural intermediate phenotype to understand the link between genetic information and behavior or brain pathologies risk factors. Massive efforts have been made in the last few years to acquire high-dimensional neuroimaging and genetic data on large cohorts of subjects. The statistical analysis of such data is carried out with increasingly sophisticated techniques and represents a great computational challenge. Fortunately, increasing computational power in distributed architectures can be harnessed, if new neuroinformatics infrastructures are designed and training to use these new tools is provided. Combining a MapReduce framework (TomusBLOB with machine learning algorithms (Scikit-learn library, we design a scalable analysis tool that can deal with non-parametric statistics on high-dimensional data. End-users describe the statistical procedure to perform and can then test the model on their own computers before running the very same code in the cloud at a larger scale. We illustrate the potential of our approach on real data with an experiment showing how the functional signal in subcortical brain regions can be significantly fit with genome-wide genotypes. This experiment demonstrates the scalability and the reliability of our framework in the cloud with a two weeks deployment on hundreds of virtual machines.
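    The map-reduce pattern the authors combine with machine learning can be sketched in miniature: each worker summarizes its data partition independently, and the partial results are then combined into a global statistic. The data, partitioning, and statistic below are invented for illustration and stand in for the much larger cloud deployment described in the abstract.

    ```python
    from functools import reduce

    # Hypothetical per-worker partitions of squared prediction errors
    # from a fitted model (values invented for illustration).
    partitions = [[0.1, 0.4, 0.2], [0.3, 0.05], [0.25, 0.15, 0.1]]

    # Map step: each worker summarizes its partition independently.
    def mapper(part):
        return (len(part), sum(part))

    # Reduce step: combine partial (count, sum) pairs into global totals.
    def reducer(a, b):
        return (a[0] + b[0], a[1] + b[1])

    n, total = reduce(reducer, map(mapper, partitions))
    mse = total / n
    print(f"global MSE over {n} samples: {mse:.5f}")
    ```

    The same two-step structure lets the statistical procedure be tested locally on a small sample before the identical code is deployed across many virtual machines.
    
    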

  7. A COMPARATIVE STUDY OF SYSTEM NETWORK ARCHITECTURE Vs DIGITAL NETWORK ARCHITECTURE

    OpenAIRE

    Seema; Mukesh Arya

    2011-01-01

    The efficient management of resources is mandatory for the successful running of any network. This paper describes the most popular network architectures: one developed by IBM, System Network Architecture (SNA), and the other Digital Network Architecture (DNA). As we know, network standards and protocols are needed by network developers as well as users. Some standards are the IEEE 802.3 standards (The Institute of Electrical and Electronics Engineers 1980) (LAN), IBM Sta...

  8. The Buttonhole Machine. Module 13.

    Science.gov (United States)

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    This module on the buttonhole machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers two topics: performing special operations on the buttonhole machine (parts and purpose) and performing special operations on the buttonhole machine (gauged buttonholes). For each topic these components are…

  9. Introduction to machine learning.

    Science.gov (United States)

    Baştanlar, Yalin; Ozuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning approaches for this application area. In this chapter, we first review the fundamental concepts of machine learning such as feature assessment, unsupervised versus supervised learning and types of classification. Then, we point out the main issues of designing machine learning experiments and their performance evaluation. Finally, we introduce some supervised learning methods.
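    One of the simplest supervised-learning methods of the kind the chapter reviews is the nearest-neighbor classifier: predict the label of the training example closest to the query. The data and function names below are illustrative only.

    ```python
    # Minimal 1-nearest-neighbor classifier (illustrative sketch).
    def predict_1nn(train, query):
        """Return the label of the training point closest to the query."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        features, label = min(train, key=lambda ex: dist2(ex[0], query))
        return label

    # Toy training set: two well-separated classes, "past experiences"
    # from which the classifier makes its predictions.
    train = [((0.0, 0.1), "A"), ((0.2, 0.0), "A"),
             ((1.0, 1.1), "B"), ((0.9, 1.0), "B")]
    print(predict_1nn(train, (0.1, 0.0)))  # query near class A
    print(predict_1nn(train, (1.0, 0.9)))  # query near class B
    ```

    Performance evaluation, as the chapter stresses, requires held-out data: the queries above are deliberately not part of the training set.
    
    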

  10. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  11. RBAC Driven Least Privilege Architecture For Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Julie [Honeywell International Inc., Golden Valley, MN (United States); Markham, Mark [Honeywell International Inc., Golden Valley, MN (United States)

    2014-01-25

    The concept of role based access control (RBAC) within the IT environment has been studied by researchers and was supported by NIST (circa 1992). This earlier work highlighted the benefits of RBAC which include reduced administrative workload and policies which are easier to analyze and apply. The goals of this research were to expand the application of RBAC in the following ways. Apply RBAC to the control systems environment: The typical RBAC model within the IT environment is used to control a user’s access to files. Within the control system environment files are replaced with measurement (e.g., temperature) and control (e.g. valve) points organized as a hierarchy of control assets (e.g. a boiler, compressor, refinery unit). Control points have parameters (e.g., high alarm limit, set point, etc.) associated with them. The RBAC model is extended to support access to points and their parameters based upon roles while at the same time allowing permissions for the points to be defined at the asset level or point level directly. In addition, centralized policy administration with distributed access enforcement mechanisms was developed to support the distributed architecture of distributed control systems and SCADA; Extend the RBAC model to include access control for software and devices: The established RBAC approach is to assign users to roles. This work extends that notion by first breaking the control system down into three layers 1) users, 2) software and 3) devices. An RBAC model is then created for each of these three layers. The result is that RBAC can be used to define machine-to-machine policy enforced via the IP security (IPsec) protocol. This highlights the potential to use RBAC for machine-to-machine connectivity within the internet of things; and Enable dynamic policy based upon the operating mode of the system: The IT environment is generally static with respect to policy. However, large cyber physical systems such as industrial controls have various
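    The extension of RBAC to measurement and control points organized in a hierarchy of control assets can be sketched as a permission lookup that walks up the asset tree, so a permission granted at the asset level is inherited by every point beneath it. All role, asset, and point names below are hypothetical, not taken from the Honeywell system.

    ```python
    # Illustrative RBAC sketch for a control-system point hierarchy.
    # Each point maps to its parent asset (assumed example hierarchy).
    asset_parent = {"boiler/temp_sensor": "boiler", "boiler/valve": "boiler"}

    # Roles hold (node, action) permissions at asset or point level.
    role_perms = {
        "operator": {("boiler", "read"), ("boiler/valve", "write")},
        "viewer": {("boiler", "read")},
    }

    def allowed(role, point, action):
        """Permit if the role holds the permission on the point itself
        or on any ancestor asset in the hierarchy."""
        node = point
        while node is not None:
            if (node, action) in role_perms.get(role, set()):
                return True
            node = asset_parent.get(node)
        return False

    print(allowed("operator", "boiler/temp_sensor", "read"))  # inherited from asset
    print(allowed("viewer", "boiler/valve", "write"))         # denied
    ```

    Centralizing `role_perms` while evaluating `allowed` at each controller mirrors the report's split between centralized policy administration and distributed enforcement.
    
    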

  12. RATS: Reactive Architectures

    National Research Council Canada - National Science Library

    Christensen, Marc

    2004-01-01

    This project had two goals: To build an emulation prototype board for a tiled architecture and to demonstrate the utility of a global inter-chip free-space photonic interconnection fabric for polymorphous computer architectures (PCA...

  13. Architectural Anthropology

    DEFF Research Database (Denmark)

    Stender, Marie

    Architecture and anthropology have always had a common focus on dwelling, housing, urban life and spatial organisation. Current developments in both disciplines make it even more relevant to explore their boundaries and overlaps. Architects are inspired by anthropological insights and methods, while recent material and spatial turns in anthropology have also brought an increasing interest in design, architecture and the built environment. Understanding the relationship between the social and the physical is at the heart of both disciplines, and they can obviously benefit from further collaboration: How can qualitative anthropological approaches contribute to contemporary architecture? And just as importantly: What can anthropologists learn from architects' understanding of spatial and material surroundings? Recent theoretical developments in anthropology stress the role of materials...

  14. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.
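    The simplest instance of the probabilistic modeling the chapter introduces is Bayes' rule applied to a single security alert: update a prior belief about an attack given the alert. The prior, true-positive, and false-positive rates below are invented for illustration.

    ```python
    # Illustrative Bayesian update for a cybersecurity alert (assumed numbers).
    p_attack = 0.01                 # prior P(attack)
    p_alert_given_attack = 0.9      # sensor true-positive rate (assumed)
    p_alert_given_benign = 0.05     # sensor false-positive rate (assumed)

    # Total probability of seeing an alert.
    p_alert = (p_alert_given_attack * p_attack
               + p_alert_given_benign * (1 - p_attack))

    # Bayes' rule: posterior P(attack | alert).
    posterior = p_alert_given_attack * p_attack / p_alert
    print(f"P(attack | alert) = {posterior:.4f}")
    ```

    Even with a 90% detection rate, the low prior keeps the posterior modest, which is why the chapter's graphical models chain many such observations together rather than relying on one.
    
    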

  15. A modular control architecture for real-time synchronous and asynchronous systems

    International Nuclear Information System (INIS)

    Butler, P.L.; Jones, J.P.

    1993-01-01

    This paper describes a control architecture for real-time control of complex robotic systems. The Modular Integrated Control Architecture (MICA), which is actually two complementary control systems, recognizes and exploits the differences between asynchronous and synchronous control. The asynchronous control system simulates shared memory on a heterogeneous network. For control information, a portable event-scheme is used. This scheme provides consistent interprocess coordination among multiple tasks on a number of distributed systems. The machines in the network can vary with respect to their native operating systems and the internal representation of numbers they use. The synchronous control system is needed for tight real-time control of complex electromechanical systems such as robot manipulators, and the system uses multiple processors at a specified rate. Both the synchronous and asynchronous portions of MICA have been developed to be extremely modular. MICA presents a simple programming model to code developers and also considers the needs of system integrators and maintainers. MICA has been used successfully in a complex robotics project involving a mobile 7-degree-of-freedom manipulator in a heterogeneous network with a body of software totaling over 100,000 lines of code. MICA has also been used in another robotics system, controlling a commercial long-reach manipulator

  16. Dictionary of machine terms

    International Nuclear Information System (INIS)

    1990-06-01

    This book contains an introduction to the dictionary of machine terms, along with a compilation committee and introductory remarks. It gives descriptions of machine terms in alphabetical order from A to Z and also includes abbreviations of machine terms, a symbol table, how to read mathematical symbols, and abbreviations and terms used in drawings.

  17. HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    machine. The machine comprises six stationary HTS field windings wound from both YBCO and BiSCCO tape, operated at liquid nitrogen temperature and enclosed in a cryostat, and a three-phase armature winding spinning at up to 300 rpm. This design has the full functionality of HTS synchronous machines. The design...

  18. Travels in Architectural History

    Directory of Open Access Journals (Sweden)

    Davide Deriu

    2016-11-01

    Full Text Available Travel is a powerful force in shaping the perception of the modern world and plays an ever-growing role within architectural and urban cultures. Inextricably linked to political and ideological issues, travel redefines places and landscapes through new transport infrastructures and buildings. Architecture, in turn, is reconstructed through visual and textual narratives produced by scores of modern travellers — including writers and artists along with architects themselves. In the age of the camera, travel is bound up with new kinds of imaginaries; private records and recollections often mingle with official, stereotyped views, as the value of architectural heritage increasingly rests on the mechanical reproduction of its images. Whilst students often learn about architectural history through image collections, the place of the journey in the formation of the architect itself shifts. No longer a lone and passionate antiquarian or an itinerant designer, the modern architect eagerly hops on buses, trains, and planes in pursuit of personal as well as professional interests. Increasingly built on a presumption of mobility, architectural culture integrates travel into cultural debates and design experiments. By addressing such issues from a variety of perspectives, this collection, a special 'Architectural Histories' issue on travel, prompts us to rethink the mobile conditions in which architecture has historically been produced and received.

  19. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
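The probability-machine idea described above, using a consistent nonparametric learner's averaged class frequencies as individual probability estimates, can be sketched in Python with scikit-learn (the paper's sample code is in R; this port, including the synthetic dataset and parameter choices, is purely illustrative):

```python
# Sketch of a "probability machine": a random forest used to estimate
# individual probabilities P(y = 1 | x) for a binary response.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-response data standing in for patient records.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# predict_proba averages per-tree class frequencies across the forest,
# which is the probability estimate the consistency results concern.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

proba = rf.predict_proba(X_test)[:, 1]  # one probability per individual
print(proba[:5])
```

As the abstract notes, no special-purpose software is needed: any implementation exposing per-case class frequencies can serve as the probability machine.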

  20. National machine guarding program: Part 1. Machine safeguarding practices in small metal fabrication businesses

    OpenAIRE

    Parker, David L.; Yamin, Samuel C.; Brosseau, Lisa M.; Xi, Min; Gordon, Robert; Most, Ivan G.; Stanley, Rodney

    2015-01-01

    Background Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. Methods The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardize...

  1. A microcomputer network for the control of digitising machines

    International Nuclear Information System (INIS)

    Seller, P.

    1981-01-01

    A distributed microcomputing network operates in the Bubble Chamber Research Group Scanning Laboratory at the Rutherford and Appleton Laboratories. A microcomputer at each digitising table buffers information, controls the functioning of the table and enhances the machine/operator interface. The system consists of fourteen microcomputers together with a VAX 11/780 computer used for data analysis. These are inter-connected via a packet switched network. This paper will describe the features of the combined system, including the distributed computing architecture and the packet switched method of communication. This paper will also describe in detail a high speed packet switching controller used as the central node of the network. This controller is a multiprocessor microcomputer system with eighteen central processor units, thirty-four direct memory access channels and thirty-four prioritised and vectored interrupt channels. This microcomputer is of general interest as a communications controller due to its totally programmable nature. (orig.)

  2. Architectural Knitted Surfaces

    DEFF Research Database (Denmark)

    Mossé, Aurélie

    2010-01-01

    WGSN reports from the Architectural Knitted Surfaces workshop recently held at Shenkar College of Engineering and Design, Tel Aviv, which offered a cutting-edge insight into interactive knitted surfaces. With the increasing role of smart textiles in architecture, the Architectural Knitted Surfaces...... workshop brought together architects and interior and textile designers to highlight recent developments in intelligent knitting. The five-day workshop was led by architects Ayelet Karmon and Mette Ramsgaard Thomsen, together with Amir Cang and Eyal Sheffer from the Knitting Laboratory, in collaboration...

  3. A Smart Gateway Architecture for Improving Efficiency of Home Network Applications

    Directory of Open Access Journals (Sweden)

    Fei Ding

    2016-01-01

    Full Text Available A smart home gateway plays an important role in the Internet of Things (IoT) system, taking responsibility for the connection between the network layer and the ubiquitous sensor network (USN) layer. Although home network applications are developing rapidly, research on open development architectures for home gateways remains scarce. This makes it difficult to extend the home network to support new applications, share services, and interoperate with other home network systems. An integrated access gateway (IAGW) is proposed in this paper, which connects upward to the operator's machine-to-machine platform (M2M P/F). In this home network scheme, the gateway provides standard interfaces for supporting various applications in home environments, ranging from on-site configuration to node and service access. In addition, communication management capability is provided by the M2M P/F. A testbed of a simple home network application system that includes the IAGW prototype was created to test its user interaction capabilities. Experimental results show that the proposed gateway provides significant flexibility for users to configure and deploy a home automation network; it can be applied to other monitoring areas and simultaneously supports multiple ubiquitous sensor networks.

  4. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats: they are suitable for presentation, but machines cannot understand the meaning of a document. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which opens new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users owing to the sheer volume of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
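The routing step the abstract describes, an inference engine deciding whether a query goes to the keyword-based or the semantic search engine, might look like the following minimal sketch. The knowledge-base check and both engine stubs are hypothetical stand-ins, not the paper's actual components:

```python
# Minimal sketch of query routing between a keyword engine and a
# semantic engine, gated by whether the knowledge base covers the query.
KNOWLEDGE_BASE = {"protein", "gene", "enzyme"}  # hypothetical ontology terms

def keyword_search(query: str) -> str:
    return f"keyword results for '{query}'"

def semantic_search(query: str) -> str:
    return f"semantic results for '{query}'"

def route_query(query: str) -> str:
    # If the ontology knows at least one query term, the semantic
    # engine can reason over it; otherwise fall back to keywords.
    terms = set(query.lower().split())
    if terms & KNOWLEDGE_BASE:
        return semantic_search(query)
    return keyword_search(query)

print(route_query("enzyme kinetics"))   # routed to the semantic engine
print(route_query("holiday pictures"))  # falls back to keyword search
```

The point of such a gate is the one the abstract raises: posing a query to the semantic engine without supporting knowledge only multiplies meaningless results.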

  5. Machine Directional Register System Modeling for Shaft-Less Drive Gravure Printing Machines

    Directory of Open Access Journals (Sweden)

    Shanhui Liu

    2013-01-01

    Full Text Available In the latest type of gravure printing machines, referred to as the shaft-less drive system, each gravure printing roller is driven by an individual servo motor, and all motors are electrically synchronized. The register error is regulated by a speed difference between the adjacent printing rollers. In order to improve the control accuracy of the register system, an accurate mathematical model of the register system should be investigated for these latest machines. Therefore, the mathematical model of the machine directional register (MDR) system is studied for multicolor gravure printing machines in this paper. According to the definition of the MDR error, the model is derived, and it is then validated by numerical simulation and by experiments carried out on the experimental setup of a four-color gravure printing machine. The results show that the established MDR system model is accurate and reliable.
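The central relation above, register error driven by the speed difference between adjacent rollers, can be illustrated with a toy discrete-time integration. This is only a first-order sketch of that idea; the integrator form and all numbers here are assumptions for illustration, not the paper's derived model:

```python
# Toy illustration: machine-directional register error accumulating as
# the integral of the speed difference between adjacent printing rollers.
dt = 0.001            # simulation time step (s)
v_upstream = 2.000    # upstream roller web speed (m/s)
v_downstream = 2.001  # downstream roller runs slightly fast (m/s)

error = 0.0
for _ in range(1000):  # 1 s of operation
    error += (v_downstream - v_upstream) * dt

print(f"register error after 1 s: {error * 1e3:.3f} mm")
```

Even a 1 mm/s mismatch integrates to a visible 1 mm misregister within a second, which is why the speed difference itself is used as the control input.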

  6. Machine learning and radiology.

    Science.gov (United States)

    Wang, Shijun; Summers, Ronald M

    2012-07-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.

  7. Manipulations of Totalitarian Nazi Architecture

    Science.gov (United States)

    Antoszczyszyn, Marek

    2017-10-01

    The paper considers the controversies surrounding German architecture designed during the Nazi period of 1933-45. This architecture is commonly criticized for lacking innovation, taste, and an elementary sense of beauty. Moreover, it has been consistently omitted from architectural manuals, probably because of its undoubted associations with a totalitarian system regarded as the most maleficent in all of history. Meanwhile, the architecture of another totalitarian system, no less sinister than the Nazi one, is not stigmatized with the same vigor: Socrealism (Socialist Realist) architecture, developed especially in Eastern Europe and reportedly sharing many similarities with Nazi architecture. Socrealism totalitarian architecture was never condemned like its Nazi counterpart, probably due to politically manipulated propaganda that influenced postwar public opinion. This observation leads to the reflection that perhaps, in the same propagandistic way, some values of Nazi architecture are still consciously concealed in order to hide the fact that certain rules used by Nazi German architects were also consciously applied after the war, especially the manipulations that Nazi architecture allegedly consisted of. The paper provides definitions of totalitarian manipulations as well as the ideological assumptions behind their implementation. Finally, a register of confirmed manipulations is provided, supported by a photographic case study.

  8. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  9. Design and evaluation of cellular power converter architectures

    Science.gov (United States)

    Perreault, David John

    Power electronic technology plays an important role in many energy conversion and storage applications, including machine drives, power supplies, frequency changers and UPS systems. Increases in performance and reductions in cost have been achieved through the development of higher performance power semiconductor devices and integrated control devices with increased functionality. Manufacturing techniques, however, have changed little. High power is typically achieved by paralleling multiple die in a single package, producing the physical equivalent of a single large device. Consequently, both the device package and the converter in which the device is used continue to require large, complex mechanical structures, and relatively sophisticated heat transfer systems. An alternative to this approach is the use of a cellular power converter architecture, which is based upon the parallel connection of a large number of quasi-autonomous converters, called cells, each of which is designed for a fraction of the system rating. The cell rating is chosen such that single-die devices in inexpensive packages can be used, and the cell fabricated with an automated assembly process. The use of quasi-autonomous cells means that system performance is not compromised by the failure of a cell. This thesis explores the design of cellular converter architectures with the objective of achieving improvements in performance, reliability, and cost over conventional converter designs. New approaches are developed and experimentally verified for highly distributed control of cellular converters, including methods for ripple cancellation and current-sharing control. The performance of these techniques is quantified, and their dynamics are analyzed. Cell topologies suitable to the cellular architecture are investigated, and their use for systems in the 5-500 kVA range is explored. The design, construction, and experimental evaluation of a 6 kW cellular switched-mode rectifier is also addressed.

  10. The Knife Machine. Module 15.

    Science.gov (United States)

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    This module on the knife machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers one topic: performing special operations on the knife machine (a single needle or multi-needle machine which sews and cuts at the same time). These components are provided: an introduction, directions, an objective,…

  11. GAUDI-Architecture design document

    CERN Document Server

    Mato, P

    1998-01-01

    98-064 This document is the result of the architecture design phase for the LHCb event data processing applications project. The architecture of the LHCb software system includes its logical and physical structure, which has been forged by all the strategic and tactical decisions applied during development. The strategic decisions should be made explicitly, with consideration of the trade-offs of each alternative. The other purpose of this document is to serve as the main material for the scheduled architecture review that will take place in the next weeks. The architecture review will allow us to identify the weaknesses and strengths of the proposed architecture, and we hope to obtain a list of suggested changes to improve it, all well before the system is realized in code. It is in our interest to identify possible problems at the architecture design phase of the software project, before much of the software is implemented. Strategic decisions must be cross checked caref...

  12. Constructing Support Vector Machine Ensembles for Cancer Classification Based on Proteomic Profiling

    Institute of Scientific and Technical Information of China (English)

    Yong Mao; Xiao-Bo Zhou; Dao-Ying Pi; You-Xian Sun

    2005-01-01

    In this study, we present a constructive algorithm for training cooperative support vector machine ensembles (CSVMEs). CSVME combines ensemble architecture design with cooperative training for individual SVMs in ensembles. Unlike most previous studies on training ensembles, CSVME puts emphasis on both accuracy and collaboration among individual SVMs in an ensemble. A group of SVMs selected on the basis of recursive classifier elimination is used in CSVME, and the number of the individual SVMs selected to construct CSVME is determined by 10-fold cross-validation. This kind of SVME has been tested on two ovarian cancer datasets previously obtained by proteomic mass spectrometry. By combining several individual SVMs, the proposed method achieves better performance than the SVME of all base SVMs.

  13. Machine learning based global particle identification algorithms at the LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  14. Architecture at Hydro-Quebec. L'architecture a Hydro-Quebec

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    Architecture at Hydro-Quebec is concerned not only with combining function and aesthetics in designing buildings and other structures for an electrical utility, but also with satisfying technical and administrative needs and helping to solve contemporary problems such as the rational use of energy. Examples are presented of Hydro-Quebec's architectural accomplishments in the design of hydroelectric power stations and their surrounding landscapes, thermal power stations, transmission substations, research and testing facilities, and administrative buildings. It is shown how some buildings are designed to adapt to local environments and to conserve energy. The utility's policy of conserving installations of historic value, such as certain pre-1930 power stations, is illustrated, and aspects of its general architectural policy are outlined. 20 figs.

  15. Restrictions of process machine retooling at machine-building enterprises

    Directory of Open Access Journals (Sweden)

    Kuznetsova Elena

    2017-01-01

    Full Text Available The competitiveness of the national economy depends on the technological level of the production equipment at machine-building enterprises. Today in Russia there are objective and subjective restrictions on forming an optimal policy for manufacturing equipment renewal. The analysis of the dynamics of the manufacturing equipment age structure in the Russian machine-building complex indicates intensifying negative tendencies: an increase in equipment service life, a reduction in the share of up-to-date equipment, and a drop in its use efficiency. The article investigates and classifies the main restrictions on the manufacturing equipment renewal process, such as regulatory and legislative, financial, organizational, and competency-based restrictions. The economic consequences of the revealed restrictions on the activity of machine-building enterprises are shown.

  16. Mechanical design of machine components

    CERN Document Server

    Ugural, Ansel C

    2015-01-01

    Mechanical Design of Machine Components, Second Edition strikes a balance between theory and application, and prepares students for more advanced study or professional practice. It outlines the basic concepts in the design and analysis of machine elements using traditional methods, based on the principles of mechanics of materials. The text combines the theory needed to gain insight into mechanics with numerical methods in design. It presents real-world engineering applications, and reveals the link between basic mechanics and the specific design of machine components and machines. Divided into three parts, this revised text presents basic background topics, deals with failure prevention in a variety of machine elements and covers applications in design of machine components as well as entire machines. Optional sections treating special and advanced topics are also included. Key Features of the Second Edition: Incorporates material that has been completely updated with new chapters, problems, practical examples...

  17. Soft computing in machine learning

    CERN Document Server

    Park, Jooyoung; Inoue, Atsushi

    2014-01-01

    As users and consumers demand smarter devices, intelligent systems are being revolutionized by machine learning. Machine learning, as part of intelligent systems, is already one of the most critical components in everyday tools, ranging from search engines and credit card fraud detection to stock market analysis. Machines can be trained to perform certain tasks so that they automatically detect, diagnose, and solve a variety of problems. Intelligent systems have made rapid progress in advancing the state of the art in machine learning based on smart and deep perception. Using machine learning, intelligent systems find wide application in automated speech recognition, natural language processing, medical diagnosis, bioinformatics, and robot locomotion. This book aims to introduce how to treat a substantial amount of data, to teach machines, and to improve decision-making models. The book specializes in the development of advanced intelligent systems through machine learning. It...

  18. Towards a Media Architecture

    DEFF Research Database (Denmark)

    Ebsen, Tobias

    2010-01-01

    This text explores the concept of media architecture as a phenomenon of visual culture that describes the use of screen-technology in new spatial configurations in practices of architecture and art. I shall argue that this phenomenon is not necessarily a revolutionary new approach, but rather...... a result of conceptual changes in both modes visual representation and in expressions of architecture. These are changes the may be described as an evolution of ideas and consequent experiments that can be traced back to changes in the history of art and the various styles and ideologies of architecture....

  19. Mankind, machines and people

    Energy Technology Data Exchange (ETDEWEB)

    Hugli, A

    1984-01-01

    The following questions are addressed: Is there a difference between machines and men, between human communication and communication with machines? Will we ever reach the point when the dream of artificial intelligence becomes a reality? Will thinking machines be able to replace the human spirit in all its aspects? Social consequences and philosophical aspects are addressed. 8 references.

  20. Architectural Engineers

    DEFF Research Database (Denmark)

    Petersen, Rikke Premer

    engineering is addressed from two perspectives – as an educational response and an occupational constellation. Architecture and engineering are two of the traditional design professions, and they frequently meet in the occupational setting, but at educational institutions they remain largely estranged....... The paper builds on a multi-sited study of an architectural engineering program at the Technical University of Denmark and an architectural engineering team within an international engineering consultancy based in Denmark. They are both responding to new tendencies within the building industry where...... the roles of engineers and architects increasingly overlap during the design process, but their approaches reflect different perceptions of the consequences. The paper discusses some of the challenges that design education, not only within engineering, is facing today: young designers must be equipped...