WorldWideScience

Sample records for finding optimum hardware

  1. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based compression after reception lessens computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated: optimal SNR is obtained using only four channels, and a three-channel Eigencoil achieved superior sum-of-squares SNR over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
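
    The principle behind the Eigencoil combiner can be illustrated numerically: noise-whitening the channels via the eigendecomposition of the noise covariance makes a simple sum-of-squares reconstruction SNR-optimal, and truncating to the dominant modes preserves most of that SNR. The Python sketch below uses randomly generated sensitivities and noise covariance (hypothetical values, not the paper's hardware weights):

        import numpy as np

        rng = np.random.default_rng(1)
        n_coils, n_vox = 8, 2000

        # Hypothetical complex coil sensitivities over voxels, and a correlated noise covariance.
        S = rng.normal(size=(n_coils, n_vox)) + 1j * rng.normal(size=(n_coils, n_vox))
        A = rng.normal(size=(n_coils, n_coils))
        Rn = A @ A.T + n_coils * np.eye(n_coils)

        # Noise-whitening: after whitening, root-sum-of-squares combination is SNR-optimal,
        # and the norm of each whitened sensitivity vector is proportional to the optimal SNR.
        w, V = np.linalg.eigh(Rn)
        Sw = (V / np.sqrt(w)).conj().T @ S

        # Rank fixed linear combinations (virtual channels) by whitened signal energy;
        # a hardware combiner would implement the leading modes before the receiver.
        _, modes = np.linalg.eigh(Sw @ Sw.conj().T)      # eigenvectors, ascending order
        for k in (8, 4, 3):
            P = modes[:, -k:]                            # keep the k dominant modes
            kept = np.linalg.norm(P.conj().T @ Sw, axis=0)
            full = np.linalg.norm(Sw, axis=0)
            print(f"{k} channels retain {(kept / full).mean():.1%} of the optimal SNR")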

  2. Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design

    Science.gov (United States)

    Wu, K. Chauncey; Cerro, Jeffrey A.

    2010-01-01

    During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
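
    As a hedged illustration of how such a factor is computed, the sketch below divides a measured panel weight by the ideal weight of an orthogrid panel estimated from minimum gauge dimensions; all numbers are hypothetical, not the actual External Tank values:

        # Minimal sketch of a non-optimum factor (NOF) computation with a coarse
        # weights model; dimensions in mm, weights in kg, values purely illustrative.
        RHO = 2700e-9          # approximate Al-Li 2195 density, kg/mm^3

        def orthogrid_panel_weight(width, height, t_skin, t_blade, h_blade,
                                   pitch_ax, pitch_circ):
            """Ideal panel weight: skin plus axial and circumferential blade stiffeners."""
            skin = width * height * t_skin
            blades_ax = (width / pitch_ax) * height * t_blade * h_blade
            blades_circ = (height / pitch_circ) * width * t_blade * h_blade
            return RHO * (skin + blades_ax + blades_circ)

        predicted = orthogrid_panel_weight(2000, 3000, 2.5, 2.0, 30, 150, 150)
        measured = 75.0        # hypothetical measured panel weight, kg
        print("non-optimum factor:", measured / predicted)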

  3. MORPION: a fast hardware processor for straight line finding in MWPC

    International Nuclear Information System (INIS)

    Mur, M.

    1980-02-01

    A fast hardware processor for straight line finding in MWPCs has been built at Saclay and successfully operated in the NA3 experiment at CERN. We give the motivation for building this processor and describe the hardware implementation of the line finding algorithm. Finally, its use and performance in NA3 are described.

  4. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

    Background: In risk-benefit assessment of food and nutrients, several studies so far have focused on comparison of two scenarios to weigh the health effect against each other. One obvious next step is finding the optimum scenario that provides maximum net health gains. Aim: This paper aims to show a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find an optimum scenario that provides maximum net health gain in health risk-benefit assessment of dietary exposure as expressed by serum vitamin D level. With regard to the vitamin D assessment, a considerable health gain is observed due to the reduction of risk of other cause mortality, fall and hip fractures when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods, especially with regards ...

  5. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

    Background: In risk-benefit assessment of food and nutrients, several studies so far have focused on comparison of two scenarios to weigh the health effect against each other. One obvious next step is finding the optimum scenario that provides maximum net health gains. Aim: This paper aims to show a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find an optimum scenario that provides maximum net health gain in health risk-benefit assessment of dietary exposure as expressed by serum vitamin D level. With regard to the vitamin D assessment, a considerable health gain is observed due to the reduction of risk of other cause mortality, fall and hip fractures when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods, especially with regards ...
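
    The scenario comparison reduces to summing DALYs across health endpoints for each scenario and selecting the one with the largest net gain relative to the reference. A minimal sketch follows; the endpoint names mirror the abstract, but every number is hypothetical and the QALIBRA software is not reproduced:

        # Hypothetical DALYs per 100,000 person-years by endpoint for each scenario.
        scenarios = {
            "reference":       {"other-cause mortality": 420, "hip fracture": 95, "falls": 60, "adverse effects": 0},
            "moderate intake": {"other-cause mortality": 390, "hip fracture": 80, "falls": 50, "adverse effects": 2},
            "high intake":     {"other-cause mortality": 385, "hip fracture": 78, "falls": 48, "adverse effects": 15},
        }

        reference_total = sum(scenarios["reference"].values())
        # Net health gain of a scenario = DALYs averted relative to the reference.
        gains = {name: reference_total - sum(endpoints.values())
                 for name, endpoints in scenarios.items()}
        best = max(gains, key=gains.get)
        print(f"optimum scenario: {best} (net gain {gains[best]} DALYs per 100,000)")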

  6. Finding optimum airfoil shape to get maximum aerodynamic efficiency for a wind turbine

    Science.gov (United States)

    Sogukpinar, Haci; Bozkurt, Ismail

    2017-02-01

    In this study, the aerodynamic performance of the S-series wind turbine airfoil S 825 is investigated to find the optimum angle of attack. The aerodynamic performance calculations are carried out with a Computational Fluid Dynamics (CFD) method using a finite-volume approximation of the Reynolds-Averaged Navier-Stokes (RANS) equations. The lift and pressure coefficients and the lift-to-drag ratio of airfoil S 825 are analyzed with the SST turbulence model, and the results are then cross-checked against wind tunnel data to verify the precision of the CFD approximation. The comparison indicates that the SST turbulence model used in this study can predict the aerodynamic properties of the wind blade.
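
    Once a CFD sweep has produced lift and drag coefficients over a range of angles of attack, finding the optimum angle reduces to maximizing the lift-to-drag ratio. A minimal sketch with hypothetical polar data (not the S 825 results):

        import numpy as np

        # Illustrative (alpha, CL, CD) table of the kind a RANS/SST sweep would produce.
        alpha = np.array([0, 2, 4, 6, 8, 10, 12, 14])          # angle of attack, deg
        cl    = np.array([0.30, 0.55, 0.80, 1.02, 1.20, 1.32, 1.38, 1.35])
        cd    = np.array([0.009, 0.010, 0.012, 0.015, 0.019, 0.026, 0.038, 0.060])

        ld = cl / cd                      # aerodynamic efficiency at each angle
        i = np.argmax(ld)
        print(f"optimum angle of attack ~ {alpha[i]} deg, L/D = {ld[i]:.1f}")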

  7. Finding an optimum immuno-histochemical feature set to distinguish benign phyllodes from fibroadenoma.

    Science.gov (United States)

    Maity, Priti Prasanna; Chatterjee, Subhamoy; Das, Raunak Kumar; Mukhopadhyay, Subhalaxmi; Maity, Ashok; Maulik, Dhrubajyoti; Ray, Ajoy Kumar; Dhara, Santanu; Chatterjee, Jyotirmoy

    2013-05-01

    Benign phyllodes and fibroadenoma are two well-known breast tumors with remarkable diagnostic ambiguity. The present study is aimed at determining an optimum set of immuno-histochemical features to distinguish them by analyzing important observations on the expression of important genes in fibro-glandular tissue. Immuno-histochemically, the expression of p63 and α-SMA in myoepithelial cells and of collagen I, III and CD105 in the stroma of the tumors and their normal counterpart were studied. Semi-quantified features were analyzed primarily by ANOVA and ranked through F-scores to understand the relative importance of groups of features in discriminating the three classes, followed by reduction of the F-score-ranked feature space dimension and application of inter-class Bhattacharyya distances to distinguish the tumors with an optimum set of features. All but one of the thirteen studied features differed significantly across the three study classes. F-score ranking of the features revealed the highest discriminative potential in collagen III (initial region). Arranging the feature space by F-score and applying the Bhattacharyya distance gave rise to a feature set of lower dimension which can discriminate benign phyllodes and fibroadenoma effectively. The work definitively separated normal breast, fibroadenoma and benign phyllodes through an optimal set of immuno-histochemical features, which are useful not only to address the diagnostic ambiguity of these tumors but also as an indication of malignant potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
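
    A minimal sketch of the two-stage selection the abstract describes: rank features by one-way ANOVA F-score, then grow the feature set in rank order until the Bhattacharyya distance (under a Gaussian assumption) between the two ambiguous classes reaches a chosen target. The threshold and data layout are assumptions, not the paper's exact procedure:

        import numpy as np

        def f_score(x, y):
            """One-way ANOVA F statistic for one feature x across the classes in y."""
            groups = [x[y == c] for c in np.unique(y)]
            grand = x.mean()
            between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
            within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(x) - len(groups))
            return between / within

        def bhattacharyya(xa, xb):
            """Bhattacharyya distance between two classes under a Gaussian assumption."""
            ma, mb = xa.mean(axis=0), xb.mean(axis=0)
            ca = np.atleast_2d(np.cov(xa, rowvar=False))
            cb = np.atleast_2d(np.cov(xb, rowvar=False))
            c = (ca + cb) / 2
            d = ma - mb
            return (d @ np.linalg.solve(c, d) / 8
                    + 0.5 * np.log(np.linalg.det(c)
                                   / np.sqrt(np.linalg.det(ca) * np.linalg.det(cb))))

        def optimum_feature_set(X, y, class_a, class_b, target=1.5):
            """Grow an F-score-ranked feature set until the two ambiguous classes
            are separated by the required Bhattacharyya distance (target assumed)."""
            order = np.argsort([f_score(X[:, j], y) for j in range(X.shape[1])])[::-1]
            for k in range(1, len(order) + 1):
                feats = order[:k]
                db = bhattacharyya(X[np.ix_(y == class_a, feats)],
                                   X[np.ix_(y == class_b, feats)])
                if db >= target:
                    return feats
            return order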

  8. Do preliminary chest X-ray findings define the optimum role of pulmonary scintigraphy in suspected pulmonary embolism?

    International Nuclear Information System (INIS)

    Forbes, Kirsten P.N.; Reid, John H.; Murchison, John T.

    2001-01-01

    AIM: To investigate whether preliminary chest radiograph (CXR) findings can define the optimum role of lung scintigraphy in subjects investigated for pulmonary embolism (PE). MATERIALS AND METHODS: The CXR and scintigraphy findings from 613 consecutive subjects investigated for suspected PE were retrieved from a radiological database. Of 393 patients with abnormal CXRs, a subgroup of 238 was examined and individual radiographic abnormalities were characterized. CXR findings were related to the scintigraphy result. RESULTS: Scintigraphy was normal in 286 subjects (47%), non-diagnostic in 207 (34%) and high probability for PE in 120 (20%). In 393 subjects (64%) the preliminary CXR was abnormal, and 188 (48%) of scintigrams in this group were non-diagnostic. Individual radiographic abnormalities were not associated with significantly different scintigraphic outcomes. If the preliminary CXR was normal (36%), the proportion of non-diagnostic scintigrams decreased to 9% (19 of 220 subjects) (P < 0.05). CONCLUSION: In subjects investigated for PE, an abnormal CXR increases the prevalence of non-diagnostic scintigrams. A normal pre-test CXR is more often associated with a definitive (normal or high probability) scintigram result. The chest radiograph may be useful in deciding the optimum sequence of investigations.

  9. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Science.gov (United States)

    Zhang, Zheng

    2017-10-01

    Radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding therefore depends significantly on the effectiveness of the digital signal processing algorithms. Direction of Arrival (DOA) algorithms are used to estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm resolves the radio directions well.
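
    A compact numerical sketch of the MUSIC pseudospectrum for a uniform linear array: eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors over candidate angles; peaks of the pseudospectrum indicate directions of arrival. This is an illustration of the algorithm, not the hardware implementation:

        import numpy as np

        def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
            """MUSIC pseudospectrum for a uniform linear array.

            snapshots: (n_antennas, n_snapshots) complex array; d: element spacing
            in wavelengths. Peaks of the returned spectrum mark directions of arrival."""
            n = snapshots.shape[0]
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
            _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
            En = eigvecs[:, : n - n_sources]             # noise subspace
            spectrum = np.empty(angles.size)
            for i, theta in enumerate(np.deg2rad(angles)):
                a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(theta))  # steering vector
                spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, spectrum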

  10. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    International Nuclear Information System (INIS)

    Cieri, D.

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented to process the tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)
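
    The core of such a track finder can be sketched in a few lines: in the r-phi plane a track is approximately phi = phi0 + c·r, so each hit votes along a line in the (phi0, c) parameter space and local maxima of the accumulator are track candidates. The binning and threshold below are arbitrary illustrations, not the MP7 firmware:

        import numpy as np

        def hough_track_finder(hits, n_phi=64, n_c=64, c_max=0.005, threshold=5):
            """Minimal r-phi Hough transform: hits are (r, phi) pairs; a track with
            azimuth phi0 and curvature-like parameter c ~ q/pT satisfies
            phi ~ phi0 + c*r, so each hit fills a line of accumulator cells."""
            acc = np.zeros((n_phi, n_c), dtype=int)
            phi0_bins = np.linspace(-np.pi, np.pi, n_phi)
            c_bins = np.linspace(-c_max, c_max, n_c)
            for r, phi in hits:
                for j, c in enumerate(c_bins):
                    phi0 = phi - c * r
                    i = int((phi0 + np.pi) / (2 * np.pi) * (n_phi - 1) + 0.5)
                    if 0 <= i < n_phi:
                        acc[i, j] += 1
            # Candidate tracks: accumulator cells collecting enough aligned hits.
            cands = [(phi0_bins[i], c_bins[j])
                     for i, j in zip(*np.nonzero(acc >= threshold))]
            return acc, cands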

  11. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    Cieri, D.

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented to process the tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach.

  12. The ATLAS FTK Auxiliary Card: A Highly Functional VME Rear Transition Module for a Hardware Track Finding Processing Unit

    CERN Document Server

    Alison, John; The ATLAS collaboration; Bogdan, Mircea; Bryant, Patrick; Cheng, Yangyang; Krizka, Karol; Shochet, Mel; Tompkins, Lauren; Webster, Jordan S

    2014-01-01

    The ATLAS Fast TracKer is a hardware-based charged particle track finder for the High Level Trigger system of the ATLAS Experiment at the LHC. Using a multi-component system, it finds charged particle trajectories of 1 GeV/c and greater using data from the full ATLAS silicon tracking detectors at a rate of 100 kHz. Pattern recognition and preliminary track fitting are performed by VME Processing Units consisting of an Associative Memory Board containing custom associative memory chips for pattern recognition, and the Auxiliary Card (AUX), a powerful rear transition module which formats the data for pattern recognition and performs linearized fits on track candidates. We report on the design and testing of the AUX, which utilizes six FPGAs to process up to 32 Gbps of hit data, as well as fit the helical trajectory of one track candidate per nanosecond through a highly parallel track fitting architecture. Both the board and firmware design will be discussed, as well as the performance observed in tests at CERN ...

  13. OPTIMUM PROSESSENTRERING [OPTIMUM PROCESS CENTRING]

    Directory of Open Access Journals (Sweden)

    K. Adendorff

    2012-01-01

    ENGLISH ABSTRACT: The paper derives an expression for optimum process centring for a given design specification and spoilage and/or rework costs.

    AFRIKAANSE OPSOMMING (translated): The problem of process centring for a given design specification and rework and/or scrap costs is treated.
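
    The flavour of such a derivation can be sketched numerically: for a normally distributed process with a scrap cost below the lower limit and a rework cost above the upper limit (an assumed cost structure, not necessarily the paper's), the optimum mean minimizes the expected cost per unit and shifts toward the cheaper failure mode:

        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        LSL, USL, SIGMA = 8.0, 12.0, 1.0      # spec limits and process std (hypothetical)
        C_SCRAP, C_REWORK = 5.0, 1.0          # cost per unit below LSL / above USL

        def expected_cost(mu):
            # Units below LSL are scrapped, units above USL are reworked (assumption).
            return (C_SCRAP * norm.cdf((LSL - mu) / SIGMA)
                    + C_REWORK * norm.sf((USL - mu) / SIGMA))

        res = minimize_scalar(expected_cost, bounds=(LSL, USL), method="bounded")
        print(f"optimum centre: {res.x:.2f} (shifted toward the cheaper rework side)")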

  14. Evaluation of a binary optimization approach to find the optimum locations of energy storage devices in a power grid with stochastically varying loads and wind generation

    Science.gov (United States)

    Dar, Zamiyad

    The prices in the electricity market change every five minutes. The prices in peak demand hours can be four or five times higher than the prices in normal off-peak hours. Renewable energy such as wind power has zero marginal cost, and a large percentage of wind energy in a power grid can reduce the price significantly. The variability of wind power prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high-price hours. In a large and complex power grid, there are many candidate locations for the installation of a storage device. Storage device owners prefer to install their device at locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, power grid, competing generators and transmission system. The publicly available data from the system operator usually consist of Locational Marginal Prices (LMP), load, reserve prices and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using the grid, transmission or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only the publicly available market pricing data, such as LMPs and reserve prices. We consider constraints arising from storage device operation limitations in our objective function. We use binary optimization and a branch-and-bound method to optimize the operation of a storage device at a given location to earn maximum profit. We use two different versions of our method and optimize the profitability of a storage unit at each location in a 36-bus model of the north-eastern United States and south-eastern Canada for four representative days representing the four seasons of a year. Finally, we compare our results from the two versions of our ...
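
    The per-location subproblem — scheduling charging and discharging against hourly prices — can be sketched as a small dynamic program over a discretized state of charge; the optimum location is then the bus whose LMP series yields the highest profit. Capacities, efficiency and prices below are hypothetical, and the thesis's exact binary-optimization formulation is not reproduced:

        import numpy as np

        def best_profit(prices, e_max=4, p_rate=1.0, eta=0.9):
            """Max arbitrage profit for one storage device at one bus.

            State = stored energy in integer units of p_rate MWh; each hour we may
            idle, charge one unit (buying p_rate/eta from the grid), or discharge
            one unit (selling p_rate*eta back)."""
            NEG = -np.inf
            v = np.full(e_max + 1, NEG)
            v[0] = 0.0                                   # device starts empty
            for price in prices:
                nv = np.full(e_max + 1, NEG)
                for s in range(e_max + 1):
                    if v[s] == NEG:
                        continue
                    nv[s] = max(nv[s], v[s])             # idle
                    if s < e_max:                        # charge
                        nv[s + 1] = max(nv[s + 1], v[s] - price * p_rate / eta)
                    if s > 0:                            # discharge
                        nv[s - 1] = max(nv[s - 1], v[s] + price * p_rate * eta)
                v = nv
            return v.max()

        # Hypothetical hourly LMPs ($/MWh) at two candidate buses; pick the best bus.
        lmps = {"bus 12": [22, 18, 15, 20, 45, 80, 95, 60],
                "bus 27": [30, 29, 31, 28, 33, 35, 34, 32]}
        print(max(lmps, key=lambda b: best_profit(lmps[b])))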

  15. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to ...

  16. Nondissipative optimum charge regulator

    Science.gov (United States)

    Rosen, R.; Vitebsky, J. N.

    1970-01-01

    An optimum charge regulator provides constant-level charge/discharge control of storage batteries. Basic power transfer and control is performed by a solar panel coupled to the battery through a power switching circuit. The optimum controller senses the battery current and modifies the duty cycle of the switching circuit to maximize the current available to the battery.
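
    The report does not spell out the control law, but one common way to realize such a maximizing controller is duty-cycle hill-climbing ("perturb and observe"), sketched below; read_battery_current and set_duty are hypothetical hardware-access callbacks:

        def optimum_duty_controller(read_battery_current, set_duty, n_steps=1000,
                                    duty=0.5, step=0.01):
            """Hill-climbing duty-cycle control: keep stepping the duty cycle in the
            direction that increases battery current, reverse when the current drops."""
            set_duty(duty)
            prev_current = read_battery_current()
            direction = 1
            for _ in range(n_steps):
                duty = min(max(duty + direction * step, 0.0), 1.0)
                set_duty(duty)
                current = read_battery_current()
                if current < prev_current:
                    direction = -direction       # overshot the maximum: back off
                prev_current = current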

  17. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has become a hot topic recently, with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed up with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined this area better understand the challenges and tasks within the hardware security domain, and to help both academia and industry investigate countermeasures and solutions to hardware security problems, we introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted, through which the future trend will be elaborated, making this survey paper a good reference for continuing research efforts in this area.

  18. Optimum design of steel structures

    CERN Document Server

    Farkas, József

    2013-01-01

    This book helps designers and manufacturers to select and develop the most suitable and competitive steel structures, which are safe, fit for production and economic. An optimum design system is used to find the best characteristics of structural models, which guarantee the fulfilment of design and fabrication requirements and minimize the cost function. Realistic numerical models are used as the main components of industrial steel structures. Chapter 1 contains some experiences with the optimum design of steel structures. Chapter 2 treats some newer mathematical optimization methods. Chapter 3 gives formulae for fabrication times and costs. Chapter 4 deals with beams and columns and summarizes the Eurocode rules for design. Chapter 5 deals with the design of tubular trusses. Chapter 6 gives the design of frame structures and fire-resistant design rules for a frame. In Chapter 7, some minimum cost design problems of stiffened and cellular plates and shells are worked out for cases of different stiffenings and loads ...

  19. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions - and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor Andrew Robinson and Rasp...

  20. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  1. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  2. On Optimum Stratification

    OpenAIRE

    M. G. M. Khan; V. D. Prasad; D. K. Rao

    2014-01-01

    In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using a dynamic programming technique.
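
    The multistage decision structure can be sketched directly as a dynamic program: discretize the sorted auxiliary variable and minimize a Neyman-allocation objective over the stratum boundaries. This is a generic illustration, not the authors' exact NLPP formulation:

        import numpy as np

        def optimum_strata(x, n_strata):
            """Choose stratum boundaries on the auxiliary variable x minimizing the
            Neyman-allocation objective sum_h W_h * S_h (stratum weight times std)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            cost = lambda i, j: ((j - i) / n) * x[i:j].std()
            dp = np.full((n_strata + 1, n + 1), np.inf)
            cut = np.zeros((n_strata + 1, n + 1), dtype=int)
            dp[0, 0] = 0.0
            for h in range(1, n_strata + 1):
                for j in range(h, n + 1):
                    for i in range(h - 1, j):
                        c = dp[h - 1, i] + cost(i, j)
                        if c < dp[h, j]:
                            dp[h, j], cut[h, j] = c, i
            bounds, j = [], n
            for h in range(n_strata, 0, -1):     # backtrack the optimum cuts
                i = cut[h, j]
                bounds.append((x[i], x[j - 1]))
                j = i
            return bounds[::-1], dp[n_strata, n]

        # Example: a uniform auxiliary variable split into 4 optimum strata.
        bounds, obj = optimum_strata(np.random.default_rng(0).uniform(0, 1, 200), 4)
        print(bounds, obj)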

  3. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  4. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  5. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to the various threats faced during design and fabrication by today's integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or "IC Overproduction," insertion of malicious circuits, referred to as "Hardware Trojans," which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  6. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  7. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  8. The optimum spanning catenary cable

    Science.gov (United States)

    Wang, C. Y.

    2015-03-01

    A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
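
    The 1.258 factor can be verified numerically. For a catenary y = a·cosh(x/a) spanning a horizontal distance d between supports at the same level, the support tension is T = w·a·cosh(d/2a), with w the weight per unit length; writing u = d/2a and minimizing T over the cable length gives the condition tanh u = 1/u, and the optimum length-to-span ratio is sinh(u)/u:

        import numpy as np
        from scipy.optimize import brentq

        # Minimize T(u) = w*(d/2u)*cosh(u) over u: dT/du = 0 reduces to tanh(u) = 1/u.
        u = brentq(lambda u: np.tanh(u) - 1.0 / u, 0.5, 3.0)
        ratio = np.sinh(u) / u            # optimum cable length / span
        print(f"u = {u:.4f}, optimum length/span = {ratio:.3f}")   # ~1.258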

  9. Choosing the optimum burnup

    International Nuclear Information System (INIS)

    Geller, L.; Goldstein, L.; Franks, W.A.

    1986-01-01

    This paper reviews some of the considerations utilities must evaluate when going to higher discharge burnups. The advantages and disadvantages of higher discharge burnups are described, as well as a consistent approach for evaluating optimum discharge burnup and its comparison to current practice. When an analysis is performed over the life of the plant, the design of the terminal cycles has significant impact on the lifetime savings from higher burnups. Designs for high burnup cycles have a greater average inventory value in the core. As one goes to higher burnup, there is a greater likelihood of discarding a larger value in unused fuel unless the terminal cycles are designed carefully. This effect can be large enough in some cases to wipe out the lifetime cost savings relative to operating with a higher discharge burnup cycle

  10. Optimum coolant chemistry in BWRs

    International Nuclear Information System (INIS)

    Lin, C.C.; Cowan, R.L.; Kiss, E.

    2004-01-01

    LWR water chemistry parameters are directly or indirectly related to the plant's operational performance and account for a significant amount of Operation and Maintenance (O and M) costs. Obvious impacts are the operational costs associated with water treatment, monitoring and the associated radwaste generation. Less obvious is the important role water chemistry plays in the magnitude of drywell shutdown dose rates, fuel corrosion performance and (probably most importantly) materials degradation, such as stress corrosion cracking of piping and Reactor Pressure Vessel (RPV) internal components. To improve the operational excellence of the BWR and to minimize the impact of water chemistry on O and M costs, General Electric has developed the concept of Optimum Water Chemistry (OWC). The 'best practices' and latest technology findings from the U.S., Asia and Europe are integrated into the suggested OWC Specification. This concept, together with cost-effective ways to meet the requirements, is discussed. (author)

  11. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce and a Java virtual machine (JVM) without an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first class Java objects, where device registers are represented by object fields. Access to those registers is safe, as Java's type system regulates it. The access is also fast, as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects ...

  12. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  13. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for the transmission of the General Machine Timing required by the CERN accelerators. In addition, the paper shows the various jumper options used to customise the card, which is compliant with the VMEbus standard.

  14. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  15. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  16. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the ²³⁵U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the ²³⁵U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes RRFC hardware, including detectors, electronics, and performance characteristics.

  17. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists

  18. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  19. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    This book covers the system board, including memory, performance, the system timer, the system clock and specifications; the coprocessor, including its programming interface and hardware interface; the power supply, including input and output, protection of the DC outputs and the Power Good signal; an explanation of the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and, on the application side, the communication and compatibility of the IBM personal computer, multitasking, and code for distinguishing between systems.

  20. Optimum study designs.

    Science.gov (United States)

    Gu, C; Rao, D C

    2001-01-01

    Because simplistic designs will lead to prohibitively large sample sizes, the optimization of genetic study designs is critical for successfully mapping genes for complex diseases. Creative designs are necessary for detecting and amplifying the usually weak signals for complex traits. Two important outcomes of a study design--power and resolution--are implicitly tied together by the principle of uncertainty. Overemphasis on either one may lead to suboptimal designs. To achieve optimality for a particular study, therefore, practical measures such as cost-effectiveness must be used to strike a balance between power and resolution. In this light, the myriad of factors involved in study design can be checked for their effects on the ultimate outcomes, and the popular existing designs can be sorted into building blocks that may be useful for particular situations. It is hoped that imaginative construction of novel designs using such building blocks will lead to enhanced efficiency in finding genes for complex human traits.

  1. Optimum Design of Plasma Focus

    International Nuclear Information System (INIS)

    Ramos, Ruben; Gonzalez, Jose; Clausse, Alejandro

    2000-01-01

    The optimum design of Plasma Focus devices is presented, based on a lumped parameter model of the MHD equations. Maps in the design parameter space are obtained, which determine the length and deuterium pressure required to produce a given neutron yield. Sensitivity analyses of the main effective numbers (sweeping efficiencies) were performed, and finally the optimum values were determined in order to set a basis for the conceptual design.

  2. Problem of determining optimum geological and technical measures

    Energy Technology Data Exchange (ETDEWEB)

    Osipov, G N; Roste, Z A; Salimzhanov, E S

    1968-01-01

    This article is concerned with the mathematical simulation of oilfield operation, particularly the use of linear programming to determine optimum conditions for exploitation of a field. The basic approach is to define the field operation by a series of equations, apply boundary conditions and, through an iterative computer technique, find optimum operating conditions. Application of the method to the Tuimazy field is illustrated.
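
    As a hedged illustration of the linear-programming approach (the article's actual equations for the Tuimazy field are not reproduced), a toy model chooses production rates for two well groups subject to capacity constraints:

        from scipy.optimize import linprog

        # Hypothetical model: choose production rates q1, q2 (t/day) for two well
        # groups to maximize total output under water-handling and gas-lift limits.
        # maximize q1 + q2  ->  minimize -(q1 + q2)
        c = [-1.0, -1.0]
        A_ub = [[0.3, 0.5],    # water produced per tonne of oil, capacity 400 t/day
                [1.2, 0.8]]    # gas-lift gas per tonne, capacity 1500 units/day
        b_ub = [400, 1500]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 600), (0, 600)])
        print("optimum rates:", res.x, "total:", -res.fun)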

  3. Optimum unambiguous discrimination of linearly independent pure state

    DEFF Research Database (Denmark)

    Pang, Shengshi; Wu, Shengjun

    2009-01-01

    ... be satisfied by the optimum solution in different situations. We also provide the detailed steps to find the optimum measurement strategy. The method and results we obtain are given a geometrical illustration with a numerical example. Furthermore, using these equations, we derive a formula which shows a clear...

  4. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'. IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system. For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge: equipment marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.); training of personnel designated by Division Leade...

  5. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojans, use of secure elements, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduces designers to the concept of salutar...

  6. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  7. Optimum Safety Levels for Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard

    2005-01-01

    Optimum design safety levels for rock and cube armoured rubble mound breakwaters without superstructure are investigated by numerical simulations on the basis of minimization of the total costs over the service life of the structure, taking into account typical uncertainties related to wave statistics and structure response. The study comprises the influence of interest rate, service lifetime, downtime costs and damage accumulation. Design limit states and safety classes for breakwaters are discussed. The results indicate that optimum safety levels are somewhat higher than the safety levels ...

  8. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java.

  9. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

    The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  10. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. What's more, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks, with some performance cost. Lastly, we discuss our future approaches for reducing the performance cost and preventing the compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  11. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events which have been selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the limitation to 1 MHz. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy data or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated in the experiment control system. The architecture, ...

  12. Optimum target thickness for polarimeters

    International Nuclear Information System (INIS)

    Sitnik, I.M.

    2003-01-01

    Polarimeters with thick targets are a tool to measure proton polarization, but the question of the optimum target thickness is still a subject of discussion. An attempt is made to calculate the most common parameters concerning this problem in the few-GeV region.

  13. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and also to introduce the successful application of soft computing techniques to solve the many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  14. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance, under the control of genetic algorithms. EHW is a key technology for exploring new application areas requiring real-time performance and on-line adaptation. 1. Development of the EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control and digital mobile communication. 3. Two patents: (1) the architecture and the processing method for the programmable EHW-LSI; (2) the method of data compression for loss-less data using EHW. 4. The first international conference on evolvable hardware was held by the authors: the Intl. Conf. on Evolvable Systems (ICES96). It was decided at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established. (NEDO)

  15. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  16. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  17. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package; if the software is being used at B2, then it should point to the B2 driver package. The driver packages should also include hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  18. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  19. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved, enabling parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph span is reduced by 4% compared to sequential execution of the graph.

  1. Proof-Carrying Hardware: Concept and Prototype Tool Flow for Online Verification

    OpenAIRE

    Drzevitzky, Stephanie; Kastens, Uwe; Platzner, Marco

    2010-01-01

    Dynamically reconfigurable hardware combines hardware performance with software-like flexibility and finds increasing use in networked systems. The capability to load hardware modules at runtime provides these systems with an unparalleled degree of adaptivity but at the same time poses new challenges for security and safety. In this paper, we elaborate on the presentation of proof carrying hardware (PCH) as a novel approach to reconfigurable system security. PCH takes ...

  2. Optimum Maintenance Strategies for Highway Bridges

    DEFF Research Database (Denmark)

    Frangopol, Dan M.; Thoft-Christensen, Palle; Das, Parag C.

    As bridges become older and maintenance costs become higher, transportation agencies are facing challenges related to implementation of optimal bridge management programs based on life cycle cost considerations. A reliability-based approach is necessary to find optimal solutions based on minimum expected life-cycle costs or maximum life-cycle benefits. This is because many maintenance activities can be associated with significant costs, but their effects on bridge safety can be minor. In this paper, the program of an investigation on optimum maintenance strategies for different bridge types is described. The end result of this investigation will be a general reliability-based framework to be used by the UK Highways Agency in order to plan optimal strategies for the maintenance of its bridge network so as to optimize whole-life costs.

  3. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with an objective to develop a family of boards, which is general purpose enough to meet the requirements of the system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed with a brief description. (author)

  4. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. Trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is given.

  5. On the optimum energy mix

    International Nuclear Information System (INIS)

    Fujii, Yasumasa

    2011-01-01

    After the Fukushima accident occurred in March 2011, reform of Japan's basic energy plan and energy supply system was reported to be under discussion, including reducing dependence on nuclear power. Energy policy planning should be based on four evaluation indexes: 'economics', 'environmental effects', 'stable supply of energy' and 'sustainability'. 'Stable supply of energy' should include stability of the domestic energy supply infrastructure against natural disasters in addition to stable supply of overseas resources. 'Sustainability' means long-term availability of resources. Since no almighty energy source or energy supply system exists that is superior in terms of every evaluation index above, it would be wise to combine various energy sources and supply systems in a rational way. This combination leads to the optimum energy mix, the so-called 'Energy Best Mix'. The author evaluated the characteristics of energy sources and energy supply systems in terms of the four indexes and showed the best energy mix from short-, medium- and long-term perspectives. Since fossil fuel resources will deplete in any case, it will be inevitable for human beings to depend on non-fossil energy resources regardless of greenhouse effects. At present it would be difficult, with no guarantee of success, to establish a society fully dependent on renewable energy, so utilization of nuclear energy will probably be needed in the long term. (T. Tanaka)

  6. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  7. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester
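
    In the absorption-only regime mentioned above, each detector pixel is essentially a line integral of the attenuation coefficient (the Beer-Lambert law). A minimal CPU sketch of that computation over a voxel grid, with made-up attenuation values and parallel rays rather than the paper's hexahedral projection, looks like this:

        import numpy as np

        # Absorption-only simulated radiograph: I = I0 * exp(-sum(mu * dx)) along
        # each ray. Parallel rays along the z axis through a voxel grid; the
        # attenuation values are invented for illustration.
        mu = np.zeros((64, 64, 64))          # attenuation coefficients [1/cm]
        mu[20:40, 20:40, 10:50] = 0.5        # a dense block inside the volume
        dx = 0.1                             # voxel size along the ray [cm]
        I0 = 1.0                             # incident intensity

        path = mu.sum(axis=2) * dx           # line integral per (x, y) ray
        radiograph = I0 * np.exp(-path)      # transmitted intensity image
        print(radiograph.min(), radiograph.max())   # ~0.135 behind the block, 1.0 outside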

  8. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  9. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed “hardware hound”, focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set - the inventory - with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  10. Finding a Roadmap to achieve Large Neuromorphic Hardware Systems

    Directory of Open Access Journals (Sweden)

    Jennifer eHasler

    2013-09-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are meeting hard physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Towards this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time.

  11. Problems in the optimum display of SPECT images

    International Nuclear Information System (INIS)

    Fielding, S.L.

    1988-01-01

    The instrumentation, computer hardware and software, and the image display system are all very important in the production of diagnostically useful SPECT images. Acquisition and processing parameters are discussed which can affect the quality of SPECT images. Regular quality control of the gamma camera and computer is important to keep the artifacts due to instrumentation to a minimum. The choice of reconstruction method will depend on the statistics in the study. The paper has shown that for high count rate studies, a high pass filter can be used to enhance the reconstructions. For lower count rate studies, pre-filtering is useful and the data can be reconstructed into thicker slices to reduce the effect of image noise. Finally, the optimum display for the images must be chosen, so that the information contained in the SPECT data can be easily perceived by the clinician. (orig.) [de

  12. CT4 - Cost-Optimum Procedures

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Wittchen, Kim Bjarne

    This report collects the status in European member states regarding implementation of the cost-optimum procedure for setting energy performance requirements for new and existing buildings.

  13. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
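
    The routing constraint that drives this compilation problem is easy to reproduce on a small scale. The toy sketch below routes two-qubit gates onto a linear nearest-neighbour chain with a naive greedy swap-insertion strategy — a stand-in for the temporal planners the paper evaluates, not their method — using a hypothetical gate list:

        # Naive routing of two-qubit gates onto a linear nearest-neighbour chain:
        # swap logical qubits toward each other until adjacent, then apply the gate.
        # A greedy stand-in for the temporal-planning compilation in the paper.
        def route(gates, n_qubits):
            pos = list(range(n_qubits))          # pos[i] = wire holding logical qubit i
            schedule = []
            for a, b in gates:                   # each gate acts on logical qubits a, b
                while abs(pos[a] - pos[b]) > 1:
                    step = 1 if pos[a] < pos[b] else -1
                    wire = pos[a]
                    # Swap qubit a with whoever occupies the neighbouring wire.
                    other = pos.index(wire + step)
                    pos[a], pos[other] = pos[other], pos[a]
                    schedule.append(("SWAP", wire, wire + step))
                schedule.append(("CZ", pos[a], pos[b]))
            return schedule

        # Hypothetical QAOA-like gate list on 4 qubits.
        print(route([(0, 3), (1, 2), (0, 1)], 4))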

  14. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of the software feasible for microprocessors. Hence, qualification will make great demands on the inspecting expert. (DG) [de

  15. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  16. Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation

    Science.gov (United States)

    Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.

    2012-01-01

    Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
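
    The non-optimum factor itself is just the ratio of as-built weight to idealized reference weight. A one-line worked example, with illustrative weights chosen to be consistent with the average factor quoted above:

        # Non-optimum factor = as-built hardware weight / predicted reference weight.
        # The weights below are illustrative, consistent with the quoted averages.
        reference_weight = 100.0                 # lb, coarse-model gore prediction
        actual_weight = 190.0                    # lb, from the mass properties database
        factor = actual_weight / reference_weight
        print(factor)                            # 1.90, the reported coarse-model average

        # Applying the factor to size a new design:
        new_reference = 250.0                    # lb, predicted weight of a new gore
        print(new_reference * factor)            # 475.0 lb, a more realistic estimate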

  17. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter, which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results concerning computation time and the usefulness of median filtering in radiographic imaging are given.
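
    The "complete parallel sort" view of the median filter is easy to reproduce on a small scale. A straightforward (sequential, not pipelined) sketch of a k x k median filter:

        import numpy as np

        def median_filter2d(img, k=3):
            """2D median filter: each output pixel is the middle rank of the
            k*k window around it -- the rank-order view described above."""
            pad = k // 2
            padded = np.pad(img, pad, mode="edge")
            out = np.empty_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    window = padded[i:i + k, j:j + k]
                    out[i, j] = np.median(window)   # rank k*k//2 of the sorted window
            return out

        # Impulse noise is removed while edges are preserved.
        img = np.zeros((5, 5)); img[2, 2] = 255.0
        print(median_filter2d(img))                 # the single hot pixel disappears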

  18. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be speeded up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeating computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
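
    The point-to-point space-charge routine is an O(N^2) pairwise sum, which is what makes it such a good GPU target. A minimal CPU sketch with the physical constants set to 1 (not Travel's actual routine):

        import numpy as np

        def point_to_point_forces(pos, q=1.0):
            """O(N^2) pairwise Coulomb forces between macro-particles, with the
            physical constants set to 1 for illustration."""
            n = len(pos)
            forces = np.zeros_like(pos)
            for i in range(n):                    # the doubly nested loop that the
                for j in range(n):                # GPU version tiles over fast memory
                    if i == j:
                        continue
                    r = pos[i] - pos[j]
                    forces[i] += q * q * r / np.linalg.norm(r) ** 3
            return forces

        rng = np.random.default_rng(1)
        print(point_to_point_forces(rng.standard_normal((4, 3))))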

  19. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  20. On Optimum Safety Levels of Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Sørensen, John Dalsgaard

    2006-01-01

    The paper presents results from numerical simulations performed with the objective of identifying optimum design safety levels of conventional rubble mound and caisson breakwaters, corresponding to the lowest costs over the service life of the structures. The work is related to the PIANC Working Group 47 on "Selection of type of breakwater structures". The paper summarises results given in Burcharth and Sorensen (2005) related to outer rubble mound breakwaters but focuses on optimum safety levels for outer caisson breakwaters on low and high rubble foundations placed on sea beds strong enough to resist geotechnical slip failures. Optimum safety levels formulated for use both in deterministic and probabilistic design procedures are given. Results obtained so far indicate that the optimum safety levels for caisson breakwaters are much higher than for rubble mound breakwaters.

  1. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  2. Optimum Stratification of a Skewed Population

    OpenAIRE

    D.K. Rao; M.G.M. Khan; K.G. Reddy

    2014-01-01

    The focus of this paper is to develop a technique for solving the combined problem of determining Optimum Strata Boundaries (OSB) and Optimum Sample Size (OSS) of each stratum, when the population under study is skewed and the study variable has a Pareto frequency distribution. The problem of determining the OSB is formulated as a Mathematical Programming Problem (MPP) which is then solved by a dynamic programming technique. A numerical example is presented to illustrate the compu...

  3. Optimum design for pipe-support allocation against seismic loading

    International Nuclear Information System (INIS)

    Hara, Fumio; Iwasaki, Akira

    1996-01-01

    This paper deals with an optimum design methodology for a piping system subjected to seismic design loading, reducing its dynamic response by selecting the location of pipe supports and thereby reducing the number of pipe supports to be used. The authors employ the Genetic Algorithm to obtain a reasonably optimum solution for the pipe support location, support capacity and number of supports. The design condition specified by the support location, support capacity and the number of supports to be used is encoded as an integer string for each of the support allocation candidates, and many strings are prepared to express various pipe-support allocation states. For each string, the seismic response of the piping system to the design seismic excitation is evaluated, and the Genetic Algorithm is applied to select the next generation of support allocation candidates so as to improve the seismic design performance, specified by a weighted linear combination of seismic response magnitude, support capacity and the number of supports needed. Continuing this selection process yields a reasonably optimum solution to the seismic design problem. The feasibility of this optimum design method is examined by investigating the optimum solutions for 5, 7 and 10 degree-of-freedom models of piping systems, and it is found that the method can offer a theoretically feasible solution to the problem. Designers will thus be liberated from the severe uncertainty of the damping value when the pipe support guarantees the design damping capacity. Finally, the usefulness of the Genetic Algorithm for the seismic design of piping systems is discussed, along with some sensitive points for its application to actual design problems.
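
    A toy version of the encoding just described — a string over candidate support locations, scored by a weighted combination of response magnitude and support count — can be run with a bare-bones genetic algorithm. The response model, weights, and site data below are invented purely for illustration:

        import random

        # Toy GA for pipe-support allocation: a bit string over candidate support
        # locations; cost = weighted sum of a fake "seismic response" and the
        # number of supports used. The response model and weights are invented.
        random.seed(0)
        N_SITES, POP, GENS = 10, 30, 40
        W_RESPONSE, W_COUNT = 1.0, 0.3

        def response(s):
            # Hypothetical stand-in for the piping response analysis: more supports,
            # and supports near mid-span (sites 4-6), reduce the response more.
            damping = sum((2 if 4 <= i <= 6 else 1) for i, on in enumerate(s) if on)
            return 10.0 / (1.0 + damping)

        def cost(s):
            return W_RESPONSE * response(s) + W_COUNT * sum(s)

        pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=cost)
            parents = pop[:POP // 2]                      # truncation selection
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N_SITES)        # one-point crossover
                child = a[:cut] + b[cut:]
                i = random.randrange(N_SITES)             # point mutation
                child[i] ^= 1
                children.append(child)
            pop = parents + children
        best = min(pop, key=cost)
        print(best, round(cost(best), 2))                 # supports cluster mid-span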

  4. Hardware Support for Dynamic Languages

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Karlsson, Sven; Probst, Christian W.

    2011-01-01

    In recent years, dynamic programming languages have enjoyed increasing popularity. For example, JavaScript has become one of the most popular programming languages on the web. As the complexity of web applications is growing, compute-intensive workloads are increasingly handed off to the client side. While a lot of effort is put in increasing the performance of web browsers, we aim for multicore systems with dedicated cores to effectively support dynamic languages. We have designed Tinuso, a highly flexible core for experimentation that is optimized for high performance when implemented on FPGA. We composed a scalable multicore configuration where we study how hardware support for software speculation can be used to increase the performance of dynamic languages.

  5. Is there an optimum level for renewable energy?

    International Nuclear Information System (INIS)

    Moriarty, Patrick; Honnery, Damon

    2011-01-01

    Because continued heavy use of fossil fuel will lead to both global climate change and resource depletion of easily accessible fuels, many researchers advocate a rapid transition to renewable energy (RE) sources. In this paper we examine whether RE can provide anywhere near the levels of primary energy forecast by various official organisations in a business-as-usual world. We find that the energy costs of energy will rise in a non-linear manner as total annual primary RE output increases. In addition, increasing levels of RE will lead to increasing levels of ecosystem maintenance energy costs per unit of primary energy output. The result is that there is an optimum level of primary energy output, in the sense that the sustainable level of energy available to the economy is maximised at that level. We further argue that this optimum occurs at levels well below the energy consumption forecasts for a few decades hence. - Highlights: → We need to shift to renewable energy for climate change and fuel depletion reasons. → We examine whether renewable energy can provide the primary energy levels forecast. → The energy costs of energy rise non-linearly with renewable energy output. → There is thus an optimum level of primary energy output. → This optimum occurs at levels well below future official energy use forecasts.
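
    The argument amounts to maximizing net sustainable energy: gross output minus energy costs that grow non-linearly with output. A tiny numerical illustration, with cost curves invented purely to show the existence of an interior optimum:

        import numpy as np

        # Net energy = gross output - energy cost of energy - ecosystem maintenance
        # cost, where both costs rise non-linearly with output. The cost curves are
        # invented purely to illustrate an interior optimum.
        gross = np.linspace(0, 1000, 10001)          # primary RE output, EJ/yr
        energy_cost = 0.0008 * gross ** 2            # non-linear energy cost of energy
        maintenance = 0.0002 * gross ** 2            # ecosystem maintenance energy
        net = gross - energy_cost - maintenance
        opt = gross[np.argmax(net)]
        print(opt, net.max())                        # optimum at 500 EJ/yr here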

  6. Optimum burnup of BAEC TRIGA research reactor

    International Nuclear Information System (INIS)

    Lyric, Zoairia Idris; Mahmood, Mohammad Sayem; Motalab, Mohammad Abdul; Khan, Jahirul Haque

    2013-01-01

    Highlights: ► Optimum loading scheme for BAEC TRIGA core is out-to-in loading with 10 fuels/cycle starting with 5 for the first reload. ► The discharge burnup ranges from 17% to 24% of U235 per fuel element for full power (3 MW) operation. ► Optimum extension of operating core life is 100 MWD per reload cycle. - Abstract: The TRIGA Mark II research reactor of BAEC (Bangladesh Atomic Energy Commission) has been operating since 1986 without any reshuffling or reloading yet. Optimum fuel burnup strategy has been investigated for the present BAEC TRIGA core, where three out-to-in loading schemes have been inspected in terms of core life extension, burnup economy and safety. In considering different schemes of fuel loading, optimization has been searched by only varying the number of fuels discharged and loaded. A cost function has been defined and evaluated based on the calculated core life and fuel load and discharge. The optimum loading scheme has been identified for the TRIGA core, the outside-to-inside fuel loading with ten fuels for each cycle starting with five fuels for the first reload. The discharge burnup has been found ranging from 17% to 24% of U235 per fuel element and optimum extension of core operating life is 100 MWD for each loading cycle. This study will contribute to the in-core fuel management of TRIGA reactor

  7. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  8. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  9. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  10. Optimum Tilt Angle at Tropical Region

    Directory of Open Access Journals (Sweden)

    S Soulayman

    2015-02-01

    One of the important parameters that affect the performance of a solar collector is its tilt angle with the horizon, because varying the tilt angle changes the amount of solar radiation reaching the collector surface. Meanwhile, is the rule of thumb which says that an Equator-facing position is best for a solar collector valid for the tropical region? It is therefore necessary to determine the optimum tilt for both Equator-facing and Pole-oriented collectors. In addition, a question may arise: how many tilt adjustments per year are reasonable for a given value of the surface azimuth angle? A mathematical model was used for estimating the solar radiation on a tilted surface, and for determining the optimum tilt angle and orientation (surface azimuth angle) of the solar collector at any latitude. This model was applied to determine the optimum tilt angle and orientation in the tropical zones, on a daily basis, as well as for a specific period. The optimum angle was computed by searching for the values for which the radiation on the collector surface is a maximum for a particular day or a specific period. The results reveal that changing the tilt angle 12 times a year (i.e., using the monthly optimum tilt angle) maintains the total amount of solar radiation near the maximum value found by changing the tilt angle daily to its optimum value. This achieves a yearly gain in solar radiation of 11% to 18% over a solar collector fixed on a horizontal surface.
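
    The daily optimum can be found exactly as described: scan tilt angles and keep the one maximizing modelled radiation. The sketch below uses the standard Cooper declination formula and a crude beam-radiation proxy, not the paper's full radiation model:

        import numpy as np

        def declination(day):
            """Solar declination (degrees), Cooper's formula."""
            return 23.45 * np.sin(np.radians(360.0 * (284 + day) / 365.0))

        def daily_optimum_tilt(lat_deg, day):
            """Scan tilt angles for an Equator-facing collector and return the one
            maximizing a crude beam-radiation proxy (not the paper's full model)."""
            dec = np.radians(declination(day))
            lat = np.radians(lat_deg)
            best_tilt, best_sum = 0.0, -1.0
            for tilt_deg in np.arange(-90, 91, 0.5):     # negative = facing the Pole
                tilt = np.radians(tilt_deg)
                total = 0.0
                for h_deg in range(-85, 90, 5):          # hour angle over the day
                    h = np.radians(h_deg)
                    # cos(incidence) on a surface tilted toward the Equator:
                    cos_i = (np.sin(dec) * np.sin(lat - tilt)
                             + np.cos(dec) * np.cos(lat - tilt) * np.cos(h))
                    total += max(cos_i, 0.0)
                if total > best_sum:
                    best_tilt, best_sum = tilt_deg, total
            return best_tilt

        # At a tropical site (10 deg N) near the June solstice the optimum tilt is
        # negative, i.e. Pole-facing -- the point the abstract raises.
        print(daily_optimum_tilt(10.0, 172))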

  11. A methodology for selecting optimum organizations for space communities

    Science.gov (United States)

    Ragusa, J. M.

    1978-01-01

    This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.

  12. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design
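
    The series availability of a chain of accelerator systems can be sketched from each system's MTBF and MTTR. The figures below are invented placeholders, not the IFMIF analysis results:

        # Steady-state availability A = MTBF / (MTBF + MTTR); independent systems in
        # series multiply. The MTBF/MTTR figures below are invented placeholders.
        systems = {                      # hours
            "injector": (1000.0, 4.0),
            "RFQ":      (2000.0, 8.0),
            "linac":    (1500.0, 6.0),
            "HEBT":     (3000.0, 2.0),
        }
        total = 1.0
        for name, (mtbf, mttr) in systems.items():
            a = mtbf / (mtbf + mttr)
            total *= a
            print(f"{name:9s} A = {a:.4f}")
        print(f"facility  A = {total:.4f}")      # series product of the four systems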

  13. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  14. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  15. Common Core: Teaching Optimum Topic Exploration (TOTE)

    Science.gov (United States)

    Karge, Belinda Dunnick; Moore, Roxane Kushner

    2015-01-01

    The Common Core has become a household term and yet many educators do not understand what it means. This article explains the historical perspectives of the Common Core and gives guidance to teachers in application of Teaching Optimum Topic Exploration (TOTE) necessary for full implementation of the Common Core State Standards. An effective…

  16. Optimum fiber distribution in singlewall corrugated fiberboard

    Science.gov (United States)

    Millard W. Johnson; Thomas J. Urbanik; William E. Denniston

    1979-01-01

    Determining optimum distribution of fiber through rational design of corrugated fiberboard could result in significant reductions in fiber required to meet end-use conditions, with subsequent reductions in price pressure and extension of the softwood timber supply. A theory of thin plates under large deformations is developed that is both kinematically and physically...

  17. Calculations enable optimum design of magnetic brake

    Science.gov (United States)

    Kosmahl, H. G.

    1966-01-01

    Mathematical analysis and computations determine optimum magnetic coil configurations for a magnetic brake which controllably decelerates a free falling load to a soft stop. Calculations on unconventionally wound coils determine the required parameters for the desired deceleration with minimum electrical energy supplied to the stationary coil.

  18. Genotype x environment interaction and optimum resource ...

    African Journals Online (AJOL)

    ... x E) interaction and to determine the optimum resource allocation for cassava yield trials. The effects of environment, genotype and G x E interaction were highly significant for all yield traits. Variations due to G x E interaction were greater than those due to genotypic differences for all yield traits. Genotype x location x year ...

  19. Determination of the Optimum Thickness of Approximately ...

    African Journals Online (AJOL)

    In an attempt to conserve the world's scarce energy and material resources, a balance between the cost of heating a material and the optimum thickness of the material becomes very essential. One such material is the local cast aluminium pot commonly used as cooking ware in Nigeria. This paper therefore sets up a ...

  20. Development of the optimum rotor theories

    DEFF Research Database (Denmark)

    Okulov, Valery; Sørensen, Jens Nørkær; van Kuik, Gijs A.M.

    The purpose of this study is the examination of optimum rotor theories with ideal load distributions along the blades, to analyze some of the underlying ideas and concepts, as well as to illuminate them. The book gives the historical background of the issue and presents the analysis of the problems...

  1. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but serving that function included the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  2. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
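
    For reference, classical plate theory gives a closed-form check on such finite element mode shapes only in the simplest cases. For a simply supported rectangular plate (an assumed boundary condition, since the Thwacker's supports are not specified here), the natural frequencies are

        \omega_{mn} = \pi^{2} \left[ \left(\frac{m}{a}\right)^{2} + \left(\frac{n}{b}\right)^{2} \right] \sqrt{\frac{D}{\rho h}},
        \qquad D = \frac{E h^{3}}{12\,(1 - \nu^{2})},

    where a and b are the plate dimensions, h the thickness, D the flexural rigidity, rho the density, and m, n the mode indices; Rayleigh-Ritz estimates and finite element frequencies can both be sanity-checked against this formula.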

  3. An optimum analysis sequence for environmental gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L., E-mail: fta777@hotmail.co [Universidad Autonoma de Zacatecas, Centro Regional de Estudis Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico)

    2010-10-15

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectroscopy by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, and with or without the use of a library -based on evaluated nuclear data- of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectroscopy system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. Particularly, the combination of second derivative peak locate with simple peak area integration algorithms provides the greatest accuracy. Lower accuracy comes from the combination of library directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)

  4. An optimum analysis sequence for environmental gamma-ray spectrometry

    International Nuclear Information System (INIS)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L.

    2010-10-01

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectroscopy by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, and with or without the use of a library -based on evaluated nuclear data- of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectroscopy system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. Particularly, the combination of second derivative peak locate with simple peak area integration algorithms provides the greatest accuracy. Lower accuracy comes from the combination of library directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)
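
    The two acceptance criteria mentioned, the z-score and the reduced chi-square, are easy to reproduce. A sketch comparing measured peak areas against reference values, with placeholder numbers:

        import numpy as np

        # z-score and reduced chi-square of measured peak areas vs. reference values.
        # The peak areas and uncertainties below are placeholders.
        measured  = np.array([1520.0,  980.0, 2410.0])
        sigma     = np.array([  40.0,   35.0,   55.0])
        reference = np.array([1500.0, 1000.0, 2400.0])

        z = (measured - reference) / sigma
        chi2_red = np.sum(z ** 2) / len(z)        # reduced chi-square (dof = n here)
        print(z, chi2_red)                        # |z| < 2 and chi2_red ~ 1 pass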

  5. WHAT IS THE OPTIMUM SIZE OF GOVERNMENT: A SUGGESTION

    Directory of Open Access Journals (Sweden)

    Aykut Ekinci

    2011-01-01

    What is the optimum size of government? When the rule of law and the establishment of private property rights are taken into consideration, it is clear that the answer will not be 0%. On the other hand, when the experience of the old Soviet Union, East Germany and North Korea is considered, the answer will not be 100% either. Therefore, extreme points should not be the right answer. This study offers the use of the normal distribution to answer this question. The study has revealed the following findings: (i) The total amount of public expenditures as % of GDP (a) is at its minimum level at a 4.55% rate, (b) is at its optimum level at a 13.4% rate, and (c) is at its maximum level at 31.7%. (ii) Thus, as a fiscal rule, countries should (a) choose a total amount of public expenditures as % of GDP ≤ 31.7% and (b) target 13.4%. (iii) A three-dimensional (3D) normal distribution demonstrates that a healthy market system could be built upon a healthy government system. (iv) This approach rejects Wagner's law; in a healthy growing economy, optimum government size could be kept at 13.4%. (v) The UK, the USA and the European countries have been in the Keynesian-Marxist area, which reduces their average growth.

  6. PERANCANGAN APLIKASI SISTEM PAKAR DIAGNOSA KERUSAKAN HARDWARE KOMPUTER METODE FORWARD CHAINING

    Directory of Open Access Journals (Sweden)

    Ali Akbar Rismayadi

    2016-09-01

    Damage to computer hardware is not a big disaster, because not all of it is beyond repair. Nearly all computer users, whether individuals or institutions, often suffer various kinds of damage to the computer hardware they own, and that damage can be caused by various factors; fundamentally, the user does not know what caused the hardware to fail. Therefore, it is necessary to build an application that can help users diagnose damage to computer hardware, so that anyone can diagnose the type of hardware damage on their own computer. The development of this expert system for diagnosing computer hardware damage uses the forward chaining method, building on a descriptive analysis of damage data obtained from several experts and other literature sources to reach a diagnostic conclusion, and uses the waterfall model for system development, from the analysis stage through to software support. The application is built using the Eclipse ADT programming toolset with SQLite as its database. This expert system for diagnosing computer hardware damage is expected to be used as a tool to help find the causes of damage independently, without the help of a computer technician.
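
    A minimal forward-chaining engine of the kind the article describes can be written in a few lines. The fault rules below are invented examples, not the article's knowledge base:

        # Minimal forward chaining: fire every rule whose premises are all known
        # facts, add its conclusion, and repeat until nothing new can be derived.
        # The diagnostic rules are invented examples, not the article's rule base.
        rules = [
            ({"no_power", "fan_silent"},      "psu_fault"),
            ({"beeps_on_boot", "no_display"}, "ram_fault"),
            ({"psu_fault"},                   "replace_power_supply"),
        ]

        def forward_chain(facts):
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(forward_chain({"no_power", "fan_silent"}))
        # {'no_power', 'fan_silent', 'psu_fault', 'replace_power_supply'}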

  7. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.
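
    The standard idea borrowed from the communication area is to cancel the symmetric part of the transmission delay out of a timestamp exchange. A sketch of the classic round-trip estimate (NTP/Cristian style, used here as background rather than the paper's algorithm), with hypothetical timestamps:

        # Offset estimation in the presence of transmission delay:
        # t0 = send time (local), t1 = receive time (remote), t2 = remote send
        # time, t3 = local receive time. Symmetric delay cancels out of the offset.
        def clock_offset(t0, t1, t2, t3):
            delay = (t3 - t0) - (t2 - t1)         # total round-trip transmission delay
            offset = ((t1 - t0) + (t2 - t3)) / 2  # remote clock minus local clock
            return offset, delay

        # Hypothetical timestamps (seconds): remote clock runs 0.010 s ahead,
        # one-way delay 0.002 s in each direction.
        print(clock_offset(100.000, 100.012, 100.013, 100.005))
        # -> (0.010, 0.004)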

  8. Optimum Operational Parameters for Yawed Wind Turbines

    Directory of Open Access Journals (Sweden)

    David A. Peters

    2011-01-01

    A set of systematic optimum operational parameters for wind turbines under various wind directions is derived by using combined momentum-energy and blade-element-energy concepts. The derivations are solved numerically by fixing some parameters at practical values. The interactions between the produced power and its influential factors are then presented in figures. It is shown that the maximum power produced is strongly affected by the wind direction, the tip speed, the pitch angle of the rotor, and the drag coefficient, as specifically indicated in the figures. It also turns out that the maximum power can occur at two different optimum tip speeds in some cases. The equations derived herein can also be used in the modeling of tethered wind turbines, which can stay aloft and deliver energy.
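
    As a crude illustration of why yaw misalignment cuts the producible power, the common cosine-cubed textbook approximation can be evaluated directly; this is not the paper's combined momentum-energy model, and the rotor size, wind speed, and power coefficient are invented:

        import numpy as np

        # Crude yawed-rotor power estimate: P = 0.5*rho*A*v^3*Cp*cos^3(yaw).
        # The cos^3 law is a common textbook approximation, not the paper's model;
        # rotor size, wind speed, and Cp are invented values.
        rho, radius, v, cp = 1.225, 40.0, 10.0, 0.45
        area = np.pi * radius ** 2
        for yaw_deg in (0, 10, 20, 30):
            p = 0.5 * rho * area * v ** 3 * cp * np.cos(np.radians(yaw_deg)) ** 3
            print(f"yaw {yaw_deg:2d} deg -> {p / 1e6:.2f} MW")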

  9. OPTIMUM PROGRAMMABLE CONTROL OF UNMANNED FLYING VEHICLE

    Directory of Open Access Journals (Sweden)

    A. А. Lobaty

    2012-01-01

    The paper considers an analytical synthesis problem pertaining to the programmable control of an unmanned flying vehicle while steering it to a fixed point in space. The problem is solved by applying the maximum principle, taking into account the final control purpose and its integral expenses. The paper presents an analytically obtained optimum law for controlling the overload variation of a flying vehicle.

  10. Techniques for evaluating optimum data center operation

    Science.gov (United States)

    Hamann, Hendrik F.; Rodriguez, Sergio Adolfo Bermudez; Wehle, Hans-Dieter

    2017-06-14

    Techniques for modeling a data center are provided. In one aspect, a method for determining data center efficiency is provided. The method includes the following steps. Target parameters for the data center are obtained. Technology pre-requisite parameters for the data center are obtained. An optimum data center efficiency is determined given the target parameters for the data center and the technology pre-requisite parameters for the data center.

  11. Probabilistic studies for safety at optimum cost

    International Nuclear Information System (INIS)

    Pitner, P.

    1999-01-01

    By definition, the risk of failure of very reliable components is difficult to evaluate. How can the best strategies for in-service inspection and maintenance be defined to limit this risk to an acceptable level at optimum cost? It is not sufficient to design structures with margins; it is also essential to understand how they age. The probabilistic approach has made it possible to develop well-proven concepts. (author)

  12. The optimum lead thickness for lead-activation detectors

    International Nuclear Information System (INIS)

    Si Fenni; Hu Qingyuan

    2009-01-01

    The optimum lead thickness for lead-activation detectors has been studied in this paper. First, the existence of an optimum lead thickness is explained theoretically. Then the optimum thickness is obtained by two methods: MCNP5 calculation and mathematical estimation. Finally, factors which affect the optimum lead thickness are discussed. It turns out that the optimum lead thickness is independent of the incident neutron energy; a thickness of 2.5 cm is generally recommended.

  13. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  14. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
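
    The enrollment/authentication flow can be mocked up with an HMAC standing in for the PUF-derived key. This is a schematic of challenge-response mutual authentication, not the patented scheme's actual construction:

        import hmac, hashlib, os

        # Schematic mutual authentication keyed by a (simulated) binding PUF value.
        # A real PUF derives the key from device physics; here it is just bytes.
        class Device:
            def __init__(self, binding_puf_value: bytes):
                self.key = binding_puf_value       # shared via the enrollment phase

            def respond(self, challenge: bytes) -> bytes:
                return hmac.new(self.key, challenge, hashlib.sha256).digest()

            def verify(self, challenge: bytes, response: bytes) -> bool:
                return hmac.compare_digest(self.respond(challenge), response)

        puf = os.urandom(32)                       # stand-in for the PUF output
        a, b = Device(puf), Device(puf)

        # Each side challenges the other; both must verify for mutual authentication.
        ca, cb = os.urandom(16), os.urandom(16)
        print(b.verify(ca, a.respond(ca)) and a.verify(cb, b.respond(cb)))  # True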

  15. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general purpose processors is having trouble meeting the requirements of high performance computing applications of today. This is partly due to the relatively high power consumption, compared to the performance, of general purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest super-computers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard platform. The two accelerators are based on two different ... of the ARM Cortex-A9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator ...

  16. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  17. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  18. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  19. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  20. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    Science.gov (United States)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.
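
    As a hedged illustration of the approach described above, the toy genetic algorithm below selects four actuator sites (from a set of candidates) to maximize roll moment while penalizing pitch/yaw coupling. The moment table, population sizes, and penalty weight are invented stand-ins, not the study's data.

```python
# Toy GA in the spirit of the record: choose 4 actuator locations (from 20
# candidate sites) that maximize roll moment with minimal pitch/yaw coupling.
import random
random.seed(1)

N_SITES, N_ACT = 20, 4
# (roll, pitch, yaw) moment contribution of each candidate site (assumed data)
moments = [(random.uniform(-1, 1), random.uniform(-0.3, 0.3),
            random.uniform(-0.3, 0.3)) for _ in range(N_SITES)]

def fitness(chromo):
    roll = sum(moments[i][0] for i in chromo)
    coupling = (abs(sum(moments[i][1] for i in chromo))
                + abs(sum(moments[i][2] for i in chromo)))
    return roll - 5.0 * coupling          # reward pure roll, punish coupling

def crossover(p1, p2):
    # child draws its sites from the union of both parents' sites
    pool = list(set(p1) | set(p2))
    return tuple(sorted(random.sample(pool, N_ACT)))

def mutate(chromo):
    c = list(chromo)
    c[random.randrange(N_ACT)] = random.randrange(N_SITES)
    return tuple(sorted(set(c))) if len(set(c)) == N_ACT else chromo

pop = [tuple(sorted(random.sample(range(N_SITES), N_ACT))) for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```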

  1. Developed Hybrid Model for Propylene Polymerisation at Optimum Reaction Conditions

    Directory of Open Access Journals (Sweden)

    Mohammad Jakir Hossain Khan

    2016-02-01

    A statistical model combined with the CFD (computational fluid dynamics) method was used to explain the detailed phenomena of the process parameters, and a series of experiments were carried out for propylene polymerisation by varying the feed gas composition, reaction initiation temperature, and system pressure, in a fluidised bed catalytic reactor. The propylene polymerisation rate per pass was considered the response to the analysis. Response surface methodology (RSM), with a full factorial central composite experimental design, was applied to develop the model. In this study, analysis of variance (ANOVA) indicated an acceptable value for the coefficient of determination and a suitable estimation of a second-order regression model. For better justification, results were also described through a three-dimensional (3D) response surface and a related two-dimensional (2D) contour plot. These 3D and 2D response analyses provided significant and easy-to-understand findings on the effect of all the considered process variables on the expected response. To diagnose the model adequacy, the mathematical relationship between the process variables and the extent of polymer conversion was established through the combination of CFD with statistical tools. All the tests showed that the model is an excellent fit with the experimental validation. The maximum extent of polymer conversion per pass was 5.98% at the set time period and with consistent catalyst and co-catalyst feed rates. The optimum conditions for maximum polymerisation were found at a reaction temperature (RT) of 75 °C, a system pressure (SP) of 25 bar, and 75% monomer concentration (MC). The hydrogen percentage was kept fixed at all times. The coefficient of correlation for reaction temperature, system pressure, and monomer concentration ratio was found to be 0.932. Thus, the experimental results and model-predicted values were a reliable fit at optimum process conditions. Detailed and adaptable CFD results were capable...
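
    The RSM step the abstract describes can be sketched numerically: build a coded central composite design for the three factors, fit a second-order polynomial by least squares, and check the coefficient of determination. The response values below are invented around the reported optimum (RT 75 °C, SP 25 bar, MC 75%); only the workflow mirrors the record.

```python
# Sketch of the RSM step: central composite design, second-order fit, R^2.
# All response data are made up for illustration.
import numpy as np
from itertools import product

alpha = 1.682                                   # rotatable CCD axial distance
corners = list(product([-1, 1], repeat=3))      # 8 factorial points
axial = [tuple(alpha * e for e in v) for v in
         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
design = np.array(corners + axial + [(0.0, 0.0, 0.0)])   # coded units

center = np.array([75.0, 25.0, 75.0])           # RT (C), SP (bar), MC (%)
step = np.array([5.0, 5.0, 5.0])
X_real = center + design * step                 # real factor settings

# invented response: quadratic bump near the reported optimum, plus noise
rng = np.random.default_rng(42)
y = (5.98 - 0.5 * (design ** 2).sum(axis=1) / 3 + 0.15 * design[:, 0]
     + rng.normal(0, 0.05, len(design)))

def quad_terms(v):
    t, p, m = v
    return [1, t, p, m, t*t, p*p, m*m, t*p, t*m, p*m]

A = np.array([quad_terms(v) for v in design])   # fit in coded units
beta, *_ = np.linalg.lstsq(A, y, rcond=None)    # second-order coefficients
pred = A @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2 =", round(r2, 3))                    # model adequacy, cf. the ANOVA step
```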

  2. Thermal Comfort and Optimum Humidity Part 1

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    The hydrothermal microclimate is the main component in indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum or at least an acceptable globe temperature, an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature), and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  3. Thermal Comfort and Optimum Humidity Part 2

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    The hydrothermal microclimate is the main component in indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum or at least an acceptable globe temperature, an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature), and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  4. Design issues for optimum solar cell configuration

    Science.gov (United States)

    Kumar, Atul; Thakur, Ajay D.

    2018-05-01

    A computer-based simulation of solar cell structure is performed to study the optimization of the pn junction configuration for photovoltaic action. The fundamental aspects of photovoltaic action, viz. absorption, separation, and collection, and their dependence on material properties and details of device structures, are discussed. Using SCAPS 1D we have simulated the ideal pn junction and shown the effect of band offset and carrier densities on solar cell performance. The optimum configuration can be achieved by optimizing the transport of carriers in the pn junction under the effect of field-dependent recombination (tunneling) and density-dependent recombination (SRH, Auger) mechanisms.

  5. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware, microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  6. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    Part 2: Lightning Talks; International audience; The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) and hardware (open hardware). Open hardware is experiencing significant growth, but its business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine which intellectual property tools can be applied to open hardware, with a str...

  7. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  8. REGARDING "TRAGIC ECONOMIC OPTIMUM" FROM HOLISTIC+ PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Constantin Popescu

    2010-12-01

    This communication aims to discuss the new scientific vision of "the entire integrated" as it follows from the recent achievements of quantum physics, psychology and biology. From this perspective, the economy is seen as a living organism, part of the social organism and, together with it, of the living ecology. The optimum of the economy as a living organism is based on dynamic compatibilities with all common living requirements. The evolution of economic life is organically linked to the unavoidable circumstances contained in V. Frankl's tragic triad, consisting of pain, guilt and death. In interaction with the holistic triad circumscribed by limitations, uncertainties and open interdependencies, the tragic economic optimum (TEO) is formed. It can be understood as that state of economic life in which freedom of choice of scarce resources under uncertainty finds, in the compatibility of rationality and hope, the development criteria of MEANING. TEO means to say YES to economic life even in conditions of resource limitations, bankruptcies and unemployment, negative externalities, stress, etc., through a respiritualization of responsibility using scientific knowledge. TEO involves multicriteria modeling of economic life by integrating human, community, environmental, spiritual and business development demands in assessing and predicting human GDP as a variable wave aggregate.

  9. Optimum body size of Holstein replacement heifers.

    Science.gov (United States)

    Hoffman, P C

    1997-03-01

    Criteria that define optimum body size of replacement heifers are required by commercial dairy producers to evaluate replacement heifer management programs. Historically recommended body size criteria have been based on live BW measurements. Numerous research studies have observed a positive relationship between BW at first calving and first lactation milk yield, which has served as the impetus for using live BW to define body size of replacement heifers. Live BW is, however, not the only available measurement to define body size. Skeletal measurements such as wither height, length, and pelvic area have been demonstrated to be related to first lactation performance and (or) dystocia. Live BW measurements also do not define differences in body composition. Differences in body composition of replacement heifers at first calving are also related to key performance variables. An updated research data base is available for the modern Holstein genotype to incorporate measures of skeletal growth and body composition with BW when defining body size. These research projects also lend insight into the relative importance of measurements that define body size of replacement heifers. Incorporation of these measurements from current research into present BW recommendations should aid commercial dairy producers to better define replacement heifer growth and management practices. This article proposes enhancements in defining optimum body size and growth characteristics of Holstein replacement heifers.

  10. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. With low-pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods (the stainless steel end fittings and the Zircaloy guide tubes and grid spacers) still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating estimates, and (4) the status of NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology.

  11. Universal Curve of Optimum Thermoelectric Figures of Merit for Bulk and Low-Dimensional Semiconductors

    Science.gov (United States)

    Hung, Nguyen T.; Nugraha, Ahmad R. T.; Saito, Riichiro

    2018-02-01

    This paper is a contribution to the Physical Review Applied collection in memory of Mildred S. Dresselhaus. Analytical formulas for thermoelectric figures of merit and power factors are derived based on the one-band model. We find that there is a direct relationship between the optimum figures of merit and the optimum power factors of semiconductors, despite the fact that the two quantities are generally attained at different values of the chemical potential. By introducing a dimensionless parameter consisting of the optimum power factor and lattice thermal conductivity (without electronic thermal conductivity), it is possible to unify optimum figures of merit of both bulk and low-dimensional semiconductors into a single universal curve that covers many materials with different dimensionalities.
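
    For reference, the textbook one-band quantities the abstract builds on are the power factor and the figure of merit; a short LaTeX restatement follows (the paper's universal-curve parameter itself is only described qualitatively above, so it is not reproduced):

```latex
% Standard definitions: S is the Seebeck coefficient, sigma the electrical
% conductivity, kappa_el and kappa_l the electronic and lattice thermal
% conductivities, and T the absolute temperature.
\[
  \mathrm{PF} = S^{2}\sigma,
  \qquad
  ZT = \frac{S^{2}\sigma\, T}{\kappa_{\mathrm{el}} + \kappa_{\mathrm{l}}}
\]
```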

  12. A novel approach for optimum allocation of FACTS devices using multi-objective function

    International Nuclear Information System (INIS)

    Gitizadeh, M.; Kalantar, M.

    2009-01-01

    This paper presents a novel approach to find the optimum type, location, and capacity of flexible alternating current transmission systems (FACTS) devices in a power system using a multi-objective optimization function. Thyristor controlled series compensator (TCSC) and static var compensator (SVC) are utilized to achieve these objectives: active power loss reduction, reduction of the cost of the newly introduced FACTS devices, increased robustness of the security margin against voltage collapse, and voltage deviation reduction. The operational and controlling constraints as well as load constraints are considered in the optimum allocation procedure. Here, a goal attainment method based on simulated annealing is used to approach the global optimum. In addition, the estimated annual load profile has been utilized in the optimum siting and sizing of FACTS devices to approach a practical solution. The standard IEEE 14-bus test system is used to validate the performance and effectiveness of the proposed method.
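
    A hedged sketch of only the search loop: simulated annealing over candidate (bus, size) pairs for a single SVC, with a weighted-sum placeholder standing in for the paper's goal-attainment multi-objective. The cost model is a toy, not a load-flow computation.

```python
# Simulated annealing over SVC placement and sizing; cost() is an invented
# stand-in for losses + device cost + voltage deviation, not a power flow.
import math, random
random.seed(0)

BUSES = list(range(1, 15))                 # IEEE 14-bus system

def cost(bus, mvar):
    losses = 1.0 / (1 + 0.1 * mvar) + 0.02 * abs(bus - 7)   # assumed shape
    device = 0.005 * mvar                                    # device cost term
    v_dev = 0.3 * abs(math.sin(bus)) / (1 + 0.05 * mvar)     # voltage deviation
    return losses + device + v_dev

state = (random.choice(BUSES), random.uniform(0, 100))
best, T = state, 1.0
for step in range(5000):
    bus, mvar = state
    cand = (random.choice(BUSES), min(100, max(0, mvar + random.gauss(0, 5))))
    d = cost(*cand) - cost(*state)
    if d < 0 or random.random() < math.exp(-d / T):   # Metropolis acceptance
        state = cand
        if cost(*state) < cost(*best):
            best = state
    T *= 0.999                                        # geometric cooling
print("SVC at bus", best[0], "size (MVAr):", round(best[1], 1))
```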

  13. Optimum motion track planning for avoiding obstacles

    International Nuclear Information System (INIS)

    Attia, A.A.A

    2008-01-01

    A genetic algorithm (GA) is a stochastic search and optimization technique based on the mechanism of natural selection. A population of candidate solutions (chromosomes) is held and interacts over a number of iterations (generations) to produce better solutions. In canonical GA, the chromosomes are encoded as binary strings. Driving the process is the fitness of the chromosomes, which rates the quality of a candidate in quantitative terms. The fitness function encapsulates the problem-specific knowledge. The fitness is used in a stochastic selection of pairs of chromosomes which are 'reproduced' to generate new solution strings. Reproduction involves crossover, which generates new children by combining chromosomes in a process which swaps portions of each other's genes. The other reproduction operator is called mutation. Mutation randomly changes genes and is used to introduce new information into the search. Both crossover and mutation make heavy use of random numbers. The aim of this thesis is to investigate the H/W implementation of genetic-algorithm-based motion path planning for robots. The potential benefit of using genetic algorithm hardware is that it allows the huge parallelism that suits random number generation, crossover, mutation and fitness evaluation. For many real-world applications, a GA can run for days, even when executed on a high-performance workstation. Because of the extensive computation a GA requires, hardware-based GA has been put forward. Several aspects of the GA approach attract H/W implementation. The operations of selection and reproduction are basically problem-independent and involve basic string-manipulation tasks. These can be achieved by logical circuits. The fitness evaluation task, which is problem-dependent, however, proves a major difficulty in H/W implementation. Another difficulty comes from the fact that designs can only be used for the individual problem their fitness function represents. Therefore, in this...
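
    The string manipulations the thesis identifies as hardware-friendly can be written bitwise, so that each maps onto simple logic: single-point crossover becomes a masked multiplexer and mutation an XOR with a sparse random mask. A small Python sketch under those assumptions:

```python
# Bitwise GA operators: each operation corresponds to simple gate logic,
# which is why selection/reproduction suit hardware implementation.
import random
random.seed(7)
WIDTH = 16

def crossover(a: int, b: int, point: int) -> int:
    # masked mux: low bits taken from b, high bits from a
    mask = (1 << point) - 1
    return (a & ~mask) | (b & mask)

def mutate(x: int, p: float = 1 / WIDTH) -> int:
    # XOR with a sparse random mask: one XOR gate per bit position
    flip = 0
    for i in range(WIDTH):
        if random.random() < p:
            flip |= 1 << i
    return x ^ flip

a, b = 0b1111000011110000, 0b0000111100001111
child = mutate(crossover(a, b, point=8))
print(f"{child:016b}")
```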

  14. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  15. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  16. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  17. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
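
    As a software reference for what such an accelerator computes, below is a plain Needleman-Wunsch global alignment with explicit traceback, the step the record singles out as the usual bottleneck in hardware designs. Scoring constants are illustrative.

```python
# Global (Needleman-Wunsch) sequence alignment with traceback.
MATCH, MISMATCH, GAP = 2, -1, -2

def align(s, t):
    n, m = len(s), len(t)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): H[i][0] = i * GAP
    for j in range(1, m + 1): H[0][j] = j * GAP
    # forward scan: fill the dynamic-programming matrix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i-1][j-1] + (MATCH if s[i-1] == t[j-1] else MISMATCH)
            H[i][j] = max(diag, H[i-1][j] + GAP, H[i][j-1] + GAP)
    # traceback from the bottom-right corner
    i, j, a, b = n, m, [], []
    while i or j:
        if i and j and H[i][j] == H[i-1][j-1] + (MATCH if s[i-1] == t[j-1] else MISMATCH):
            a.append(s[i-1]); b.append(t[j-1]); i, j = i - 1, j - 1
        elif i and H[i][j] == H[i-1][j] + GAP:
            a.append(s[i-1]); b.append('-'); i -= 1
        else:
            a.append('-'); b.append(t[j-1]); j -= 1
    return ''.join(reversed(a)), ''.join(reversed(b)), H[n][m]

print(align("GATTACA", "GCATGCU"))
```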

  18. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  19. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  20. Design optimum frac jobs using virtual intelligence techniques

    Science.gov (United States)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process is fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters such as fluid and proppant types have discrete possible choices. Other parameters such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely, artificial neural networks and genetic algorithms to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterizations and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criteria indicated by the engineer, for example a certain propped frac length, and provides the detail of the optimum frac design that will result in the specified criteria. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These

  1. Design optimum frac jobs using virtual intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Shahab Mohaghegh; Andrei Popa; Sam Ameri [West Virginia University, Morgantown, WV (United States). Petroleum and Natural Gas Engineering

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process is fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters such as fluid and proppant types have discrete possible choices. Other parameters such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely, artificial neural networks and genetic algorithms to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterizations and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criteria indicated by the engineer, for example a certain propped frac length, and provides the detail of the optimum frac design that will result in the specified criteria. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These
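
    The core idea, a cheap trained surrogate standing in for the frac simulator with a genetic search over mixed discrete/continuous design parameters, can be sketched as follows. The surrogate() function below is a made-up closed form rather than a trained neural-network ensemble, and all parameter names and ranges are assumptions.

```python
# Surrogate-plus-GA sketch: tune frac design parameters toward a target
# propped fracture half-length. Everything numeric here is invented.
import random
random.seed(3)

FLUIDS, PROPPANTS = ["linear gel", "crosslinked", "foam"], ["sand", "ceramic"]

def surrogate(fluid, proppant, fluid_vol, prop_vol, rate):
    # stands in for the trained neural-network ensemble: inputs -> length (ft)
    base = {"linear gel": 300, "crosslinked": 420, "foam": 360}[fluid]
    boost = {"sand": 1.0, "ceramic": 1.15}[proppant]
    return boost * (base + 0.002 * fluid_vol + 0.004 * prop_vol + 3.0 * rate)

TARGET = 800.0                                 # desired half-length, ft
def fitness(d): return -abs(surrogate(*d) - TARGET)

def rand_design():
    return (random.choice(FLUIDS), random.choice(PROPPANTS),
            random.uniform(2e4, 2e5), random.uniform(1e4, 1e5),
            random.uniform(10, 60))

def mutate(d):
    f, p, fv, pv, r = d
    return (random.choice(FLUIDS) if random.random() < .2 else f,
            random.choice(PROPPANTS) if random.random() < .2 else p,
            min(2e5, max(2e4, fv * random.uniform(.8, 1.2))),
            min(1e5, max(1e4, pv * random.uniform(.8, 1.2))),
            min(60, max(10, r * random.uniform(.8, 1.2))))

pop = [rand_design() for _ in range(50)]
for _ in range(80):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(35)]
best = max(pop, key=fitness)
print(best, round(surrogate(*best), 1))
```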

  2. Design chart of optimum current leads

    International Nuclear Information System (INIS)

    Ishibashi, K.; Katase, A.; Maechata, K.

    1986-01-01

    The heat flow through current leads is one of the major heat losses in a superconducting magnet system. To reduce the heat flow, current leads have been optimized in a complex way by varying such quantities as conductor length, cross-sectional area, heat transfer coefficient and cooling perimeter. Therefore, this study is made to simplify the design procedure, and to explain the general characteristics of the current leads. A new combined parameter which takes turbulent flow into account is introduced in the present work to enable us to draw a useful design chart. This chart gives, for a wide variety of current leads, detailed information about the optimum design, viz. geometric dimensions, heat flow into liquid helium, and pressure drop of the cooling gas. Change of the cross-sectional area along the conductor may improve the current lead performance. The effects of this area change are examined in detail.
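
    For orientation, the classical conduction-plus-Joule-heating balance that underlies such design charts can be written out; this is a hedged textbook sketch, not the paper's new combined parameter (which additionally accounts for turbulent gas cooling):

```latex
% Heat load of a lead of length L, cross-section A, carrying current I,
% with thermal conductivity k, resistivity rho, and end-to-end temperature
% difference Delta T:
\begin{equation}
  Q(A) = \frac{k A \,\Delta T}{L} + \frac{\rho L I^{2}}{A},
\end{equation}
% minimizing over A (dQ/dA = 0) gives the optimum geometry and heat load:
\begin{equation}
  \frac{L I}{A_{\mathrm{opt}}} = \sqrt{\frac{k\,\Delta T}{\rho}},
  \qquad
  Q_{\min} = 2 I \sqrt{\rho\, k\,\Delta T}.
\end{equation}
```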

  3. Optimum design of a nuclear heat supply

    International Nuclear Information System (INIS)

    Borel, J.P.

    1984-01-01

    This paper presents an economic analysis for the optimum design of a nuclear heat supply to a given district-heating network. First, a general description of the system is given, which includes a nuclear power plant, a heating power plant and a district-heating network. The heating power plant is fed with steam from the nuclear power plant. It is assumed that the heating network is already in operation and that the nuclear power plant was previously designed to supply electricity. Second, a technical definition of the heat production and transportation installations is given. The optimal power of these installations is examined. The main result is a relationship between the network capacity and the level of the nuclear heat supply as a substitute for oil under the best economic conditions. The analysis also presents information for choosing the best operating mode. Finally, the heating power plant is studied in more detail from the energy, technical and economic aspects. (author)

  4. Optimum utilisation of the uranium resource

    International Nuclear Information System (INIS)

    Ion, S. E.; Wilson, P.D.

    1998-01-01

    The nuclear industry faces many challenges, notably to maximise safety, secure an adequate energy supply, manage wastes satisfactorily and achieve political acceptability. One way forward is to optimise together the various interdependent stages of the fuel cycle - the now familiar 'holistic approach'. Many of the issues will demand large R and D expenditure, most effectively met through international collaboration. Sustainable development requires optimum utilisation of energy potential, to which the most accessible key is recycling uranium and the plutonium bred from it. Realising anything like this full potential requires fast-neutron reactors, and therefore BNFL continues to sustain the UK involvement in their international development. Meanwhile, current R and D programmes must aim to make the nuclear option more competitive against fossil resources, while maintaining and developing the necessary skills for more advanced technologies. The paper outlines the strategies being pursued and highlights BNFL's programmes. (author)

  5. Carbon sequestration, optimum forest rotation and their environmental impact

    Energy Technology Data Exchange (ETDEWEB)

    Kula, Erhun, E-mail: erhun.kula@bahcesehir.edu.tr [Department of Economics, Bahcesehir University, Besiktas, Istanbul (Turkey); Gunalay, Yavuz, E-mail: yavuz.gunalay@bahcesehir.edu.tr [Department of Business Studies, Bahcesehir University, Besiktas, Istanbul (Turkey)

    2012-11-15

    Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost-benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO2. Consequently this finding must be considered in any carbon accounting calculations. - Highlights: ► Carbon sequestration in forestry is an environmental benefit. ► It moderates the problem of global warming. ► It prolongs the gestation period in harvesting. ► This paper uses British data in less favoured districts for growing Sitka spruce species.
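
    The qualitative result, that pricing sequestration lengthens the optimal rotation, can be reproduced with a small numerical sketch. The logistic yield curve, prices, and carbon conversion factor below are invented for illustration, not the paper's British data.

```python
# Single-rotation NPV with and without a carbon term; adding the carbon
# term pushes the optimum felling age later. All numbers are illustrative.
import math

def timber_volume(t):                  # logistic yield curve, m^3/ha (assumed)
    return 600 / (1 + math.exp(-0.08 * (t - 45)))

def npv(t, r=0.03, p_timber=25.0, p_carbon=0.0, carbon_per_m3=0.9):
    timber = p_timber * timber_volume(t) * math.exp(-r * t)
    # crude carbon benefit: pay for each year's increment as it is sequestered
    carbon = sum(p_carbon * carbon_per_m3
                 * (timber_volume(y + 1) - timber_volume(y)) * math.exp(-r * y)
                 for y in range(int(t)))
    return timber + carbon

ages = range(20, 121)
best_timber_only = max(ages, key=lambda t: npv(t))
best_with_carbon = max(ages, key=lambda t: npv(t, p_carbon=15.0))
print("optimum rotation, timber only:", best_timber_only)
print("optimum rotation, with carbon:", best_with_carbon)   # longer
```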

  6. Impacts of optimum cost effective energy efficiency standards

    International Nuclear Information System (INIS)

    Brancic, A.B.; Peters, J.S.; Arch, M.

    1991-01-01

    Building codes are increasingly required to be responsive to social and economic policy concerns. In 1990 the State of Connecticut passed An Act Concerning Global Warming, Public Act 90-219, which mandated the revision of the state building code to require that buildings and building elements be designed to provide optimum cost-effective energy efficiency over the useful life of the building. Further, such revision must meet the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) Standard 90.1-1989. As the largest electric energy supplier in Connecticut, Northeast Utilities (NU) sponsored a pilot study of the cost effectiveness of alternative building code standards for commercial construction. This paper reports on this study, which analyzed design and construction means, building elements, incremental construction costs, and energy savings to determine the optimum cost-effective building code standard. Findings are that ASHRAE 90.1 results in 21% energy savings and that alternative standards above it result in significant additional savings. Benefit/cost analysis showed that both are cost effective.

  7. Carbon sequestration, optimum forest rotation and their environmental impact

    International Nuclear Information System (INIS)

    Kula, Erhun; Gunalay, Yavuz

    2012-01-01

    Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost–benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO2. Consequently this finding must be considered in any carbon accounting calculations. - Highlights: ► Carbon sequestration in forestry is an environmental benefit. ► It moderates the problem of global warming. ► It prolongs the gestation period in harvesting. ► This paper uses British data in less favoured districts for growing Sitka spruce species.

  8. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on GitHub, you do not ask yourself whether somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  9. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection, and how they were solved, are narrated. (M.G.B.)

  10. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed, as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  11. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, using the accelerator design for LDPC decoding as an example.

  12. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,
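
    A minimal software correlator of the kind described, cross-correlating every receiver pair per frequency channel and integrating over time, might look as follows; array shapes and the noisy test signal are illustrative.

```python
# Toy software correlator: visibility matrix over receiver pairs and channels.
import numpy as np
rng = np.random.default_rng(0)

N_RECV, N_CHAN, N_TIME = 4, 8, 1024
common = rng.standard_normal((N_CHAN, N_TIME))       # sky signal seen by all
data = common + 2.0 * rng.standard_normal((N_RECV, N_CHAN, N_TIME))
spectra = data.astype(np.complex64)                  # per-channel samples

def correlate(x):
    # for each channel: time-averaged s_i * conj(s_j) for every receiver pair
    return np.einsum("ict,jct->ijc", x, np.conj(x)) / x.shape[-1]

vis = correlate(spectra)
# off-diagonal terms keep the common signal while uncorrelated noise averages down
print(vis.shape, np.abs(vis[0, 1]).mean(), np.abs(vis[0, 0]).mean())
```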

  13. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  14. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  15. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  16. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  17. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second-stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular the modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  18. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  19. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  20. An Optimum Solution for Electric Power Theft

    Directory of Open Access Journals (Sweden)

    Aamir Hussain Memon

    2013-07-01

    Electric power theft is a problem that continues to plague the power sector across the whole country. Every year, the electricity companies face line losses at an average of 20-30%, and according to Power Ministry estimates, WAPDA companies lose more than Rs. 125 billion. Significantly, this is enough to destroy the entire power sector of the country. According to sources, 20% losses mean the masses have to pay an extra 20% in terms of electricity tariffs. In other words, the innocent consumers pay the bills of those who steal electricity. For all that, no permanent solution for this major issue has ever been proposed. We propose an applicable and optimum solution for this intractable problem. In our research, we propose an electric power theft solution based on three stages: the transmission stage, the distribution stage, and the user stage. Without synchronization among all three, the complete solution cannot be achieved. The proposed solution is simulated on NI (National Instruments) Circuit Design Suite Multisim v.10.0. Our research work is a practical and workable approach towards electric power theft, given the conditions in Pakistan, which is bearing the brunt of power crises already.

  1. Mesh networks: an optimum solution for AMR

    Energy Technology Data Exchange (ETDEWEB)

    Mimno, G.

    2003-12-01

    Characteristics of mesh networks and the advantage of using them in automatic meter reading equipment (AMR) are discussed. Mesh networks are defined as being similar to a fishing net made of knots and links. In mesh networks the knots represent meter sites and the links are the radio paths between the meter sites and the neighbourhood concentrator. In mesh networks any knot in the communications chain can link to any other and the optimum path is calculated by the network by hopping from meter to meter until the radio message reaches a concentrator. This mesh communications architecture is said to be vastly superior to many older types of radio-based meter reading technologies; its main advantage is that it not only significantly improves the economics of fixed network deployment, but also supports time-of-use metering, remote disconnect services and advanced features, such as real-time pricing, demand response, and other efficiency measures, providing a better return on investment and reliability.
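
    The "hopping from meter to meter until the radio message reaches a concentrator" behaviour amounts to shortest-path routing on the radio adjacency graph; below is a toy breadth-first-search sketch with an invented link table.

```python
# Fewest-hop path from a meter to the neighbourhood concentrator (BFS).
from collections import deque

links = {                        # who can hear whom (radio adjacency, assumed)
    "concentrator": ["m1", "m2"],
    "m1": ["concentrator", "m2", "m3"],
    "m2": ["concentrator", "m1", "m4"],
    "m3": ["m1", "m5"], "m4": ["m2"], "m5": ["m3"],
}

def route(src, dst="concentrator"):
    seen, q = {src: None}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:                      # walk parents back to the source
            path = [node]
            while seen[path[-1]] is not None:
                path.append(seen[path[-1]])
            return list(reversed(path))
        for nb in links.get(node, []):
            if nb not in seen:
                seen[nb] = node
                q.append(nb)

print(route("m5"))   # ['m5', 'm3', 'm1', 'concentrator']
```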

  2. Optimum harvest maturity for Leymus chinensis seed

    Directory of Open Access Journals (Sweden)

    Jixiang Lin

    2016-06-01

    Timely harvest is critical to achieve maximum seed viability and vigour in agricultural production. However, little information exists concerning how to reap the best quality seeds of Leymus chinensis, which is the dominant and most promising grass species in the Songnen Grassland of Northern China. The objective of this study was to investigate and evaluate possible quality indices of the seeds at different days after peak anthesis. Seed quality at different development stages was assessed by the colours of the seed and lemmas, seed weight, moisture content, electrical conductivity of seed leachate and germination indices. Two consecutive years of experimental results showed that the maximum seed quality was recorded at 39 days after peak anthesis. At this date, the colours of the seed and lemmas reached heavy brown and yellow, respectively. The seed weight was highest and the moisture content and the electrical conductivity of seed leachate were lowest. In addition, the seed also reached its maximum germination percentage and energy at this stage, determined using a standard germination test (SGT) and accelerated ageing test (AAT). Thus, Leymus chinensis can be harvested at 39 days after peak anthesis based on the changes in parameters. Colour identification can be used as an additional indicator to provide a more rapid and reliable measure of optimum seed maturity; approximately 10 days after the colour of the lemmas reached yellow and the colour of the seed reached heavy brown, the seed of this species was suitable for harvest.

  3. Designing from minimum to optimum functionality

    Science.gov (United States)

    Bannova, Olga; Bell, Larry

    2011-04-01

    This paper discusses a multifaceted strategy to link NASA Minimal Functionality Habitable Element (MFHE) requirements to a compatible growth plan, leading forward to evolutionary, deployable habitats including outpost development stages. The discussion begins by reviewing fundamental geometric features inherent in small scale, vertical and horizontal, pressurized module configuration options to characterize their applicability to meet stringent MFHE constraints. A scenario is proposed to incorporate a vertical core MFHE concept into an expanded architecture, providing continuity of structural form and a logical path from "minimum" to "optimum" design of a habitable module. The paper describes how habitation and logistics accommodations could be pre-integrated into a common Hab/Log Module that serves both habitation and logistics functions. This is offered as a means to reduce unnecessary redundant development costs and to avoid EVA-intensive on-site adaptation and retrofitting requirements for augmented crew capacity. An evolutionary version of the hard shell Hab/Log design would have an expandable middle section to afford larger living and working accommodations. In conclusion, the paper illustrates that a number of cargo missions referenced for NASA's 4.0.0 Lunar Campaign Scenario could be eliminated altogether to expedite progress and reduce budgets. The plan concludes with a vertical growth geometry that provides versatile and efficient site development opportunities using a combination of hard Hab/Log modules and a hybrid expandable "CLAM" (Crew Lunar Accommodations Module) element.

  4. Achieving optimum diffraction based overlay performance

    Science.gov (United States)

    Leray, Philippe; Laidler, David; Cheng, Shaunee; Coogans, Martyn; Fuchs, Andreas; Ponomarenko, Mariya; van der Schaar, Maurits; Vanoppen, Peter

    2010-03-01

    Diffraction Based Overlay (DBO) metrology has been shown to have significantly reduced Total Measurement Uncertainty (TMU) compared to Image Based Overlay (IBO), primarily due to having no measurable Tool Induced Shift (TIS). However, the advantages of having no measurable TIS can be outweighed by increased susceptibility to WIS (Wafer Induced Shift) caused by target damage, process non-uniformities and variations. The path to optimum DBO performance lies in having well characterized metrology targets, which are insensitive to process non-uniformities and variations, in combination with optimized recipes which take advantage of advanced DBO designs. In this work we examine the impact of different degrees of process non-uniformity and target damage on DBO measurement gratings and study their impact on overlay measurement accuracy and precision. Multiple wavelength and dual polarization scatterometry are used to characterize the DBO design performance over the range of process variation. In conclusion, we describe the robustness of DBO metrology to target damage and show how to exploit the measurement capability of a multiple wavelength, dual polarization scatterometry tool to ensure the required measurement accuracy for current and future technology nodes.

  5. Optimum aberration coefficients for recording high-resolution off-axis holograms in a Cs-corrected TEM

    Energy Technology Data Exchange (ETDEWEB)

    Linck, Martin, E-mail: linck@ceos-gmbh.de [CEOS GmbH, Englerstr. 28, D-69126 Heidelberg (Germany)

    2013-01-15

    Amongst the impressive improvements in high-resolution electron microscopy, the Cs-corrector also has significantly enhanced the capabilities of off-axis electron holography. Recently, it has been shown that the signal above noise in the reconstructable phase can be significantly improved by combining holography and hardware aberration correction. Additionally, with a spherical aberration close to zero, the traditional optimum focus for recording high-resolution holograms ('Lichte's defocus') has become less stringent and both, defocus and spherical aberration, can be selected freely within a certain range. This new degree of freedom can be used to improve the signal resolution in the holographically reconstructed object wave locally, e.g. at the atomic positions. A brute force simulation study for an aberration corrected 200 kV TEM is performed to determine optimum values for defocus and spherical aberration for best possible signal to noise in the reconstructed atomic phase signals. Compared to the optimum aberrations for conventional phase contrast imaging (NCSI), which produce 'bright atoms' in the image intensity, the resulting optimum values of defocus and spherical aberration for off-axis holography enable 'black atom contrast' in the hologram. However, they can significantly enhance the local signal resolution at the atomic positions. At the same time, the benefits of hardware aberration correction for high-resolution off-axis holography are preserved. It turns out that the optimum is depending on the object and its thickness and therefore not universal. -- Highlights: ► Optimized aberration parameters for high-resolution off-axis holography. ► Simulation and analysis of noise in high-resolution off-axis holograms. ► Improving signal resolution in the holographically reconstructed phase shift. ► Comparison of 'black' and 'white' atom contrast in off-axis holograms.

  6. Optimum aberration coefficients for recording high-resolution off-axis holograms in a Cs-corrected TEM

    International Nuclear Information System (INIS)

    Linck, Martin

    2013-01-01

    Amongst the impressive improvements in high-resolution electron microscopy, the Cs-corrector also has significantly enhanced the capabilities of off-axis electron holography. Recently, it has been shown that the signal above noise in the reconstructable phase can be significantly improved by combining holography and hardware aberration correction. Additionally, with a spherical aberration close to zero, the traditional optimum focus for recording high-resolution holograms (“Lichte's defocus”) has become less stringent and both, defocus and spherical aberration, can be selected freely within a certain range. This new degree of freedom can be used to improve the signal resolution in the holographically reconstructed object wave locally, e.g. at the atomic positions. A brute force simulation study for an aberration corrected 200 kV TEM is performed to determine optimum values for defocus and spherical aberration for best possible signal to noise in the reconstructed atomic phase signals. Compared to the optimum aberrations for conventional phase contrast imaging (NCSI), which produce “bright atoms” in the image intensity, the resulting optimum values of defocus and spherical aberration for off-axis holography enable “black atom contrast” in the hologram. However, they can significantly enhance the local signal resolution at the atomic positions. At the same time, the benefits of hardware aberration correction for high-resolution off-axis holography are preserved. It turns out that the optimum is depending on the object and its thickness and therefore not universal. -- Highlights: ► Optimized aberration parameters for high-resolution off-axis holography. ► Simulation and analysis of noise in high-resolution off-axis holograms. ► Improving signal resolution in the holographically reconstructed phase shift. ► Comparison of “black” and “white” atom contrast in off-axis holograms.

  7. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

    Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation, where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)
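
    As a rough illustration of the HiL idea described above, the sketch below runs a simulated drive cycle against a stand-in for the hardware under test; in a real rig the fuel_cell_voltage stub would be replaced by measurements from the physical stack. All models and numbers here are invented for illustration, not taken from the paper.

```python
# Minimal hardware-in-loop skeleton: the simulation side computes a power
# demand each timestep; the hardware side (stubbed here by a toy ohmic
# polarization curve) supplies the measured response, which is fed back.

def vehicle_power_demand(t):
    """Toy drive cycle: power demand in watts as a function of time (s)."""
    return 2000.0 if int(t) % 60 < 30 else 1250.0

def fuel_cell_voltage(current):
    """Stand-in for the real stack: 80 cells with simple ohmic losses."""
    return max(0.95 * 80 - 0.05 * current, 0.0)

dt, t, energy = 0.1, 0.0, 0.0
while t < 10.0:
    p_req = vehicle_power_demand(t)            # simulation side
    current = p_req / fuel_cell_voltage(10.0)  # naive current setpoint
    v_meas = fuel_cell_voltage(current)        # a sensor reading in real HiL
    energy += v_meas * current * dt            # measured power feeds the model
    t += dt
print(f"energy drawn over 10 s: {energy:.0f} J")
```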

  8. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several tens of thousands of nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  9. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs for advanced error correcting techniques.

  10. Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators

    DEFF Research Database (Denmark)

    Schou, Casper; Bøgh, Simon; Madsen, Ole

    2014-01-01

    This paper presents concepts and ideas of a hardware reconfiguration framework for modular industrial mobile manipulators. Mobile manipulators pose a highly flexible production resource due to their ability to autonomously navigate between workstations. However, due to this high flexibility new approaches to the operation of the robots are needed. Reconfiguring the robot to a new task should be carried out by shop floor operators and, thus, be both quick and intuitive. Late research has already proposed a method for intuitive robot programming. However, this relies on a predetermined hardware configuration. Finding a single multi-purpose hardware configuration suited to all tasks is considered unrealistic. As a result, the need for reconfiguration of the hardware is inevitable. In this paper an outline of a framework for making hardware reconfiguration quick and intuitive is presented. Two main...

  11. A Scalable Approach for Hardware Semiformal Verification

    OpenAIRE

    Grimm, Tomas; Lettnin, Djones; Hübner, Michael

    2018-01-01

    The current verification flow of complex systems uses different engines synergistically: virtual prototyping, formal verification, simulation, emulation and FPGA prototyping. However, none is able to verify a complete architecture. Furthermore, hybrid approaches aiming at complete verification use techniques that lower the overall complexity by increasing the abstraction level. This work focuses on the verification of complex systems at the RT level to handle the hardware peculiarities. Our r...

  12. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design of the smart meter, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  13. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.
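
    The abstract does not spell out the optimal module distribution itself; as a purely hypothetical illustration of the underlying allocation problem, the sketch below balances bitlength-specific ECM modules so that no single bitlength class dominates the runtime. The workload shares, per-module throughputs and module budget are all made-up numbers.

```python
# Hypothetical sketch: distribute bitlength-specific ECM modules over a fixed
# hardware budget so that the expected work per module class is balanced.

workload = {96: 0.50, 128: 0.35, 160: 0.15}       # fraction of cofactors
throughput = {96: 400.0, 128: 220.0, 160: 120.0}  # curves/s per module
total_modules = 64

# Time per unit of work for one module of each size is workload/throughput;
# allocate modules proportionally, then hand out remainders greedily.
need = {b: workload[b] / throughput[b] for b in workload}
scale = total_modules / sum(need.values())
alloc = {b: max(1, int(need[b] * scale)) for b in need}
while sum(alloc.values()) < total_modules:
    b = max(need, key=lambda k: need[k] / alloc[k])  # most loaded class
    alloc[b] += 1
print(alloc)
```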

  14. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk covers particle transport simulation on heterogeneous hardware, such as CPUs and GPGPUs. About the speaker: Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculations...

  15. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as low as reasonably achievable (ALARA) review provides a description of the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. High exposure rate hardware (HERH) waste is hardware found in the N-Fuel Storage Basin that has a contact dose rate greater than 1 R/hr, together with used filters. This waste will be collected in the fuel baskets at various locations in the basins

  16. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  17. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells. The fault distribution over locations is chosen randomly based on a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS) of a nuclear power plant. We considered four software operational profiles. From the results it was found that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile
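
    The single bit-flip experiment generalizes well beyond VHDL. A minimal software analogue of the paper's injection campaign is sketched below; the workload function, register width and trial count are illustrative assumptions, not the paper's setup.

```python
# Minimal single bit-flip fault injection: corrupt one random bit of a
# "register" value, rerun the workload, and count runs whose output still
# matches the golden (fault-free) output, i.e. runs where software masked
# the hardware fault.
import random

def workload(x):
    """Stand-in application: only the low 8 bits influence the result."""
    return (x & 0xFF) % 16

def inject_bit_flip(value, width=32):
    bit = random.randrange(width)
    return value ^ (1 << bit)

random.seed(1)
golden_input = 0x1234ABCD
golden_output = workload(golden_input)
trials = 10_000
masked = sum(
    workload(inject_bit_flip(golden_input)) == golden_output
    for _ in range(trials)
)
print(f"masking probability ~ {masked / trials:.2%}")
```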

  18. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab which makes student participation possible at any time and from any place. This lab nevertheless transfers a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components which correspond to a fully equipped laboratory workstation and are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003, the Mobile Hardware Lab has been offered in a completely web-based form.

  19. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  20. Implementation of optimum solar electricity generating system

    International Nuclear Information System (INIS)

    Singh, Balbir Singh Mahinder; Karim, Samsul Ariffin A.; Sivapalan, Subarna; Najib, Nurul Syafiqah Mohd; Menon, Pradeep

    2014-01-01

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% to the total electricity generation by the year 2015, which amounts to 98MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia. SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness on the implementation of the solar electricity generating system (SEGS). In Malaysia, harnessing technologies related to solar energy resources has great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels

  1. Implementation of optimum solar electricity generating system

    Science.gov (United States)

    Singh, Balbir Singh Mahinder; Sivapalan, Subarna; Najib, Nurul Syafiqah Mohd; Menon, Pradeep; Karim, Samsul Ariffin A.

    2014-10-01

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% to the total electricity generation by the year 2015, which amounts to 98MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia. SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness on the implementation of the solar electricity generating system (SEGS). In Malaysia, harnessing technologies related to solar energy resources has great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels.

  2. Implementation of optimum solar electricity generating system

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my; Karim, Samsul Ariffin A., E-mail: samsul-ariffin@petronas.com.my [Department of Fundamental and Applied Sciences, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia); Sivapalan, Subarna, E-mail: subarna-sivapalan@petronas.com.my [Department of Management and Humanities, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia); Najib, Nurul Syafiqah Mohd; Menon, Pradeep [Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia)

    2014-10-24

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% to the total electricity generation by the year 2015, which amounts to 98MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia. SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness on the implementation of the solar electricity generating system (SEGS). In Malaysia, harnessing technologies related to solar energy resources has great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels.

  3. Monitoring Particulate Matter with Commodity Hardware

    Science.gov (United States)

    Holstius, David

    Health effects attributed to outdoor fine particulate matter (PM 2.5) rank it among the risk factors with the highest health burdens in the world, annually accounting for over 3.2 million premature deaths and over 76 million lost disability-adjusted life years. Existing PM2.5 monitoring infrastructure cannot, however, be used to resolve variations in ambient PM2.5 concentrations with adequate spatial and temporal density, or with adequate coverage of human time-activity patterns, such that the needs of modern exposure science and control can be met. Small, inexpensive, and portable devices, relying on newly available off-the-shelf sensors, may facilitate the creation of PM2.5 datasets with improved resolution and coverage, especially if many such devices can be deployed concurrently with low system cost. Datasets generated with such technology could be used to overcome many important problems associated with exposure misclassification in air pollution epidemiology. Chapter 2 presents an epidemiological study of PM2.5 that used data from ambient monitoring stations in the Los Angeles basin to observe a decrease of 6.1 g (95% CI: 3.5, 8.7) in population mean birthweight following in utero exposure to the Southern California wildfires of 2003, but was otherwise limited by the sparsity of the empirical basis for exposure assessment. Chapter 3 demonstrates technical potential for remedying PM2.5 monitoring deficiencies, beginning with the generation of low-cost yet useful estimates of hourly and daily PM2.5 concentrations at a regulatory monitoring site. The context (an urban neighborhood proximate to a major goods-movement corridor) and the method (an off-the-shelf sensor costing approximately USD $10, combined with other low-cost, open-source, readily available hardware) were selected to have special significance among researchers and practitioners affiliated with contemporary communities of practice in public health and citizen science. As operationalized by

  4. Optimum community energy storage system for demand load shifting

    International Nuclear Information System (INIS)

    Parra, David; Norman, Stuart A.; Walker, Gavin S.; Gillott, Mark

    2016-01-01

    Highlights: • PbA and lithium-ion batteries are optimised up to a 100-home community. • A 4-period real-time pricing and Economy 7 (2-period time-of-use) are compared. • Li-ion batteries perform worse with Economy 7 for small communities and vice versa. • The community approach reduced the levelised cost by 56% compared to a single home. • Heat pumps reduced the levelised cost and increased the profitability of batteries. - Abstract: Community energy storage (CES) is becoming an attractive technological option to facilitate the use of distributed renewable energy generation, manage demand loads and decarbonise the residential sector. There is strong interest in understanding the techno-economic benefits of using CES systems, which energy storage technology is more suitable and the optimum CES size. In this study, the performance including equivalent full cycles and round trip efficiency of lead-acid (PbA) and lithium-ion (Li-ion) batteries performing demand load shifting is quantified as a function of the size of the community using simulation-based optimisation. Two different retail tariffs are compared: a time-of-use tariff (Economy 7) and a real-time-pricing tariff including four periods based on the electricity prices on the wholesale market. Additionally, the economic benefits are quantified when projected to two different years: 2020 and a hypothetical zero carbon year. The findings indicate that the optimum PbA capacity was approximately twice the optimum Li-ion capacity in the case of the real-time-pricing tariff and around 1.6 times for Economy 7 for any community size except a single home. The levelised cost followed a negative logarithmic trend while the internal rate of return followed a positive logarithmic trend as a function of the size of the community. PbA technology reduced the levelised cost down to 0.14 £/kW h when projected to the year 2020 for the retail tariff Economy 7. CES systems were sized according to the demand load and
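
    For intuition on the levelised-cost figures quoted above, a back-of-envelope version of the calculation divides lifetime cost by lifetime discharged energy, driven by the equivalent full cycles and round-trip efficiency the study quantifies. Every input below is illustrative, not a value from the paper.

```python
# Hedged levelised-cost-of-storage sketch: lifetime cost over lifetime
# energy actually discharged to the community.
capacity_kwh = 100.0     # community battery capacity
capex_per_kwh = 300.0    # installed cost, currency units per kWh
cycles_per_year = 300    # equivalent full cycles per year
years = 10               # service life
efficiency = 0.85        # round-trip efficiency

lifetime_cost = capacity_kwh * capex_per_kwh
energy_out_kwh = capacity_kwh * cycles_per_year * years * efficiency
print(f"levelised cost ~ {lifetime_cost / energy_out_kwh:.3f} per kWh")
```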

  5. Optimum gain and phase for stochastic cooling systems

    International Nuclear Information System (INIS)

    Meer, S. van der.

    1984-01-01

    A detailed analysis of optimum gain and phase adjustment in stochastic cooling systems reveals that the result is strongly influenced by the beam feedback effect and that for optimum performance the system phase should change appreciably across each Schottky band. It is shown that the performance is not greatly diminished if a constant phase is adopted instead. On the other hand, the effect of mixing between pick-up and kicker (which produces a phase change similar to the optimum one) is shown to be less perturbing than is usually assumed, provided that the absolute value of the gain is not too far from the optimum value. (orig.)

  6. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performance achieved.
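
    The GFSR recurrence combines two lagged words with XOR: x[n] = x[n-p] XOR x[n-q]. The sketch below is a software model of the hardware's word-wide shift/XOR step; the lag pair (250, 103) is the classic Kirkpatrick-Stoll choice, assumed here for illustration since the abstract does not state its lags, and proper GFSR seeding needs more care than shown.

```python
# Software model of a GFSR generator: x[n] = x[n-p] XOR x[n-q], kept in a
# circular buffer of the last p words.
import random

P, Q, WORD = 250, 103, 32

random.seed(42)
state = [random.getrandbits(WORD) for _ in range(P)]  # naive seeding only
idx = 0                                               # slot of x[n-P]

def gfsr_next():
    global idx
    # x[n-Q] sits (P - Q) slots ahead of the oldest entry x[n-P].
    new = state[idx] ^ state[(idx + P - Q) % P]
    state[idx] = new           # overwrite oldest word with x[n]
    idx = (idx + 1) % P
    return new

print([hex(gfsr_next()) for _ in range(4)])
```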

  7. A portable anaerobic microbioreactor reveals optimum growth conditions for the methanogen Methanosaeta concilii.

    Science.gov (United States)

    Steinhaus, Benjamin; Garcia, Marcelo L; Shen, Amy Q; Angenent, Largus T

    2007-03-01

    Conventional studies of the optimum growth conditions for methanogens (methane-producing, obligate anaerobic archaea) are typically conducted with serum bottles or bioreactors. The use of microfluidics to culture methanogens allows direct microscopic observations of the time-integrated response of growth. Here, we developed a microbioreactor (microBR) with approximately 1-microl microchannels to study some optimum growth conditions for the methanogen Methanosaeta concilii. The microBR is contained in an anaerobic chamber specifically designed to place it directly onto an inverted light microscope stage while maintaining a N2-CO2 environment. The methanogen was cultured for months inside microchannels of different widths. Channel width was manipulated to create various fluid velocities, allowing the direct study of the behavior and responses of M. concilii to various shear stresses and revealing an optimum shear level of approximately 20 to 35 microPa. Gradients in a single microchannel were then used to find an optimum pH level of 7.6 and an optimum total NH4-N concentration of less than 1,100 mg/liter (<47 mg/liter as free NH3-N) for M. concilii under conditions of the previously determined ideal shear stress and pH and at a temperature of 35 degrees C.

  8. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served though REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.
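
    For a flavour of what the Arduino-SDI-12 library has to handle, SDI-12 data responses frame a one-character sensor address followed by signed values, e.g. "0+23.5-0.12+7.8" in reply to a D0! command. A minimal host-side parser for that framing is sketched below (CRC handling and multi-frame responses are omitted, and the example reading is invented).

```python
# Minimal parser for SDI-12 data-response framing: "<address><+/-value>...".
import re

def parse_sdi12(response: str):
    address, body = response[0], response[1:].strip()
    values = [float(v) for v in re.findall(r"[+-]\d+(?:\.\d+)?", body)]
    return address, values

print(parse_sdi12("0+23.5-0.12+7.8"))   # ('0', [23.5, -0.12, 7.8])
```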

  9. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
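
    The capacity relation in the paragraph above multiplies out directly. A worked example with illustrative CHS-style geometry (not the numbers of any particular drive):

```python
# Capacity = sides x tracks/side x sectors/track x bytes/sector.
sides = 16                # read/write heads (disk sides)
tracks_per_side = 63_000
sectors_per_track = 512
bytes_per_sector = 512

capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(f"{capacity_bytes / 1e9:.1f} GB")   # ~264.2 GB for these numbers
```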

  10. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future

  11. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew s capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  12. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  13. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  14. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

    The present invention, as expressed in the title of this specification, consists of mixed-signal hardware for computing the integral image in the focal plane by means of an array of basic sensing-processing cells whose interconnection can be reconfigured through peripheral circuitry. This makes possible a very efficient implementation of a processing task that is very useful in computer vision, namely the computation of the integral image, in scenarios such as monit...
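
    What the focal-plane array computes is the standard integral image: each entry holds the sum of all pixels above and to the left, after which any axis-aligned box sum costs four lookups. A NumPy reference version of both steps:

```python
# Integral image and the four-corner box-sum identity it enables.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum over the inclusive box [r0..r1] x [c0..c1] in O(1)."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```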

  15. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed in order to control the light projection of the optical emitters, to switch the capacitance electrodes and to synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  16. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    The convolution step is by far the most time consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as a fast parallel processor provided that algorithms are reformulated in a parallel solution. The purpose of this work is to test the hypothesis that a non-cartesian reconstruction can be efficiently implemented on graphics hardware, giving a significant speedup compared to CPU based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that have limited the speed of previous GPU gridding algorithms [2].
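
    The convolution (gridding) step smears each non-cartesian k-space sample onto nearby cartesian grid points with a small kernel, which is exactly the scatter operation the GPU parallelises. A minimal CPU sketch of that step is given below; production reconstructions use a Kaiser-Bessel kernel plus deapodization, so the Gaussian kernel and neighbourhood size here are simplifying assumptions.

```python
# Minimal convolution-gridding sketch followed by an inverse FFT.
import numpy as np

N, width, sigma = 128, 2, 0.7
grid = np.zeros((N, N), dtype=complex)

rng = np.random.default_rng(0)
kx = rng.uniform(0, N - 1, 1000)     # sample coordinates in grid units
ky = rng.uniform(0, N - 1, 1000)
data = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)

for x, y, d in zip(kx, ky, data):
    for gx in range(int(x) - width, int(x) + width + 1):
        for gy in range(int(y) - width, int(y) + width + 1):
            if 0 <= gx < N and 0 <= gy < N:
                w = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma**2))
                grid[gx, gy] += w * d   # scatter sample onto the grid

image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
```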

  17. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of
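
    Of the three modules, the wavelet transform is the easiest to illustrate in a few lines. ICER-3D uses specific integer wavelet filters, so the single-level Haar split below is only a stand-in for the low/high subband decomposition the FPGA pipeline performs on each spectral plane.

```python
# Single-level 1-D Haar split into approximation (low) and detail (high)
# subbands, as a stand-in for the wavelet module of the pipeline.
import numpy as np

def haar_1d(signal):
    even, odd = signal[0::2], signal[1::2]
    low = (even + odd) / np.sqrt(2)     # approximation subband
    high = (even - odd) / np.sqrt(2)    # detail subband
    return low, high

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
low, high = haar_1d(x)
print(low, high)
```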

  18. List search hardware for interpretive software

    CERN Document Server

    Altaber, Jacques; Mears, B; Rausch, R

    1979-01-01

    Interpreted languages, e.g. BASIC, are simple to learn, easy to use, quick to modify and in general 'user-friendly'. However, a critically time consuming process during interpretation is that of list searching. A special microprogrammed device for fast list searching has therefore been developed at the SPS Division of CERN. It uses bit-sliced hardware. Fast algorithms perform search, insert and delete of a six-character name and its value in a list of up to 1000 pairs. The prototype shows retrieval times of the order of 10-30 microseconds. (11 refs).
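
    A software model of the three operations the device implements, on a table of at most 1000 six-character name/value pairs, is sketched below; the hardware scans the list in microcode, which the linear scan here mirrors (a modern interpreter would of course use a hash table instead).

```python
# Search / insert / delete of six-character names, linear-scan model.
MAX_PAIRS = 1000
table = []   # list of (name, value) pairs, names padded to 6 characters

def key(name):
    return name.upper().ljust(6)[:6]

def search(name):
    k = key(name)
    return next((v for n, v in table if n == k), None)

def delete(name):
    k = key(name)
    table[:] = [(n, v) for n, v in table if n != k]

def insert(name, value):
    delete(name)                      # replace any existing entry
    if len(table) >= MAX_PAIRS:
        raise MemoryError("list full")
    table.append((key(name), value))

insert("COUNT", 42)
assert search("count") == 42
```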

  19. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.
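
    The Legendre-transform segment finder exploits the fact that a straight track tangent to a drift circle of radius r centred on wire (x, y) satisfies d = x cos(theta) + y sin(theta) +/- r; voting each hit's two tangent curves into a (theta, d) histogram makes the common tangent stand out as a peak. The sketch below illustrates the accumulation; hit coordinates, units and binning are invented for illustration and do not reflect the FPGA implementation.

```python
# Toy Legendre-transform accumulator for drift-tube segment finding.
import numpy as np

hits = [(0.0, 0.10, 0.10), (1.0, 0.32, 0.08), (2.0, 0.48, 0.12)]  # x, y, r

thetas = np.linspace(0, np.pi, 180)
d_edges = np.linspace(-3, 3, 241)
acc = np.zeros((len(thetas), len(d_edges) - 1))

for x, y, r in hits:
    base = x * np.cos(thetas) + y * np.sin(thetas)
    for sign in (+1, -1):                     # two tangents per drift circle
        idx = np.digitize(base + sign * r, d_edges) - 1
        ok = (idx >= 0) & (idx < acc.shape[1])
        acc[np.arange(len(thetas))[ok], idx[ok]] += 1

i, j = np.unravel_index(acc.argmax(), acc.shape)
print(f"segment: theta ~ {thetas[i]:.2f} rad, d ~ {d_edges[j]:.2f}")
```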

  20. The optimum decision rules for the oddity task

    NARCIS (Netherlands)

    Versfeld, N.J.; Dai, H.; Green, D.M.

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur

  1. Optimum mobility’ facelift. Part 2 – the technique

    OpenAIRE

    Fanous, Nabil; Karsan, Naznin; Zakhary, Kristina; Tawile, Carolyne

    2006-01-01

    In the first of this two-part article on the ‘optimum mobility’ facelift, facial tissue mobility was analyzed, and three theories or mechanisms emerged: ‘intrinsic mobility’, ‘surgically induced mobility’ and ‘optimum mobility points’.

  2. Optimum material gradient composition for the functionally graded ...

    African Journals Online (AJOL)

    This study investigates the relation between the material gradient properties and the optimum sensing/actuation design of the functionally graded piezoelectric beams. Three-dimensional (3D) finite element analysis has been employed for the prediction of an optimum composition profile in these types of sensors and ...

  3. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

    The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fracture with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially from the perspective of patients' daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. The hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. The pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 (72.5%) patients experienced improved ankle stiffness and 65 (81.3%) less discomfort while walking on uneven ground, and 63 (80.8%) patients were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  4. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its' contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as, the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  5. CASIS Fact Sheet: Hardware and Facilities

    Science.gov (United States)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) integrate their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).

  6. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and hardware experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM microcontroller. • Introduces number systems and signal transmission methods. • Reviews logic gates, registers, multiplexers, decoders and memory. • Provides an overview and examples of the ARM instruction set. • Uses Keil development tools for writing and debugging ARM assembly language programs. • Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real time clock configuration, binary input to 7-segment display, creating ...

  7. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  8. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

    The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and dimensioning of components. The second part of this project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench which emulates the behavior of the other system components (solar circuit with heat storage, recooling, buildings and cooling distribution/transfer). The chiller is controlled by a simulation of the system using MATLAB/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, the simulation model of the chiller can then be validated. Further tests are used to optimize the control of the chiller to the current cooling load. In addition, some changes in system configurations (for example, a cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  9. Optimum Image Formation for Spaceborne Microwave Radiometer Products.

    Science.gov (United States)

    Long, David G; Brodzik, Mary J

    2016-05-01

    This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of historic sensor data for which the MRF is not known as well as modern sensors. Simulation tradeoff results are confirmed using actual data.
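
    The SIR algorithm belongs to the family of iterative reconstructions that update pixel estimates until forward projections through the measurement response function (MRF) match the measured brightness temperatures. The sketch below is a generic MART-style multiplicative loop in that spirit, not the exact SIR update; the MRF weights, dimensions and noise-free measurements are all synthetic stand-ins.

```python
# Generic multiplicative iterative reconstruction sketch (MART-style),
# illustrating the idea behind partial reconstruction from overlapping
# antenna footprints.
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_meas = 50, 120
A = rng.random((n_meas, n_pix))          # stand-in MRF weights
A /= A.sum(axis=1, keepdims=True)        # each row: a normalized footprint
truth = 150 + 100 * rng.random(n_pix)    # brightness temperatures (K)
z = A @ truth                            # simulated measurements

x = np.full(n_pix, z.mean())             # flat first guess
for _ in range(30):
    ratio = z / (A @ x)                  # measured vs predicted
    x *= (A.T @ ratio) / A.sum(axis=0)   # weighted multiplicative update
print(f"mean abs error: {np.abs(x - truth).mean():.2f} K")
```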

  10. Generalized Pareto optimum and semi-classical spinors

    Science.gov (United States)

    Rouleux, M.

    2018-02-01

    In 1971, S. Smale presented a generalization of Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p) ∈ C∞(T*R²) to its normal form near a singular point of the Fresnel surface. Namely, we say that ℋ has the Pareto property if it decomposes, locally, up to a conjugation with regular matrices, as ℋ(p) = u′(p) C(p) (u′(p))*, where u : R² → R² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix (“integrating factor”). In particular this applies in certain cases to the matrix Hamiltonian of Elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.

  11. Investigation of earthquake factor for optimum tuned mass dampers

    Science.gov (United States)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) under earthquake excitation are investigated. The optimization was carried out with the Harmony Search (HS) algorithm, a metaheuristic method inspired by the way musicians improvise in search of a pleasing harmony. In addition, the results of the optimization objective are compared with those of another documented method, and the inferior solutions are discarded, so that the best optimum results are retained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
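    To make the search concrete, the sketch below applies the same harmony-search recipe (harmony memory, memory consideration, pitch adjustment, random selection) to a simplified stand-in objective: the peak displacement magnification of an undamped SDOF structure with a TMD under harmonic excitation (the classical Den Hartog model). The mass ratio, bounds and HS settings are illustrative assumptions, not the paper's values.

        import numpy as np

        rng = np.random.default_rng(1)
        mu = 0.05                          # TMD-to-structure mass ratio (assumed)
        g = np.linspace(0.5, 1.5, 2000)    # excitation/structure frequency ratio grid

        def peak_dmf(f, z):
            """Peak displacement magnification for TMD tuning f, damping z."""
            num = (f**2 - g**2)**2 + (2*z*f*g)**2
            den = ((g**2 - 1)*(g**2 - f**2) - mu*f**2*g**2)**2 \
                + (2*z*f*g)**2 * (g**2 - 1 + mu*g**2)**2
            return np.sqrt(num / den).max()

        lo, hi = np.array([0.7, 0.01]), np.array([1.1, 0.3])   # bounds on (f, z)
        hm = rng.uniform(lo, hi, (20, 2))                      # harmony memory
        cost = np.array([peak_dmf(*x) for x in hm])
        for _ in range(2000):
            new = np.empty(2)
            for d in range(2):
                if rng.random() < 0.9:                         # memory consideration
                    new[d] = hm[rng.integers(20), d]
                    if rng.random() < 0.3:                     # pitch adjustment
                        new[d] += rng.uniform(-1, 1) * 0.01 * (hi[d] - lo[d])
                else:                                          # random selection
                    new[d] = rng.uniform(lo[d], hi[d])
            new = np.clip(new, lo, hi)
            c = peak_dmf(*new)
            worst = cost.argmax()
            if c < cost[worst]:                                # replace worst harmony
                hm[worst], cost[worst] = new, c
        best = hm[cost.argmin()]
        print("f_opt=%.3f zeta_opt=%.3f peak DMF=%.2f" % (*best, cost.min()))

    For a mass ratio of 0.05 the search settles near Den Hartog's closed-form optimum (f ≈ 0.95, ζ ≈ 0.13), a useful correctness check for the implementation.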

  12. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  14. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists them in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology covered by this handbook.

  15. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is to give an overview of existing concepts in state-of-the-art systems and to enable the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS's complexity are examined. Four battery packs taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding measurement of the needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  16. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008-SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(D) TTL output module; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10-30 V) input module; Allen-Bradley 1771-OBD DC (10-60 V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  17. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
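    The partitioning at the heart of the claim is easy to picture in code. The sketch below is a loose illustration under assumed conventions (a perfect binary tree in heap order, two tiers per test level), not the patented implementation: it groups tiers into non-overlapping levels and enumerates the test cells (a top-tier root plus its descendants within the level).

        def test_levels(depth, tiers_per_level=2):
            """Group tiers 0..depth-1 into non-overlapping test levels."""
            return [list(range(t, min(t + tiers_per_level, depth)))
                    for t in range(0, depth, tiers_per_level)]

        def test_cells(level):
            """Each cell: a top-tier root and its descendants within the level."""
            top, cells = level[0], []
            for root in range(2**top, 2**(top + 1)):       # heap-ordered node ids
                nodes = [root]
                for _ in level[1:]:                        # extend tier by tier
                    nodes += [2*n for n in nodes] + [2*n + 1 for n in nodes]
                cells.append(sorted(set(nodes)))
            return cells

        for lvl in test_levels(depth=4):
            print(lvl, test_cells(lvl))

    The uplink and downlink tests then exercise every link of each cell, and running them separately per set of levels keeps the tests non-overlapping.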

  18. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium(Registered TradeMark)4 and Core(TradeMark)i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  19. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
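    A behavioural sketch of one such probabilistic neuron is shown below: the membrane potential integrates its input and fires when it exceeds a random threshold which, in a fully digital design, would typically come from a pseudo-random source such as an LFSR. The leak factor, scaling and tap choices are illustrative assumptions, not the paper's circuit.

        import random

        def lfsr16(state=0xACE1):
            """16-bit Fibonacci LFSR, a common FPGA pseudo-random source."""
            while True:
                bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                yield state

        def run_neuron(inputs, leak=0.95, scale=65536):
            v, rnd, spikes = 0.0, lfsr16(), []
            for x in inputs:
                v = leak * v + x                 # leaky integration of input current
                threshold = next(rnd) / scale    # stochastic threshold in [0, 1)
                spikes.append(v > threshold)     # probabilistic firing decision
                if spikes[-1]:
                    v = 0.0                      # reset after a spike
            return spikes

        print(sum(run_neuron([random.uniform(0, 0.3) for _ in range(1000)])))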

  20. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...
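    A toy instance of such an estimator illustrates the tradeoffs the abstract mentions. The cost terms below (a fixed per-burst setup overhead, items optionally packed into bus words) are illustrative assumptions, not the published model.

        def throughput(bus_bits, f_hz, burst_len, setup_cycles, item_bits, packed=True):
            """Estimated payload bits/s for a stream of item_bits-sized items."""
            if packed and item_bits < bus_bits:
                bits_per_word = (bus_bits // item_bits) * item_bits  # items packed per bus word
            else:
                bits_per_word = min(item_bits, bus_bits)             # one item (or fragment) per word
            cycles_per_word = (setup_cycles + burst_len) / burst_len # amortised protocol overhead
            return f_hz / cycles_per_word * bits_per_word

        # 32-bit bus at 50 MHz, bursts of 16 words, 4 setup cycles, 8-bit samples:
        print(throughput(32, 50e6, 16, 4, 8, packed=True) / 1e6)   # ~1280 Mbit/s payload
        print(throughput(32, 50e6, 16, 4, 8, packed=False) / 1e6)  # ~320 Mbit/s payload

    Even this toy version shows why packing strategy and burst length matter as much as raw bus width in the early exploration phase.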

  1. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The Double Chooz neutrino experiment aims to improve the present knowledge of the θ13 mixing angle using two similar detectors placed at ≈280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an outlook to the Double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown as well as first results from a prototype system.

  2. Theoretical optimum of implant positional index design.

    Science.gov (United States)

    Semper, W; Kraft, S; Krüger, T; Nelson, K

    2009-08-01

    Rotational freedom of the implant-abutment connection influences its screw joint stability; for optimization, influential factors need to be evaluated based on a previously developed closed formula. The underlying hypothesis is that the manufacturing tolerances, geometric pattern, and dimensions of the index do not influence positional stability. We used the dimensions of 5 commonly used implant systems with a clearance of 20 µm to calculate the extent of rotational freedom; a 3D simulation (SolidWorks) validated the analytical findings. Polygonal positional indices showed the highest degrees of rotational freedom. The polygonal profile displayed higher positional stability than the polygons, but less positional accuracy than the cam-groove connection. Features of a maximally rotation-safe positional index were determined. The analytical calculation of rotational freedom of implant positional indices is possible. Rotational freedom depends on the geometric design of the index and may be decreased by incorporating specific aspects into the positional index design.

  3. Optimum analysis of a Brownian refrigerator.

    Science.gov (United States)

    Luo, X G; Liu, N; He, J Z

    2013-02-01

    A Brownian refrigerator with the cold and hot reservoirs alternating along a space coordinate is established. The heat flux couples with the movement of the Brownian particles due to an external force in the spatially asymmetric but periodic potential. After using the Arrhenius factor to describe the forward and backward jumps of the particles, the expressions for the coefficient of performance (COP) and the cooling rate are derived analytically. Then, by maximizing the product of the conversion efficiency and the heat flux flowing out, a new upper bound depending only on the temperature ratio of the cold and hot reservoirs is found numerically in the reversible situation; it is slightly larger than the so-called Curzon and Ahlborn COP ε_CA = 1/√(1−τ) − 1. After considering the irreversibility owing to the kinetic energy change of the moving particles, we find the optimized COP is smaller than ε_CA, and the external force even does negative work on the Brownian particles when they jump from the cold to the hot reservoir.

  4. Energy - achieving an optimum through information

    International Nuclear Information System (INIS)

    Gitt, W.

    1986-01-01

    What do computer programs have in common with everyday human behaviour? Or with the passage of birds, photosynthesis, or the chemical reactions in a cell? They all are primarily information-controlled processes. The book under review deals with 'information' and 'energy', two central concepts of today's technological world. 'Energy' has in recent years become a significant criterion of technological progress. 'Information' is not only a main term in informatics terminology, but also a central concept in, for example, biology, linguistics, and communication science. The author shows that every piece of 'information' is the result of an intellectual and purposeful process. The concept of information is taken as the red thread leading the author's journey through manifold strata of modern life, asking questions, finding answers, discussing problems. The wide spectrum of aspects discussed, including for instance a new approach to the Bible, and the remarkable examples presented by the author, make this book a treasure of knowledge, and of faith. (orig./HP) [de

  5. Optimum Temperature and Thermal Stability of Crude Polyphenol ...

    African Journals Online (AJOL)

    The optimum temperature was found to be 30 °C for the enzyme extracted from guava, ... processing industries because during the processing ... enhance the brown colour produced (Valero et al., ... considerable economic and nutritional loss.

  6. Performance characteristics of aerodynamically optimum turbines for wind energy generators

    Science.gov (United States)

    Rohrbach, C.; Worobel, R.

    1975-01-01

    This paper presents a brief discussion of the aerodynamic methodology for wind energy generator turbines, an approach to the design of aerodynamically optimum wind turbines covering a broad range of design parameters, some insight on the effect on performance of nonoptimum blade shapes which may represent lower fabrication costs, the annual wind turbine energy for a family of optimum wind turbines, and areas of needed research. On the basis of the investigation, it is concluded that optimum wind turbines show high performance over a wide range of design velocity ratios; that structural requirements impose constraints on blade geometry; that variable pitch wind turbines provide excellent power regulation and that annual energy output is insensitive to design rpm and solidity of optimum wind turbines.

  7. Optimum strategies for nuclear energy system development (method of synthesis)

    International Nuclear Information System (INIS)

    Belenky, V.Z.

    1983-01-01

    The problem of optimum long-term development of a nuclear energy system is considered. Optimum strategies (i.e., minimum total uranium consumption) are found for the transition phase leading to a stationary regime of development. For this purpose the author has elaborated a new method of solving linear optimal control problems that can include jumps in the trajectories. The method makes it possible to carry out a complete synthesis of optimum strategies. A key characteristic of the problem is the productivity function of the nuclear energy system, which connects the technological system parameters with its growth rate. There are only two types of optimum strategies, according to whether the productivity function is increasing or decreasing. Both cases are illustrated with numerical examples. (orig.) [de

  8. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Experimental validation of optimum resistance moment of concrete slabs reinforced ... other solutions to combat corrosion problems in steel reinforced concrete. ... Eight specimens of two-way spanning slabs reinforced with CFRP bars were ...

  9. Hardware Locks with Priority Ceiling Emulation for a Java Chip-Multiprocessor

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Schoeberl, Martin

    2015-01-01

    According to the safety-critical Java specification, priority ceiling emulation is a requirement for implementations, as it has preferable properties, such as avoiding priority inversion and being deadlock free on uni-core systems. In this paper we explore our hardware-supported implementation of priority ceiling emulation on the multicore Java optimized processor, and compare it to the existing hardware locks on the Java optimized processor. We find that the additional overhead for priority ceiling emulation on a multicore processor is several times higher than simpler, non-preemptive locks, mainly...

  10. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand alone computer system based on the Motorola 68000 micro processor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  11. Optimum filters for narrow-band frequency modulation.

    Science.gov (United States)

    Shelton, R. D.

    1972-01-01

    The results of a computer search for the optimum type of bandpass filter for low-index angle-modulated signals are reported. The bandpass filters are discussed in terms of their low-pass prototypes. Only filter functions with constant numerators are considered. The pole locations for the optimum filters of several cases are shown in a table. The results are fairly independent of modulation index and bandwidth.

  12. Finding Optimum Focal Point Position with Neural Networks in CO2 Laser Welding

    DEFF Research Database (Denmark)

    Gong, Hui; Olsen, Flemming Ove

    1997-01-01

    CO2 lasers are increasingly being utilized for quality welding in production. Considering the high equipment cost, the start-up time and set-up time should be minimized. Ideally the parameters should be set up and optimized more or less automatically. In this article neural networks are designed...

  13. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications for KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. For digital systems the requirements and specifications are described with a focus on the microprocessor and the communication interface, and for analog systems with a focus on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS

  14. Optimum Parameters for Tuned Mass Damper Using Shuffled Complex Evolution (SCE Algorithm

    Directory of Open Access Journals (Sweden)

    Hessamoddin Meshkat Razavi

    2015-06-01

    This study investigates the optimum parameters for a tuned mass damper (TMD) under seismic excitation. Shuffled complex evolution (SCE) is a meta-heuristic optimization method which is used to find the optimum damping and tuning frequency ratio for a TMD. The efficiency of the TMD is evaluated by the decrease of the structural displacement dynamic magnification factor (DDMF) and the acceleration dynamic magnification factor (ADMF) for a specific vibration mode of the structure. The optimum TMD parameters and the corresponding optimized DDMF and ADMF are obtained for two control levels (displacement control and acceleration control), different structural damping ratios and mass ratios of the TMD system. The optimum TMD parameters are checked for a 10-storey building under earthquake excitations. The maximum storey displacement and acceleration obtained by the SCE method are compared with the results of other existing approaches. The results show that the peak building response decreased, with reductions of about 20% in displacement and 30% in acceleration of the top floor. To show the efficiency of the adopted algorithm (SCE), a comparison is also made between SCE and other meta-heuristic optimization methods such as the genetic algorithm (GA), particle swarm optimization (PSO) and the harmony search (HS) algorithm in terms of success rate and computational processing time. The results show that the proposed algorithm outperforms the other meta-heuristic optimization methods.
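    As a closed-form point of comparison for such searches, the classical Den Hartog tuning for a TMD of mass ratio μ on an undamped SDOF structure under harmonic force excitation (a standard benchmark, not taken from the paper itself) reads:

        \[
        f_{\mathrm{opt}} \;=\; \frac{\omega_{\mathrm{TMD}}}{\omega_{\mathrm{structure}}} \;=\; \frac{1}{1+\mu},
        \qquad
        \zeta_{\mathrm{opt}} \;=\; \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}.
        \]

    Metaheuristic results for seismic excitation typically land near these values and are judged by how much further they reduce the DDMF and ADMF.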

  15. 18F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

    Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

    Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using 18F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection, who underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in

  16. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

    Almost all of the effort being expended on radioactive waste disposal in the United States is being focused on the disposal of spent nuclear fuel, with little consideration for other areas that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately, because it is considered a secondary part of the waste issue, is the disposal of the various non-fuel-bearing components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed, with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An expert system developed as a portion of this work is used to assist in the prediction of quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM and PWR control rods constructed of stainless steel are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste

  17. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Background: Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings: We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective
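    The kernel that gets parallelised is small. Below is a rough single-pair version of the MDR counting and classification step (genotype cells labelled high-risk by their case/control ratio, scored by balanced accuracy); the data coding and threshold are illustrative assumptions, and an exhaustive analysis simply repeats this over all SNP pairs.

        import numpy as np

        def mdr_pair(ga, gb, status, t=1.0):
            """Balanced accuracy of the MDR model for one SNP pair.
            ga, gb: genotype arrays coded 0/1/2; status: 1 = case, 0 = control."""
            cases, ctrls = np.zeros((3, 3)), np.zeros((3, 3))
            np.add.at(cases, (ga[status == 1], gb[status == 1]), 1)
            np.add.at(ctrls, (ga[status == 0], gb[status == 0]), 1)
            high = cases > t * ctrls                  # high-risk genotype cells
            pred = high[ga, gb]                       # per-subject prediction
            sens = (pred & (status == 1)).sum() / max(1, (status == 1).sum())
            spec = (~pred & (status == 0)).sum() / max(1, (status == 0).sum())
            return 0.5 * (sens + spec)

        rng = np.random.default_rng(0)
        ga, gb = rng.integers(0, 3, 1000), rng.integers(0, 3, 1000)
        status = rng.integers(0, 2, 1000)
        print(mdr_pair(ga, gb, status))               # ~0.5 for random (null) data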

  18. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest, and open source hardware has gained visible momentum recently, with several well-known universities, including UC Berkeley, Cambridge and ETH Zürich, actively working on large projects involving open source hardware and attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  19. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...

  20. Optimum dose of 2-hydroxyethyl methacrylate based bonding material on pulp cells toxicity

    OpenAIRE

    Saraswati, Widya

    2010-01-01

    Background: 2-hydroxyethyl methacrylate (HEMA), one of the resins commonly used as a bonding base material, is widely employed due to its advantageous chemical characteristics. Several preliminary studies indicated that resin is a material capable of inducing damage in the dentin-pulp complex, so further investigation into its biological safety for hard and soft tissues in the oral cavity is necessary. Purpose: The author performed an in vitro test to find the optimum dose of HEMA resin mon...

  1. Optimal Reinsurance Design for Pareto Optimum: From the Perspective of Multiple Reinsurers

    Directory of Open Access Journals (Sweden)

    Xing Rong

    2016-01-01

    This paper investigates optimal reinsurance strategies for an insurer which cedes the insured risk to multiple reinsurers. Assume that the insurer and every reinsurer apply coherent risk measures. Then, we find the necessary and sufficient conditions for the reinsurance market to achieve Pareto optimum; that is, every ceded-loss function and the retention function are in the form of “multiple layers reinsurance.”

  2. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  3. Environmental Friendly Coatings and Corrosion Prevention For Flight Hardware Project

    Science.gov (United States)

    Calle, Luz

    2014-01-01

    Identify, test and develop qualification criteria for environmentally friendly corrosion protective coatings and corrosion preventative compounds (CPCs) for flight hardware and ground support equipment.

  4. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  5. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  7. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    This work presents a new approach, with details of the integrated platform and hardware architecture, for nanorobot application in epidemic control, which should enable real-time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high-precision pervasive biomedical monitoring with real-time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that brings nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long-distance ubiquitous surveillance and health monitoring of troops in conflict zones. The current model can therefore also be used to protect a population against a targeted epidemic disease.

  8. Hardware upgrade for A2 data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Ostrick, Michael; Gradl, Wolfgang; Otte, Peter-Bernd; Neiser, Andreas; Steffen, Oliver; Wolfes, Martin; Koerner, Tito [Institut fuer Kernphysik, Mainz (Germany); Collaboration: A2-Collaboration

    2014-07-01

    The A2 Collaboration uses an energy tagged photon beam which is produced via bremsstrahlung off the MAMI electron beam. The detector system consists of Crystal Ball and TAPS and covers almost the whole solid angle. A frozen-spin polarized target allows to perform high precision measurements of polarization observables in meson photo-production. During the last summer, a major upgrade of the data acquisition system was performed, both on the hardware and the software side. The goal of this upgrade was increased reliability of the system and an improvement in the data rate to disk. By doubling the number of readout CPUs and employing special VME crates with a split backplane, the number of bus accesses per readout cycle and crate was cut by a factor of two, giving almost a factor of two gain in the readout rate. In the course of the upgrade, we also switched most of the detector control system to using the distributed control system EPICS. For the upgraded control system, some new tools were developed to make full use of the capabilities of this decentralised slow control and monitoring system. The poster presents some of the major contributions to this project.

  9. Parametric Investigation of Optimum Thermal Insulation Thickness for External Walls

    Directory of Open Access Journals (Sweden)

    Omer Kaynakli

    2011-06-01

    Numerous studies have estimated the optimum thickness of thermal insulation materials used in building walls for different climate conditions. The economic parameters (inflation rate, discount rate, lifetime and energy costs), the heating/cooling loads of the building, the wall structure and the properties of the insulation material all affect the optimum insulation thickness. This study focused on the investigation of the parameters that affect the optimum thermal insulation thickness for building walls. To determine the optimum thickness and payback period, an economic model based on life-cycle cost analysis was used. As a result, the optimum thermal insulation thickness increased with increasing heating and cooling energy requirements, lifetime of the building, inflation rate, energy costs and thermal conductivity of the insulation. However, the thickness decreased with increasing discount rate, insulation material cost, total wall resistance, coefficient of performance (COP) of the cooling system and solar radiation incident on the wall. In addition, the effects of these parameters on the total life-cycle cost, payback periods and energy savings were also investigated.
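    The underlying life-cycle cost minimisation fits in a few lines. The sketch below is a heating-only toy version: insulation investment plus the present worth of heating energy is minimised over thickness, with all prices, degree-days and material properties as illustrative assumptions rather than the paper's data.

        import numpy as np

        HDD   = 2500 * 24 * 3600   # heating degree-seconds per year (~2500 degree-days)
        k     = 0.035              # insulation conductivity, W/mK
        R0    = 0.6                # wall resistance without insulation, m2K/W
        c_e   = 12e-9              # heating energy price, $/J (illustrative)
        c_ins = 120.0              # insulation price, $/m3
        pwf   = 12.0               # present-worth factor (lifetime, inflation, discount)
        eff   = 0.9                # heating system efficiency

        x = np.linspace(0.0, 0.25, 501)                   # candidate thicknesses, m
        annual_energy = HDD / (R0 + x / k) / eff          # J per m2 of wall per year
        lcc = c_ins * x + pwf * c_e * annual_energy       # total $ per m2 of wall
        print("optimum thickness = %.3f m" % x[lcc.argmin()])

    The tradeoffs in the abstract fall out directly: raising the energy price or degree-days tilts the minimum toward thicker insulation, raising the discount rate (lower pwf) or insulation price toward thinner.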

  10. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key must be entered at the right time, eliminating the vulnerability of the encryption to brute-force attack. Presently available encryption systems suffer from brute-force attack, and in such a case the time taken to break a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute-force attack more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. Also, key-based function selection from a pool of functions enhances the confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes the brute-force attack nearly impossible. In the proposed scheme, the key scheduler is implemented on an FPGA that generates the right key at the right time intervals; it is connected to a NIOS-II processor (a soft-core microcontroller implemented on the Altera FPGA) that communicates the keys to a personal computer through JTAG (Joint Test Action Group) communication, and the computer performs the encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
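    A behavioural sketch of the scheduler's key idea follows: the key register rotates continuously, four bits are drawn from it, and their value is the delay the system waits before the next key fragment becomes valid. The register width, rotation step and bit positions are illustrative assumptions, not the FPGA design.

        import time

        def rotate_left(key, width=64):
            """Rotate a width-bit key register left by one bit."""
            return ((key << 1) | (key >> (width - 1))) & ((1 << width) - 1)

        def key_schedule(key, steps, width=64):
            """Yield (key_fragment, mandated_delay) pairs like the key scheduler."""
            for _ in range(steps):
                key = rotate_left(key, width)
                delay = key & 0xF                 # four bits -> wait time (time units)
                yield key, delay

        key = 0xDEADBEEFCAFEBABE
        for fragment, delay in key_schedule(key, 4):
            time.sleep(delay * 0.001)             # scaled-down wait for the demo
            print(hex(fragment), "valid after delay", delay)

    An attacker who guesses key bits but not the mandated timing still fails, which is what makes time the "second dimension" of the key.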

  11. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to the low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
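    The Bayesian recursion the tracker solves is easy to model in software. Below is a floating-point reference version of a BEAST-style one-dimensional HMM tracker (predict with the transition model, correct with the observation likelihood, normalise); the stochastic hardware computes the same recursion with streams of random bits, and the grid size and noise levels here are illustrative assumptions.

        import numpy as np

        N = 32                                    # discrete target positions
        T = np.zeros((N, N))                      # random-walk transition model
        for i in range(N):
            for j in (i - 1, i, i + 1):
                if 0 <= j < N:
                    T[i, j] = 1.0
        T /= T.sum(axis=1, keepdims=True)

        def obs_likelihood(z, sigma=2.0):
            """Gaussian sensor model around the noisy reading z."""
            x = np.arange(N)
            return np.exp(-0.5 * ((x - z) / sigma) ** 2)

        belief = np.ones(N) / N                   # uniform prior
        rng = np.random.default_rng(3)
        true_pos = 10
        for step in range(20):
            true_pos = min(N - 1, max(0, true_pos + int(rng.integers(-1, 2))))
            z = true_pos + rng.normal(0, 2.0)     # noisy sensor reading
            belief = T.T @ belief                 # predict: apply transition model
            belief *= obs_likelihood(z)           # correct: weight by likelihood
            belief /= belief.sum()                # normalise (Bayes rule)
        print("true", true_pos, "MAP estimate", int(belief.argmax()))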

  12. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  14. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
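    The mechanism reduces to a simple budget check. The sketch below is a behavioural stand-in (the window size, chunking and one-ack-per-step model are illustrative assumptions, not the DMA's actual registers): bytes of a remote-get reply are injected only while the outstanding-byte counter stays within the window.

        def paced_injection(total_bytes, chunk, window):
            """Yield (time_step, bytes_sent) while pacing a remote-get reply."""
            in_flight, sent, t = 0, 0, 0
            while sent < total_bytes:
                if in_flight + chunk <= window:        # token counter has room
                    in_flight += chunk
                    sent += chunk
                    yield t, chunk
                else:                                  # stall until acks free tokens
                    in_flight = max(0, in_flight - chunk)  # model one ack per step
                t += 1

        for t, n in paced_injection(total_bytes=4096, chunk=512, window=1024):
            print("step", t, "injected", n, "bytes")

    Capping the in-flight byte count this way smooths the burst a remote get would otherwise dump onto the network.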

  15. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  16. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless

  17. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-to-digital conversion card, developed in wire-wrap technology. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  18. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  19. Determination of optimum filter in myocardial SPECT: A phantom study

    International Nuclear Information System (INIS)

    Takavar, A.; Shamsipour, Gh.; Sohrabi, M.; Eftekhari, M.

    2004-01-01

    Background: Myocardial perfusion SPECT images are degraded by photon attenuation, the distance-dependent collimator-detector response and photon scatter. Filters greatly affect the quality of nuclear medicine images. Materials and Methods: A phantom simulating the left ventricle of the heart was built. About 1 mCi of 99mTc was injected into the phantom and images were acquired. Several filters, including Parzen, Hamming, Hanning, Butterworth and Gaussian, were applied to the phantom images. By defining criteria such as contrast, signal-to-noise ratio and defect size detectability, the best filter can be determined. Results: Cut-off frequencies of 0.325 and 0.5 of the Nyquist frequency were obtained as optimum for the Hamming and Hanning filters, respectively. Order 11 with cut-off 0.45 Nq and order 20 with cut-off 0.5 Nq were obtained as optimum for the Butterworth and Gaussian filters, respectively. Conclusion: The optimum member of each filter family was obtained
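    For reference, the Butterworth magnitude response used in such studies is B(f) = 1/sqrt(1 + (f/fc)^(2n)). The sketch below applies it to an image in frequency space with the optimum parameters reported above (order 11, cut-off 0.45 of Nyquist); the test image and frequency conventions are illustrative assumptions.

        import numpy as np

        def butterworth(f, fc, order):
            """Butterworth low-pass magnitude response."""
            return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

        def filter_image(img, fc_nyquist=0.45, order=11):
            fy = np.fft.fftfreq(img.shape[0])          # cycles/pixel, Nyquist = 0.5
            fx = np.fft.fftfreq(img.shape[1])
            f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
            fc = fc_nyquist * 0.5                      # convert fraction-of-Nyquist
            return np.fft.ifft2(np.fft.fft2(img) * butterworth(f, fc, order)).real

        img = np.random.default_rng(0).poisson(100, (64, 64)).astype(float)
        print(filter_image(img).shape)

    Lowering the cut-off suppresses more Poisson noise but also blurs small perfusion defects, which is exactly the contrast/detectability tradeoff the phantom study quantifies.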

  20. Optimum Arrangement of Reactive Power Sources While Using Genetic Algori

    Directory of Open Access Journals (Sweden)

    A. M. Gashimov

    2010-01-01

    Reduction of total losses in the distribution network is considered an important measure for improving the efficiency of electric power supply systems. This objective can be achieved by the optimum placement of reactive power sources at appropriate nodes of the distribution network. The proposed methodology is based on the application of a genetic algorithm. The efficiency function, used to determine the locations and optimum ratings of the capacitor banks, comprises the total expenses for installing and operating the capacitor banks together with the expenses related to electric power losses. The methodology is most efficient for selecting the optimum places in the network where capacitor banks should be installed, with due account of their power control depending on the switched-on load value at each node.
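    The fitness a genetic algorithm would evaluate per candidate placement can be sketched on a toy radial feeder. The I²R loss model, feeder data and prices below are illustrative assumptions, not the paper's network.

        import numpy as np

        R_SEG  = np.array([0.5, 0.5, 0.5, 0.5])        # ohm per feeder segment
        P_LOAD = np.array([400., 350., 300., 250.])    # kW load at nodes 1..4
        Q_LOAD = np.array([300., 250., 200., 150.])    # kvar load at nodes 1..4
        V = 12.66                                      # kV, nominal voltage

        def losses_kw(q_caps):
            """Approximate feeder I^2R losses given kvar injected at each node."""
            q_net = Q_LOAD - q_caps
            p_flow = np.cumsum(P_LOAD[::-1])[::-1]     # power through each segment
            q_flow = np.cumsum(q_net[::-1])[::-1]
            return np.sum(R_SEG * (p_flow**2 + q_flow**2)) / (1000 * V**2)

        def fitness(q_caps, c_kvar=1.0, c_kwh=0.06, hours=8760):
            capex = c_kvar * q_caps.sum()              # $ for capacitor banks
            opex = c_kwh * hours * losses_kw(q_caps)   # $ of yearly energy losses
            return capex + opex                        # total to be minimised by GA

        print(fitness(np.zeros(4)))                            # no compensation
        print(fitness(np.array([300., 250., 200., 150.])))     # full compensation

    A GA then simply searches over the vector of per-node kvar ratings for the minimum of this total-cost function.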

  1. Optimum Combining for Rapidly Fading Channels in Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Sonia Furman

    2003-10-01

    Research and technology in wireless communication systems such as radar and cellular networks have successfully implemented alternative design approaches that utilize antenna array techniques, such as optimum combining, to mitigate the degradation effects of multipath in rapidly fading channels. In ad hoc networks, these methods have not yet been exploited, primarily due to the complexity inherent in the network's architecture. With the high demand for improved signal link quality, devices configured with omnidirectional antennas can no longer meet the growing need for link quality and spectrum efficiency. This study takes an empirical approach to determine an optimum combining antenna array based on 3 variants of interelement spacing. For rapidly fading channels, the simulation results show that the performance of network devices retrofitted with our antenna arrays consistently exceeded that of devices with an omnidirectional antenna. Further, with the optimum combiner, the performance increased by over 60% compared to that of an omnidirectional antenna in a rapidly fading channel.
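    The combining step itself is compact: with the desired channel vector and the interference-plus-noise covariance estimated per fading realisation, the weights w = R⁻¹v maximise output SINR (the standard MMSE/optimum-combining result). Array size and channel statistics below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        M = 4                                              # antenna elements
        v = rng.normal(size=M) + 1j*rng.normal(size=M)     # desired Rayleigh channel
        u = rng.normal(size=M) + 1j*rng.normal(size=M)     # interferer channel
        R = np.outer(u, u.conj()) + 0.1*np.eye(M)          # interference + noise covariance

        w = np.linalg.solve(R, v)                          # optimum combining weights
        sinr_oc = np.real(v.conj() @ w)                    # = v^H R^-1 v
        sinr_mrc = np.abs(v.conj() @ v)**2 / np.real(v.conj() @ R @ v)
        print("SINR optimum combining: %.1f, MRC: %.1f" % (sinr_oc, sinr_mrc))

    Unlike maximal-ratio combining, the optimum combiner spends degrees of freedom nulling the interferer, which is where the gain in rapidly fading, interference-limited channels comes from.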

  2. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit
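    For the simplest (trine) case, the structure of such an optimum measurement can be written down explicitly. The following is the standard form for three equiprobable symmetric real qubit states (a reminder of the known result, not a transcription of the experiment's exact parametrisation): the POVM elements are proportional to projectors onto states orthogonal to the signals,

        \[
        |\psi_k\rangle = \cos\frac{2\pi k}{3}\,|0\rangle + \sin\frac{2\pi k}{3}\,|1\rangle, \qquad
        \Pi_k = \frac{2}{3}\,|\psi_k^{\perp}\rangle\langle\psi_k^{\perp}|, \qquad
        \langle\psi_k^{\perp}|\psi_k\rangle = 0, \qquad \sum_{k=0}^{2}\Pi_k = \mathbb{1},
        \]

    so a three-output detection already attains the accessible information, consistent with the claim above that three outputs suffice even for the larger symmetric sets.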

  3. A Practical Introduction to HardwareSoftware Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  4. Determination of optimum oven cooking procedures for lean beef products

    OpenAIRE

    Rodas-González, Argenis; Larsen, Ivy L.; Uttaro, Bethany; Juárez, Manuel; Parslow, Joyce; Aalhus, Jennifer L.

    2015-01-01

    Abstract: In order to determine optimum oven cooking procedures for lean beef, the effects of searing at 232 or 260 °C for 0, 10, 20 or 30 min, and roasting at 160 or 135 °C, on semimembranosus (SM) and longissimus lumborum (LL) muscles were evaluated. In addition, the optimum determined cooking method (oven-seared for 10 min at 232 °C and roasted at 135 °C) was applied to SM roasts varying in weight from 0.5 to 2.5 kg. Mainly, SM muscles seared for 0 or 10 min at 232 °C followed by roasting at 135 °C h...

  5. A first course in optimum design of yacht sails

    Science.gov (United States)

    Sugimoto, Takeshi

    1993-03-01

    The optimum sail geometry is analytically obtained for the case of maximizing the thrust under equality and inequality constraints on the lift and the heeling moment. A single mainsail is assumed to be set close-hauled in uniform wind and upright on the flat sea surface. The governing parameters are the mast height and the gap between the sail foot and the sea surface. The lifting line theory is applied to analyze the aerodynamic forces acting on a sail. The design method consists of the variational principle and a feasibility study. Almost triangular sails are found to be optimum. Their advantages are discussed.

  6. Generic Advertising Optimum Budget for Iran’s Milk Industry

    Directory of Open Access Journals (Sweden)

    H. Shahbazi

    2016-05-01

    Full Text Available Introduction: One of the main targets of planners, decision makers and governments is improving public health through the promotion and production of suitable, healthy food. Milk is one of the basic commodities that satisfy essential human food requirements, so part of the government and producer health budget is allocated to promoting milk consumption through generic advertising. The more effective the advertising budget is on profitability, the more willing producers are to spend on advertising. Determining the optimal generic advertising budget is thus an important managerial decision problem for producing firms, as well as a lever for increasing consumption and profit and decreasing waste and budget misallocation. Materials and Methods: In this study, the optimal generic advertising budget intensity index (the advertising budget's share of production cost) was estimated under two different scenarios using an equilibrium replacement model, in which producer surplus is maximized with respect to generic advertising at the retail level. For markets where farm and processing levels precede retail and there is trade at the farm and retail levels, we present different models; the fixed versus variable proportions hypothesis is a further distinction. In total, eight relations are presented for determining the optimum milk generic advertising budget. Data were gathered from several sources: previous studies, national (Iran Statistical Center) and international (FAO) data, and our own estimation. Because previous studies provide several estimates, we define scenarios (within two general scenarios) for calculating the optimum milk generic advertising budget. Results and Discussion: Estimation of the optimum milk generic advertising budget under scenario 1 shows that, in the case of one market level, fixed supply and no trade, the optimum budget is 0.4672539 percent. In case of one market level and no trade, optimum

  7. Optimum position of isolators within erbium-doped fibers

    DEFF Research Database (Denmark)

    Lumholt, Ole; Schüsler, Kim; Bjarklev, Anders Overgaard

    1992-01-01

    An isolator is used as an amplified spontaneous emission suppressing component within an erbium-doped fiber. The optimum isolator placement is determined both experimentally and theoretically and is found to be slightly dependent upon pump power. Improvements of 4 dB in gain and 2 dB in noise figure are measured for the optimum isolator location at 25% of the fiber length when the fiber is pumped with 60 mW of pump power at 1.48 μm.

  8. DETERMINATION OF THE OPTIMUM CONTACT TIME AND pH FOR METHYLENE BLUE ADSORPTION USING RICE HUSK ASH

    Directory of Open Access Journals (Sweden)

    Anung Riapanitra

    2006-11-01

    Full Text Available Dyes are widely used for colouring in textile industries; significant losses occur during manufacture and processing of the product, and these lost chemicals are discharged in the surrounding effluent. Adsorption of dyes is an effective technology for the treatment of wastewater contaminated by different types of mismanaged dyes. In this research, we investigated the potential of rice husk ash for removal of the methylene blue dyeing agent in an aqueous system. The aim of this research is to find the optimum contact time and pH for the adsorption of methylene blue using rice husk ash. Batch kinetics studies were carried out under varying experimental conditions of contact time and pH. Adsorption equilibrium was reached within 10 minutes, and the optimum condition for adsorption was at pH 3. The adsorption of methylene blue decreased with decreasing solution pH value.

  9. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements, which established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment; this includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating hardware development; however, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  10. Modeling Budget Optimum Allocation of Khorasan Razavi Province Agriculture Sector

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad Fahimifard

    2016-09-01

    Full Text Available Introduction: A shortage of capital stock is one of the development impasses in developing countries, and through it the agriculture sector has faced the greatest limitations. The share of Iran's agricultural sector in total investment after the Islamic revolution (1979) has been just 5.5 percent, which causes low efficiency in Iran's agriculture sector. For instance, each cubic meter of water in Iran's agriculture sector produces less than 1 kilogram of dry food, and each Iranian farmer earns less annual income and has less mechanization than farmers in comparable countries in Iran's 1404 perspective document. It is therefore clear that increasing investment in the agriculture sector and optimizing the budget allocated to it are mandatory, yet this has not been adequately and scientifically addressed until now. Thus, in this research the optimum budget allocation of the agriculture sector of Iran's Khorasan Razavi province was modeled. Materials and Methods: Optimum budget allocation between agriculture programs was first modeled by combining three indexes (the AHP weighting step is sketched below): 1. the priorities of the province's agriculture-sector experts, elicited with the Analytical Hierarchy Process (AHP); 2. the average share of agriculture-sector programs in the 4th national development program for the province's agriculture sector; and 3. the average share of agriculture-sector programs in the 5th national development program for the province's agriculture sector. Then, using the Delphi technique, the potential indexes of each program were determined. These indexes were weighted using the AHP, and finally a numerical taxonomy model was used to optimize allocation of each program's budget between cities under two scenarios. Required data was also gathered from the budget and planning
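
    A minimal sketch of the AHP weighting step referenced above: priority weights are obtained from a pairwise comparison matrix as its (normalized) principal eigenvector, here approximated by power iteration. The 3-program matrix is hypothetical, not the study's data.

    import numpy as np

    def ahp_weights(pairwise: np.ndarray, iters: int = 100) -> np.ndarray:
        """Approximate the principal eigenvector of a pairwise comparison
        matrix by power iteration; normalized, it is the AHP weight vector."""
        w = np.ones(pairwise.shape[0])
        for _ in range(iters):
            w = pairwise @ w
            w /= w.sum()
        return w

    # Hypothetical expert judgments comparing three agriculture programs.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12] as budget shares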

  11. Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.

    Science.gov (United States)

    Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C

    2016-04-01

    As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7

  12. Determination of Optimum Moisture Content of Palm Nut Cracking ...

    African Journals Online (AJOL)

    USER

    ABSTRACT: After processing the palm fruit for oil, the nut is usually dried in order to loosen the kernel from the shell. The drying is necessary to enhance the release of whole kernel when the nut is cracked. A study was carried out to determine the optimum moisture content of nuts for high yield of whole kernels during ...

  13. Optimum tilt angle and orientation for solar collectors in Syria

    International Nuclear Information System (INIS)

    Skeiker, Kamal

    2009-01-01

    One of the important parameters that affect the performance of a solar collector is its tilt angle with the horizon, because varying the tilt angle changes the amount of solar radiation reaching the collector surface. A mathematical model was used for estimating the solar radiation on a tilted surface and for determining the optimum tilt angle and orientation (surface azimuth angle) of the solar collector in the main Syrian zones, on a daily basis as well as for a specific period. The optimum angle was computed by searching for the values for which the radiation on the collector surface is a maximum for a particular day or a specific period. The results reveal that changing the tilt angle 12 times a year (i.e., using the monthly optimum tilt angle) maintains the total amount of received solar radiation near the maximum value found by changing the tilt angle daily to its optimum value. This achieves a yearly gain in solar radiation of approximately 30% over a solar collector fixed on a horizontal surface.
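
    A minimal numeric sketch of this kind of search, assuming a bare-bones clear-sky beam model and the textbook incidence-angle formula for a south-facing surface; the latitude and day numbers are illustrative and the model is far simpler than the paper's.

    import numpy as np

    def daily_irradiation(lat_deg, day, tilt_deg, steps=200):
        """Relative daily beam irradiation on a south-facing surface tilted
        at tilt_deg, integrating cos(incidence) over the daylight hours."""
        phi, beta = np.radians(lat_deg), np.radians(tilt_deg)
        delta = np.radians(23.45) * np.sin(2 * np.pi * (284 + day) / 365)
        ws = np.arccos(-np.tan(phi) * np.tan(delta))  # sunset hour angle
        w = np.linspace(-ws, ws, steps)               # hour angle over the day
        cos_inc = (np.sin(delta) * np.sin(phi - beta)
                   + np.cos(delta) * np.cos(phi - beta) * np.cos(w))
        return np.clip(cos_inc, 0.0, None).sum() * (w[1] - w[0])

    def optimum_tilt(lat_deg, day):
        return max(range(91), key=lambda b: daily_irradiation(lat_deg, day, b))

    # Illustrative latitude only (roughly 33.5 N):
    for day, month in [(17, "January"), (198, "July")]:
        print(month, optimum_tilt(33.5, day))  # steep in winter, shallow in summer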

  14. Analytical Solution for Optimum Design of Furrow Irrigation Systems

    Science.gov (United States)

    Kiwan, M. E.

    1996-05-01

    An analytical solution for the optimum design of furrow irrigation systems is derived. The non-linear calculus optimization method is used to formulate a general form for designing the optimum system elements under the condition of maximizing the water application efficiency of the system during irrigation. Different system bases and constraints are considered in the solution. A full irrigation water depth is considered to be achieved at the tail of the furrow line. The solution is based on neglecting the recession and depletion times after off-irrigation; this assumption is valid for open-end (free gradient) furrow systems rather than closed-end (closed dike) systems. Illustrative examples for different systems are presented and the results are compared with the output obtained using an iterative numerical solution method. The final derived solution is expressed as a function of the furrow length ratio (the furrow length to the water travelling distance), using the water-travelling function developed by Reddy et al. As a practical result of the study, the optimum furrow elements for free-gradient systems (furrow length, water inflow rate and cutoff irrigation time) can be estimated so as to achieve maximum application efficiency.

  15. Optimum position for wells producing at constant wellbore pressure

    Energy Technology Data Exchange (ETDEWEB)

    Camacho-Velazquez, R.; Rodriguez de la Garza, F. [Univ. Nacional Autonoma de Mexico, Mexico City (Mexico)]; Galindo-Nava, A. [Inst. Mexicanos del Petroleo, Mexico City (Mexico); Univ. Nacional de Mexico, Mexico City (Mexico)]; Prats, M.

    1994-12-31

    This paper deals with the determination of the optimum position of several wells producing at constant but different wellbore pressures from a two-dimensional closed-boundary reservoir, so as to maximize the cumulative production or the total flow rate. To achieve this objective, the authors use an improved version of the analytical solution recently proposed by Rodriguez and Cinco-Ley, together with an optimization algorithm based on a quasi-Newton procedure with line search. At each iteration the algorithm approximates the negative of the objective function by a quadratic relation derived from a Taylor series. The improvement of Rodriguez and Cinco's solution is attained in four ways. First, an approximation is obtained which works better at early times (before the boundary-dominated period starts) than the previous solution. Second, the infinite sums present in the solution are expressed in condensed form, which is relevant for reducing computer time when the optimization algorithm is used. Third, the solution is modified to take into account wells starting to produce at different times, which makes it possible to determine the optimum positions for an infill drilling program. Last, the solution is extended to include the possibility of changing the wellbore pressure or stimulating any of the wells at any time. When the wells are producing at different wellbore pressures, the optimum position is found to be a function of time; otherwise the optimum position is fixed.
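
    The analytical solution itself is not given in the abstract, so the toy sketch below only mirrors the optimization loop: a quasi-Newton search (SciPy's L-BFGS-B, assumed installed) over well coordinates in a closed square reservoir, with an invented penalty standing in for the real production objective.

    import numpy as np
    from scipy.optimize import minimize

    L = 1000.0  # side length of the square reservoir, m

    def placement_penalty(x):
        """Stand-in objective: penalize wells crowding each other or the
        no-flow boundary (NOT the authors' analytical solution)."""
        wells = x.reshape(-1, 2)
        interference = sum(1.0 / np.linalg.norm(a - b)
                           for i, a in enumerate(wells)
                           for b in wells[i + 1:])
        margin = np.minimum(wells, L - wells).min(axis=1)
        return interference + 0.5 * np.sum(1.0 / margin)

    x0 = np.random.default_rng(1).uniform(100, 900, size=6)  # three wells
    res = minimize(placement_penalty, x0, method="L-BFGS-B",
                   bounds=[(1.0, L - 1.0)] * 6)
    print(res.x.reshape(-1, 2))  # wells spread toward a symmetric pattern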

  16. Optimum Onager: The Classical Mechanics of a Classical Siege Engine

    Science.gov (United States)

    Denny, Mark

    2009-01-01

    The onager is a throwing weapon of classical antiquity, familiar to both the ancient Greeks and Romans. Here we analyze the dynamics of onager operation and derive the optimum angle for launching a projectile to its maximum range. There is plenty of scope for further considerations about increasing onager range, and so by thinking about how this…
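
    The optimum angle the article derives can be checked numerically, as in the sketch below: for a projectile released at speed v from height h, scan launch angles and compare against the well-known closed form theta* = arctan(v / sqrt(v^2 + 2gh)); the speed and height used here are hypothetical, not taken from the article.

    import numpy as np

    def launch_range(v, h, theta, g=9.81):
        """Horizontal range for release speed v, release height h, angle theta."""
        vx, vy = v * np.cos(theta), v * np.sin(theta)
        t_flight = (vy + np.sqrt(vy**2 + 2 * g * h)) / g
        return vx * t_flight

    v, h = 30.0, 2.0  # hypothetical onager: 30 m/s release, 2 m above ground
    angles = np.radians(np.linspace(1, 89, 881))
    best = angles[np.argmax([launch_range(v, h, t) for t in angles])]
    closed_form = np.arctan(v / np.sqrt(v**2 + 2 * 9.81 * h))
    print(np.degrees(best), np.degrees(closed_form))  # both slightly under 45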

  17. The effects of physical and chemical changes on the optimum ...

    African Journals Online (AJOL)

    The aim of this study was to determine the physical and chemical changes during fruit development and their relationship with optimum harvest maturity for Bacon, Fuerte and Zutano avocado cultivars grown under Dörtyol ecological conditions. Fruits of cvs. Bacon, Fuerte and Zutano were obtained from trees grafted on seedlings and ...

  18. Applied orthogonal experiment design for the optimum microwave ...

    African Journals Online (AJOL)

    An experiment on the extraction of polysaccharides from Rhodiolae Radix (PRR) was carried out using the microwave-assisted extraction (MAE) method, with the objective of establishing the optimum MAE conditions for PRR. Single-factor experiments were performed to determine the appropriate range of extraction conditions, and the ...

  19. Optimum geometry for torque ripple minimization of switched reluctance motors

    NARCIS (Netherlands)

    Sahin, F.; Ertan, H.B.; Leblebicioglu, K.

    2000-01-01

    For switched reluctance motors, one of the major problems is torque ripple, which causes increased undesirable acoustic noise and possibly speed ripple. This paper describes an approach to determine the optimum magnetic circuit parameters that minimize low-speed torque ripple for such motors. The

  20. METHODS FOR DETERMINATION OF THE OPTIMUM EXPLOSIVES IN DIFFERENT ROCKS

    Directory of Open Access Journals (Sweden)

    Josip Krsnik

    1989-12-01

    Full Text Available The most appropriate explosives for blasting particular types of rocks were established by a test blasting method with linear burden increase. By the same method the optimum magnitudes of deep-hole blasting were established (the paper is published in Croatian).

  1. Applicability Problem in Optimum Reinforced Concrete Structures Design

    Directory of Open Access Journals (Sweden)

    Ashara Assedeq

    2016-01-01

    Full Text Available Optimum design of reinforced concrete structures is a very complex problem, not only because of the exactness of the required calculus but also because of the questionable applicability of existing methods in practice. This paper presents the main theoretical, mathematical and physical features of the problem formulation, as well as a review and analysis of existing methods and solutions with respect to their exactness and applicability.

  2. How stem defects affect the capability of optimum bucking method?

    Directory of Open Access Journals (Sweden)

    Abdullah Emin Akay

    2015-07-01

    Full Text Available In forest harvesting activities, the computer-assisted optimum bucking method increases the economic value of harvested trees. The bucking decision depends strongly on log quality grades, which mainly vary with surface characteristics such as stem defects and stem form. In this study, the effect of stem defects on the optimum bucking method was investigated by comparing bucking applications conducted during logging operations in two different Brutian pine (Pinus brutia Ten.) stands. The first stand contained stems with relatively more defects than the second. The average number of defects per log for sample trees in the first and second stands was 3.64 and 2.70, respectively. The results indicated that the optimum bucking method increased the average economic value of harvested trees by 15.45% and 8.26% in the two stands, respectively. Therefore, the computer-assisted optimum bucking method potentially provides better results than the traditional bucking method, especially for harvested trees with more stem defects.

  3. Optimum length of finned pipe for waste heat recovery

    International Nuclear Information System (INIS)

    Soeylemez, M.S.

    2008-01-01

    A thermoeconomic feasibility analysis is presented, yielding a simple algebraic optimization formula for estimating the optimum length of a finned pipe used for waste heat recovery. A simple economic optimization method is combined with an integrated overall heat balance method, based on fin effectiveness, for calculating the maximum savings from a waste heat recovery system.

  4. The Optimum Conditions of Foreign Languages in Primary Education

    Science.gov (United States)

    Giannikas, Christina Nicole

    2014-01-01

    The aim of the paper is to review the primary language learning situation in Europe and shed light on the benefits it carries. Early language learning is the biggest policy development in education and has developed in rapid speed over the past 30 years; this article considers the effects and advantages of the optimum condition of an early start,…

  5. Improved optimum condition for recovery and measurement of 210 ...

    African Journals Online (AJOL)

    The aim of this study was to determine the optimum conditions for deposition of 210Po and to evaluate the accuracy and precision of the results for its determination in environmental samples, so as to improve the technique for measurement of polonium-210 (210Po) in environmental samples. The optimization of five factors (volume ...

  6. Dietary energy level for optimum productivity and carcass ...

    African Journals Online (AJOL)

    user

    2013-08-05

    Aug 5, 2013 ... optimum weights at dietary energy levels of 13.81, 13.23, 13.43 and ... Tadelle & Ogle (2000) reported that energy requirement of ... The authors would like to acknowledge the National Research Foundation (NRF) and VLIR ...

  7. Bud initiation and optimum harvest date in Brussels sprouts

    NARCIS (Netherlands)

    Everaarts, A.P.; Sukkel, W.

    1999-01-01

    For six cultivars of Brussels sprouts (Brassica oleracea var. gemmifera) with a decreasing degree of earliness, or optimum harvest date, the time of bud initiation was determined during two seasons. Fifty percent of the plants had initiated buds between 60 and 75 days after planting (DAP) in 1994

  8. Optimum dietary protein requirement of genetically male tilapia ...

    African Journals Online (AJOL)

    The study was conducted to investigate the optimum dietary protein level needed for growing genetically male tilapia, Oreochromis niloticus. Diets containing crude protein levels 40, 42.5, 45, 47.5 and 50% were formulated and tried in triplicates. Test diets were fed to 20 fish/1m3 floating hapa at 5% of fish body weight daily ...

  9. Optimum development temperature and duration for nuclear plate

    International Nuclear Information System (INIS)

    Nagoshi, Chieko.

    1975-01-01

    Sakura 100 μm thick nuclear plates have been employed to determine the optimum temperature and duration of the Amidol developer for low energy protons (Ep ...). Temperatures of ... °C were tried for periods of 15-35 min. For Ep ...: ... °C and a development time of less than 30 min. (auth.)

  10. Optimum workforce-size model using dynamic programming approach

    African Journals Online (AJOL)

    This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...
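
    The abstract does not spell out the recursion, so the following is only a minimal dynamic-programming sketch in the same spirit: choose a workforce size each period to cover a demand profile while minimizing recruitment cost plus an overstaffing penalty; all figures are invented.

    from functools import lru_cache

    demand = [5, 7, 6, 9, 8]      # workers required in each period
    RECRUIT, OVERSTAFF = 100, 30  # cost per hire, cost per idle worker

    @lru_cache(maxsize=None)
    def best_cost(period: int, staff: int) -> int:
        """Minimum cost from this period on, given the current staff level."""
        if period == len(demand):
            return 0
        options = []
        for new_staff in range(demand[period], max(demand) + 1):
            hires = max(0, new_staff - staff)
            idle = new_staff - demand[period]  # overstaffing this period
            options.append(hires * RECRUIT + idle * OVERSTAFF
                           + best_cost(period + 1, new_staff))
        return min(options)

    print(best_cost(0, 0))  # minimum total cost over the planning horizon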

  11. Dietary energy level for optimum productivity and carcass ...

    African Journals Online (AJOL)

    A study was conducted to determine dietary energy levels for optimum productivity and carcass characteristics of indigenous Venda chickens raised in closed confinement. Four dietary treatments were considered in the first phase (1 to 7 weeks) on two hundred day-old unsexed indigenous Venda chicks indicated as EVS1, ...

  12. Procedure for determining the optimum rate of increasing shaft depth

    Energy Technology Data Exchange (ETDEWEB)

    Durov, E.M.

    1983-03-01

    Presented is an economic analysis of increasing shaft depth during mine modernization. Investigations carried out by the Yuzhgiproshakht Institute are analyzed. The investigations are aimed at determining the optimum shaft sinking rate (the rate which reduces investment to the minimum). The following factors are considered: coal output of a mine (0.9, 1.2, 1.5 and 1.8 Mt/year), depth at which the new mining level is situated (600, 800, 1200, 1400 and 1600 m), four schemes of increasing depth of 2 central shafts (rock hoisting to ground surface, rock hoisting to the existing level, rock haulage to the developed level, rock haulage to the level being developed using a large diameter borehole drilled from the new level to the shaft bottom and enlarged from shaft bottom to the new level), shaft sinking rate (10, 20, 30, 40, 50 and 60 m/month), range of increasing shaft depth (the difference between depth of the shaft before and after increasing its depth by 100, 200, 300 and 400 m). Comparative evaluations show that the optimum shaft sinking rate depends on the scheme for rock hoisting (one of 4 analyzed), range of increasing shaft depth and gas content in coal seams. The optimum shaft sinking rate ranges from 20 to 40 m/month in coal mines with low methane content and from 20 to 30 m/month in gassy coal mines. The planned coal output of a mine does not influence the optimum shaft sinking rate.

  13. Deuterium–tritium catalytic reaction in fast ignition: Optimum ...

    Indian Academy of Sciences (India)

    proton beam, the corresponding optimum interval values are proton average energy 3 ... contributions, into the study of the ignition and burn dynamics in a fast ignition frame... choice of proton beam energy would fall in 3 ≤ Ep ≤ 10 MeV.

  14. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
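
    A small sketch of the preprocessor/JIT tactic described above, assuming PyOpenCL is installed and an OpenCL device is present; the macro names (TILE, USE_LOCAL_MEM) and the vendor test are illustrative conventions, not ones prescribed by the paper.

    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void scale(__global float *buf, const float k) {
    #ifdef USE_LOCAL_MEM
        /* a local-memory variant would go here, on hardware that benefits */
    #endif
        int i = get_global_id(0);
        buf[i] *= k;
    }
    """

    ctx = cl.create_some_context()
    device = ctx.devices[0]
    options = ["-DTILE=16"]            # static choices baked in at build time
    if "NVIDIA" in device.vendor:      # enable an optional optimization path
        options.append("-DUSE_LOCAL_MEM")
    program = cl.Program(ctx, KERNEL_SRC).build(options=options)
    print("built for", device.name, "with", options)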

  15. Optimum gas turbine cycle for combined cycle power plant

    International Nuclear Information System (INIS)

    Polyzakis, A.L.; Koroneos, C.; Xydis, G.

    2008-01-01

    The gas turbine based power plant is characterized by its relatively low capital cost compared with the steam power plant. It has environmental advantages and a short construction lead time. However, conventional industrial engines have lower efficiencies, especially at part load. One of the technologies adopted nowadays for efficiency improvement is the 'combined cycle'. Combined cycle technology is now well established and offers superior efficiency to any of the competing gas turbine based systems likely to be available in the medium term for large-scale power generation applications. The objective of this paper is the optimization of a combined cycle power plant, describing and comparing four different gas turbine cycles: simple cycle, intercooled cycle, reheated cycle, and intercooled and reheated cycle. The proposed combined cycle plant would produce 300 MW of power (200 MW from the gas turbine and 100 MW from the steam turbine). The results showed that the reheated gas turbine is the most desirable overall, mainly because of its high turbine exhaust gas temperature and the resulting high thermal efficiency of the bottoming steam cycle. The optimal gas turbine (GT) cycle leads to a more efficient combined cycle power plant (CCPP), and this results in great savings. The initial approach adopted is to investigate independently the four theoretically possible configurations of the gas plant. On the basis of combining these with a single-pressure Rankine cycle, the optimum gas scheme is found. Once the gas turbine is selected, the next step is to investigate the impact of the steam cycle design and parameters on the overall performance of the plant, in order to choose the combined cycle offering the best fit with the objectives of the work as depicted above. Each alternative cycle was studied, aiming to find the best option from the standpoint of overall efficiency, installation and operational costs, maintainability and reliability for a combined power

  16. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9×9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
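
    As a software reference for what the hardware computes, here is a minimal sketch assuming Gaussian spatial and range weights over the 9×9 window, with the smoothed image subtracted from the input; the abstract does not give the exact weight function, so the kernel choice is an assumption.

    import numpy as np

    def bilateral_subtract(img, sigma_s=3.0, sigma_r=25.0, radius=4):
        """Smooth with a (2*radius+1)-square bilateral kernel, then subtract."""
        h, w = img.shape
        smoothed = np.zeros_like(img, dtype=np.float64)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # 9x9 weights
        padded = np.pad(img.astype(np.float64), radius, mode="edge")
        for y in range(h):
            for x in range(w):
                patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                rangew = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
                wts = spatial * rangew
                smoothed[y, x] = (wts * patch).sum() / wts.sum()
        return img - smoothed  # the "subtraction unit" of the pipeline

    noisy = np.random.default_rng(0).normal(128.0, 20.0, (64, 64))
    print(bilateral_subtract(noisy).std())  # residual after background removal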

  17. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
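
    For intuition only, a toy software analogue is sketched below: a keystream cipher driven by a chaotic map. The thesis uses digitized continuous chaotic systems with generalized post-processing; this sketch substitutes the discrete logistic map and a crude byte quantizer, so it illustrates the idea rather than the thesis design.

    def logistic_keystream(x0: float, n: int, r: float = 3.99) -> bytes:
        """Generate n keystream bytes by iterating the logistic map."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)              # chaotic iteration
            out.append(int(x * 256) & 0xFF)  # crude quantization to one byte
        return bytes(out)

    def xor_cipher(data: bytes, key: float) -> bytes:
        ks = logistic_keystream(key, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))

    msg = b"pixel data"
    enc = xor_cipher(msg, 0.31415)
    assert xor_cipher(enc, 0.31415) == msg  # the same key decrypts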

  18. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries (HLL), to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the specific processor on-the-fly in the FPGA and transfer execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high-quality audio, and an energy-efficient solution.

  19. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  20. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  1. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented, and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  2. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track-finding at 100 kHz and based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGA) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  3. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data taking run the LHC is expected to run starting in 2015 with much higher instantaneous luminosities and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful, Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  4. Tax Efficiency vs. Tax Equity – Points of View regarding Tax Optimum

    Directory of Open Access Journals (Sweden)

    Stela Aurelia Toader

    2011-10-01

    Full Text Available Objectives: Starting from the idea that tax equity requirements, administration costs and the tendency towards tax evasion determine the design of tax systems, it is important to identify a satisfactory efficiency/equity trade-off in order to build a tax system as close to optimum requirements as possible. Prior Work: Previous studies proved that an optimum tax system is one through which a level of tax revenues satisfying budgetary demands is collected while losing only a minimum 'amount' of welfare. To what degree does the Romanian tax system meet these requirements? Approach: We envisage analyzing the possibilities of improving the Romanian tax system so as to come nearest to the optimum requirements. Results: We conclude that the fiscal system admits important improvements as far as assuring tax equity is concerned, resulting in a higher degree of voluntary compliance in the field of tax payment and, implicitly, a higher degree of tax efficiency. Implications: Knowing to what extent one can act in the direction of finding that satisfactory efficiency/equity balance allows the identification of the blueprint of a tax system in which the loss of welfare is kept to a minimum. Value: For the Romanian institutions empowered to impose taxes, knowledge of the possibilities of making the tax system more efficient can be important when aiming at reducing the level of the evasion phenomenon.

  5. Optimum Assembly Sequence Planning System Using Discrete Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Özkan Özmen

    2018-01-01

    Full Text Available Assembly refers both to the process of combining parts to create a structure and to the product resulting therefrom. The complexity of this process increases with the number of pieces in the assembly. This paper presents the assembly planning system design (APSD) program, a computer program developed based on a matrix-based approach and the discrete artificial bee colony (DABC) algorithm, which determines the optimum assembly sequence among numerous feasible assembly sequences (FAS). Specifically, the assembly sequences of three-dimensional (3D) parts prepared in the computer-aided design (CAD) software AutoCAD are first coded using the matrix-based methodology, and the resulting FAS are assessed and the optimum assembly sequence is selected according to the assembly-time optimisation criterion using DABC. The results of comparison of the performance of the proposed method with other methods proposed in the literature verify its superiority in finding the sequence with the lowest overall time. Further, examination of the results of applying APSD to assemblies consisting of parts of different numbers and shapes shows that it can select the optimum sequence from among hundreds of FAS.
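
    A compact sketch of a discrete artificial bee colony searching over permutations, with an invented assembly-time objective standing in for the paper's CAD-derived evaluation; the employed/onlooker/scout phases are shown in simplified form (real onlookers choose food sources fitness-proportionally).

    import random

    PARTS = 8

    def assembly_time(seq):  # hypothetical objective: lower is better
        return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

    def neighbor(seq):       # discrete move: swap two positions
        s = list(seq)
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return tuple(s)

    def dabc(n_sources=10, cycles=200, limit=20):
        foods = [tuple(random.sample(range(PARTS), PARTS)) for _ in range(n_sources)]
        trials = [0] * n_sources
        for _ in range(cycles):
            for _phase in ("employed", "onlooker"):
                for i in range(n_sources):
                    cand = neighbor(foods[i])
                    if assembly_time(cand) < assembly_time(foods[i]):
                        foods[i], trials[i] = cand, 0
                    else:
                        trials[i] += 1
            for i in range(n_sources):  # scouts replace exhausted sources
                if trials[i] > limit:
                    foods[i], trials[i] = tuple(random.sample(range(PARTS), PARTS)), 0
        return min(foods, key=assembly_time)

    best = dabc()
    print(best, assembly_time(best))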

  6. Optimum energy levels and offsets for organic donor/acceptor binary photovoltaic materials and solar cells

    International Nuclear Information System (INIS)

    Sun, S.-S.

    2005-01-01

    Optimum frontier orbital energy levels and offsets of an organic donor/acceptor binary-type photovoltaic material have been analyzed using classic Marcus electron transfer theory in order to achieve the most efficient photoinduced charge separation. This study reveals that an exciton quenching parameter (EQP) yields one optimum donor/acceptor frontier orbital energy offset, equal to the sum of the exciton binding energy and the charge separation reorganization energy, at which the photogenerated excitons are converted into charges most efficiently. A recombination quenching parameter (RQP) yields a second optimum donor/acceptor energy offset at which the ratio of the charge separation rate constant to the charge recombination rate constant is largest. It is desirable that the maximum RQP be coincident with, or close to, the maximum EQP. A third energy offset is also identified at which charge recombination becomes most severe; it is desirable that this offset lie far from the maximum-EQP offset. These findings are critical for evaluating and fine-tuning the frontier orbital energy levels of a donor/acceptor pair in order to realize high-efficiency organic photovoltaic materials.
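
    The analysis builds on the classic Marcus rate expression, which the abstract does not quote; a standard form (assumed here to be the variant used) is

        k_{ET} = \frac{2\pi}{\hbar} \lvert H_{DA} \rvert^{2}
                 \frac{1}{\sqrt{4\pi\lambda k_{B}T}}
                 \exp\!\left[ -\frac{(\Delta G^{0} + \lambda)^{2}}{4\lambda k_{B}T} \right],

    where H_{DA} is the donor/acceptor electronic coupling, \lambda the reorganization energy and \Delta G^{0} the free-energy change of the transfer. The rate is maximal when -\Delta G^{0} = \lambda, which is why the exciton-quenching optimum offset works out to the exciton binding energy plus the reorganization energy.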

  7. Technoeconomical Assessment of Optimum Design for Photovoltaic Water Pumping System for Rural Area in Oman

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2015-01-01

    Full Text Available Photovoltaic (PV) systems have long been used globally to supply electricity to water pumping systems for irrigation. System cost has dropped over time as PV technology, efficiency and design methodology have improved and the cost per watt has fallen dramatically in the last decade. In the present paper an optimum PV system design for a water pumping system is proposed for Oman. Intuitive and numerical methods were used to design the system: HOMER software served as the numerical method to arrive at an optimum design for Oman, and REPS.OM software was used to find the optimum design based on hourly meteorological data. The daily solar energy in Sohar was found to be 6.182 kWh/m2·day. The system annual yield factor was found to be 2024.66 kWh/kWp, and the capacity factor 23.05%, which is promising. The cost of energy and the system capital cost were compared with those of a diesel generator and of systems in the literature. The comparison shows that the cost of energy is 0.180, 0.309, and 0.790 USD/kWh for the PV-REPS.OM, PV-HOMER, and diesel systems, respectively, which suggests that PV water pumping systems are promising in Oman.
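
    A quick arithmetic check of the reported figures, shown below: the capacity factor is the annual yield per kWp divided by the 8760 hours in a year, which reproduces the quoted value to within rounding.

    # Capacity factor from the quoted annual yield factor.
    annual_yield = 2024.66                 # kWh per kWp per year, from the study
    capacity_factor = annual_yield / 8760  # 8760 hours in a year
    print(f"{capacity_factor:.2%}")        # ~23.11%, vs. the quoted 23.05%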

  8. Finding Trapped Miners by Using a Prototype Seismic Recording System Made from Music-Recording Hardware

    Science.gov (United States)

    Pratt, Thomas L.

    2009-01-01

    The goal of this project was to use off-the-shelf music recording equipment to build and test a prototype seismic system to listen for people trapped in underground chambers (mines, caves, collapsed buildings). Previous workers found that an array of geophones is effective in locating trapped miners; displaying the data graphically, as well as playing it back into an audio device (headphones) at high speeds, was found to be effective for locating underground tapping. The desired system should record the data digitally to allow for further analysis, be capable of displaying the data graphically, allow for rudimentary analysis (bandpass filter, deconvolution), and allow the user to listen to the data at varying speeds. Although existing seismic reflection systems are adequate to record, display and analyze the data, they are relatively expensive and difficult to use and do not have an audio playback option. This makes it difficult for individual mines to have a system waiting on the shelf for an emergency. In contrast, music recording systems, like the one I used to construct the prototype system, can be purchased for about 20 percent of the cost of a seismic reflection system and are designed to be much easier to use. The prototype system makes use of an ~$3,000, 16-channel music recording system made by Presonus, Inc., of Baton Rouge, Louisiana. Other manufacturers make competitive systems that would serve equally well. Connecting the geophones to the recording system required the only custom part of this system - a connector that takes the output from the geophone cable and breaks it into 16 microphone inputs to be connected to the music recording system. The connector took about 1 day of technician time to build, using about $300 in off-the-shelf parts. Comparisons of the music recording system and a standard seismic reflection system (A 24-channel 'Geode' system manufactured by Geometrics, Inc., of San Jose, California) were carried out at two locations. Initial recordings of small hammer taps were carried out in a small field in Seattle, Washington; more elaborate tests were carried out at the San Juan Coal Mine in San Juan, New Mexico, in which miners underground were signaling. The comparisons demonstrate that the recordings made by the two systems are nearly identical, indicating that either system adequately records the data from the geophones. In either system the data can quickly be converted to a format (Society of Exploration Geophysicists 'Y' format; 'SEGY') to allow for filtering and other signal processing. With a modest software development effort, it is clear that either system could produce equivalent data products (SEGY data and audio data) within a few minutes of finishing the recording. The two systems both have significant advantages and drawbacks. With the seismograph, the tapping was distinctly visible when it occurred during a time window that was displayed. I have not identified or developed software for converting the resulting data to sound recordings that can be heard, but this limitation could be overcome with a trivial software development effort. The main drawbacks to the seismograph are that it does not allow for real-time listening, it is expensive to purchase, and it contains many features that are not utilized for this application. 
The music recording system is simple to use (it is designed for a general user, rather than a trained technician), allows for listening during recording, and has the advantage of using inexpensive, off-the-shelf components. It also allows for quick (within minutes) playback of the audio data at varying speeds. The data display by the software in the prototype system, however, is clearly inferior to the display on the seismograph. The music system also has the drawback of substantially oversampling the data by a factor of 24 (48,000 samples per second versus 2,000 samples per second) because the user interface only allows limited subsampling. This latte

  9. Determination of optimum filter in inferolateral view of myocardial SPECT

    International Nuclear Information System (INIS)

    Takavar; Eftekhari, M.; Fallahi, B.; Shamsipour, Gh.; Sohrabi, M.; Saghari, M.

    2004-01-01

    Background: In myocardial perfusion SPECT imaging, images are degraded by photon attenuation, distance-dependent collimator and detector response, and photon scattering. As filters greatly affect the quality of nuclear medicine images, determination of the optimum filter for the inferolateral view is our prime objective. Materials and Methods: A phantom simulating the left ventricle of the heart was built, and about 1 mCi of 99m Tc was injected into it. Images were taken from this phantom. Parzen, Hamming, Hanning, Butterworth and Gaussian filters were applied to the images obtained from the phantom. By defining criteria such as contrast, signal to noise ratio, and defect-size detectability, the best filter was determined for our ADAC SPECT system at our nuclear medicine center. In this study, 27 patients who had previously undergone coronary angiography were included; all of these patients revealed significant stenosis in the left circumflex artery, and their myocardial SPECT images had inferolateral defects. The images of these patients were processed with 12 filters, including the optimum filters obtained from the phantom study and some other non-optimum filters. A nuclear medicine physician quantified the results by assigning a mark from 0 to 4 to every image: 0 for images that did not show the defect properly and 4 for the best one. The patient data were analyzed with the non-parametric Friedman test. Results: Nyquist cut-off frequencies of 0.325 and 0.5 were obtained as optimum for the Hamming and Hanning filters, respectively. Order 11 with cut-off frequency 0.45, and order 20 with cut-off frequency 0.5, were found to be optimum for the Butterworth and Gaussian filters. In patient studies it was found that the Butterworth filter with cut-off frequency of 0.45 and order of 11 produced the best quality images. Conclusion: In this study, the Butterworth filter with cut-off frequency of 0.45 and order of 11 was the

  10. Optimum condition determination of Rirang uranium ores grinding using ball mill

    International Nuclear Information System (INIS)

    Affandi, Kosim; Waluyo, Sugeng; Sarono, Budi; Sujono; Muhammad

    2002-01-01

    The grinding experiment on Rirang uranium ore has been carried out with the aim of finding the optimum conditions for wet grinding using a ball mill to produce particle sizes of -325, -200 and -100 mesh, to be used as decomposition feed. The test examined the parameters of ore-to-ball weight ratio and grinding time. The tests showed that for a product of particle size -325 mesh, the optimum condition was achieved at an ore-to-ball weight ratio of 1:3, a grinding time of 150 minutes, 60% solids and a ball mill speed of 60 rpm, with a grinding recovery of 93.51% at -325 mesh. For a product of particle size -200 mesh, the optimum condition was an ore-to-ball weight ratio of 1:2 and a grinding time of 60 minutes, with the +200 mesh fraction reground; the grinding recovery was 6.82% at (-200 +250) mesh, 5.75% at (-250 +325) mesh and 47.93% at -325 mesh. For a product of particle size -100 mesh, the optimum condition was an ore-to-ball weight ratio of 1:2 and a grinding time of 30 minutes, with the +100 mesh fraction reground using a mortar grinder; the grinding recovery was 30.10% at (-100 +150) mesh, 12.28% at (-150 +200) mesh, 15.92% at (-200 +250) mesh, 12.44% at (-250 +325) mesh and 29.26% at -325 mesh. The specific gravity of the Rirang uranium ore was determined to be between 4.15 and 4.55 g/cm 3

  11. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  12. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  13. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generate an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
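
    As a software analogue of the binding step described above, the sketch below combines an internal (device) PUF value and an external (structure) PUF value into one binding value via a hash; the patent's binding logic is not specified to be a hash, so this is only an illustrative assumption.

    import hashlib

    def binding_puf(internal_puf: bytes, external_puf: bytes) -> bytes:
        """Derive a value that changes if either the device or the structure changes."""
        return hashlib.sha256(internal_puf + external_puf).digest()

    device_response = bytes.fromhex("1a2b3c4d")     # hypothetical internal PUF output
    structure_response = bytes.fromhex("99887766")  # hypothetical external PUF output
    print(binding_puf(device_response, structure_response).hex())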

  14. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations

  15. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available Computer graphics system performance is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons and lines have been studied for a long time and many research papers have been published. In spite of the latest graphical hardware developments and significant increases in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
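
    The "positional code generator" suggests an outcode scheme in the style of Cohen-Sutherland; the abstract does not name the algorithm, so the following software sketch of outcode-based clipping is an assumption about what the hardware computes.

    # Outcode-based line clipping against an axis-aligned window,
    # in the style of Cohen-Sutherland (assumed, not stated in the paper).
    INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

    def outcode(x, y, xmin, ymin, xmax, ymax):
        code = INSIDE
        if x < xmin: code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin: code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    def clip(x1, y1, x2, y2, xmin=0.0, ymin=0.0, xmax=100.0, ymax=100.0):
        c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
        while True:
            if not (c1 | c2):
                return (x1, y1, x2, y2)      # both inside: trivially accept
            if c1 & c2:
                return None                  # share an outside zone: reject
            c = c1 if c1 else c2             # pick an endpoint that is outside
            if c & TOP:
                x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
            elif c & BOTTOM:
                x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
            elif c & RIGHT:
                y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
            else:                            # LEFT
                y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
            if c == c1:
                x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
            else:
                x2, y2, c2 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)

    print(clip(-20, 30, 120, 70))  # endpoints pulled onto the window edges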

  16. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers over a Gigabit Ethernet network. It covers block access technologies (iSCSI, HyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  17. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-01-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally

  18. Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing

    OpenAIRE

    Rask, Ulf; Mannestig, Pontus

    2002-01-01

    With the ever-increasing pace of development, circuits and hardware are no exception. Hardware designs grow and circuits get more complex, at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run will lead to delayed time-to-market, bad publicity and a decreasing market sha...

  19. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science, and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be run in automation mode, one tends to forget the hardware of NMR spectrometers; it is nevertheless good to understand their features and performance. Here I present the hardware of a modern NMR spectrometer that is fully equipped with digital technology. (author)

  20. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce memory-based approaches to emulating machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high ...

  1. Utilizing IXP1200 hardware and software for packet filtering

    OpenAIRE

    Lindholm, Jeffery L.

    2004-01-01

    As network processors have advanced in speed and efficiency they have become more and more complex in both hardware and software configurations. Intel's IXP1200 is one of these new network processors that has been given to different universities worldwide to conduct research on. The goal of this thesis is to take the first step in starting that research by providing a stable system that can provide a reliable platform for further research. This thesis introduces the fundamental hardware of In...

  2. Security challenges and opportunities in adaptive and reconfigurable hardware

    OpenAIRE

    Costan, Victor Marius; Devadas, Srinivas

    2011-01-01

    We present a novel approach to building hardware support for providing strong security guarantees for computations running in the cloud (shared hardware in massive data centers), while maintaining the high performance and low cost that make cloud computing attractive in the first place. We propose augmenting regular cloud servers with a Trusted Computation Base (TCB) that can securely perform high-performance computations. Our TCB achieves cost savings by spreading functionality across two pa...

  3. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  4. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test-rig used to evaluate hardware concepts designed to meet those requirements is described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  5. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa; 8.7.1 Fast Pulsed Systems (Kickers); 8.7.2 Electrostatic and Magnetic Septa

  6. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  7. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed, including the methods employed to implement the system while taking advantage of the real-time features of RSX-11D. Comparisons are made between this system and an earlier non-modular system. The controlled hardware includes magnet power supplies, stepping motors, DVMs, and multiplexers, and is interfaced through CAMAC. 4 figures

  8. The optimum size of rotating qarḍ ḥasan savings and credit associations

    Directory of Open Access Journals (Sweden)

    Seyed Kazem Sadr

    2017-07-01

    Purpose - Several indigenous credit and savings schemes have recently been accredited in developing countries for the benefit of households and entrepreneurs alike. Famous among them are the Rotating Savings and Credit Associations (ROSCAs) that currently exist on almost all continents. The rapid development of ROSCAs and their varied structures in many countries have been the subject of numerous studies. What has not been thoroughly analysed is the optimum size of these associations and the fact that lending and borrowing is without interest. The aim of this paper is to present a model that determines the optimum size of ROSCAs and deals with the following issues: how the group size varies with changes in the income level of the members, the demand for the loan, the size of the collected loan and its duration. Further, the question of whether or not lending to the association in return for obtaining larger sums is a violation of the qarḍ (loan) contract is dealt with, and several Sharīʿah-compatible formulations are provided. Design/methodology/approach - Economic analysis has been applied to show the optimum size of Qarḍ Ḥasan Associations (QHAs), which are the Sharīʿah-compliant equivalent of ROSCAs, and the Sharīʿah rules of the qarḍ contract to illustrate the legitimacy of group lending. Findings - The major findings of this study are the determination of the optimum size of QHAs, the factors that affect that size, and the suggestion of alternative legal forms for group financing. Research limitations/implications - Inaccessibility to sources of data to test the hypothesis that has been put forth is the main difficulty encountered when conducting research on the subject. Practical implications - The paper concludes that the development of informal interest-free ROSCAs in both Muslim and non-Muslim countries is an efficient informal microfinance scheme and that it is compatible with Sharīʿah rules. Originality/value - The optimum size

  9. MRI monitoring of focused ultrasound sonications near metallic hardware.

    Science.gov (United States)

    Weber, Hans; Ghanouni, Pejman; Pascal-Tenorio, Aurea; Pauly, Kim Butts; Hargreaves, Brian A

    2018-07-01

    To explore the temperature-induced signal change in two-dimensional multi-spectral imaging (2DMSI) for fast thermometry near metallic hardware to enable MR-guided focused ultrasound surgery (MRgFUS) in patients with implanted metallic hardware. 2DMSI was optimized for temperature sensitivity and applied to monitor focused ultrasound surgery (FUS) sonications near metallic hardware in phantoms and ex vivo porcine muscle tissue. Further, we evaluated its temperature sensitivity for in vivo muscle in patients without metallic hardware. In addition, we performed a comparison of temperature sensitivity between 2DMSI and conventional proton-resonance-frequency-shift (PRFS) thermometry at different distances from metal devices and different signal-to-noise ratios (SNR). 2DMSI thermometry enabled visualization of short ultrasound sonications near metallic hardware. Calibration using in vivo muscle yielded a constant temperature sensitivity for temperatures below 43 °C. For an off-resonance coverage of ± 6 kHz, we achieved a temperature sensitivity of 1.45%/K, resulting in a minimum detectable temperature change of ∼2.5 K for an SNR of 100 with a temporal resolution of 6 s per frame. The proposed 2DMSI thermometry has the potential to allow MR-guided FUS treatments of patients with metallic hardware and therefore expand its reach to a larger patient population. Magn Reson Med 80:259-271, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Tracking individuals is a prominent application in domains such as surveillance and smart environments. This paper describes the development of a multiple-camera setup with a joint view that observes moving persons on a site. It focuses on a geometry-based approach to establish correspondence among the different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, for both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to 100 times compared to the software ORB.

  11. Determination of optimum oven cooking procedures for lean beef products.

    Science.gov (United States)

    Rodas-González, Argenis; Larsen, Ivy L; Uttaro, Bethany; Juárez, Manuel; Parslow, Joyce; Aalhus, Jennifer L

    2015-11-01

    In order to determine optimum oven cooking procedures for lean beef, the effects of searing at 232 or 260 °C for 0, 10, 20 or 30 min, and roasting at 160 or 135 °C, on semimembranosus (SM) and longissimus lumborum (LL) muscles were evaluated. In addition, the optimum determined cooking method (oven-seared for 10 min at 232 °C and roasted at 135 °C) was applied to SM roasts varying in weight from 0.5 to 2.5 kg. Overall, SM muscles seared for 0 or 10 min at 232 °C and then roasted at 135 °C had lower cooking loss, higher external browning color, more uniform internal color, and were more tender and flavorful (P < 0.05). Searing for 10 min at 232 °C followed by roasting at 135 °C is the recommended oven cooking procedure, with the best response from muscle roasts weighing ≥ 1 kg.

  12. RECENT APPROACHES IN THE OPTIMUM CURRENCY AREAS THEORY

    Directory of Open Access Journals (Sweden)

    AURA SOCOL

    2011-04-01

    This study deals with the endogenous character of the OCA criteria, starting from the idea that a higher conformity of the business cycles will result in a better timing of the economic cycles and, thus, in getting closer to the quality of an optimum currency area. Thus, while the classical theory is focused on a static approach to the problem, the new theories assert that these conditions are dynamic and that they can be positively affected even by the establishment of the Economic and Monetary Union itself. The consequences are overwhelming, as the endogenous approach shows that a monetary union can be achieved even if all the conditions mentioned in Mundell's optimum currency areas theory are not met, showing that some of them may also be met subsequent to the unification. Thus, a country joining a monetary union, although it does not meet the criteria for an optimum currency area, will ex post experience an increase in its degree of integration and business cycle correlation.

  13. Determination of optimum pressurizer level for kori unit 1

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong Soo; Lee, Chang Sup; Lee Jae Yong; Kim, Yo Han; Lee, Dong Hyuk [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    To determine the optimum pressurizer water level during normal operation for Kori unit 1, performance and safety analyses are performed. The methodology is developed by evaluating 'decrease in secondary heat removal' events such as the Loss of Normal Feedwater accident. To demonstrate the optimum pressurizer level setpoint, the RETRAN-03 code is used for performance analysis. RETRAN results following reactor trip are compared with actual plant data to justify the RETRAN code modelling. The results of the performance and safety analyses show that the newly established level setpoints not only improve the performance of the pressurizer during transients, including reactor trip, but also meet the design bases for pressurizer volume and pressure. 6 refs., 5 figs. (Author)

  14. Optimum design of exploding pusher target to produce maximum neutrons

    International Nuclear Information System (INIS)

    Kitagawa, Y.; Miyanaga, N.; Kato, Y.; Nakatsuka, M.; Nishiguchi, A.; Yabe, T.; Yamanaka, C.

    1985-03-01

    Exploding pusher target experiments have been conducted with the 1.052-μm GEKKO MII two-beam glass laser system to design an optimum target, which couples to the incident laser light most effectively to produce the maximum neutrons. Since hot electrons preheat the shell entirely in spite of strongly nonuniform irradiation, a simple model can design the optimum target, of which the shell/fuel interface is accelerated to 0.5 to 0.7 times the initial radius within a laser pulse. A 2-dimensional computer simulation supports this target design. The scaling of the neutron yield N with the laser power P is N ∝ P^(2.4±0.4). (author)

  15. Determination of the Optimum Ozone Product on the Plasma Ozonizer

    International Nuclear Information System (INIS)

    Agus Purwadi; Widdi Usada; Suryadi; Isyuniarto; Sri Sukmajaya

    2002-01-01

    An experiment to determine the optimum ozone product on a cylindrical plasma ozonizer has been done. The experiment is carried out by using an alternating high-voltage power supply, a CS-1577 A oscilloscope, a flow meter and a Spectronik-20 instrument for measuring the absorbance of the solution samples, which were produced by varying the physical parameter values of the discharge alternating high voltage and the velocity of the oxygen gas input. The plasma ozonizer is made of cylindrical stainless steel as the electrode and cylindrical glass as the dielectric, with a discharge gap of 1.00 mm and a discharge tube volume of 7.225 mm³. The experimental results show that the optimum ozone product is 0.360 mg/s, obtained at a discharge alternating high voltage of 25.50 kV, a frequency of 1.00 kHz and an oxygen gas input rate of 1.00 lpm. (author)

  16. Determination of optimum pressurizer level for kori unit 1

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong Soo; Lee, Chang Sup; Yong, Lee Jae; Kim, Yo Han; Lee, Dong Hyuk [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    To determine the optimum pressurizer water level during normal operation for Kori unit 1, performance and safety analysis are performed. The methodology is developed by evaluating {sup d}ecrease in secondary heat removal{sup e}vents such as Loss of Normal Feedwater accident. To demonstrate optimum pressurizer level setpoint, RETRAN-03 code is used for performance analysis. Analysis results of RETRAN following reactor trip are compared with the actual plant data to justify RETRAN code modelling. The results of performance and safety analyses show that the newly established level setpoints not only improve the performance of pressurizer during transient including reactor trip but also meet the design bases of the pressurizer volume and pressure. 6 refs., 5 figs. (Author)

  17. Optimum Design of Heat Exchangers Networks Part -I: Software Development

    International Nuclear Information System (INIS)

    Gabr, E.M.A.; EI-Temtamy, S.A.; Deriasl, S.F.; Moustafa, H.A.

    2004-01-01

    In this paper, we have developed a computerized framework for Heat Exchanger Network Synthesis (HENS) with optimality conditions of achieving the least operating and capital cost. The framework of HEN design involves the development of three computer programs, which are applied sequentially to design an optimum HEN. The first program, Automatic Minimum Utilities [AMU], was developed for automatic formulation of LP equations; these equations can be solved by the optimization software [LINDO] to predict minimum hot and cold utilities. The second program is based on the Vertical Heat Transfer Method [VHTM] for predicting the minimum overall heat transfer area and defining the optimum ΔTmin. The third program [Mod.RESHEX] was developed for heat transfer area targeting and automatic synthesis of the HEN. This program represents a modification and development of the RESHEX method to overcome the design defects that appeared in the original RESHEX applications

  18. Determining optimum aging time using novel core flooding equipment

    DEFF Research Database (Denmark)

    Ahkami, Mehrdad; Chakravarty, Krishna Hara; Xiarchos, Ioannis

    2016-01-01

    New methods for enhanced oil recovery are typically developed using core flooding techniques. Establishing reservoir conditions is essential before the experimental campaign commences. The realistic oil-rock wettability can be obtained through optimum aging of the core. Aging time is affected [...] the optimum aging time regardless of variations in crude oil, rock, and brine properties. State-of-the-art core flooding equipment has been developed that can be used for consistently determining the resistivity of the core plug during aging and waterflooding using advanced data acquisition software. In the proposed equipment, independent axial and sleeve pressures can be applied to mimic stresses at reservoir conditions. Ten core plugs (four sandstone and six chalk samples) from the North Sea have been aged for more than 408 days in total, and more than 29,000 resistivity data points have been measured [...]

  19. Application of customer-interruption costs for optimum distribution planning

    International Nuclear Information System (INIS)

    Mok, Y.L.; Chung, T.S.

    1996-01-01

    We present a new methodology for obtaining optimum values of the integrated cost of utility investment and customer interruption in distribution planning for electric power systems, by determining the reliability cost and worth of the distribution system. Reliability cost refers to the utility's investment cost in achieving a defined level of reliability. Reliability worth is the benefit gained by the utility customer from an increase in reliability. A computer program has been developed to determine comparative reliability indices for a typical distribution network. With the average interruption cost, outage duration, average disconnected load, cost data for distribution equipment, etc. known, the relation between reliability cost, reliability worth and reliability at the specified load point is obtained. The optimum reliability of the distribution system is then determined from the minimum cost to the utility with customer interruption. The applicability of this approach is demonstrated on several practical networks. (Author)

  20. Optimum Conditions for Uricase Enzyme Production by Gliomastix gueg

    Directory of Open Access Journals (Sweden)

    Atalla, M. M.

    2009-01-01

    Nineteen strains of microorganisms were screened for uricase production, and Gliomastix gueg was recognized to produce high levels of the enzyme. The optimum fermentation conditions for uricase production by Gliomastix gueg were examined. Results showed that uric acid medium was the most favorable one; the optimum temperature was 30 °C, and the incubation period required for maximum production was 8 days, with an aeration level of 150 rpm and a pH of 8.0. Sucrose proved to be the best carbon source, and uric acid was found to be the best nitrogen source. Both dipotassium hydrogen phosphate and ferrous chloride, as well as some vitamins, gave the highest amount of uricase by Gliomastix gueg.

  1. Optimum biasing of integral equations in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1979-01-01

    In solving integral equations and estimating average values with the Monte Carlo method, biasing functions may be used to reduce the variance of the estimates. A simple derivation was used to prove the existence of a zero-variance collision estimator if a specific biasing function and survival probability are applied. This optimum biasing function is the same as that used for the well-known zero-variance last-event estimator
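
    A minimal numerical illustration of the zero-variance idea (my own example, not the paper's derivation): when the sampling density is chosen proportional to the integrand, the importance-sampling weight f(x)/p(x) is constant, so the estimator has zero variance.

```python
# Illustration only: estimate I = integral of 3x^2 over [0, 1], which is 1.
import random

def estimate(n, biased):
    total, total_sq = 0.0, 0.0
    for _ in range(n):
        u = random.random()
        if biased:
            # Sample from p(x) = 3x^2 via the inverse CDF: the optimum biasing density.
            x = u ** (1.0 / 3.0)
            w = 1.0                  # f(x)/p(x) = 3x^2 / 3x^2 is identically 1
        else:
            x = u                    # uniform sampling, p(x) = 1
            w = 3.0 * x * x          # f(x)/p(x) = f(x)
        total += w
        total_sq += w * w
    mean = total / n
    return mean, total_sq / n - mean * mean   # (estimate, sample variance)

print(estimate(100_000, biased=False))   # ~(1.0, 0.8): unbiased but noisy
print(estimate(100_000, biased=True))    # (1.0, 0.0): zero-variance estimator
```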

  2. Optimum design of Nd-doped fiber optical amplifiers

    DEFF Research Database (Denmark)

    Rasmussen, Thomas; Bjarklev, Anders Overgaard; Lumholt, Ole

    1992-01-01

    The waveguide parameters for a Nd-doped fluoride (Nd:ZBLANP) fiber amplifier have been optimized for small-signal and booster operation using an accurate numerical model. The optimum cutoff wavelength is shown to be 800 nm, and the numerical aperture should be made as large as possible. Around 80% booster quantum conversion efficiency can be reached for an input power of 10 dBm and a pump power of 100 mW by the use of one filter...

  3. Optimum concrete compression strength using bio-enzyme

    OpenAIRE

    Bagio Tony Hartono; Basoeki Makno; Tistogondo Julistyana; Pradana Sofyan Ali

    2017-01-01

    Making concrete with high compressive strength and particular specifications requires, in addition to the main concrete materials, quality control of the concrete mix and added materials, in line with current concrete-mix technology that produces concrete with specific characteristics. Bio-enzyme was added to five concrete mixtures that were compared with normal concrete in order to determine the optimum bio-enzyme level for increasing the strength of the con...

  4. Theoretical and numerical study of an optimum design algorithm

    International Nuclear Information System (INIS)

    Destuynder, Philippe.

    1976-08-01

    This work can be separated into two main parts. First, the behavior of the solution of an elliptic variational equation is analyzed when the domain is subjected to a small perturbation; the case of variational inequalities is also considered. Secondly, the previous results are used to derive an optimum design algorithm. This algorithm was suggested by the center method proposed by Huard. Numerical results show the superiority of the method over various other optimization techniques [fr]

  5. Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.

    Science.gov (United States)

    Kabus, Irwin

    1990-01-01

    The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package, was used. All data originated from the CAT scan of an actual patient who was treated for a malignant pancreatic tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor, and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans were based on allowing for 216 external radiation beams, which accounted for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney, vs. 59% for the plan the hospital actually administered to the patient. The clinician on the case found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, concerning possible benefits from generating treatment plans based on objective functions made up of convex combinations of two objective functions; however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy. This conclusion was supported by the clinician. Finally, several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.
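
    A toy version of such a formulation, with made-up dose coefficients and prescriptions rather than patient data, can be set up with SciPy's linprog (assuming SciPy is available): minimize total healthy-tissue dose subject to a minimum tumor dose and an organ-at-risk cap, with nonnegative beam weights.

```python
import numpy as np
from scipy.optimize import linprog

# Dose per unit beam weight at sample points (rows) from each beam (columns).
A_tumor = np.array([[1.0, 0.9, 0.4],
                    [0.8, 1.0, 0.5]])
A_organ = np.array([[0.5, 0.1, 0.6]])
A_healthy = np.array([[0.3, 0.4, 0.2],
                      [0.2, 0.3, 0.5]])

c = A_healthy.sum(axis=0)               # total healthy-tissue dose to minimize
A_ub = np.vstack([-A_tumor, A_organ])   # tumor dose >= 60 (negated), organ <= 20
b_ub = np.array([-60.0, -60.0, 20.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)                            # optimum beam weights
print(A_tumor @ res.x, A_organ @ res.x) # achieved tumor and organ doses
```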

  6. Choice of economical optimum blanket of hybrid reactors

    Energy Technology Data Exchange (ETDEWEB)

    Blinkin, V L; Novikov, V M

    1981-01-01

    The economic effectiveness of symbiotic power systems depends on the choice of the correlation between energy production and fissile fuel production in the blankets of a controlled thermonuclear fusion reactor (CTR), which is what is investigated here. It is shown that the optimum value of this correlation essentially depends on the ratio between the specific costs of energy production in hybrid thermonuclear reactors and in fission reactors as parts of the symbiotic system.

  7. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    Science.gov (United States)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  8. Optimum design of dual pressure heat recovery steam generator using non-dimensional parameters based on thermodynamic and thermoeconomic approaches

    International Nuclear Information System (INIS)

    Naemi, Sanaz; Saffar-Avval, Majid; Behboodi Kalhori, Sahand; Mansoori, Zohreh

    2013-01-01

    The thermodynamic and thermoeconomic analyses are investigated to achieve the optimum operating parameters of a dual pressure heat recovery steam generator (HRSG) coupled with a heavy duty gas turbine. In this regard, the thermodynamic objective function, including the exergy waste and the exergy destruction, is defined in such a way as to find the optimum pinch point, and consequently to minimize the objective function, by using non-dimensional operating parameters. The results indicated that the optimum pinch point from the thermodynamic viewpoint is 2.5 °C and 2.1 °C for HRSGs with live steam at 75 bar and 90 bar, respectively. Since thermodynamic analysis is not able to consider economic factors, another objective function, including the annualized installation cost and the annual cost of irreversibilities, is proposed. To find the irreversibility cost, the electricity price and the fuel price are considered independently. The optimum pinch point from the thermoeconomic viewpoint on the basis of electricity price is 20.6 °C (75 bar) and 19.2 °C (90 bar), whereas according to the fuel price it is 25.4 °C and 23.7 °C. Finally, an extensive sensitivity analysis is performed to compare the optimum pinch point for different electricity and fuel prices. -- Highlights: ► Presenting thermodynamic and thermoeconomic optimization of a heat recovery steam generator. ► Defining an objective function consisting of exergy waste and exergy destruction. ► Defining an objective function including capital cost and cost of irreversibilities. ► Obtaining the optimized operating parameters of a dual pressure heat recovery boiler. ► Computing the optimum pinch point using non-dimensional operating parameters

  9. An integrated expert system for optimum in core fuel management

    International Nuclear Information System (INIS)

    Abd Elmoatty, Mona S.; Nagy, M.S.; Aly, Mohamed N.; Shaat, M.K.

    2011-01-01

    Highlights: → An integrated expert system constructed for optimum in-core fuel management. → Brief discussion of the ESOIFM Package modules, inputs and outputs. → The Package was applied to the DALAT Nuclear Research Reactor (0.5 MW). → The Package verification showed good agreement. - Abstract: An integrated expert system called Efficient and Safe Optimum In-core Fuel Management (ESOIFM Package) has been constructed to achieve optimum in-core fuel management and to automate the process of data analysis. The Package combines the constructed mathematical models with the adopted artificial intelligence techniques. The paper gives a brief discussion of the ESOIFM Package modules, inputs and outputs. The Package was applied to the DALAT Nuclear Research Reactor (0.5 MW). Moreover, the data of the DNRR have been used as a case study for testing and evaluation of the ESOIFM Package. This paper shows the comparison between the ESOIFM Package burn-up results, the DNRR experimental burn-up data, and other DNRR code burn-up results. The results showed good agreement.

  10. Optimum profit model considering production, quality and sale problem

    Science.gov (United States)

    Chen, Chung-Ho; Lu, Chih-Lun

    2011-12-01

    Chen and Liu ['Procurement Strategies in the Presence of the Spot Market - an Analytical Framework', Production Planning and Control, 18, 297-309] presented an optimum profit model between producers and purchasers for a supply chain system with a pure procurement policy. However, their model, with a simple manufacturing cost, did not consider the customer's used cost. In this study, the modified Chen and Liu model is addressed for determining the optimum product and process parameters. The authors propose a modified Chen and Liu model under a two-stage screening procedure. The surrogate variable, having a high correlation with the measurable quality characteristic, is directly measured in the first stage. The measurable quality characteristic is directly measured in the second stage, when the product decision cannot be made in the first stage. The customer's used cost is measured by adopting Taguchi's quadratic quality loss function. The optimum purchaser's order quantity, the producer's product price and the process quality level are jointly determined by maximising the expected profit between them.
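
    For reference, Taguchi's quadratic loss used above is L(y) = k(y − m)², whose expectation over a process with mean μ and standard deviation σ is k(σ² + (μ − m)²). The numbers in the sketch below are illustrative, not from the paper.

```python
def expected_quadratic_loss(k, target, mean, std):
    # E[k (Y - m)^2] = k * (sigma^2 + (mu - m)^2), for any distribution of Y.
    return k * (std ** 2 + (mean - target) ** 2)

k, target = 2.0, 10.0   # assumed loss coefficient and nominal value m
for mu, sigma in [(10.0, 0.5), (10.2, 0.5), (10.0, 1.0)]:
    print(mu, sigma, expected_quadratic_loss(k, target, mu, sigma))
# Both a mean shift and extra variance raise the customer's expected used cost,
# which is the trade-off the modified profit model optimizes over.
```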

  11. Optimum Antenna Downtilt Angles for Macrocellular WCDMA Network

    Directory of Open Access Journals (Sweden)

    Niemelä Jarno

    2005-01-01

    The impact of antenna downtilt on the performance of a cellular WCDMA network has been studied by using a radio network planning tool. An optimum downtilt angle has been evaluated for numerous practical macrocellular site and antenna configurations for electrical and mechanical antenna downtilt concepts. This massive simulation campaign was intended to answer two questions: firstly, how should the downtilt angle of a macrocellular base station antenna be selected? Secondly, what is the impact of antenna downtilt on system capacity and network coverage? Optimum downtilt angles were observed to vary between – depending on the network configuration. Moreover, the corresponding downlink capacity gains varied between – . Antenna vertical beamwidth affects the required optimum downtilt angle the most. On the other hand, with a wider antenna vertical beamwidth, the impact of downtilt on system performance is not as pronounced. In addition, the antenna height together with the size of the dominance area affects the required downtilt angle. Finally, the simulation results revealed how the importance of antenna downtilt becomes more significant in dense networks, where the capacity requirements are typically also higher.
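
    A common back-of-envelope check (an engineering heuristic, not the paper's simulation model) aims the main lobe at the cell edge and adds half the vertical beamwidth so the upper 3 dB edge roughly grazes the horizon; the site geometry below is assumed for illustration.

```python
import math

def geometric_downtilt_deg(antenna_height_m, cell_radius_m, vert_beamwidth_deg):
    # Tilt the boresight to the cell edge, then add half the vertical beamwidth.
    boresight = math.degrees(math.atan(antenna_height_m / cell_radius_m))
    return boresight + vert_beamwidth_deg / 2.0

# Assumed site: 25 m antenna, 1.5 km cell radius, 6 degree vertical beamwidth.
print(geometric_downtilt_deg(25.0, 1500.0, 6.0))   # ~4.0 degrees
```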

  12. Planning of optimum production from a natural gas field

    Energy Technology Data Exchange (ETDEWEB)

    Van Dam, J

    1968-03-01

    The design of an optimum development plan for a natural gas field always depends on the typical characteristics of the producing field as well as those of the market to be served by this field. Therefore, a good knowledge of the field parameters, such as the total natural gas reserves, the well productivity, and the dependence of production rates on pipeline pressure and depletion of natural gas reserves, is required prior to designing the development scheme of the field, which in turn depends on the gas-sales contract to be concluded in order to commit the natural gas reserves to the market. In this paper these various technical parameters are discussed in some detail, and on this basis a theoretical and economic analysis of natural gas production is given. For this purpose a simplified economic/mathematical model of the field is proposed, from which optimum production rates at various future dates can be calculated. The results of these calculations are represented in a dimensionless diagram which may serve as an aid in designing optimum development plans for a natural gas field. The use of these graphs is illustrated with a few examples.

  13. The theory of an ‘optimum currency area’

    Directory of Open Access Journals (Sweden)

    Jarosław Kundera

    2012-12-01

    The main goal of this paper is to analyse and distinguish the main components of the theory of an 'Optimum Currency Area'. The theory of an optimum currency area indicates some essential elements as preconditions for the successful introduction of a common currency: high mobility of labour, openness of the economy defined as a high proportion of tradable to non-tradable goods, and high diversification of domestic production before joining the union. The article's analysis helps to better understand the reasons for the current crisis in the euro zone. The main problem with a common currency area is the adjustment to imbalances, which cannot take place through exchange rates under a common currency. The missing elements of the theory are the role of capital mobility in correcting interregional balance-of-payments disequilibria and the lack of a common budget with sufficient own resources during the occurrence of debt crises in member countries. The theory of an optimum currency area has noticed the importance of coordination between fiscal and monetary policy and the necessity of redistribution of resources among partners. However, it does not say much about the methods applied, how to deal with debt crises, and what the cost of a potential break-up of a monetary union would be.

  14. Semianalytical and Seminumerical Calculations of Optimum Material Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Gunnar

    1963-06-15

    Perturbation theory applied to the multigroup diffusion equations gives a general condition for the optimum distribution of reactor materials. A certain function of the material densities and the fluxes, here called the W(eight) function, must thus be constant where the variable material density is larger than zero, if changes in this density affect only the group constants where the changes occur. The weight function is, however, generally a complicated function, and complete solutions have therefore previously been presented only for the special case when a constant weight function implies constant thermal flux. It is demonstrated that the condition of constant weight function can be used together with well-known methods for numerical solution of the multigroup diffusion equations to obtain optimum material distributions also when the thermal flux varies over the core. Solution of the minimum fuel mass problem for two reflected reactors thus shows that an effective reflector such as D2O gives a peak in the optimum fuel distribution at the core-reflector interface, while an ineffective reflector such as a breeder blanket or a steel tank wall 'pushes' the fuel away from the strongly absorbing zone. It is also interesting to compare the effective reflector case with analytically obtained solutions corresponding to flat power density, flat thermal flux and flat fuel density.

  15. Optimum heat power cycles for specified boundary conditions

    International Nuclear Information System (INIS)

    Ibrahim, O.M.; Klein, S.A.; Mitchell, J.W.

    1991-01-01

    In this paper, optimization of the power output of Carnot and closed Brayton cycles is considered for both finite and infinite thermal capacitance rates of the external fluid streams. The method of Lagrange multipliers is used to solve for working fluid temperatures that yield maximum power. Analytical expressions for the maximum power and the cycle efficiency at maximum power are obtained. A comparison of the maximum power from the two cycles for the same boundary conditions, i.e., the same heat source/sink inlet temperatures, thermal capacitance rates, and heat exchanger conductances, shows that the Brayton cycle can produce more power than the Carnot cycle. This comparison illustrates that cycles exist that can produce more power than the Carnot cycle. The optimum heat power cycle, which will provide the upper limit of power obtained from any thermodynamic cycle for specified boundary conditions and heat exchanger conductances, is considered. The optimum heat power cycle is identified by optimizing the sum of the power output from a sequence of Carnot cycles. The shape of the optimum heat power cycle, the power output, and the corresponding efficiency are presented. The efficiency at maximum power of all cycles investigated in this study is found to be equal to (or well approximated by) η = 1 − √(T_L,in / (φ·T_H,in)), where φ is a factor relating the entropy changes during heat rejection and heat addition
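
    A quick numerical evaluation of the reported relation (inlet temperatures and φ chosen arbitrarily for illustration; with φ = 1 the expression reduces to the familiar Curzon-Ahlborn efficiency):

```python
import math

def eta_max_power(t_cold_in_k, t_hot_in_k, phi=1.0):
    # Efficiency at maximum power: eta = 1 - sqrt(T_L,in / (phi * T_H,in)).
    return 1.0 - math.sqrt(t_cold_in_k / (phi * t_hot_in_k))

print(eta_max_power(300.0, 900.0))         # ~0.42; the Carnot limit would be 0.67
print(eta_max_power(300.0, 900.0, 1.1))    # phi > 1: entropy-change ratio factor
```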

  16. Optimum moisture levels for biodegradation of mortality composting envelope materials.

    Science.gov (United States)

    Ahn, H K; Richard, T L; Glanville, T D

    2008-01-01

    Moisture affects the physical and biological properties of compost and other solid-state fermentation matrices. Aerobic microbial systems exhibit different respiration rates (oxygen uptake and CO2 evolution) as a function of moisture content and material type. In this study the microbial respiration rates of 12 mortality composting envelope materials were measured by a pressure sensor method at six different moisture levels. A wide range of respiration rates (1.6-94.2 mg O2/g VS-day) was observed for the different materials, with alfalfa hay, silage, oat straw, and turkey litter having the highest values. These four envelope materials may be particularly suitable for improving internal temperature and pathogen destruction rates for disease-related mortality composting. Optimum moisture content was determined based on measurements across a range that spans the maximum respiration rate. The optimum moisture content of each material was observed near water holding capacity, which ranged from near 60% to over 80% on a wet basis for all materials except a highly stabilized soil compost blend (optimum around 25% w.b.). The implications of the results for moisture management and process control strategies during mortality composting are discussed.

  17. Optimum structure of Whipple shield against hypervelocity impact

    International Nuclear Information System (INIS)

    Lee, M

    2014-01-01

    Hypervelocity impact of a spherical aluminum projectile onto two spaced aluminum plates (a Whipple shield) was simulated to estimate an optimum structure. The Smooth Particle Hydrodynamics (SPH) code, which has a unique migration scheme from a rectangular coordinate to an axisymmetric coordinate, was used. The ratio of the front plate thickness to sphere diameter varied from 0.06 to 0.48, and the impact velocity considered here was 6.7 km/s. To validate the early-stage simulation, the shapes of the debris clouds were first compared with previous experimental pictures, showing good agreement. Next, the debris cloud expansion angle was predicted; it shows a maximum value of 23 degrees at a thickness ratio of front bumper to sphere diameter of 0.23. A critical sphere diameter causing failure of the rear wall was also examined while keeping the total thickness of the two plates constant. There exists an optimum thickness ratio of front bumper to rear wall, which is identified as a function of the size combination of the impacting body, front and rear plates. This study of the optimum thickness ratio, correlated with debris cloud expansion, provides good insight into hypervelocity impact onto spaced target systems.

  18. Optimum structure of Whipple shield against hypervelocity impact

    Science.gov (United States)

    Lee, M.

    2014-05-01

    Hypervelocity impact of a spherical aluminum projectile onto two spaced aluminum plates (a Whipple shield) was simulated to estimate an optimum structure. The Smooth Particle Hydrodynamics (SPH) code, which has a unique migration scheme from a rectangular coordinate to an axisymmetric coordinate, was used. The ratio of the front plate thickness to sphere diameter varied from 0.06 to 0.48, and the impact velocity considered here was 6.7 km/s. To validate the early-stage simulation, the shapes of the debris clouds were first compared with previous experimental pictures, showing good agreement. Next, the debris cloud expansion angle was predicted; it shows a maximum value of 23 degrees at a thickness ratio of front bumper to sphere diameter of 0.23. A critical sphere diameter causing failure of the rear wall was also examined while keeping the total thickness of the two plates constant. There exists an optimum thickness ratio of front bumper to rear wall, which is identified as a function of the size combination of the impacting body, front and rear plates. This study of the optimum thickness ratio, correlated with debris cloud expansion, provides good insight into hypervelocity impact onto spaced target systems.

  19. The optimum decision rules for the oddity task.

    Science.gov (United States)

    Versfeld, N J; Dai, H; Green, D M

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur with "roved" or "fixed" experiments (Macmillan & Creelman, 1991, p. 147). It is shown that the commonly used decision rule for an m-interval oddity task corresponds to the special case of highly correlated observations. However, as is also true for the same-different paradigm, there exists a different optimum decision rule when the observations are independent. The relation between the probability of a correct response and d' is derived for the three-interval oddity task. Tables are presented of this relation for the three-, four-, and five-interval oddity task. Finally, an experimental method is proposed that allows one to determine the decision rule used by the observer in an oddity experiment.
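
    As a concrete illustration — my own simulation sketch, not the paper's analytical derivation — the following Monte Carlo estimates the proportion correct in a three-interval oddity task for the commonly used rule "pick the observation farthest from the other two", assuming independent unit-variance Gaussian observations:

```python
import random

def percent_correct(d_prime, trials=20_000):
    correct = 0
    for _ in range(trials):
        odd = random.randrange(3)            # position of the odd interval
        obs = [random.gauss(d_prime if i == odd else 0.0, 1.0) for i in range(3)]
        def oddness(i):
            others = [obs[j] for j in range(3) if j != i]
            return abs(obs[i] - (others[0] + others[1]) / 2.0)
        choice = max(range(3), key=oddness)  # farthest-from-the-rest rule
        correct += (choice == odd)
    return correct / trials

for d in (0.0, 1.0, 2.0, 3.0):
    print(d, percent_correct(d))             # chance level is 1/3 at d' = 0
```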

  20. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    Science.gov (United States)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  1. Increased power generation from primary sludge by a submersible microbial fuel cell and optimum operational conditions

    DEFF Research Database (Denmark)

    Vologni, Valentina; Kakarla, Ramesh; Angelidaki, Irini

    2013-01-01

    Microbial fuel cells (MFCs) have received attention as a promising renewable energy technology for waste treatment and energy recovery. We tested a submersible MFC with an innovative design capable of generating a stable voltage of 0.250 ± 0.008 V (with a fixed 470 Ω resistor) directly from primary sludge. [...] prolonged the current generation and increased the power density by 7 and 1.5 times, respectively, in comparison with raw primary sludge. These findings suggest that energy recovery from primary sludge can be maximized using an advanced MFC system with optimum conditions.

  2. The optimum circular field size for dental radiography with intraoral films

    International Nuclear Information System (INIS)

    van Straaten, F.J.; van Aken, J.

    1982-01-01

    Intraoral radiographs are often made with circular fields to irradiate the film, and in many instances these fields are much larger than the film. The feasibility of reducing a circular radiation field without increasing the probability of excessive cone cutting was evaluated clinically, and an optimum field size was determined. A circular radiation field of 4.5 cm diameter at the tube end was found to minimize cone cutting and to reduce the area of tissue irradiated by at least 44 percent. The findings suggest that the current I.C.R.P. recommendation of a 6 to 7.5 cm diameter circular field may be too liberal

  3. A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm

    Science.gov (United States)

    Rogers, James L.

    2000-01-01

    Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. To find the optimum placement by searching all possible combinations would require 1,100 hours. Formulating it as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
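
    A minimal genetic-algorithm sketch in the spirit of the proof of concept (the fitness function below is a toy stand-in; the real one would score pitch/roll/yaw moments and coupling from an aerodynamic model, and the population sizes and rates are assumptions):

```python
import random

N_LOCATIONS = 20          # candidate actuator sites on the wing
random.seed(1)
# Toy per-site (pitch, roll, yaw) effectiveness values, invented for illustration.
EFFECT = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_LOCATIONS)]

def fitness(genome):
    # Reward total control authority about the three axes, penalize actuator count.
    moments = [sum(EFFECT[i][axis] for i, g in enumerate(genome) if g)
               for axis in range(3)]
    return sum(abs(m) for m in moments) - 0.1 * sum(genome)

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_LOCATIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOCATIONS)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```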

  4. Environmental Control and Life Support (ECLS) Hardware Commonality for Exploration Vehicles

    Science.gov (United States)

    Carrasquillo, Robyn; Anderson, Molly

    2012-01-01

    In August 2011, the Environmental Control and Life Support Systems (ECLSS) technical community, along with associated stakeholders, held a workshop to review NASA's plans for Exploration missions and vehicles with two objectives: revisit the Exploration Atmospheres Working Group (EAWG) findings from 2006, and discuss preliminary ECLSS architecture concepts and technology choices for Exploration vehicles, identifying areas where common hardware or technologies could be utilized. Key considerations for the selection of vehicle design total pressure and percent oxygen include operational concepts for extravehicular activity (EVA) and prebreathe protocols, materials flammability, and controllability within pressure and oxygen ranges. New data for these areas since the 2006 study were presented and discussed, and the community reached consensus on conclusions and recommendations for target design pressures for each Exploration vehicle concept. For the commonality study, the workshop identified many areas of potential commonality across the Exploration vehicles as well as with heritage International Space Station (ISS) and Shuttle hardware. Of the 36 ECLSS functions reviewed, 16 were considered to have strong potential for commonality, 13 were considered to have some potential for commonality, and 7 were considered to have limited potential for commonality due to unique requirements or lack of sufficient heritage hardware. These findings, which will be utilized in architecture studies and budget exercises going forward, are presented in detail.

  5. Industrial hardware and software verification with ACL2.

    Science.gov (United States)

    Hunt, Warren A; Kaufmann, Matt; Moore, J Strother; Slobodova, Anna

    2017-10-13

    The ACL2 theorem prover has seen sustained industrial use since the mid-1990s. Companies that have used ACL2 regularly include AMD, Centaur Technology, IBM, Intel, Kestrel Institute, Motorola/Freescale, Oracle and Rockwell Collins. This paper introduces ACL2 and focuses on how and why ACL2 is used in industry. ACL2 is well-suited to its industrial application to numerous software and hardware systems, because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language ACL2 permits the coding of efficient and robust programs; as a prover ACL2 can be fully automatic but provides many features permitting domain-specific human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. Crucially, ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanized verification routine when appropriate and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much). This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).

  6. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  7. GOSH! A roadmap for open-source science hardware

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  8. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
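
    As a rough illustration of the splatting idea, the sketch below accumulates voxel contributions onto a detector plane in NumPy; the random "wobble" of splat positions is a loose reading of the method's name, and all parameters are illustrative assumptions, not the paper's GPU/Cg implementation.

        import numpy as np

        def splat_drr(volume, spacing=1.0, jitter=0.3, rng=None):
            # Orthographic splatting: project each voxel slab along z onto the
            # detector and accumulate, approximating the attenuation line integral.
            # A small random shift ("wobble") of the splat positions is applied
            # per slab to suppress aliasing artefacts.
            rng = rng or np.random.default_rng(0)
            nx, ny, nz = volume.shape
            drr = np.zeros((nx, ny))
            for z in range(nz):
                dx, dy = (jitter * rng.standard_normal(2)).round().astype(int)
                drr += np.roll(volume[:, :, z], (dx, dy), axis=(0, 1)) * spacing
            return drr

        phantom = np.zeros((64, 64, 64))
        phantom[24:40, 24:40, 24:40] = 1.0   # cubic test object
        image = splat_drr(phantom)           # 64x64 simulated radiograph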

  9. Studies on optimum harvest time for hybrid rice seed.

    Science.gov (United States)

    Fu, Hong; Cao, Dong-Dong; Hu, Wei-Min; Guan, Ya-Jing; Fu, Yu-Ying; Fang, Yong-Feng; Hu, Jin

    2017-03-01

    Timely harvest is critical for hybrid rice to achieve maximum seed viability, vigor and yield. However, how to predict the optimum harvest time has been rarely reported so far. The seed vigor of Zhuliangyou 06 (ZLY06) increased and reached the highest level at 20 days after pollination (DAP), when seed moisture content had a lower value, which was maintained until final seed maturation. For Chunyou 84 (CY84), seed vigor, fresh and dry weight had relatively high values at 25 DAP, when seed moisture content reached the lowest value and changed slightly from 25 to 55 DAP. In both hybrid rice varieties, seed glume chlorophyll content declined rapidly from 10 to 30 DAP and remained at a very low level after 35 DAP. Starch content exhibited an increasing trend during seed maturation, while both soluble sugar content and amylase activity decreased significantly at the early stages of seed development. Moreover, correlation analyses showed that seed dry weight, starch content and superoxide dismutase activity were significantly positively correlated with seed vigor. In contrast, chlorophyll content, moisture content, soluble sugar, soluble protein, abscisic acid, gibberellin content, electrical conductivity, catalase and ascorbate peroxidase activities were significantly negatively correlated with seed vigor. Physiological and biochemical parameters were obviously more closely related with seed vigor than with seed germinability during seed development. Seed vigor could be better used as a comprehensive factor to predict the optimum seed harvest time. It is suggested that ZLY06 seeds could be harvested as early as 20 DAP, whereas for CY84 the earliest optimum harvest time was 25 DAP. © 2016 Society of Chemical Industry.

  10. Optimum MRS site location to minimize spent fuel transportation impacts

    International Nuclear Information System (INIS)

    Hoskins, R.E.

    1987-01-01

    A range of spent fuel transportation system parameters are examined in terms of attributes important to minimizing transportation impacts as a basis for identifying geographic regions best suited for siting a monitored retrievable storage (MRS) facility. Transportation system parameters within existing transport cask design and transportation mode capabilities were systematically analyzed. The optimum MRS location was found to be very sensitive to transportation system assumptions, particularly with regard to the relative efficiencies of the reactor-to-MRS and MRS-to-repository components of the system. Moreover, dramatic improvements in the reactor-to-MRS component can be made through use of multiple cask shipment of the largest practical casks by dedicated train compared to the traditional single cask rail (70%) and truck (30%) shipments assumed by the Department of Energy in the studies that defined the optimum MRS location in the vicinity of Tennessee. It is important to develop and utilize an efficient transportation system irrespective of whether or not an MRS is in the system. Assuming reasonably achievable efficiency in reactor-to-MRS spent fuel transportation and assigning equal probabilities to the three western sites selected for characterization of being the repository site, the optimum MRS location would be in the far-mid-western states. Based on various geographic criteria including barge access and location in a nuclear service area, the State of Tennessee ranks anywhere from 12th to 25th, at a penalty of about 30% over the minimum achievable impacts. While minimizing transportation impacts is an important factor, other criteria should also be considered in selecting an MRS site.

  11. Is optimum and effective work done in administrative jurisdiction

    International Nuclear Information System (INIS)

    Hoecht, H.

    1980-01-01

    Is optimum and effective work done in administrative jurisdiction? The author describes the general situation prevailing in administrative jurisdiction. He gives tables on the number of cases received per annum and of judges administering justice, and figures on executed and non-executed proceedings. He reports on districts of jurisdiction, personnel, court administration and the amount of work. The investigation into administrative jurisdiction has shown accomplishments for 1978 which are not bad at all. Sporadic administrative shortcomings are to be recognized and put to an end. (HSCH) [de

  12. Optimum filter-based discrimination of neutrons and gamma rays

    International Nuclear Information System (INIS)

    Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek

    2015-01-01

    An optimum filter-based method for discrimination of neutrons and gamma-rays in a mixed radiation field is presented. The existing filter-based implementations of discriminators require sample pulse responses in advance of the experiment run to build the filter coefficients, which makes them less practical. Our novel technique creates the coefficients during the experiment and improves their quality gradually. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using stilbene scintillator, this approach is analyzed and its discrimination quality is measured. (authors)
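
    The abstract does not give the filter equations; the sketch below illustrates one plausible reading, in Python, of building discrimination coefficients during the run: reference pulse templates for each particle class are accumulated as running averages and incoming pulses are classified by correlation against them. The class labels, normalization and decision rule are assumptions for illustration, not the authors' exact algorithm.

        import numpy as np

        class AdaptiveDiscriminator:
            def __init__(self, n_samples):
                # one template per particle class, refined as events accrue
                self.templates = {"neutron": np.zeros(n_samples),
                                  "gamma": np.zeros(n_samples)}
                self.counts = {"neutron": 0, "gamma": 0}

            def classify(self, pulse):
                p = pulse / np.linalg.norm(pulse)
                scores = {k: float(t @ p) for k, t in self.templates.items()}
                return max(scores, key=scores.get)   # best-correlating template

            def update(self, pulse, label):
                # running-average template: coefficient quality improves gradually
                p = pulse / np.linalg.norm(pulse)
                self.counts[label] += 1
                self.templates[label] += (p - self.templates[label]) / self.counts[label]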

  13. Optimum policies for a system with general imperfect maintenance

    International Nuclear Information System (INIS)

    Sheu, S.-H.; Lin, Y.-B.; Liao, G.-L.

    2006-01-01

    This study considers periodic preventive maintenance policies that maximize the availability of a repairable system with major repair at failure. Three types of preventive maintenance are performed, namely: imperfect preventive maintenance (IPM), perfect preventive maintenance (PPM) and failed preventive maintenance (FPM). The probability that preventive maintenance is perfect depends on the number of imperfect maintenances conducted since the previous renewal cycle, and the probability that preventive maintenance remains imperfect is non-increasing. The optimum preventive maintenance time that maximizes availability is derived. Various special cases are considered. A numerical example is given.

  14. Optimum amount of an insurance sum in life insurance

    Directory of Open Access Journals (Sweden)

    Janez Balkovec

    2001-01-01

    Personal insurance represents one of the sources of personal social security as a category of personal property. How to get a proper life insurance is a frequently asked question. When insuring material objects (car, house...), the problem is usually not in the amount of the insurance taken. With life insurance (abstract goods), problems as such do occur. In this paper, we wish to present a model that, according to the financial situation and the anticipated future, makes it possible to calculate the optimum insurance sum in life insurance.

  15. Search for an optimum time response of spark counters

    International Nuclear Information System (INIS)

    Devismes, A.; Finck, Ch.; Kress, T.; Gobbi, A.; Eschke, J.; Herrmann, N.; Hildenbrand, K.D.; Koczon, P.; Petrovici, M.

    2002-01-01

    A spark counter of the type developed by Pestov has been tested with the aim of searching for an optimum time response function, changing voltage, content of noble and quencher gases, pressure and energy-loss. Replacing the usual argon by neon has brought an improvement of the resolution and a significant reduction of tails in the time response function. It has been proven that a counter as long as 90 cm can deliver, using neon gas mixture, a time resolution σ<60 ps with about 1% absolute tail and an efficiency of about 90%

  16. Optimum value of original events on the PEPT technique

    International Nuclear Information System (INIS)

    Sadremomtaz, Alireza; Taherparvar, Payvand

    2011-01-01

    Positron emission particle tracking (PEPT) has been used to track the motion of a single radioactively labeled tracer particle within a bed of similar particles. In this paper, the effect of the original event fraction on the precision of the results in two experiments has been reviewed. Results showed that the algorithm can no longer distinguish some corrupt trajectories; in addition, further iteration reduces the statistical significance of the sample without improving its quality. Results show that the optimum value of trajectories depends on the type of experiment.

  17. Optimum Value of Original Events on the Pept Technique

    Science.gov (United States)

    Sadremomtaz, Alireza; Taherparvar, Payvand

    2011-12-01

    Positron emission particle tracking (PEPT) has been used to track the motion of a single radioactively labeled tracer particle within a bed of similar particles. In this paper, the effect of the original event fraction on the precision of the results in two experiments has been reviewed. Results showed that the algorithm can no longer distinguish some corrupt trajectories; in addition, further iteration reduces the statistical significance of the sample without improving its quality. Results show that the optimum value of trajectories depends on the type of experiment.

  18. Optimum commodity taxation with a non-renewable resource

    DEFF Research Database (Denmark)

    Daubanes, Julien Xavier; Lasserre, Pierre

    2017-01-01

    We examine optimum commodity taxation (OCT), including the taxation of non-renewable resources (NRRs), by a government that needs to rely on commodity taxes to raise revenues. NRRs should be taxed at higher rates than otherwise-identical conventional commodities, according to an augmented, dynamic...... formulas can directly be used to indicate how Pigovian taxation of carbon NRRs should be increased in the presence of public-revenue needs, as illustrated in a numerical example. We show that NRR substitutes and complements should receive a particular tax treatment. Finally, in a NRR-importing economy...

  19. Modified loss coefficients in the determination of optimum generation scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Hazarika, D.; Bordoloi, P.K. (Assam Engineering Coll. (IN))

    1991-03-01

    A modified method has been evolved to form the loss coefficients of an electrical power system network by decoupling load and generation and thereby creating additional fictitious load buses. The system losses are then calculated and co-ordinated to arrive at an optimum scheduling of generation using the standard co-ordination equation. The method presented is superior to the ones currently available, in that it is applicable to a multimachine system with random variation of load and it accounts for limits in plant generations and line losses. The precise nature of the results and the economy in the cost of energy production obtained by this method are quantified and presented. (author).
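
    For orientation, the standard B-coefficient loss model and co-ordination equation that the abstract builds on can be written as follows (the paper's modified coefficients, which decouple load and generation, are not reproduced in the abstract):

        P_L = \sum_{i}\sum_{j} P_i \, B_{ij} \, P_j ,
        \qquad
        \frac{dF_i}{dP_i} + \lambda \, \frac{\partial P_L}{\partial P_i} = \lambda ,
        \quad i = 1, \dots, N ,

    where P_i is the output of plant i, F_i its fuel cost, P_L the total transmission loss and \lambda the incremental cost of received power.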

  20. Optimum back-pressure forging using servo die cushion

    OpenAIRE

    Kawamoto, Kiichiro; Yoneyama, Takeshi; Okada, Masato; Kitayama, Satoshi; Chikahisa, Junpei

    2014-01-01

    This study focused on utilizing a servo die cushion (in conjunction with a servo press) as a "back-pressure load generator," to determine its effect on the shape accuracy of the formed part and the total forming load in forward extrusion during cold forging. The effect of back-pressure load application was confirmed in experiments, and the optimum setting pattern of back-pressure load was considered so as to minimize both the shape error of the formed part and the back-pressure energy, which was representative ...

  1. Optimum heat storage design for heat integrated multipurpose batch plants

    CSIR Research Space (South Africa)

    Stamp, J

    2011-01-01

    A design procedure is presented for optimum heat storage in heat-integrated multipurpose batch plants. Energy usage in multipurpose batch plants has been addressed in published literature, but most present methods are time-... The internal area for heat loss by convection from the heat transfer medium is given by Constraint (37), and the area for convective heat transfer losses to the environment is given in Constraint (38).

  2. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and circuit aspects of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.

  3. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  4. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2017-01-01

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS is proven to efficiently combat the self interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Eventually, non-ideal hardware transceivers degradation and IGS scheme acquired compensation are quantified through suitable numerical results.

  5. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS). VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR.

  6. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  7. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS is proven to efficiently combat the self interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Eventually, non-ideal hardware transceivers degradation and IGS scheme acquired compensation are quantified through suitable numerical results.

  8. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is the proof of concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods around the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. Here there will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea here is to use common Real Time Operating Systems (RTOS), measurement and control IO-cards etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators and hence reduced development costs for ITER RH.

  9. Alternative, Green Processes for the Precision Cleaning of Aerospace Hardware

    Science.gov (United States)

    Maloney, Phillip R.; Grandelli, Heather Eilenfield; Devor, Robert; Hintze, Paul E.; Loftin, Kathleen B.; Tomlin, Douglas J.

    2014-01-01

    Precision cleaning is necessary to ensure the proper functioning of aerospace hardware, particularly those systems that come in contact with liquid oxygen or hypergolic fuels. Components that have not been cleaned to the appropriate levels may experience problems ranging from impaired performance to catastrophic failure. Traditionally, this has been achieved using various halogenated solvents. However, as information on the toxicological and/or environmental impacts of each came to light, they were subsequently regulated out of use. The solvent currently used in Kennedy Space Center (KSC) precision cleaning operations is Vertrel MCA. Environmental sampling at KSC indicates that continued use of this or similar solvents may lead to high remediation costs that must be borne by the Program for years to come. In response to this problem, the Green Solvents Project seeks to develop state-of-the-art, green technologies designed to meet KSC's precision cleaning needs. Initially, 23 solvents were identified as potential replacements for the current Vertrel MCA-based process. Highly halogenated solvents were deliberately omitted since historical precedents indicate that as the long-term consequences of these solvents become known, they will eventually be regulated out of practical use, often with significant financial burdens for the user. Three solvent-less cleaning processes (plasma, supercritical carbon dioxide, and carbon dioxide snow) were also chosen since they produce essentially no waste stream. Next, experimental and analytical procedures were developed to compare the relative effectiveness of these solvents and technologies to the current KSC standard of Vertrel MCA. Individually numbered Swagelok fittings were used to represent the hardware in the cleaning process. First, the fittings were cleaned using Vertrel MCA in order to determine their true cleaned mass. Next, the fittings were dipped into stock solutions of five commonly encountered contaminants and were

  10. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  11. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  12. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  13. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  14. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  15. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  16. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-05-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  17. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  18. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
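
    The abstract does not detail the correction rule; as a minimal sketch of the software half of such a scheme, assume each node time-stamps incoming resynchronization messages in hardware (which removes most of the transit-delay variation) and then applies a fault-tolerant median correction in software. The message format and the median rule here are illustrative assumptions.

        import statistics

        def clock_correction(local_time, neighbour_timestamps, transit_delay):
            # Estimated offset of each neighbour's clock relative to this node,
            # after compensating the hardware-measured message transit delay.
            offsets = [t + transit_delay - local_time for t in neighbour_timestamps]
            # The median tolerates a minority of faulty or drifting clocks.
            return statistics.median(offsets)

        # usage: nudge the local clock by the returned amount each resync round
        print(clock_correction(100.0, [99.7, 100.4, 100.2], transit_delay=0.1))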

  19. Optimum Operating Conditions for PZT Actuators for Vibrotactile Wearables

    Science.gov (United States)

    Logothetis, Irini; Matsouka, Dimitra; Vassiliadis, Savvas; Vossou, Clio; Siores, Elias

    2018-04-01

    Recently, vibrotactile wearables have received much attention in fields such as medicine, psychology, athletics and video gaming. The electrical components presently used to generate vibration are rigid; hence, the design and creation of ergonomic wearables are limited. Significant advances in piezoelectric components have led to the production of flexible actuators such as piezoceramic lead zirconate titanate (PZT) film. To verify the functionality of PZT actuators for use in vibrotactile wearables, the factors influencing the electromechanical conversion were analysed and tested. This was achieved through theoretical and experimental analyses of a monomorph clamped-free structure for the PZT actuator. The research performed for this article is a three-step process. First, a theoretical analysis presents the equations governing the actuator. In addition, the eigenfrequency of the film was analysed ahead of the experimental work. For this stage, by applying an electric voltage and varying the stimulating electrical characteristics (i.e., voltage, electrical waveform and frequency), the optimum operating conditions for a PZT film were determined. The tip displacement was measured as an indicator of the mechanical energy converted from electrical energy. From the results obtained, an equation for the mechanical behaviour of PZT films as actuators was deduced. It was observed that the square waveform generated larger tip displacements. In conjunction with large voltage inputs at the predetermined eigenfrequency, the optimum operating conditions for the actuator were achieved. To conclude, PZT films can be adapted to assist designers in creating comfortable vibrotactile wearables.

  20. Optimum Tower Crane Selection and Supporting Design Management

    Directory of Open Access Journals (Sweden)

    Hyo Won Sohn

    2014-08-01

    To optimize tower crane selection and supporting design, lifting requirements (as well as stability) should be examined, followed by a review of economic feasibility. However, construction engineers establish plans based on data provided by equipment suppliers, since there are no tools with which to thoroughly examine a support design's suitability for various crane types, and such plans lack the necessary supporting data. In such cases it is impossible to optimize a tower crane selection to satisfy lifting requirements in terms of cost, and to perform lateral support and foundation design. Thus, this study is intended to develop an optimum tower crane selection and supporting design management method based on stability. All feasible cases, approximately 3,000 to 15,000 combinations, are calculated to identify the candidate cranes with minimized cost, which are then examined. The optimization method developed in the study is expected to support engineers in determining the optimum lifting equipment management.

  1. Optimum concrete compression strength using bio-enzyme

    Directory of Open Access Journals (Sweden)

    Bagio Tony Hartono

    2017-01-01

    To make concrete with high compressive strength and particular specifications, quality control of the concrete mix and added materials are needed in addition to the main concrete materials, in line with current concrete-mix technology that produces concrete with specific characteristics. Bio-enzyme was added to five concrete mixtures, which were compared with normal concrete in order to determine the optimum bio-enzyme content for increasing concrete strength: concrete with bio-enzyme at 200 ml/m3, 400 ml/m3, 600 ml/m3, 800 ml/m3 and 1000 ml/m3, and normal concrete. The crushing test results tend to follow a mathematical model using 4th-degree polynomial regression (least quartic), as represented in the attached data series; for the design mix fc′ = 25 MPa, this generates an optimum value of 33.98 MPa at a bio-enzyme dosage of 509 ml.
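
    The least-quartic step can be reproduced in a few lines: fit a 4th-degree polynomial to compressive strength versus dosage and take its maximum. The data points below are placeholders for illustration, not the study's measurements (the abstract reports only the fitted optimum of 33.98 MPa at 509 ml).

        import numpy as np

        dosage = np.array([0, 200, 400, 600, 800, 1000])           # ml per m^3 of mix (placeholder)
        strength = np.array([28.0, 31.5, 33.2, 33.6, 31.0, 27.5])  # MPa (placeholder)

        coeffs = np.polyfit(dosage, strength, 4)     # least-squares quartic fit
        fine = np.linspace(dosage.min(), dosage.max(), 1001)
        fit = np.polyval(coeffs, fine)
        print(f"optimum dosage ~ {fine[fit.argmax()]:.0f} ml, "
              f"strength ~ {fit.max():.2f} MPa")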

  2. The optimum content of rubber ash in concrete: flexural strength

    Science.gov (United States)

    Senin, M. S.; Shahidan, S.; Shamsuddin, S. M.; Ariffin, S. F. A.; Othman, N. H.; Rahman, R.; Khalid, F. S.; Nazri, F. M.

    2017-11-01

    Discarded scrap tyres have become one of the major environmental problems nowadays. Several studies have been carried out to reuse waste tyres as an additive or sand replacement in concrete with appropriate percentages of tyre rubber, called rubberized concrete, to solve this problem. The main objectives of this study are to investigate the flexural strength performance of concrete when rubber ash is added and to analyse the optimum content of rubber ash in concrete prisms. A total of 30 concrete prisms of size 100 mm x 100 mm x 500 mm were investigated, with rubber ash partially replacing sand at percentages of 0%, 3%, 5%, 7% and 9% by volume. The flexural strength increased by 1.21% and 0.976% at 7 and 28 days respectively when 3% rubber ash was added, compared with the control concrete prism, RA 0. However, for RA 5, RA 7 and RA 9, the flexural strength decreased compared to the control at both ages, 7 and 28 days. In conclusion, 3% is the optimum content of rubber ash in concrete prisms for both concrete ages.

  3. Determination of the Optimum Conditions for Production of Chitosan Nanoparticles

    Directory of Open Access Journals (Sweden)

    A. Dustgani

    2007-12-01

    Biodegradable nanoparticles are intensively investigated for their potential applications in drug delivery systems. Being a biocompatible and biodegradable polymer, chitosan holds great promise for use in this area. This investigation was concerned with the determination and optimization of the effective parameters involved in the production of chitosan nanoparticles using the ionic gelation method. The studied variables were the concentration and pH of the chitosan solution, the ratio of chitosan to sodium tripolyphosphate therein, and the molecular weight of the chitosan. For this purpose, the Taguchi statistical method was used for the design of experiments at three levels. The size of the chitosan nanoparticles was determined using laser light scattering. The experimental results showed that the concentration of the chitosan solution was the most important parameter and the chitosan molecular weight the least effective parameter. The optimum conditions for preparation of nanoparticles were found to be a 1 mg/mL chitosan solution with pH=5, a chitosan to sodium tripolyphosphate ratio of 3 and a chitosan molecular weight of 200,000 daltons. The average nanoparticle size at optimum conditions was found to be about 150 nm.

  4. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss, voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation proposed is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, both radial and networked (such as the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.

  5. Optimum dietary protein requirement of Malaysian mahseer (Tor tambroides) fingerling.

    Science.gov (United States)

    Misieng, Josephine Dorin; Kamarudin, Mohd Salleh; Musa, Mazlinda

    2011-02-01

    The optimum dietary protein requirement of Malaysian mahseer (Tor tambroides) fingerlings was determined in this study. In this completely randomized design experiment, formulated diets with five levels of dietary protein (30, 35, 40, 45 and 50%) were tested on T. tambroides fingerlings (initial body weight of 5.85 +/- 0.40 g) reared in aquaria fitted with a biofiltering system. The fingerlings were fed twice daily at 5% of biomass. Fingerling body weight and total length were recorded every two weeks. Mortality was recorded daily. The dietary protein level had significant effects on the body weight gain and Specific Growth Rate (SGR) of the fingerlings. The body weight gain and SGR of fingerlings fed the diet with a dietary protein level of 40% were significantly higher (p<0.05) than those of 30, 35 and 50%. The feed conversion ratio at 40% dietary protein was significantly the lowest, at 2.19 +/- 0.163. A dietary protein level of 40% was the optimum for T. tambroides fingerlings.

  6. Optimum energies for dual-energy computed tomography

    International Nuclear Information System (INIS)

    Talbert, A.J.; Brooks, R.A.; Morgenthaler, D.G.

    1980-01-01

    By performing a dual-energy scan, separate information can be obtained on the Compton and photoelectric components of attenuation for an unknown material. This procedure has been analysed for the optimum energies, and for the optimum dose distribution between the two scans. It was found that an equal dose at both energies was a good compromise, compared with optimising the dose distribution for either the Compton or photoelectric components individually. For monoenergetic beams, it was found that a low energy of 40 keV produced minimum noise when using high-energy beams of 80 to 100 keV. This was true whether one maintained constant integral dose or constant surface dose. A low energy of 50 keV, which is more nearly attainable in practice, produced almost as good a degree of accuracy. The analysis can be extended to polyenergetic beams by the inclusion of a noise factor. The above results were qualitatively unchanged, although the noise was increased by about 20% with integral dose equivalence and 50% with surface dose equivalence. It is very important to make the spectra as narrow as possible, especially at the low energy, in order to minimise the noise. (author)
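
    The Compton/photoelectric separation rests on the standard two-basis model of X-ray attenuation; the paper's own notation is not reproduced in the abstract, so the usual textbook form is given here for orientation:

        \mu(E) = a_C \, f_{\mathrm{KN}}(E) + a_P \, E^{-3} ,

    where f_KN is the Klein-Nishina cross-section shape, a_C scales the Compton component and a_P the photoelectric component. A dual-energy scan measures \mu at two effective energies, giving two equations from which a_C and a_P can be solved; the cited analysis asks which pair of energies, and which split of the dose between them, makes this inversion least noisy.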

  7. Optimum Discharge Burnup and Cycle Length for PWRs

    International Nuclear Information System (INIS)

    Secker, Jeffrey R.; Johansen, Baard J.; Stucker, David L.; Ozer, Odelli; Ivanov, Kostadin; Yilmaz, Serkan; Young, E.H.

    2005-01-01

    This paper discusses the results of a pressurized water reactor fuel management study determining the optimum discharge burnup and cycle length. A comprehensive study was performed considering 12-, 18-, and 24-month fuel cycles over a wide range of discharge burnups. A neutronic study was performed followed by an economic evaluation. The first phase of the study limited the fuel enrichments used in the study to 5 wt% 235U, consistent with constraints today. The second phase extended the range of discharge burnups for 18-month cycles by using fuel enriched in excess of 5 wt%. The neutronic study used state-of-the-art reactor physics methods to accurately determine enrichment requirements. Energy requirements were consistent with today's high capacity factors (>98%) and short (15-day) refueling outages. The economic evaluation method considers various component costs including uranium, conversion, enrichment, fabrication and spent-fuel storage costs as well as the effect of discounting of the revenue stream. The resulting fuel cycle costs as a function of cycle length and discharge burnup are presented and discussed. Fuel costs decline with increasing discharge burnup for all cycle lengths up to the maximum discharge burnup considered. The choice of optimum cycle length depends on assumptions for outage costs.

  8. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  9. Is a 4-bit synaptic weight resolution enough? - constraints on enabling spike-timing dependent plasticity in neuromorphic hardware.

    Science.gov (United States)

    Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2012-01-01

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
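
    Weight discretization of the kind studied here amounts to mapping continuous synaptic weights onto a small number of hardware levels. The sketch below shows uniform 4-bit (16-level) rounding over a weight range; the uniform mapping and the range are illustrative assumptions, not the FACETS hardware's actual weight circuit.

        import numpy as np

        def discretize(w, w_min=0.0, w_max=1.0, bits=4):
            # Clip to the representable range, round to the nearest of the
            # 2**bits levels, then map back to physical weight units.
            levels = 2 ** bits - 1
            q = np.round((np.clip(w, w_min, w_max) - w_min) / (w_max - w_min) * levels)
            return w_min + q / levels * (w_max - w_min)

        weights = np.random.default_rng(0).random(5)   # continuous model weights
        print(weights)
        print(discretize(weights))                     # their 4-bit hardware images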

  10. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Science.gov (United States)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  11. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Marcello Benedetti

    2017-11-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  12. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
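
    A toy version of the descriptor computation can be written in a few lines: binary-quantize a patch, then walk the vocabulary tree, at each node branching toward the basis image with the smaller Hamming distance and recording the decision. The node layout (implicit binary heap), patch size and toy tree below are assumptions for illustration, not the paper's trained tree.

        import numpy as np

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        def tree_basis_descriptor(patch_bits, tree):
            # tree: dict node_id -> (left_basis_bits, right_basis_bits);
            # leaves are ids absent from the dict. The branch decisions
            # taken on the way down form the descriptor.
            node, path = 0, []
            while node in tree:
                left, right = tree[node]
                go_right = hamming(patch_bits, right) < hamming(patch_bits, left)
                path.append(int(go_right))
                node = 2 * node + 1 + int(go_right)   # implicit heap layout
            return path

        patch = np.random.default_rng(0).random((8, 8))
        bits = (patch > patch.mean()).ravel()          # binary quantization
        toy_tree = {0: (np.zeros(64, bool), np.ones(64, bool))}  # one-level tree
        print(tree_basis_descriptor(bits, toy_tree))   # e.g. [0] or [1]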

  13. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  14. Hardware and software techniques for boiler operation and management

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Hiroshi (Hirakawa Iron Works, Ltd., Osaka (Japan))

    1989-04-01

    A study was conducted on the requirements for an easily operable boiler from the viewpoints of hardware and software technologies. The relations among efficiency, energy saving and economics, and the control of total emissions with regard to low-NOx operation, were explained, together with suggestions on the direction of hardware and software development needed for their realization. 8 figs.

  15. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  16. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper presents the PACE partitioning algorithm which is used in the LYCOS co-synthesis system for partitioning control/dataflow graphs into hardware and software parts. The algorithm is a dynamic programming algorithm which solves both the problem of minimizing system execution time...
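
    The abstract's dynamic-programming formulation can be illustrated with a knapsack-style sketch: for each block, choose a software or hardware mapping so that total execution time is minimized subject to a hardware-area budget. This ignores PACE's treatment of communication and hardware sharing, and all costs are invented for illustration.

        def partition(blocks, area_budget):
            # blocks: list of (sw_time, hw_time, hw_area) triples.
            # best[a] = minimal total time using exactly a units of hardware area.
            INF = float("inf")
            best = [0.0] + [INF] * area_budget
            for sw_t, hw_t, area in blocks:
                nxt = [INF] * (area_budget + 1)
                for a in range(area_budget + 1):
                    if best[a] == INF:
                        continue
                    nxt[a] = min(nxt[a], best[a] + sw_t)          # keep in software
                    if a + area <= area_budget:                    # move to hardware
                        nxt[a + area] = min(nxt[a + area], best[a] + hw_t)
                best = nxt
            return min(best)

        print(partition([(10, 2, 3), (8, 1, 4), (5, 4, 2)], area_budget=6))  # -> 14.0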

  17. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.

  18. Hardware, Languages, and Architectures for Defense Against Hostile Operating Systems

    Science.gov (United States)

    2015-05-14

    complex instruction sets. The scale of this problem is multiplied by the diversity of hardware platforms in deployment today. We developed a novel approach... www.seclab.cs.sunysb.edu/seclab/lbc/. Professor King has been invited to and has given lectures at the NSA, Sandia, DARPA, Intel, Microsoft, Samsung

  19. Hardware prototype with component specification and usage description

    NARCIS (Netherlands)

    Azam, Tre; Aswat, Soyeb; Klemke, Roland; Sharma, Puneet; Wild, Fridolin

    2017-01-01

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture taking into account the current limitations of the Hololens not being available until early next month in time for

  20. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.

  1. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation. PMID:24189331
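
    As a rough software illustration of the feature-extraction half of this pipeline, the sketch below runs Sanger's generalized Hebbian algorithm over a synthetic spike-feature stream; the fixed-point, circuit-shared implementation the records describe is not modeled, and the data, learning rate, and dimensions are invented.

        # Hypothetical floating-point GHA sketch with invented data.
        import numpy as np

        def gha(data, n_components, lr=0.005, epochs=60, seed=0):
            # W converges towards the leading principal components of the
            # zero-mean input stream: dW = lr * (y x^T - tril(y y^T) W).
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=0.1, size=(n_components, data.shape[1]))
            for _ in range(epochs):
                for x in data:
                    y = W @ x
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        # Toy "spike features": 2-D structure embedded in 5-D with noise.
        rng = np.random.default_rng(1)
        data = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
        data += 0.05 * rng.normal(size=data.shape)
        data -= data.mean(axis=0)
        W = gha(data, n_components=2)
        print(np.round(W @ W.T, 2))  # rows near-orthonormal once converged

    The projections W @ x would then feed the FCM clustering stage.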

  2. Another way of doing RSA cryptography in hardware

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Honary, B.

    2001-01-01

    In this paper we describe an efficient and secure hardware implementation of the RSA cryptosystem. Modular exponentiation is based on Montgomery’s method without any modular reduction, achieving the optimal bound. The presented systolic array architecture is scalable in several parameters, which makes
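
    To make the arithmetic concrete, here is a plain-software sketch of Montgomery multiplication and square-and-multiply exponentiation in Python; it models the mathematics that such systolic datapaths implement, not the paper's architecture or its reduction-free optimization. The modulus and operands are illustrative only.

        # Software model of the arithmetic only (not the paper's systolic
        # architecture). Requires Python 3.8+ for pow(n, -1, R).
        def montgomery_setup(n):
            # R = 2^k > n with n odd; n' satisfies n * n' = -1 (mod R).
            k = n.bit_length()
            R = 1 << k
            return R, k, -pow(n, -1, R) % R

        def mont_mul(a, b, n, R, k, n_prime):
            # Returns a*b*R^-1 mod n without a trial division by n.
            t = a * b
            m = (t * n_prime) % R
            u = (t + m * n) >> k
            return u - n if u >= n else u

        def mont_pow(base, exp, n):
            R, k, n_prime = montgomery_setup(n)
            x = (base * R) % n   # enter the Montgomery domain
            acc = R % n          # Montgomery form of 1
            for bit in bin(exp)[2:]:
                acc = mont_mul(acc, acc, n, R, k, n_prime)
                if bit == "1":
                    acc = mont_mul(acc, x, n, R, k, n_prime)
            return mont_mul(acc, 1, n, R, k, n_prime)  # leave the domain

        n = 2**61 - 1  # an odd modulus, purely illustrative
        assert mont_pow(12345, 65537, n) == pow(12345, 65537, n)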

  3. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  4. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like the very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  5. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    A method for detecting a nested hardware virtual machine monitor (HVM) is proposed in this work. The method is based on an HVM timing attack: when an HVM is present in the system, the number of distinct execution-time values observed for instruction sequences increases. We use this property as the indicator in our detection.
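
    The indicator itself is easy to prototype. The Python sketch below times a short operation repeatedly and reports how many distinct duration values appear; this is only a shape-of-the-idea illustration, since a real detector would time trap-sensitive instructions (e.g. CPUID) natively and compare against a bare-metal baseline.

        # Shape-of-the-idea sketch only: real detectors time trap-sensitive
        # instructions natively, not Python bytecode.
        import statistics
        import time

        def timing_profile(op, trials=10000):
            # Time a short operation repeatedly and summarise how many
            # distinct duration values appear; a nested hypervisor trapping
            # the operation tends to widen this distribution.
            samples = []
            for _ in range(trials):
                t0 = time.perf_counter_ns()
                op()
                samples.append(time.perf_counter_ns() - t0)
            return len(set(samples)), statistics.median(samples)

        distinct, median = timing_profile(lambda: sum(range(100)))
        print(f"distinct timing values: {distinct}, median: {median} ns")
        # A detector would compare `distinct` against a baseline recorded
        # on known bare-metal hardware.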

  6. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as heartbeats or breath. The use of hardware-implemented devices in such a system has advantages, combining high-speed information processing with arbitrarily low cost on the market. This article refers to a hardware system based on field-programmable logic (FPGA), model Cyclone II from the ALTERA Corporation. The hardware was implemented on the UP3 ALTERA kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test shows, in principle, the possibility of building CT image reconstruction systems for any reasonable amount of input data by arranging the parallel work of the hardware units as we have tested. However, further studies are necessary for better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)

  7. Lab at Home: Hardware Kits for a Digital Design Lab

    Science.gov (United States)

    Oliver, J. P.; Haim, F.

    2009-01-01

    An innovative laboratory methodology for an introductory digital design course is presented. Instead of having traditional lab experiences, where students have to come to school classrooms, a "lab at home" concept is proposed. Students perform real experiments in their own homes, using hardware kits specially developed for this purpose. They…

  8. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  9. Enabling Self-Organization in Embedded Systems with Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Christophe Bobda

    2009-01-01

    We present a methodology based on self-organization to manage resources in networked embedded systems based on reconfigurable hardware. Two points are detailed in this paper: the monitoring system used to analyse the system, and the Local Marketplaces Global Symbiosis (LMGS) concept defined for self-organization of dynamically reconfigurable nodes.

  10. Generalized Distance Transforms and Skeletons in Graphics Hardware

    NARCIS (Netherlands)

    Strzodka, R.; Telea, A.

    2004-01-01

    We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the

  11. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  12. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  13. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
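
    The abstract names the two ingredients (block matching, a histogram-based similarity measure) without spelling them out, so the following Python sketch shows one plausible reading: an exhaustive displacement search that maximizes the energy of the difference-image histogram, which peaks when mask and contrast blocks align. The similarity measure, search radius, and data are assumptions, and the GPU parallelization is not modeled.

        # One plausible reading of the method (assumed, not the paper's code).
        import numpy as np

        def histogram_energy(diff, bins=32):
            # Sharply peaked difference histograms (good alignment) score high.
            h, _ = np.histogram(diff, bins=bins)
            p = h / h.sum()
            return float((p ** 2).sum())

        def best_shift(block, live, top, left, radius=4):
            # Exhaustive displacement search around (top, left).
            best, best_score = (0, 0), -1.0
            h, w = block.shape
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    cand = live[top + dy:top + dy + h, left + dx:left + dx + w]
                    if cand.shape != block.shape:
                        continue
                    score = histogram_energy(cand - block)
                    if score > best_score:
                        best_score, best = score, (dy, dx)
            return best, best_score

        # Synthetic test: the "live" frame is the mask shifted by (2, -1).
        rng = np.random.default_rng(0)
        mask = rng.integers(0, 256, (64, 64)).astype(float)
        live = np.roll(np.roll(mask, 2, axis=0), -1, axis=1)
        print(best_shift(mask[16:32, 16:32], live, 16, 16))  # -> ((2, -1), 1.0)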

  14. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    and simulation to illustrate the scalability and performance of datacenter networks. We simulate a Datacenter network and interconnect it with real world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is at the order...

  15. Hiding State in CλaSH Hardware Descriptions

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Baaij, C.P.R.; Kuper, Jan; Kooijman, Matthijs

    Synchronous hardware can be modelled as a mapping from input and state to output and a new state; such mappings are referred to as transition functions. It is natural to use a functional language to implement transition functions. The CλaSH compiler is capable of translating transition functions to
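
    The record is truncated, but the modelling idea is simple to show. CλaSH itself compiles Haskell; the Python sketch below merely illustrates what a transition function is: a pure map from (state, input) to (new state, output), which a driver folds over an input stream the way a clock folds a circuit over time. The multiply-accumulate example is invented.

        # Python illustration of the modelling idea (CλaSH itself is Haskell).
        def mac(state, inp):
            # Multiply-accumulate register: the state is the accumulator.
            a, b = inp
            new_state = state + a * b
            return new_state, new_state  # output mirrors the register

        def simulate(transition, state, inputs):
            # The "clock": fold the transition function over the input stream.
            outputs = []
            for inp in inputs:
                state, out = transition(state, inp)
                outputs.append(out)
            return outputs

        print(simulate(mac, 0, [(1, 2), (3, 4), (5, 6)]))  # [2, 14, 44]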

  16. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is that this transitioning should be done by the shop floor operators, not highly specialized...

  17. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing state. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  18. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty
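
    The "warp and sum" casting is easy to see in code. Below is a minimal NumPy sketch of parallel-beam FBP: each ramp-filtered view is indexed (warped) across the whole image and accumulated, which is the per-view operation the paper hands to texture hardware. Geometry is nearest-neighbour and the phantom is a toy; none of this reflects the paper's actual implementation.

        import numpy as np

        def forward_project(image, angles_deg):
            # Simple ray-sum projector: bin each pixel into the detector
            # cell its centre falls in, per view (nearest-neighbour).
            size = image.shape[0]
            xs = np.arange(size) - size / 2.0
            X, Y = np.meshgrid(xs, xs)
            sino = []
            for theta in np.deg2rad(angles_deg):
                t = np.round(X * np.cos(theta) + Y * np.sin(theta) + size / 2.0)
                idx = np.clip(t.astype(int), 0, size - 1)
                sino.append(np.bincount(idx.ravel(), weights=image.ravel(),
                                        minlength=size))
            return np.asarray(sino)

        def filtered_backprojection(sino, angles_deg):
            # Ramp-filter each view, then warp-and-sum it across the image.
            n_views, n_det = sino.shape
            ramp = np.abs(np.fft.fftfreq(n_det))
            filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp,
                                           axis=1))
            xs = np.arange(n_det) - n_det / 2.0
            X, Y = np.meshgrid(xs, xs)
            image = np.zeros((n_det, n_det))
            for view, theta in zip(filtered, np.deg2rad(angles_deg)):
                t = np.round(X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0)
                image += view[np.clip(t.astype(int), 0, n_det - 1)]
            return image * np.pi / n_views

        phantom = np.zeros((64, 64))
        phantom[24:40, 24:40] = 1.0
        angles = np.arange(0.0, 180.0, 3.0)
        recon = filtered_backprojection(forward_project(phantom, angles), angles)
        print(recon[32, 32] > recon[5, 5])  # inside the square >> background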

  19. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented on implementation in a single field programmable gate array (FPGA). In MC the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard" positions of control pulses during the switching sequence. Recently, hardware implementations have become popular since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to the existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input current), are presented in detail. The proposed technique has been implemented as a design described with the use of the Verilog hardware description language. The preliminary results of logic implementation oriented on Xilinx FPGAs (particularly, a low-cost device from the Xilinx Artix-7 family) are also presented.
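
    CORDIC is the part worth seeing in miniature. The Python sketch below runs rotation-mode CORDIC to produce sin/cos from a fixed sequence of arctangent micro-rotations, using floats for clarity; a hardware version would use fixed-point shift-and-add arithmetic and a precomputed arctangent table.

        import math

        def cordic_sin_cos(angle_rad, iterations=24):
            # Rotation-mode CORDIC: apply arctangent micro-rotations
            # (shift-and-add style); K compensates the accumulated gain.
            K = 1.0
            for i in range(iterations):
                K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = K, 0.0, angle_rad
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * math.atan(2.0 ** -i)
            return y, x  # (sin, cos), valid for |angle| <~ 1.74 rad

        s, c = cordic_sin_cos(math.radians(30.0))
        print(round(s, 6), round(c, 6))  # ~0.5 0.866025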

  20. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  1. Improving Reliability, Security, and Efficiency of Reconfigurable Hardware Systems (Habilitation)

    NARCIS (Netherlands)

    Ziener, Daniel

    2017-01-01

    In this treatise, my research on methods to improve efficiency, reliability, and security of reconfigurable hardware systems, i.e., FPGAs, through partial dynamic reconfiguration is outlined. The efficiency of reconfigurable systems can be improved by loading optimized data paths on-the-fly on an

  2. Evaluation of In-House versus Contract Computer Hardware Maintenance

    International Nuclear Information System (INIS)

    Wright, H.P.

    1981-09-01

    The issue of In-House versus Contract Computer Hardware Maintenance is one which every organization who uses computers must resolve. This report discusses the advantages and disadvantages of both approaches to computer maintenance, the costs involved (based on the current AGNS computer inventory), and the AGNS maintenance experience to date. A recommendation on an appropriate approach for AGNS is made

  3. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  4. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
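
    To make the search-space reduction tangible, here is a small Python/NumPy sketch of a fixed-window SAD matcher that is evaluated only where a crude gradient test flags an edge; the edge detector, window size, disparity range, and synthetic image pair are all invented stand-ins for the paper's pipeline.

        # Assumed stand-ins for the paper's pipeline: crude gradient edges,
        # fixed-window SAD, synthetic pair with a known 5-px shift.
        import numpy as np

        def edge_mask(img, thresh=20.0):
            gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
            gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
            return (gx + gy) > thresh

        def sad_disparity_at_edges(left, right, max_disp=8, win=3):
            # Evaluate SAD matching only at edge pixels, shrinking the
            # search space exactly as the edge-directed approach proposes.
            h, w = left.shape
            disp = np.zeros((h, w), dtype=int)
            mask = edge_mask(left)
            for y in range(win, h - win):
                for x in range(win + max_disp, w - win):
                    if not mask[y, x]:
                        continue
                    patch = left[y - win:y + win + 1, x - win:x + win + 1]
                    costs = [np.abs(patch - right[y - win:y + win + 1,
                                                  x - d - win:x - d + win + 1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))
            return disp

        rng = np.random.default_rng(0)
        left = rng.integers(0, 256, (48, 64)).astype(float)
        right = np.roll(left, -5, axis=1)  # right view = left shifted 5 px
        disp = sad_disparity_at_edges(left, right)
        print(int(np.median(disp[disp > 0])))  # -> 5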

  5. Detection of hardware backdoor through microcontroller read time ...

    African Journals Online (AJOL)

    The objective of this work, christened “HABA” (Hardware Backdoor Aware), is to collect samples of series of microcontroller read times on military-grade equipment and correlate them with previously stored read-time samples of expected behavior, so as to detect abnormality or otherwise. I was motivated by the ...

  6. Hardware Transactional Memory Optimization Guidelines, Applied to Ordered Maps

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal; Probst, Christian W.; Karlsson, Sven

    2015-01-01

    efficiently requires reasoning about those differences. In this paper we present 5 guidelines for applying hardware transactional memory efficiently, and apply the guidelines to BT-trees, a concurrent ordered map. Evaluating BT-trees on standard benchmarks shows that they are up to 5.3 times faster than...

  7. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then to wrap conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify, its chassis design so that it would have a better chance of meeting the new project's radiated emissions requirements.

  8. OPTIMUM, CRITICAL AND THRESHOLD VALUES FOR WATER OXYGENATION FOR MULLETS (MUGILIDAE AND FLATFISHES (PLEURONECTIDAE IN ONTOGENESIS

    Directory of Open Access Journals (Sweden)

    P. Shekk

    2014-12-01

    Purpose. To determine the optimum, critical, and threshold values of water oxygenation for embryos, larvae and fingerlings of mullets and flatfishes under different temperature conditions. Methodology. Oxygen consumption was studied in chronic experiments with the «interrupted flow» method, with automatic fixation of dissolved oxygen in water with the aid of an oxygen sensor and automatic, continuous recording of the obtained results. «Critical» (Pcrit) and «threshold» (Pthr) oxygen tension in the water were determined. Findings. Under optimum conditions, normal embryogenesis of mullets and flatfish up to the gastrulation stage was provided by 90-130% oxygen saturation. The critical content was 80-85%, the threshold 65-70%, of saturation. At the stage of «movable embryo», depending on water temperature and fish species, the optimum range of water oxygenation was within 70-127.1%. The most tolerant to oxygen deficiency was the flounder Platichthys luscus (Pcrit 25.4-27.5%; Pthr 20.5-22.5%); the least resistant to hypoxia was the striped mullet Mugil cephalus (Pcrit 50-60%; Pthr 35-40%). The limits of the critical and threshold concentrations of dissolved oxygen directly depended on the temperature and salinity at which embryogenesis occurred. An increase in water temperature and salinity resulted in an increase in the critical and threshold values of oxygen tension for embryos. Mullet and flatfish fingerlings at all stages of development had a high tolerance to hypoxia, which increased as they grew. They were resistant to the oversaturation of water with oxygen. The most demanding of the oxygen regime are larvae and fingerlings of the striped mullet and Liza aurata. The hypoxia tolerance of turbot (Psetta maeotica) and flounder at all stages of development is very high. The fingerlings of these species can endure reduction of the dissolved oxygen in water to 2.10 and 1.65 mgO2/dm3, respectively, for a long time

  9. Optimum regulation of grid monopoly in the power trade

    International Nuclear Information System (INIS)

    Hope, E.

    1994-06-01

    The report discusses the organization and behaviour of grid monopolies in the Norwegian power trade and their relation to socio-economic effectiveness. The main attention is devoted to analyzing regulation mechanisms and measures leading to efficient short-term operation and to investment in optimum production capacity in the long run. Regarding management, measures for increasing the efficiency of the total power trade are discussed by evaluating the existing marketing function of Statnett. Some basic conditions concerning the regulation problem of grid monopolies are accounted for, with particular attention to asymmetric information between the authority and the monopoly. In addition, forms of regulation and regulation mechanisms, together with their incentive characteristics, are discussed. The existing profit regulation principles are evaluated in relation to an alternative system design, such as maximum price regulation combined with standard regulation. 16 refs., 7 figs

  10. Optimum investment strategy in the power industry mathematical models

    CERN Document Server

    Bartnik, Ryszard; Hnydiuk-Stefan, Anna

    2016-01-01

    This book presents an innovative methodology for identifying optimum investment strategies in the power industry. To do so, it examines results including, among others, the impact of oxy-fuel technology on CO2 emissions prices, and the specific cost of electricity production. The technical and economic analysis presented here extend the available knowledge in the field of investment optimization in energy engineering, while also enabling investors to make decisions involving its application. Individual chapters explore the potential impacts of different factors like environmental charges on costs connected with investments in the power sector, as well as discussing the available technologies for heat and power generation. The book offers a valuable resource for researchers, market analysts, decision makers, power engineers and students alike.

  11. Optimum Choice of RF Frequency for Two Beam Linear Colliders

    CERN Document Server

    Braun, Hans Heinrich

    2003-01-01

    Recent experimental results on normal conducting RF structures indicate that the scaling of the gradient limit with frequency is less favourable than what was believed. We therefore reconsider the optimum choice of RF frequency and iris aperture for a normal conducting, two-beam linear collider with E_CMS = 3 TeV, a loaded accelerating gradient of 150 MV/m and a luminosity of 8x10^34 cm^-2 s^-1. The optimisation criterion is minimizing overall RF costs for investment and operation, with constraints put on peak surface electric fields and pulsed heating of accelerating structures. Analytical models are employed where applicable, while interpolation on simulation program results is used for the calculation of luminosity and RF structure properties.

  12. Kernel Optimum Nearly-analytical Discretization (KOND) algorithm

    International Nuclear Information System (INIS)

    Kondoh, Yoshiomi; Hosaka, Yasuo; Ishii, Kenji

    1992-10-01

    Two applications of the Kernel Optimum Nearly-analytical Discretization (KOND) algorithm, to parabolic- and hyperbolic-type equations, are presented in detail, leading to novel numerical schemes with very high numerical accuracy. It is demonstrated numerically that the two-dimensional KOND-P scheme for the parabolic type reduces the numerical error by over 2-3 orders of magnitude and cuts the CPU time to about 1/5 for a common numerical accuracy, compared with the conventional explicit reference scheme. It is also demonstrated numerically that the KOND-H scheme for the hyperbolic type yields considerably less diffusive error and has fairly high stability for both linear and nonlinear wave propagation, compared with other conventional schemes. (author)

  13. A Decision Support System for Optimum Use of Fertilizers

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; J. R. Hess; R. K. Fink

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems' infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  14. A Decision Support System for Optimum Use of Fertilizers

    Energy Technology Data Exchange (ETDEWEB)

    Hoskinson, Reed Louis; Hess, John Richard; Fink, Raymond Keith

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems’ infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  15. Narratives of Optimum Currency Area theory and Eurozone Governance

    DEFF Research Database (Denmark)

    Snaith, Holly Grace

    2014-01-01

    Optimum Currency Area theory (OCA) is a body of research that has, since its inception in 1961, been highly influential for the discourse and design of Economic and Monetary Union, exercising a significant hermeneutical force. Nonetheless, there has been little acknowledgement that OCA is the subject of very significant internal disagreement, to the extent that economists writing within the field do not commonly agree upon the ontological foundations of the theory. This entails that the translation of the theory into political reality has been characterised by a series of often mutually contradictory narratives, which build upon schisms in the academic corpus. The political realisation of this can be seen during the negotiations over the 1992 process, where certain aspects of the theory concerning governance (of fiscal policy and preferences for conflict adjudication) have been notably...

  16. Optimum conditions for aging of stainless maraging steels

    International Nuclear Information System (INIS)

    Mironenko, P.A.; Krasnikova, S.I.; Drobot, A.V.

    1980-01-01

    The aging kinetics of two 0Kh11N10M2T-type steels, into which 3% Mo (steel 1), and 3% Mo plus 11% Co (steel 2), had been additionally introduced instead of titanium, were investigated. Electron microscopy and X-ray methods were used. It was ascertained that the aging process proceeds in 3 stages. Steel 2 hardened more intensively during aging, had higher hardness and strength after aging, and softened more slowly than steel 1 when overaged. The intermetallic hcp phase Fe2Mo was the hardening phase during extended aging. An optimum combination of impact strength and strength was achieved using two-stage aging: the first stage being aging to maximum strength, the second stage being aging at minimum temperatures of the two-phase α+γ region

  17. Testing Optimum Seeding Rates for five Bread Wheat Cultivars

    International Nuclear Information System (INIS)

    Wekesa, S.J.; Kiriswa, F.; Owuoche, J.

    1999-01-01

    A cultivar by seed rate trial was conducted in the 1994-1995 crop seasons at Njoro, Kenya. Yield results were found to be significant (P < 0.01) for year, variety, seed rate, and year by seed rate interaction. Test weight was also highly significant. Seed rates of 245, 205, 165 and 125 kg ha⁻¹ were grouped together for significantly higher yields (A), whereas seed rates of 85 and 50 kg ha⁻¹ had significantly lower yields (B and C, respectively). The same grouping was repeated for test weight. There was no significant cultivar by seed rate interaction and no cultivar-specific seed rate. However, since seed rates of 245, 205, 165 and 125 kg ha⁻¹ were grouped together, the lowest of these, 125 kg ha⁻¹, can be recommended as the optimum seed rate for the above cultivars, as higher seed rates do not give significantly higher yields or higher test weights

  18. Optimum design of band-gap beam structures

    DEFF Research Database (Denmark)

    Olhoff, Niels; Niu, Bin; Cheng, Gengdong

    2012-01-01

    The design of band-gap structures receives increasing attention for many applications in mitigation of undesirable vibration and noise emission levels. A band-gap structure usually consists of a periodic distribution of elastic materials or segments, where the propagation of waves is impeded or significantly suppressed for a range of external excitation frequencies. Maximization of the band-gap is therefore an obvious objective for optimum design. This problem is sometimes formulated by optimizing a parameterized design model which assumes multiple periodicity in the design. However, it is shown in the present paper that such an a priori assumption is not necessary since, in general, just the maximization of the gap between two consecutive natural frequencies leads to significant design periodicity. The aim of this paper is to maximize frequency gaps by shape optimization of transversely vibrating...

  19. Optimum energy management of a photovoltaic water pumping system

    International Nuclear Information System (INIS)

    Sallem, Souhir; Chaabene, Maher; Kamoun, M.B.A.

    2009-01-01

    This paper presents a new management approach which makes decisions on the optimum connection times of the elements of a photovoltaic water pumping installation: battery, water pump and photovoltaic panel. The decision is made by fuzzy rules considering, on the one hand, battery safety and, on the other hand, the Photovoltaic Panel Generation (PVPG) forecast for the considered day and the power required by the load. The optimization approach consists of extending the operation time of the water pump with respect to multi-objective management criteria. Compared to the stand-alone management method, the effectiveness of the new approach is confirmed by the extension of the pumping period by more than 5 h a day.

  20. Optimum hot water temperature for absorption solar cooling

    Energy Technology Data Exchange (ETDEWEB)

    Lecuona, A.; Ventas, R.; Venegas, M.; Salgado, R. [Dpto. Ingenieria Termica y de Fluidos, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganes, Madrid (Spain); Zacarias, A. [ESIME UPA, IPN, Av. de las Granjas 682, Col. Santa Catarina, 02550, D.F. Mexico (Mexico)

    2009-10-15

    The hot water temperature that maximizes the overall instantaneous efficiency of a solar cooling facility is determined. A modified characteristic equation model is used and applied to single-effect lithium bromide-water absorption chillers. This model is based on the characteristic temperature difference and serves to empirically calculate the performance of real chillers. This paper provides an explicit equation for the optimum temperature of vapor generation, in terms of only the external temperatures of the chiller. The additional data required are the four performance parameters of the chiller and essentially a modified stagnation temperature from the detailed model of the thermal collector operation. This paper presents and discusses the results for small capacity machines for air conditioning of homes and small buildings. The discussion highlights the influence of the relevant parameters. (author)

  1. Optimum Identification Method of Sorting Green Household Waste

    Directory of Open Access Journals (Sweden)

    Daud Mohd Hisam

    2016-01-01

    This project is related to the design of a sorting facility for reducing, reusing, and recycling green waste material, and in particular to the invention of an automatic system that distinguishes household waste in order to separate it from the main waste stream. The project focuses on a thorough analysis of the properties of green household waste. The method of identification uses a capacitive sensor, with characteristic data taken at three different sensor drive frequencies. Three types of material were chosen as the media of this research, to be separated using the selected method. Based on the capacitance characteristics and the sensor's ability to penetrate green objects, an optimum identification method is expected to be established in this project. The output of the capacitance sensor is an analogue value. The results demonstrate that the information from the sensor is sufficient to recognize the materials that were selected.
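
    One simple way to turn such three-frequency capacitance readings into a material decision is nearest-profile matching, sketched below in Python; the material classes, profile values, and test reading are invented placeholders rather than data from the project.

        # Invented placeholder profiles, not measurements from the project.
        import math

        profiles = {  # mean analogue reading per drive frequency
            "organic": (2.1, 1.4, 0.9),
            "plastic": (0.6, 0.5, 0.4),
            "metal":   (3.8, 3.7, 3.6),
        }

        def classify(reading):
            # Assign the reading to the nearest stored material profile.
            return min(profiles, key=lambda m: math.dist(reading, profiles[m]))

        print(classify((2.0, 1.5, 1.0)))  # -> organic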

  2. On the optimum area-balanced filters for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Ripamonti, G.; Pullia, A.

    1996-01-01

    The minimum-noise area-balanced (A-B) filters for nuclear spectroscopy are disentangled into the sum of two optimized individual filters. The former is the unipolar finite cusp filter, used for pulse amplitude estimation but affected by baseline shift errors; the latter is a specific filter used for baseline estimation. Each of them is optimized so as to give the minimum noise in the estimation of the pulse amplitude or of its baseline level. It is shown that double optimisation produces an overall optimum filter exhibiting a total noise V^2_n equal to the sum of the noises V^2_n1 and V^2_n2 exhibited by each filter individually. This is a consequence of the orthogonality of the individual filter weight-functions in a function space where the norm is defined as sqrt(V^2_n). (orig.)

  3. Environmental Planning Strategies for Optimum Solid Waste Landfill Siting

    International Nuclear Information System (INIS)

    Sumiani, Y.; Onn, C.C.; Mohd, M.A.D.; Wan, W.Z.J.

    2009-01-01

    The use of environmental planning tools for optimum solid waste landfill siting, taking into account all environmental implications, was carried out by applying Life Cycle Analysis (LCA) to enhance the research information obtained from an initial analysis using Geographical Information Systems (GIS). The objective of this study is to identify the most eco-friendly landfill site by conducting an LCA analysis of 5 potential GIS-generated sites, which incorporated eleven important criteria related to social, environmental, and economic factors. The LCA analysis utilized the daily distance covered by collection trucks among the 5 selected landfill sites to generate inventory data on total energy usage for each landfill site. The planning and selection of the potential sites were facilitated by conducting an environmental impact analysis upon the inventory data, which identified the site with the least environmental impact. (author)

  4. Optimum fuel allocation in parallel steam generator systems

    International Nuclear Information System (INIS)

    Bollettini, U.; Cangioli, E.; Cerri, G.; Rome Univ. 'La Sapienza'; Trento Univ.

    1991-01-01

    An optimization procedure was developed to allocate fuels to parallel steam generators. The procedure takes into account the level of performance deterioration connected with the loading history (fossil fuel allocation and maintenance) of each steam generator. The optimization objective function is the system hourly cost, the overall steam demand being satisfied. Costs are due to fuel and electric power supply as well as to plant depreciation and maintenance. In order to easily update the state of each steam generator, particular care was taken in the general formulation of the steam production function by adopting a special efficiency-load curve description based on a deterioration scaling parameter. The influence of the characteristic time interval length on the optimum operation result is investigated. A special implementation of the method based on minimum cost paths is suggested

  5. Optimum radars and filters for the passive sphere system

    Science.gov (United States)

    Luers, J. K.; Soltes, A.

    1971-01-01

    Studies have been conducted to determine the influence of the tracking radar and data reduction technique on the accuracy of the meteorological measurements made in the 30 to 100 kilometer altitude region by the ROBIN passive falling sphere. A survey of accuracy requirements was made of agencies interested in data from this region of the atmosphere. In light of these requirements, various types of radars were evaluated to determine the tracking system most applicable to the ROBIN, and methods were developed to compute the errors in wind and density that arise from noise errors in the radar supplied data. The effects of launch conditions on the measurements were also examined. Conclusions and recommendations have been made concerning the optimum tracking and data reduction techniques for the ROBIN falling sphere system.

  6. The optimum functionalization of carbon nanotube/ferritin composites

    International Nuclear Information System (INIS)

    Lee, Ji Won; Shin, Kwang Min; Kim, Seon Jeong; Lynam, Carol; Spinks, Geoffrey M; Wallace, Gordon G

    2008-01-01

    We fabricated a covalently linked composite composed of functionalized single-walled carbon nanotubes (f-SWNT) and ferritin protein as nanoparticles. The various f-SWNTs were prepared by acid treatment of purified SWNT for different functionalization times (30, 60, 120 and 180 min), and ferritin was immobilized on each f-SWNT by covalent immobilization. The specific capacitance of the f-SWNT and the electrochemical activity of the f-SWNT/ferritin composites showed a Gaussian distribution. From the electrochemical analysis, the ferritin composite with SWNT functionalized for 60 min showed higher capacitance and electrochemical activity than the other f-SWNT/ferritin composites. This result suggests that the optimum value for the best electrochemical performance of f-SWNT/ferritin composites was found, for a potential bioapplication

  7. Optimum selection of an energy resource using fuzzy logic

    International Nuclear Information System (INIS)

    Abouelnaga, Ayah E.; Metwally, Abdelmohsen; Nagy, Mohammad E.; Agamy, Saeed

    2009-01-01

    Optimum selection of an energy resource is a vital issue in developed countries. Considering energy resources as alternatives (nuclear, hydroelectric, gas/oil, and solar) and the factors upon which the proper decision will be taken as attributes (economics, availability, environmental impact, and proliferation), one can use the multi-attribute utility theory (MAUT) to optimize the selection process. Recently, fuzzy logic has been extensively applied to the MAUT as it expresses the linguistic appraisal for all attributes in a wide and reliable manner. The rise in oil prices and the increased concern about environmental protection from CO2 emissions have promoted the attention to the use of nuclear power as a viable energy source for power generation. For Egypt, as a case study, the nuclear option is found to be an appropriate choice. Following the introduction of innovative designs of nuclear power plants, improvements in proliferation resistance, environmental impacts, and economics will enhance the selection of the nuclear option.
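
    The additive MAUT scoring underneath such a selection is compact enough to sketch. In the Python fragment below, the weights and per-attribute utilities are invented placeholders; the paper derives them from fuzzy linguistic appraisals rather than fixing them directly.

        # Weights and scores are invented placeholders, not the paper's data.
        weights = {"economics": 0.35, "availability": 0.25,
                   "environment": 0.25, "proliferation": 0.15}
        scores = {
            "nuclear": {"economics": 0.8, "availability": 0.9,
                        "environment": 0.7, "proliferation": 0.4},
            "hydro":   {"economics": 0.6, "availability": 0.5,
                        "environment": 0.9, "proliferation": 1.0},
            "gas/oil": {"economics": 0.7, "availability": 0.6,
                        "environment": 0.3, "proliferation": 1.0},
            "solar":   {"economics": 0.4, "availability": 0.5,
                        "environment": 1.0, "proliferation": 1.0},
        }

        def utility(alt):
            # Additive multi-attribute utility: weighted sum in [0, 1].
            return sum(weights[a] * scores[alt][a] for a in weights)

        print(max(scores, key=utility))  # -> nuclear (with these numbers)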

  8. The optimum choice of gate width for neutron coincidence counting

    Energy Technology Data Exchange (ETDEWEB)

    Croft, S., E-mail: crofts@ornl.gov [Safeguards and Security Technology (SST), Global Nuclear Security Technology Divisions, PO Box 2008, Building 5700, MS-6166, Oak Ridge, TN 37831-6166 (United States); Henzlova, D.; Favalli, A.; Hauck, D.K.; Santi, P.A. [Safeguards Science and Technology Group (NEN-1), Nuclear Engineering and Nonproliferation Division, MS-E540, Los Alamos, NM 87545 (United States)

    2014-11-11

    In the measurement field of international nuclear safeguards, passive neutron coincidence counting is used to quantify the spontaneous fission rate of certain special nuclear materials. The shift register autocorrelation analysis method is the most commonly used approach. However, the Feynman-Y technique, which is more commonly applied in reactor noise analysis, provides an alternative means to extract the correlation information from a pulse train. In this work we consider how to select the optimum gate width for each of these two time-correlation analysis techniques. The optimum is considered to be that which gives the lowest fractional precision on the net doublets rate. Our theoretical approach is approximate but is instructional in terms of revealing the key functional dependence. We show that in both cases the same performance figure of merit applies so that common design criteria apply to the neutron detector head. Our prediction is that near optimal results, suitable for most practical applications, can be obtained from both techniques using a common gate width setting. The estimated precision is also comparable in the two cases. The theoretical expressions are tested experimentally using 252Cf spontaneous fission sources measured in two thermal well counters representative of the type in common use by international inspectorates. Fast accidental sampling was the favored method of acquiring the Feynman-Y data. Our experimental study confirmed the basic functional dependences predicted, although experimental results, when available, are preferred. With an appropriate gate setting, Feynman-Y analysis provides an alternative to shift register analysis for safeguards applications, which is opening up new avenues of data collection and data reduction to explore.
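
    The Feynman-Y statistic itself is straightforward to compute from a time-stamped pulse train, as the Python sketch below does for a few gate widths; an uncorrelated (Poisson) test train should give Y near zero, whereas fission chains push Y above zero. The data and gate settings are illustrative only, and dead-time and detector effects are ignored.

        import numpy as np

        def feynman_y(pulse_times, gate_width, t_max):
            # Contiguous-gate counting: Y = variance/mean - 1 of the counts
            # per gate (excess over Poisson; fission chains give Y > 0).
            edges = np.arange(0.0, t_max + gate_width, gate_width)
            counts, _ = np.histogram(pulse_times, bins=edges)
            m = counts.mean()
            return counts.var() / m - 1.0 if m > 0 else 0.0

        # Sanity check on an uncorrelated (Poisson) train: Y stays near 0.
        rng = np.random.default_rng(2)
        t_max = 100.0  # seconds, illustrative
        times = np.sort(rng.uniform(0.0, t_max, size=50000))
        for gate in (1e-3, 1e-2, 1e-1):
            print(gate, round(feynman_y(times, gate, t_max), 3))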

  9. Defining poor and optimum performance in an IVF programme.

    Science.gov (United States)

    Castilla, Jose A; Hernandez, Juana; Cabello, Yolanda; Lafuente, Alejandro; Pajuelo, Nuria; Marqueta, Javier; Coroleu, Buenaventura

    2008-01-01

    At present there is considerable interest in healthcare administration, among professionals and among the general public concerning the quality of programmes of assisted reproduction. There exist various methods for comparing and analysing the results of clinical activity, with graphical methods being the most commonly used for this purpose. As yet, there is no general consensus as to how the poor performance (PP) or optimum performance (OP) of assisted reproductive technologies should be defined. Data from the IVF/ICSI register of the Spanish Fertility Society were used to compare and analyse different definitions of PP or OP. The primary variable best reflecting the quality of an IVF/ICSI programme was taken to be the percentage of singleton births per IVF/ICSI cycle initiated. Of the 75 infertility clinics that took part in the SEF-2003 survey, data on births were provided by 58. A total of 25 462 cycles were analysed. The following graphical classification methods were used: ranking of the proportion of singleton births per cycles started in each centre (league table), Shewhart control charts, funnel plots, best and worst-case scenarios and state of the art methods. The clinics classified as producing PP or OP varied considerably depending on the classification method used. Only three were rated as providing 'PP' or 'OP' by all methods, unanimously. Another four clinics were classified as 'poor' or 'optimum' by all the methods except one. On interpreting the results derived from IVF/ICSI centres, it is essential to take into account the characteristics of the method used for this purpose.

  10. Optimum design of cogeneration system for nuclear seawater desalination - 15272

    International Nuclear Information System (INIS)

    Jung, Y.H.; Jeong, Y.H.

    2015-01-01

    A nuclear desalination process, which uses the energy released by nuclear fission, has less environmental impact and is generally cost-competitive with a fossil-fuel desalination process. A reference cogeneration system focused on in this study is the APR-1400 coupled with a MED (multi-effect distillation) process using the thermal vapor compression (TVC) technology. The thermal condition of the heat source is the most crucial factor that determines the desalination performance, i.e. energy consumption or freshwater production, of the MED-TVC process. The MED-TVC process operating at a higher motive steam pressure clearly shows a higher desalination performance. However, this increased performance does not necessarily translate to an advantage over processes operated at lower motive steam pressures. For instance, a higher motive steam pressure will increase the heat cost resulting from larger electricity generation loss, and thus may make this process unfavorable from an economic point of view. Therefore, there exists an optimum design point in the coupling configuration that makes the nuclear cogeneration system the most economical. This study is mainly aimed at investigating this optimum coupling design point of the reference nuclear cogeneration system using corresponding analysis tools. The following tools are used: MEE developed by the MEDRC for desalination performance analysis of the MED-TVC process, DE-TOP and DEEP developed by the IAEA for modeling of coupling configuration and economic evaluation of the nuclear cogeneration system, respectively. The results indicate that steam extraction from the MS exhaust and condensate return to HP FWHTR 5 is the most economical coupling design

  11. Optimum Condition for Plutonium Electrodeposition Process in Radiochemistry and Environment Laboratory, Nuclear Malaysia

    International Nuclear Information System (INIS)

    Yii, Mei-Wo; Abdullah Siddiqi Ismail

    2014-01-01

    Determination of the concentrations of alpha-emitting plutonium radionuclides such as Pu-238, Pu-239 and Pu-240 in a sample requires extensive radiochemical purification to separate them from other interfering alpha emitters. These purified isotopes are then electrodeposited onto a stainless steel disc and quantified using an alpha spectrometry counter. In the Radiochemistry and Environment Laboratory (RAS), Nuclear Malaysia, the quantification is done by comparing these isotopes with the recovery of a known amount of plutonium tracer, Pu-242, added to the sample prior to analysis. This study was conducted to find the optimum conditions for the electrolysis process used at RAS. Four variable parameters that may affect the percentage recovery of the tracer, namely the current, the cathode-to-anode distance, the pH, and the electrolysis duration, were identified and studied. The study was carried out using a Pu-242 standard solution, and the deposition disc was counted using a Zinc Sulphide (silver) counter. The outcomes suggest that the optimum conditions for reducing plutonium ions are a current of 1-1.1 ampere, an electrode distance of 3-5 mm, a pH of 2.2-2.5, and a minimum electrolysis duration of 2 hours. (author)

  12. Multidisciplinary Aerodynamic Design of a Rotor Blade for an Optimum Rotor Speed Helicopter

    Directory of Open Access Journals (Sweden)

    Jiayi Xie

    2017-06-01

    The aerodynamic design of rotor blades is challenging, and is crucial for the development of helicopter technology. Previous aerodynamic optimizations that focused only on limited design points found it difficult to balance flight performance across the entire flight envelope. This study develops a global optimum envelope (GOE) method for determining blade parameters—blade twist, taper ratio, tip sweep—for optimum rotor speed helicopters (ORS-helicopters), balancing performance improvements in hover and at various freestream velocities. The GOE method implements aerodynamic blade design by a bi-level optimization, composed of a global optimization step and a secondary optimization step. Power loss as a measure of rotor performance is chosen as the objective function, referred to as direct power loss (DPL) in this study. A rotorcraft comprehensive code for trim simulation with a prescribed wake method is developed. With the application of the GOE method, a DPL reduction of as high as 16.7% can be achieved in hover, and 24% at high freestream velocity.

  13. Optimum analysis of pavement maintenance using multi-objective genetic algorithms

    Directory of Open Access Journals (Sweden)

    Amr A. Elhadidy

    2015-04-01

    Full Text Available Road network expansion in Egypt is considered a vital issue for the development of the country, alongside upgrading existing road networks to increase safety and efficiency. A pavement management system (PMS) is a set of tools or methods that assist decision makers in finding optimum strategies for providing and maintaining pavements in a serviceable condition over a given period of time. A multi-objective optimization problem for pavement maintenance and rehabilitation strategies at the network level is discussed in this paper. A two-objective optimization model considers minimum action costs and maximum condition of the road network. In the proposed approach, Markov-chain models are used to predict pavement performance and to calculate the expected decline over different periods of time. A genetic-algorithm-based procedure is developed for solving the multi-objective optimization problem. The model searches for the optimum maintenance actions, at the appropriate time, to be implemented on the appropriate pavement. Based on the computed results, the Pareto optimal solutions of the two objective functions are obtained. From the optimal solutions, represented by cost and condition, a decision maker can easily obtain maintenance and rehabilitation plans with minimum action costs and maximum condition. The developed model has been implemented on a network of roads and showed its ability to derive the optimal solution.
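
    The core mechanics described above are easy to sketch. The toy Python below (the transition probabilities, action costs and plan encoding are invented for illustration) shows a Markov-chain deterioration model and Pareto filtering over candidate maintenance plans; the crossover and mutation machinery of a full genetic algorithm is omitted for brevity.

    ```python
    import random

    random.seed(42)

    # Hypothetical 5-state Markov matrix: P[i][j] is the probability that a
    # section in condition state i (0 = best) is in state j after one period.
    P = [
        [0.80, 0.20, 0.00, 0.00, 0.00],
        [0.00, 0.75, 0.25, 0.00, 0.00],
        [0.00, 0.00, 0.70, 0.30, 0.00],
        [0.00, 0.00, 0.00, 0.65, 0.35],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ]
    ACTION_COST = {0: 0.0, 1: 10.0, 2: 40.0}  # do nothing / maintain / rehab

    def evolve(state, action):
        """Apply a maintenance action, then one period of deterioration."""
        if action == 1:
            state = max(0, state - 1)        # maintenance improves one state
        elif action == 2:
            state = 0                        # rehabilitation restores to as-new
        r, acc = random.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if r <= acc:
                return nxt
        return state

    def evaluate(plan, sections):
        """Two objectives to minimise: total cost and mean final condition."""
        cost, cond = 0.0, 0.0
        for start, actions in zip(sections, plan):
            s = start
            for a in actions:
                cost += ACTION_COST[a]
                s = evolve(s, a)
            cond += s
        return cost, cond / len(sections)

    def dominates(f, g):
        """Pareto dominance for two minimisation objectives."""
        return all(x <= y for x, y in zip(f, g)) and f != g

    sections = [random.randint(0, 3) for _ in range(20)]   # initial conditions
    plans = [[[random.randint(0, 2) for _ in range(5)] for _ in sections]
             for _ in range(60)]                           # candidate plans

    front = []                                             # Pareto archive
    for plan in plans:
        f = evaluate(plan, sections)
        if not any(dominates(g, f) for _, g in front):
            front = [(p, g) for p, g in front if not dominates(f, g)]
            front.append((plan, f))

    for _, (cost, cond) in sorted(front, key=lambda t: t[1]):
        print(f"cost={cost:7.1f}  mean condition={cond:4.2f}")
    ```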

  14. A study on optimum conditions for reducing Polonium-210 ion in electrolysis process

    International Nuclear Information System (INIS)

    Yii Mei Wo

    2006-01-01

    Polonium-210 is one of the most important radionuclides to study when investigating radioactive contaminants in marine life. It is usually self-deposited onto a pure silver foil and counted using an alpha spectrometry system. However, pure silver foil is costly. A study was therefore conducted on the suitability of depositing polonium-210 onto a stainless steel disc by electrolysis, and on the optimum conditions for such a process. This was carried out using a pure polonium-210 standard solution, and the prepared disc was counted using a zinc sulphide counter. The results show that reduction of the polonium ion onto a stainless steel disc is feasible, but the efficiency of the process is only around 70 percent. The studies also show that, at a constant current of 1.1 ampere and a cathode-to-anode distance of 8 mm, the optimum conditions for reducing the polonium ion are pH 2.2-2.3 with an electrolysis time of 5 hours. (Author)

  15. Subgrouping Automata: automatic sequence subgrouping using phylogenetic tree-based optimum subgrouping algorithm.

    Science.gov (United States)

    Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee

    2014-02-01

    Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' (SA) was constructed. First, a tree analysis module analyzes the structure of the phylogenetic tree and calculates all possible subgroups at each node. A sequence similarity analysis module calculates the average sequence similarity for all subgroups at each node. A representative sequence generation module finds a representative sequence for each subgroup using profile analysis and self-scoring. With average sequence similarities calculated for all nodes, 'Subgrouping Automata' searches for the node showing the statistically maximum increase in sequence similarity using Student's t-value. The node showing the maximum t-value, i.e. the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents both under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
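
    As a rough sketch of this threshold search, the Python below computes a Welch-type t statistic between the average-similarity samples of adjacent tree cut depths and picks the pair with the largest jump. The similarity values and the exact form of the statistic are assumptions for illustration; SA's own implementation may differ.

    ```python
    import math

    def t_value(a, b):
        """Welch's t statistic between two samples of average similarities."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

    # Hypothetical similarity samples at successive cut depths of the tree:
    levels = {
        1: [0.42, 0.45, 0.40, 0.44],
        2: [0.48, 0.50, 0.47, 0.51],
        3: [0.71, 0.74, 0.69, 0.73],   # big jump: likely subgrouping point
        4: [0.75, 0.76, 0.74, 0.77],
    }
    depths = list(levels)
    best = max(zip(depths[:-1], depths[1:]),
               key=lambda pair: t_value(levels[pair[0]], levels[pair[1]]))
    print("optimum subgrouping between cut depths", best)   # (2, 3)
    ```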

  16. On convergence of differential evolution over a class of continuous functions with unique global optimum.

    Science.gov (United States)

    Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik

    2012-02-01

    Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
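
    For readers who want to experiment, a compact Python rendition of the analyzed DE/rand/1/bin scheme is given below, run on a sphere function, which has the unique global optimum the convergence analysis assumes. The parameter values (population size, F, CR) are typical defaults, not values taken from the paper.

    ```python
    import random

    def de_rand_1_bin(f, bounds, np_=30, F=0.8, CR=0.9, gens=200, seed=1):
        """Canonical DE/rand/1/bin minimising f over box constraints."""
        rng = random.Random(seed)
        d = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
        fit = [f(x) for x in pop]
        for _ in range(gens):
            for i in range(np_):
                r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
                # DE/rand/1 mutation: base vector plus scaled difference
                v = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k])
                     for k in range(d)]
                # binomial crossover, forcing at least one mutant gene (jrand)
                jrand = rng.randrange(d)
                u = [v[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(d)]
                u = [min(max(u[k], bounds[k][0]), bounds[k][1])
                     for k in range(d)]
                fu = f(u)
                if fu <= fit[i]:              # greedy one-to-one selection
                    pop[i], fit[i] = u, fu
        best = min(range(np_), key=fit.__getitem__)
        return pop[best], fit[best]

    # Unimodal test function with a unique global optimum at the origin:
    sphere = lambda x: sum(v * v for v in x)
    x, fx = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 5)
    print(fx)   # should be close to 0
    ```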

  17. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
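
    The multialternative procedure generalizes Wald's classical sequential probability ratio test. The Python sketch below shows only that two-hypothesis building block (a Gaussian mean shift with standard Wald thresholds), not the paper's mean-weighted likelihood-ratio construction; all numerical values are illustrative.

    ```python
    import math
    import random

    mu0, mu1, sigma = 0.0, 1.0, 1.0
    alpha = beta = 0.01                  # target error probabilities
    A = math.log((1 - beta) / alpha)     # accept H1 when LLR >= A
    B = math.log(beta / (1 - alpha))     # accept H0 when LLR <= B

    def sprt(sample_stream):
        """Observe samples one at a time until a threshold is crossed."""
        llr, n = 0.0, 0
        for x in sample_stream:
            n += 1
            # log-likelihood ratio increment for N(mu1, s) vs N(mu0, s)
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= A:
                return "H1", n
            if llr <= B:
                return "H0", n
        return "undecided", n

    random.seed(3)
    stream = (random.gauss(mu1, sigma) for _ in range(10_000))
    print(sprt(stream))   # typically decides for H1 after a handful of samples
    ```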

  18. How to create successful Open Hardware projects — About White Rabbits and open fields

    International Nuclear Information System (INIS)

    Bij, E van der; Arruat, M; Cattin, M; Daniluk, G; Cobas, J D Gonzalez; Gousiou, E; Lewis, J; Lipinski, M M; Serrano, J; Stana, T; Voumard, N; Wlostowski, T

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  19. Hardware/software co-design and optimization for cyberphysical integration in digital microfluidic biochips

    CERN Document Server

    Luo, Yan; Ho, Tsung-Yi

    2015-01-01

    This book describes a comprehensive framework for hardware/software co-design, optimization, and use of robust, low-cost, and cyberphysical digital microfluidic systems. Readers with a background in electronic design automation will find this book to be a valuable reference for leveraging conventional VLSI CAD techniques for emerging technologies, e.g., biochips or bioMEMS. Readers from the circuit/system design community will benefit from methods presented to extend design and testing techniques from microelectronics to mixed-technology microsystems. For readers from the microfluidics domain,

  20. How to create successful Open Hardware projects - About White Rabbits and open fields

    CERN Document Server

    van der Bij, E; Lewis, J; Stana, T; Wlostowski, T; Gousiou, E; Serrano, J; Arruat, M; Lipinski, M M; Daniluk, G; Voumard, N; Cattin, M

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  1. Finding Sliesthorp?

    DEFF Research Database (Denmark)

    Dobat, Andres S.

    2016-01-01

    In 2003, a hitherto unknown Viking age settlement was discovered at Füsing in Northern Germany close to Hedeby/Schleswig, the largest of the early Scandinavian towns. Finds and building features suggest a high status residence and a seat of some chiefly elite that flourished from around 700 to th...... and the transformation of socio‐political structures in Northern Europe as it transitioned from prehistory into the middle Ages....

  2. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Full Text Available Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space-based ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
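
    The per-primitive test at the heart of such a method is the standard object-space ray/triangle intersection. Below is a CPU-side Python transcription of one common formulation, Möller-Trumbore, which may differ from the authors' exact shader implementation; in the paper the test runs in a geometry shader on the GPU.

    ```python
    def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
        """Möller-Trumbore ray/triangle test: hit distance t, or None."""
        sub = lambda a, b: [a[i] - b[i] for i in range(3)]
        dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
        cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0]]
        e1, e2 = sub(v1, v0), sub(v2, v0)
        h = cross(d, e2)
        det = dot(e1, h)
        if abs(det) < eps:               # ray parallel to triangle plane
            return None
        f = 1.0 / det
        s = sub(orig, v0)
        u = f * dot(s, h)                # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        q = cross(s, e1)
        v = f * dot(d, q)                # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = f * dot(e2, q)               # distance along the ray
        return t if t > eps else None

    # A picking ray through the cursor hits this triangle at t = 1.0:
    print(ray_triangle([0, 0, -1], [0, 0, 1],
                       [-1, -1, 0], [1, -1, 0], [0, 1, 0]))
    ```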

  3. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.

    2017-12-13

    MTCAM (Memristor Ternary Content Addressable Memory) is a special-purpose storage medium in which data can be retrieved based on the stored content. Using memristors as the main storage element offers the potential for higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-transistor-2-memristor (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The design is modeled in VHDL and prototyped on a Xilinx Virtex® FPGA.
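
    Functionally, a ternary CAM returns the addresses of all stored words matching a search key, where each stored bit may be 0, 1 or 'X' (don't care). The short Python sketch below captures only that matching semantics; the 2T2M cell array realizes it in parallel in hardware, which the sketch does not model.

    ```python
    # Hypothetical stored words; 'X' marks a don't-care bit.
    WORDS = ["10X1", "0XX0", "1101"]

    def tcam_search(key):
        """Return the indices of all stored words matching the search key."""
        return [i for i, w in enumerate(WORDS)
                if all(c == "X" or c == k for c, k in zip(w, key))]

    print(tcam_search("1011"))   # [0]: '10X1' matches, third bit is don't-care
    print(tcam_search("0110"))   # [1]: '0XX0' matches
    ```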

  4. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  5. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; MarIn, A Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  6. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of unknown downgrade attacks in real time. Our experimental results indicate that this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, dropping to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
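
    A minimal sketch of such a detection pipeline is given below, assuming per-handshake hardware event counts have already been collected (for example with a tool such as Linux perf). The event names, the synthetic data and the choice of a random-forest classifier are all illustrative assumptions; the paper's actual feature set and model are not reproduced here.

    ```python
    import random

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical hardware events recorded per handshake:
    EVENTS = ["instructions", "branch-misses", "cache-misses", "L1-dcache-loads"]

    def fake_sample(proto):
        """Synthetic counter vector whose distribution shifts with protocol."""
        base = {"ssl3": 1.0, "tls1_2": 1.4}[proto]
        return [random.gauss(base * 1e6 * (i + 1), 5e4)
                for i in range(len(EVENTS))]

    random.seed(0)
    X, y = [], []
    for proto in ("ssl3", "tls1_2"):
        for _ in range(500):
            X.append(fake_sample(proto))
            y.append(proto)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
    # A downgrade attack would surface as handshakes classified as an
    # old protocol while the peer claims a modern one.
    ```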

  7. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests, including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model, give no evidence of correlation in the pseudo-random sequences. Because the new random number generator suppresses the correlation between neighboring cells that is usually observed in cellular automaton implementations, it is applicable to extended-time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long-time simulations of spatially resolved molecular evolution.
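
    The flavor of the approach can be imitated in software: each cell updates from its own state and from distant partner cells rather than its nearest neighbours, which is the role the "smart connections" play. The wiring, word width and mixing function below are invented for illustration and are not the generator described in the paper.

    ```python
    N = 64                       # number of hardware cells
    state = [(i * 2654435761) & 0xFFFF for i in range(N)]   # per-cell seeds

    def step():
        """One synchronous update of all cells; each cell emits one bit."""
        global state
        nxt = []
        for i in range(N):
            a = state[i]
            b = state[(i + 17) % N]   # distant partner, not nearest neighbour
            c = state[(i + 31) % N]
            x = (a ^ ((b << 3) & 0xFFFF) ^ (c >> 2)) & 0xFFFF
            x = (x * 0x6D2B + 0x3C6F) & 0xFFFF   # small per-cell mixing step
            nxt.append(x)
        state = nxt
        return [x & 1 for x in state]            # one pseudo-random bit/cell

    bits = [b for _ in range(4) for b in step()]
    print("".join(map(str, bits[:64])))
    ```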

  8. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth with...

  9. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Interpolation can often be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
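
    The interpolation that graphics hardware performs in its texture units is, in the simplest case, bilinear. The small Python function below shows the arithmetic for sampling an image at a sub-pixel position, as needed when evaluating half- or quarter-pixel motion vector candidates; it is a CPU-side reference for what the GPU does in hardware.

    ```python
    def bilinear(img, x, y):
        """Sample image `img` (a list of rows) at sub-pixel position (x, y)."""
        x0, y0 = int(x), int(y)
        x1 = min(x0 + 1, len(img[0]) - 1)
        y1 = min(y0 + 1, len(img) - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]   # blend along x
        bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
        return (1 - fy) * top + fy * bot                   # blend along y

    # Half-pixel motion: evaluate a candidate block offset of (0.5, 0.5).
    img = [[10, 20], [30, 40]]
    print(bilinear(img, 0.5, 0.5))   # 25.0, the average of the four neighbours
    ```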

  10. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  11. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform-independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, allows remote access and execution of selected system commands and tasks, supports execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog-to-digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions.

  12. Development of Hardware and Software for Automated Ultrasonic Testing

    International Nuclear Information System (INIS)

    Choi, Sung Nam; Lee, Hee Jong; Yang, Seung Ok

    2012-01-01

    Nondestructive testing (NDT) during the construction and operation of NPPs plays an important role in confirming the integrity of the plants. In particular, automated ultrasonic testing (AUT) is one of the primary nondestructive examination methods for in-service inspection of the welded parts of major components in NPPs. AUT is a reliable nondestructive testing method because its data are saved and can be reviewed with other examiners. Korea Hydro and Nuclear Power-Central Research Institute (KHNP-CRI) has developed an automated ultrasonic testing (AUT) system based on a high-speed pulser-receiver. In combination with the designed software and hardware architecture, this new system permits user configurations for a wide range of user-specific applications through fully automated inspections using compact portable systems with up to eight channels. This paper gives an overview of the hardware (H/W) and software (S/W) of the AUT system for inspecting welds in NPPs.

  13. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.; Naous, Rawan; Masmoudi, M.

    2017-01-01

    MTCAM (Memristor Ternary Content Addressable Memory) is a special-purpose storage medium in which data can be retrieved based on the stored content. Using memristors as the main storage element offers the potential for higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-transistor-2-memristor (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The design is modeled in VHDL and prototyped on a Xilinx Virtex® FPGA.

  14. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to further performance increases for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific to our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...
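
    In software terms, the CSP primitive such a NoC provides is a blocking, message-based channel between cores. The Python sketch below mimics that programming model with two threads and a one-place queue; it is an analogy only, since the actual hardware exposes dedicated on-chip channel buffers rather than this API.

    ```python
    import queue
    import threading

    chan: "queue.Queue[int]" = queue.Queue(maxsize=1)   # one-place channel

    def producer():
        for i in range(5):
            chan.put(i)      # blocks until the consumer takes the message
        chan.put(-1)         # end-of-stream marker

    def consumer():
        while (v := chan.get()) != -1:
            print("received", v)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    ```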

  15. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  16. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the specific processor in the FPGA on-the-fly, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up can be obtained by the proposed acceleration framework on systems-on-chip where reconfigurable fabric is placed next to the CPUs. The speed-up is due both to the intrinsic acceleration in the application-specific processors and to the increased parallelism.

  17. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
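
    The benchmark's underlying model, a binary neural associative memory, is compact enough to state directly. The Python below is a non-spiking (Willshaw-style) reference version with clipped Hebbian storage and threshold recall, under assumed sizes and sparsity; the paper's spiking implementation of this model is what actually runs on the hardware.

    ```python
    import random

    N, M = 32, 32                        # address / content dimensions
    W = [[0] * M for _ in range(N)]      # binary weight matrix

    def store(x, y):
        """Clipped Hebbian learning: W[i][j] = 1 where x[i], y[j] co-fire."""
        for i in range(N):
            if x[i]:
                for j in range(M):
                    if y[j]:
                        W[i][j] = 1

    def recall(x, theta):
        """Retrieve y from a cue x by thresholding the input sums."""
        return [1 if sum(W[i][j] for i in range(N) if x[i]) >= theta else 0
                for j in range(M)]

    def sparse(n, k):
        """Random binary vector of length n with exactly k active units."""
        v = [0] * n
        for i in random.sample(range(n), k):
            v[i] = 1
        return v

    random.seed(0)
    x, y = sparse(N, 4), sparse(M, 4)
    store(x, y)
    print(recall(x, theta=4) == y)   # True: perfect recall at threshold k
    ```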

  18. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing the accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing the bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or the gating of portions of the processor datapath.
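
    The bit-analysis step can be imitated in software: check whether a value survives a round trip through a narrower datatype. The Python sketch below does this for floating-point widths; it is only an analogy for what such a counter might measure, not the patented circuit.

    ```python
    import struct

    def fits(value: float, fmt: str) -> bool:
        """True if `value` is exactly representable in struct format `fmt`."""
        try:
            return struct.unpack(fmt, struct.pack(fmt, value))[0] == value
        except (OverflowError, struct.error):
            return False   # out of range for the narrower type

    def min_float_type(value: float) -> str:
        """Narrowest IEEE float width that represents `value` exactly."""
        for fmt, name in (("e", "fp16"), ("f", "fp32"), ("d", "fp64")):
            if fits(value, fmt):
                return name
        return "fp64"

    print(min_float_type(0.5))   # fp16: exactly representable in half
    print(min_float_type(0.1))   # fp64: 0.1 is inexact in fp16 and fp32
    ```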

  19. Design Tools for Reconfigurable Hardware in Orbit (RHinO)

    Science.gov (United States)

    French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian

    2004-01-01

    The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable gate array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space-effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions to the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of reconfigurable hardware in orbit, via an integrated design tool suite aiming to reduce the risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.

  20. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.