WorldWideScience

Sample records for finding optimum hardware

  1. MORPION: a fast hardware processor for straight line finding in MWPC

    International Nuclear Information System (INIS)

    Mur, M.

    1980-02-01

A fast hardware processor for straight line finding in MWPC has been built in Saclay and successfully operated in the NA3 experiment at CERN. We give the motivation for building this processor and describe the hardware implementation of the line-finding algorithm. Finally, its use and performance in NA3 are described.

  2. Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design

    Science.gov (United States)

    Wu, K. Chauncey; Cerro, Jeffrey A.

    2010-01-01

    During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
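
    The bookkeeping behind a non-optimum factor is simple: divide the measured as-built weight by the weight predicted from an idealized structural model. A minimal sketch follows; the panel weights below are invented placeholders, not values from the paper.

    ```python
    # Minimal sketch of the non-optimum-factor computation described above.
    # The predicted/measured weights are illustrative, not data from the paper.

    panels = [
        # (predicted weight from idealized model [kg], measured as-built weight [kg])
        (96.0, 118.0),
        (102.0, 127.0),
        (88.0, 109.0),
    ]

    factors = [measured / predicted for predicted, measured in panels]
    for (predicted, measured), nof in zip(panels, factors):
        print(f"predicted {predicted:6.1f} kg  measured {measured:6.1f} kg  NOF {nof:.2f}")

    average_nof = sum(factors) / len(factors)
    print(f"average non-optimum factor: {average_nof:.2f}")
    ```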

  3. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

Background: In risk-benefit assessment of food and nutrients, several studies so far have focused on comparison of two scenarios to weigh the health effect against each other. One obvious next step is finding the optimum scenario that provides maximum net health gains. Aim: This paper aims to show a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find an optimum scenario that provides maximum net health gain in health risk-benefit assessment of dietary exposure as expressed by serum vitamin D level. With regard to the vitamin D assessment, a considerable health gain is observed due to the reduction of risk of other cause mortality, fall and hip fractures when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods, especially with regards...
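
    The multiple-scenario idea reduces to evaluating a net health effect for each candidate scenario and keeping the maximum. Below is a toy sketch of that search; the dose-response curves and DALY numbers are invented for illustration and are not the QALIBRA model.

    ```python
    # Toy sketch of a multiple-scenario search: score each candidate scenario by
    # net DALYs averted (benefits minus risks) and keep the maximum. All curves
    # and constants here are hypothetical, not the paper's vitamin D model.
    import numpy as np

    serum_levels = np.linspace(20, 150, 14)   # candidate serum levels, nmol/L

    def benefit(s):   # DALYs averted (mortality, falls, fractures): hypothetical
        return 12000 * (1 - np.exp(-s / 40.0))

    def risk(s):      # DALYs incurred at high exposure: hypothetical
        return 0.8 * np.maximum(s - 75, 0) ** 2

    net = benefit(serum_levels) - risk(serum_levels)
    best = serum_levels[np.argmax(net)]
    print(f"optimum scenario: serum level ~ {best:.0f} nmol/L, net gain {net.max():.0f} DALYs")
    ```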

  4. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction while using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels; with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
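
    The paper's combiner is analog RF hardware; the same eigen-mode idea can be sketched in software. Under a white-noise assumption and with invented Gaussian sensitivity profiles, projecting the channels onto the leading singular vectors of the sensitivity matrix shows how a few "virtual" channels can retain almost all of the optimal SNR.

    ```python
    # Toy sketch of SNR-preserving channel compression in the Eigencoil spirit:
    # project N physical channels onto the leading eigenmodes of the coil
    # sensitivity matrix. White noise and Gaussian profiles are assumptions
    # made for illustration; they are not the paper's measured coil data.
    import numpy as np

    n_ch, n_pix = 8, 500
    x = np.linspace(-1, 1, n_pix)
    centers = np.linspace(-1, 1, n_ch)
    S = np.exp(-(x[None, :] - centers[:, None]) ** 2 / 0.3)   # n_ch x n_pix sensitivities

    # With unit white noise, the optimal per-voxel SNR is the column norm of S.
    snr_full = np.linalg.norm(S, axis=0)

    # Compress to k virtual channels via the leading left singular vectors of S.
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    for k in (4, 3, 2):
        Sk = U[:, :k].T @ S                 # k virtual channels
        snr_k = np.linalg.norm(Sk, axis=0)
        print(f"k={k}: retains {100 * snr_k.mean() / snr_full.mean():.1f}% of mean optimal SNR")
    ```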

  5. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

Background: In risk-benefit assessment of food and nutrients, several studies so far have focused on comparison of two scenarios to weigh the health effect against each other. One obvious next step is finding the optimum scenario that provides maximum net health gains. Aim: This paper aims to show a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find an optimum scenario that provides maximum net health gain in health risk-benefit assessment of dietary exposure as expressed by serum vitamin D level. With regard to the vitamin D assessment, a considerable health gain is observed due to the reduction of risk of other cause mortality, fall and hip fractures when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods.

  6. Finding optimum airfoil shape to get maximum aerodynamic efficiency for a wind turbine

    Science.gov (United States)

    Sogukpinar, Haci; Bozkurt, Ismail

    2017-02-01

In this study, the aerodynamic performance of the S-series wind turbine airfoil S 825 is investigated to find the optimum angle of attack. Aerodynamic performance calculations are carried out with a Computational Fluid Dynamics (CFD) method using a finite-volume approximation of the Reynolds-Averaged Navier-Stokes (RANS) equations. The lift and pressure coefficients and the lift-to-drag ratio of airfoil S 825 are analyzed with the SST turbulence model, and the results are cross-checked against wind tunnel data to verify the precision of the CFD approximation. The comparison indicates that the SST turbulence model used in this study can predict the aerodynamic properties of the wind blade.
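
    Once lift and drag coefficients are tabulated against angle of attack, the optimum angle is simply the one maximizing L/D. A minimal sketch follows; the lift curve and drag polar below are invented stand-ins, not the S 825 CFD results.

    ```python
    # Minimal sketch of the optimum angle-of-attack search: tabulate lift (cl)
    # and drag (cd) against angle of attack and pick the maximum of cl/cd.
    # The polynomial polars below are hypothetical, not the paper's CFD data.
    import numpy as np

    alpha = np.arange(0.0, 16.0, 1.0)                  # angle of attack, degrees
    cl = 0.3 + 0.11 * alpha - 0.002 * alpha ** 2       # hypothetical lift curve
    cd = 0.008 + 0.0006 * alpha ** 2                   # hypothetical drag polar

    efficiency = cl / cd                               # aerodynamic efficiency L/D
    best = alpha[np.argmax(efficiency)]
    print(f"optimum angle of attack ~ {best:.0f} deg (L/D = {efficiency.max():.1f})")
    ```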

  7. Do preliminary chest X-ray findings define the optimum role of pulmonary scintigraphy in suspected pulmonary embolism?

    International Nuclear Information System (INIS)

    Forbes, Kirsten P.N.; Reid, John H.; Murchison, John T.

    2001-01-01

AIM: To investigate if preliminary chest radiograph (CXR) findings can define the optimum role of lung scintigraphy in subjects investigated for pulmonary embolism (PE). MATERIALS AND METHODS: The CXR and scintigraphy findings from 613 consecutive subjects investigated for suspected PE were retrieved from a radiological database. Of 393 patients with abnormal CXRs, a subgroup of 238 was examined and individual radiographic abnormalities were characterized. CXR findings were related to the scintigraphy result. RESULTS: Scintigraphy was normal in 286 subjects (47%), non-diagnostic in 207 (34%) and high probability for PE in 120 (20%). In 393 subjects (64%) the preliminary CXR was abnormal and 188 (48%) of scintigrams in this group were non-diagnostic. Individual radiographic abnormalities were not associated with significantly different scintigraphic outcomes. If the preliminary CXR was normal (36%), the proportion of non-diagnostic scintigrams decreased to 9% (19 of 220 subjects) (P < 0.05). CONCLUSION: In subjects investigated for PE, an abnormal CXR increases the prevalence of non-diagnostic scintigrams. A normal pre-test CXR is more often associated with a definitive (normal or high probability) scintigram result. The chest radiograph may be useful in deciding the optimum sequence of investigations.

  8. Optimum aberration coefficients for recording high-resolution off-axis holograms in a Cs-corrected TEM

    Energy Technology Data Exchange (ETDEWEB)

    Linck, Martin, E-mail: linck@ceos-gmbh.de [CEOS GmbH, Englerstr. 28, D-69126 Heidelberg (Germany)

    2013-01-15

Amongst the impressive improvements in high-resolution electron microscopy, the Cs-corrector has also significantly enhanced the capabilities of off-axis electron holography. Recently, it has been shown that the signal above noise in the reconstructable phase can be significantly improved by combining holography and hardware aberration correction. Additionally, with a spherical aberration close to zero, the traditional optimum focus for recording high-resolution holograms ('Lichte's defocus') has become less stringent, and both defocus and spherical aberration can be selected freely within a certain range. This new degree of freedom can be used to improve the signal resolution in the holographically reconstructed object wave locally, e.g. at the atomic positions. A brute-force simulation study for an aberration-corrected 200 kV TEM is performed to determine optimum values of defocus and spherical aberration for the best possible signal to noise in the reconstructed atomic phase signals. Compared to the optimum aberrations for conventional phase contrast imaging (NCSI), which produce 'bright atoms' in the image intensity, the resulting optimum values of defocus and spherical aberration for off-axis holography enable 'black atom contrast' in the hologram. However, they can significantly enhance the local signal resolution at the atomic positions. At the same time, the benefits of hardware aberration correction for high-resolution off-axis holography are preserved. It turns out that the optimum depends on the object and its thickness and is therefore not universal. -- Highlights: ► Optimized aberration parameters for high-resolution off-axis holography. ► Simulation and analysis of noise in high-resolution off-axis holograms. ► Improving signal resolution in the holographically reconstructed phase shift. ► Comparison of 'black' and 'white' atom contrast in off-axis holograms.

  9. Optimum aberration coefficients for recording high-resolution off-axis holograms in a Cs-corrected TEM

    International Nuclear Information System (INIS)

    Linck, Martin

    2013-01-01

Amongst the impressive improvements in high-resolution electron microscopy, the Cs-corrector has also significantly enhanced the capabilities of off-axis electron holography. Recently, it has been shown that the signal above noise in the reconstructable phase can be significantly improved by combining holography and hardware aberration correction. Additionally, with a spherical aberration close to zero, the traditional optimum focus for recording high-resolution holograms (“Lichte's defocus”) has become less stringent, and both defocus and spherical aberration can be selected freely within a certain range. This new degree of freedom can be used to improve the signal resolution in the holographically reconstructed object wave locally, e.g. at the atomic positions. A brute-force simulation study for an aberration-corrected 200 kV TEM is performed to determine optimum values of defocus and spherical aberration for the best possible signal to noise in the reconstructed atomic phase signals. Compared to the optimum aberrations for conventional phase contrast imaging (NCSI), which produce “bright atoms” in the image intensity, the resulting optimum values of defocus and spherical aberration for off-axis holography enable “black atom contrast” in the hologram. However, they can significantly enhance the local signal resolution at the atomic positions. At the same time, the benefits of hardware aberration correction for high-resolution off-axis holography are preserved. It turns out that the optimum depends on the object and its thickness and is therefore not universal. -- Highlights: ► Optimized aberration parameters for high-resolution off-axis holography. ► Simulation and analysis of noise in high-resolution off-axis holograms. ► Improving signal resolution in the holographically reconstructed phase shift. ► Comparison of “black” and “white” atom contrast in off-axis holograms.

  10. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

Cieri, D.

    2016-01-01

At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the "MP7", which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach.
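
    The core of Hough-transform line finding is an accumulator array: every hit votes for all line parameters consistent with it, and peaks mark track candidates. The FPGA demonstrator realizes this with systolic or pipelined logic; the sketch below is a plain software analogue with invented hit data, binning in slope/intercept rather than the trigger's actual track parameters.

    ```python
    # Software sketch of Hough-transform straight-line finding: each hit votes
    # into a (slope m, intercept c) accumulator; the peak is the track candidate.
    # Hits are simulated here; the binning ranges are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(3)
    true_m, true_c = 0.4, 2.0
    xs = np.arange(1.0, 11.0)                                   # detector layers
    ys = true_m * xs + true_c + rng.normal(scale=0.05, size=xs.size)

    m_bins = np.linspace(-1, 1, 201)
    c_bins = np.linspace(0, 5, 251)
    acc = np.zeros((m_bins.size, c_bins.size), dtype=np.int32)

    for x, y in zip(xs, ys):
        # Every (m, c) with c = y - m*x is consistent with this hit: one vote each.
        c_of_m = y - m_bins * x
        j = np.searchsorted(c_bins, c_of_m)
        ok = (j > 0) & (j < c_bins.size)
        acc[np.arange(m_bins.size)[ok], j[ok]] += 1

    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    print(f"peak at m ~ {m_bins[i]:.2f}, c ~ {c_bins[j]:.2f} with {acc[i, j]} votes")
    ```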

  11. Optimum unambiguous discrimination of linearly independent pure states

    DEFF Research Database (Denmark)

    Pang, Shengshi; Wu, Shengjun

    2009-01-01

... be satisfied by the optimum solution in different situations. We also provide the detailed steps to find the optimum measurement strategy. The method and results we obtain are given a geometrical illustration with a numerical example. Furthermore, using these equations, we derive a formula which shows a clear...

  12. Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation

    Science.gov (United States)

    Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.

    2012-01-01

Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.

  13. Problem of determining optimum geological and technical measures

    Energy Technology Data Exchange (ETDEWEB)

    Osipov, G N; Roste, Z A; Salimzhanov, E S

    1968-01-01

This article is concerned with the mathematical simulation of oilfield operation, particularly the use of linear programming to determine optimum conditions for exploitation of a field. The basic approach is to define the field operation by a series of equations, apply boundary conditions and, through an iterative computer technique, find optimum operating conditions. Application of the method to the Tuimazy field is illustrated.
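
    A linear program of this kind has the standard form "maximize a linear objective subject to linear constraints". The hedged sketch below uses SciPy's linprog with wholly invented well rates and capacity coefficients, just to show the shape of such a formulation; it is not the article's Tuimazy model.

    ```python
    # Hedged sketch of a linear-programming formulation: choose production rates
    # for three wells to maximize total output subject to fluid-handling and
    # water-treatment capacities. All coefficients are illustrative only.
    from scipy.optimize import linprog

    # maximize q1 + q2 + q3  ->  minimize -(q1 + q2 + q3)
    c = [-1.0, -1.0, -1.0]
    A_ub = [
        [1.0, 1.0, 1.0],   # total fluid-handling capacity
        [0.2, 0.5, 0.7],   # water-cut weighted water-treatment capacity
    ]
    b_ub = [300.0, 120.0]
    bounds = [(0, 150.0)] * 3          # per-well rate limits

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("optimum rates:", res.x, " total:", -res.fun)
    ```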

  14. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  15. Proof-Carrying Hardware: Concept and Prototype Tool Flow for Online Verification

    OpenAIRE

    Drzevitzky, Stephanie; Kastens, Uwe; Platzner, Marco

    2010-01-01

    Dynamically reconfigurable hardware combines hardware performance with software-like flexibility and finds increasing use in networked systems. The capability to load hardware modules at runtime provides these systems with an unparalleled degree of adaptivity but at the same time poses new challenges for security and safety. In this paper, we elaborate on the presentation of proof carrying hardware (PCH) as a novel approach to reconfigurable system security. PCH takes ...

  16. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  17. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    International Nuclear Information System (INIS)

    Cieri, D.

    2016-01-01

At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the “MP7”, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)

  18. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, as well as their optimal allocation on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using dynamic partial reconfiguration and mixed integer programming across these stages, pipelined scheduling and efficient placement are achieved, enabling parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in resource utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph span is reduced by 4% compared to sequential execution of the graph.

  19. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  20. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain and to help both academia and industry investigate countermeasures and solutions to solve hardware security problems, we will introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted through which the future trend will be elaborated, making this survey paper a good reference for the continuing research efforts in this area.

  1. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  2. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance, under the control of genetic algorithms. EHW is a key technology for exploring new application areas requiring real-time performance and on-line adaptation. 1. Development of the EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and the processing method for the programmable EHW-LSI; (2) the method of data compression for loss-less data using EHW. 4. The first international conference on evolvable hardware was held by the authors: the Intl. Conf. on Evolvable Systems (ICES96). It was determined at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established. (NEDO)

  4. Problems in the optimum display of SPECT images

    International Nuclear Information System (INIS)

    Fielding, S.L.

    1988-01-01

The instrumentation, computer hardware and software, and the image display system are all very important in the production of diagnostically useful SPECT images. Acquisition and processing parameters are discussed which can affect the quality of SPECT images. Regular quality control of the gamma camera and computer is important to keep the artifacts due to instrumentation to a minimum. The choice of reconstruction method will depend on the statistics in the study. The paper has shown that for high count rate studies, a high pass filter can be used to enhance the reconstructions. For lower count rate studies, pre-filtering is useful and the data can be reconstructed into thicker slices to reduce the effect of image noise. Finally, the optimum display for the images must be chosen, so that the information contained in the SPECT data can be easily perceived by the clinician. (orig.)

  5. Optimum design for pipe-support allocation against seismic loading

    International Nuclear Information System (INIS)

    Hara, Fumio; Iwasaki, Akira

    1996-01-01

This paper deals with an optimum design methodology for a piping system subjected to seismic design loading, reducing its dynamic response by selecting the locations of pipe supports and thereby reducing the number of pipe supports to be used. The authors employ the Genetic Algorithm to obtain a reasonably optimum solution for the pipe support locations, support capacities and number of supports. The design condition specified by the support location, support capacity and the number of supports to be used is encoded as an integer string for each of the support allocation candidates, and many strings are prepared to express various kinds of pipe-support allocation states. For each string, the authors evaluate the seismic response of the piping system to the design seismic excitation and apply the Genetic Algorithm to select the next-generation candidates of support allocation, improving the seismic design performance specified by a weighted linear combination of seismic response magnitude, support capacity and the number of supports needed. Continuing this selection process, they find a reasonably optimum solution to the seismic design problem. They examine the feasibility of this optimum design method by investigating the optimum solution for 5, 7 and 10 degree-of-freedom models of piping systems, and find that the method can offer a theoretically feasible solution to the problem. They are thus liberated from the severe uncertainty of the damping value when the pipe support guarantees the design capacity of damping. Finally, they discuss the usefulness of the Genetic Algorithm for the seismic design problem of piping systems and some sensitive points for its application to actual design problems.
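
    The encoding described above (an integer string over candidate support locations, scored by a weighted combination of response, capacity and support count) maps directly onto a textbook genetic algorithm. The condensed sketch below follows that structure; the "seismic response" is a crude surrogate function invented for illustration, not a piping dynamics code, and all weights are arbitrary.

    ```python
    # Condensed GA sketch: individuals are integer strings (0 = no support,
    # 1..k_cap = support capacity class at that location); fitness is a weighted
    # sum of a response surrogate, total capacity and support count (all toy).
    import numpy as np

    rng = np.random.default_rng(5)
    n_loc, n_pop, n_gen, k_cap = 10, 40, 60, 3

    def cost(ind):
        response = 10.0 / (1.0 + ind.sum())        # surrogate seismic response
        return response + 0.05 * ind.sum() + 0.1 * np.count_nonzero(ind)

    pop = rng.integers(0, k_cap + 1, size=(n_pop, n_loc))
    for _ in range(n_gen):
        fit = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: n_pop // 2]]          # truncation selection
        cut = rng.integers(1, n_loc, size=n_pop // 2)
        kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]))
                         for i, c in enumerate(cut)])          # one-point crossover
        mut = rng.random(kids.shape) < 0.05
        kids[mut] = rng.integers(0, k_cap + 1, size=mut.sum())  # mutation
        pop = np.vstack((parents, kids))

    best = pop[np.argmin([cost(ind) for ind in pop])]
    print("best support layout:", best, " cost:", round(cost(best), 3))
    ```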

  6. Optimum design of steel structures

    CERN Document Server

    Farkas, József

    2013-01-01

This book helps designers and manufacturers to select and develop the most suitable and competitive steel structures, which are safe, fit for production and economic. An optimum design system is used to find the best characteristics of structural models, which guarantee the fulfilment of design and fabrication requirements and minimize the cost function. Realistic numerical models are used as main components of industrial steel structures. Chapter 1 contains some experiences with the optimum design of steel structures. Chapter 2 treats some newer mathematical optimization methods. Chapter 3 gives formulae for fabrication times and costs. Chapter 4 deals with beams and columns and summarizes the Eurocode rules for design. Chapter 5 deals with the design of tubular trusses. Chapter 6 gives the design of frame structures and fire-resistant design rules for a frame. In Chapter 7, some minimum cost design problems of stiffened and cellular plates and shells are worked out for cases of different stiffenings and loads...

  7. Design of an Expert System Application for Diagnosing Computer Hardware Damage Using the Forward Chaining Method

    Directory of Open Access Journals (Sweden)

    Ali Akbar Rismayadi

    2016-09-01

Damage to computer hardware is not a major disaster, because not all computer hardware damage is irreparable. Nearly all computer users, whether individuals or institutions, experience various kinds of damage to the computer hardware they own, and that damage can be caused by many factors; fundamentally, the user does not know what caused the hardware to fail. It is therefore necessary to build an application that can help users diagnose damage to computer hardware, so that anyone can diagnose the type of hardware fault on their own computer. The development of this expert system for diagnosing computer hardware damage uses the forward chaining method, applying descriptive analysis to damage data obtained from several experts and other literature sources to reach a diagnostic conclusion. The waterfall model is used for system development, from the analysis stage through to software support. The application is built using the Eclipse ADT programming toolset with SQLite as its database. This expert system for diagnosing computer hardware damage is expected to serve as a tool that helps users find the causes of hardware damage independently, without the help of a computer technician.
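
    Forward chaining means firing rules whose antecedent facts are all present, adding their conclusions as new facts, and repeating until nothing changes. The minimal sketch below illustrates the mechanism; the symptom/fault rules are simplified examples, not the knowledge base from the paper.

    ```python
    # Minimal forward-chaining engine: a rule fires when all its antecedent
    # facts are known, adding its conclusion; iterate to a fixed point.
    # The rules here are invented examples, not the paper's knowledge base.
    rules = [
        ({"no power light", "fans off"}, "PSU fault suspected"),
        ({"power ok", "no display", "beep code"}, "RAM fault suspected"),
        ({"power ok", "no display", "no beep"}, "mainboard fault suspected"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, conclusion in rules:
                if antecedents <= facts and conclusion not in facts:
                    facts.add(conclusion)      # fire the rule
                    changed = True
        return facts

    print(forward_chain({"power ok", "no display", "beep code"}))
    ```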

  8. Finding an optimum immuno-histochemical feature set to distinguish benign phyllodes from fibroadenoma.

    Science.gov (United States)

    Maity, Priti Prasanna; Chatterjee, Subhamoy; Das, Raunak Kumar; Mukhopadhyay, Subhalaxmi; Maity, Ashok; Maulik, Dhrubajyoti; Ray, Ajoy Kumar; Dhara, Santanu; Chatterjee, Jyotirmoy

    2013-05-01

Benign phyllodes and fibroadenoma are two well-known breast tumors with remarkable diagnostic ambiguity. The present study is aimed at determining an optimum set of immuno-histochemical features to distinguish them by analyzing important observations on the expression of important genes in fibro-glandular tissue. Immuno-histochemically, the expressions of p63 and α-SMA in myoepithelial cells and of collagen I, III and CD105 in the stroma of the tumors and their normal counterpart were studied. Semi-quantified features were analyzed primarily by ANOVA and ranked through F-scores to understand the relative importance of groups of features in discriminating the three classes, followed by reduction of the F-score-ranked feature space dimension and application of inter-class Bhattacharyya distances to distinguish the tumors with an optimum set of features. Among the thirteen studied features, all except one differed significantly across the three study classes. F-score ranking revealed the highest discriminative potential for collagen III (initial region). Reduction of the F-score-ranked feature space dimension and application of the Bhattacharyya distance gave rise to a feature set of lower dimension which can discriminate benign phyllodes and fibroadenoma effectively. The work clearly separated normal breast, fibroadenoma and benign phyllodes through an optimal set of immuno-histochemical features which are not only useful to address the diagnostic ambiguity of the tumors but also speak to malignant potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
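
    The two statistics driving this pipeline are easy to state for a single feature and two classes: an ANOVA-style F-score (between-class variance over within-class variance) for ranking, and the Bhattacharyya distance between class-conditional Gaussians for separability. The sketch below uses synthetic feature scores, not the study's data.

    ```python
    # Sketch of the two statistics used above, for one feature in two classes.
    # The feature scores are synthetic; only the formulas are the point here.
    import numpy as np

    rng = np.random.default_rng(2)
    fibroadenoma = rng.normal(2.1, 0.4, 30)     # hypothetical semi-quantified scores
    phyllodes = rng.normal(2.9, 0.5, 30)

    def f_score(a, b):
        # Two-class one-way ANOVA F: between-group / within-group mean squares.
        grand = np.concatenate((a, b)).mean()
        between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
        within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        return between / (within / (len(a) + len(b) - 2))

    def bhattacharyya(a, b):
        # Distance between two 1D Gaussians fitted to the samples.
        va, vb = a.var(ddof=1), b.var(ddof=1)
        return (0.25 * (a.mean() - b.mean()) ** 2 / (va + vb)
                + 0.5 * np.log((va + vb) / (2 * np.sqrt(va * vb))))

    print(f"F-score: {f_score(fibroadenoma, phyllodes):.1f}")
    print(f"Bhattacharyya distance: {bhattacharyya(fibroadenoma, phyllodes):.3f}")
    ```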

  9. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top-down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  10. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performance. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R&D character of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.
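
    The paper evaluates availability with full fault-tree models; the simplest back-of-envelope version of such a roll-up is a series combination of per-system availabilities A = MTBF/(MTBF + MTTR). The sketch below uses placeholder system names and numbers, not IFMIF RAMI data.

    ```python
    # Back-of-envelope availability roll-up: per-system A = MTBF/(MTBF+MTTR),
    # combined in series (every system must be up for beam). All numbers are
    # placeholders for illustration, not the IFMIF analyses' inputs.
    systems = {
        "injector":  (1500.0, 4.0),    # (MTBF [h], MTTR [h]), hypothetical
        "RFQ":       (2500.0, 8.0),
        "SRF linac": (2000.0, 12.0),
        "HEBT":      (5000.0, 6.0),
    }

    total = 1.0
    for name, (mtbf, mttr) in systems.items():
        a = mtbf / (mtbf + mttr)
        total *= a                     # series combination: all must be up
        print(f"{name:10s} A = {a:.4f}")
    print(f"facility hardware availability ~ {total:.4f}")
    ```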

  11. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performance. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R&D character of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  12. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  13. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations, but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes on a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
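
    The constraint the planners must solve can be seen in miniature: on nearest-neighbor hardware, a two-qubit gate between non-adjacent qubits first needs SWAPs to bring the pair together. The greedy router below, on a linear array, is a toy baseline illustrating that constraint; it is not the paper's temporal-planning method, and the gate list is invented.

    ```python
    # Toy illustration of nearest-neighbour routing: insert SWAPs until the two
    # logical qubits of each gate sit on adjacent physical slots. This greedy
    # linear-array router is a baseline sketch, not the temporal planner.
    def route(gates, n_qubits):
        """gates: list of (q1, q2) logical two-qubit gates; returns physical ops."""
        pos = list(range(n_qubits))            # pos[logical] = physical slot
        ops = []
        for a, b in gates:
            while abs(pos[a] - pos[b]) > 1:    # bring the pair adjacent
                step = 1 if pos[a] < pos[b] else -1
                neighbour = pos.index(pos[a] + step)
                ops.append(("SWAP", pos[a], pos[a] + step))
                pos[a], pos[neighbour] = pos[neighbour], pos[a]
            ops.append(("CZ", pos[a], pos[b]))
        return ops

    for op in route([(0, 3), (1, 2), (0, 2)], 4):
        print(op)
    ```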

  14. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  15. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  16. The optimum spanning catenary cable

    Science.gov (United States)

    Wang, C. Y.

    2015-03-01

    A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
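
    The stated optimum is easy to check numerically. For a symmetric catenary of span d with weight w per unit length, the support tension is T = w·a·cosh(d/(2a)) and the cable length is L = 2a·sinh(d/(2a)), where a is the catenary parameter; minimizing T over a recovers L/d ≈ 1.258. The sketch below does this with SciPy (unit span and unit weight assumed).

    ```python
    # Numerical check of the stated optimum: minimize the support tension
    # T(a) = w*a*cosh(d/(2a)) over the catenary parameter a, then report the
    # corresponding length ratio L/d = (2a/d)*sinh(d/(2a)). Here w = d = 1.
    import numpy as np
    from scipy.optimize import minimize_scalar

    d = 1.0

    def max_tension(a):
        return a * np.cosh(d / (2 * a))

    res = minimize_scalar(max_tension, bounds=(0.05, 5.0), method="bounded")
    a = res.x
    L = 2 * a * np.sinh(d / (2 * a))
    print(f"optimum length ratio L/d = {L / d:.3f}")   # ~ 1.258, as in the abstract
    ```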

  17. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  18. Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators

    DEFF Research Database (Denmark)

    Schou, Casper; Bøgh, Simon; Madsen, Ole

    2014-01-01

    This paper presents concepts and ideas of a hard- ware reconfiguration framework for modular industrial mobile manipulators. Mobile manipulators pose a highly flexible pro- duction resource due to their ability to autonomously navigate between workstations. However, due to this high flexibility new...... approaches to the operation of the robots are needed. Reconfig- uring the robot to a new task should be carried out by shop floor operators and, thus, be both quick and intuitive. Late research has already proposed a method for intuitive robot programming. However, this relies on a predetermined hardware...... configuration. Finding a single multi-purpose hardware configuration suited to all tasks is considered unrealistic. As a result, the need for reconfiguration of the hardware is inevitable. In this paper an outline of a framework for making hardware reconfiguration quick and intuitive is presented. Two main...

  19. Is there an optimum level for renewable energy?

    International Nuclear Information System (INIS)

    Moriarty, Patrick; Honnery, Damon

    2011-01-01

    Because continued heavy use of fossil fuel will lead to both global climate change and resource depletion of easily accessible fuels, many researchers advocate a rapid transition to renewable energy (RE) sources. In this paper we examine whether RE can provide anywhere near the levels of primary energy forecast by various official organisations in a business-as-usual world. We find that the energy costs of energy will rise in a non-linear manner as total annual primary RE output increases. In addition, increasing levels of RE will lead to increasing levels of ecosystem maintenance energy costs per unit of primary energy output. The result is that there is an optimum level of primary energy output, in the sense that the sustainable level of energy available to the economy is maximised at that level. We further argue that this optimum occurs at levels well below the energy consumption forecasts for a few decades hence. - Highlights: → We need to shift to renewable energy for climate change and fuel depletion reasons. → We examine whether renewable energy can provide the primary energy levels forecast. → The energy costs of energy rise non-linearly with renewable energy output. → There is thus an optimum level of primary energy output. → This optimum occurs at levels well below future official energy use forecasts.

  20. Nondissipative optimum charge regulator

    Science.gov (United States)

    Rosen, R.; Vitebsky, J. N.

    1970-01-01

    Optimum charge regulator provides constant level charge/discharge control of storage batteries. Basic power transfer and control is performed by solar panel coupled to battery through power switching circuit. Optimum controller senses battery current and modifies duty cycle of switching circuit to maximize current available to battery.

  1. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  2. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans,” which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  3. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

Almost all of the effort being expended on radioactive waste disposal in the United States is being focused on the disposal of spent nuclear fuel, with little consideration for other areas that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately, because it is considered a secondary part of the waste issue, is the disposal of the various non-fuel-bearing components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed, with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An Expert System developed as a portion of this work is used to assist in the prediction of quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM, and PWR control rods constructed of stainless steel, are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste.

  4. Universal Curve of Optimum Thermoelectric Figures of Merit for Bulk and Low-Dimensional Semiconductors

    Science.gov (United States)

    Hung, Nguyen T.; Nugraha, Ahmad R. T.; Saito, Riichiro

    2018-02-01

This paper is a contribution to the Physical Review Applied collection in memory of Mildred S. Dresselhaus. Analytical formulas for thermoelectric figures of merit and power factors are derived based on the one-band model. We find that there is a direct relationship between the optimum figures of merit and the optimum power factors of semiconductors, despite the fact that the two quantities are generally given by different values of the chemical potential. By introducing a dimensionless parameter consisting of the optimum power factor and lattice thermal conductivity (without electronic thermal conductivity), it is possible to unify the optimum figures of merit of both bulk and low-dimensional semiconductors into a single universal curve that covers many materials with different dimensionalities.

  5. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables...... worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java....

  6. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  7. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed “hardware hound”, focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set (the inventory) with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  8. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  9. Hardware Locks with Priority Ceiling Emulation for a Java Chip-Multiprocessor

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Schoeberl, Martin

    2015-01-01

According to the safety-critical Java specification, priority ceiling emulation is a requirement for implementations, as it has preferable properties, such as avoiding priority inversion and being deadlock free on uni-core systems. In this paper we explore our hardware-supported implementation of priority ceiling emulation on the multicore Java optimized processor, and compare it to the existing hardware locks on the Java optimized processor. We find that the additional overhead for priority ceiling emulation on a multicore processor is several times higher than simpler, non-preemptive locks, mainly...

  10. A novel approach for optimum allocation of FACTS devices using multi-objective function

    International Nuclear Information System (INIS)

    Gitizadeh, M.; Kalantar, M.

    2009-01-01

    This paper presents a novel approach to find the optimum type, location, and capacity of flexible alternating current transmission systems (FACTS) devices in a power system using a multi-objective optimization function. Thyristor controlled series compensator (TCSC) and static var compensator (SVC) devices are utilized to achieve these objectives: active power loss reduction, reduction of the cost of the newly introduced FACTS devices, increased robustness of the security margin against voltage collapse, and voltage deviation reduction. The operational and controlling constraints as well as load constraints are considered in the optimum allocation procedure. Here, a goal attainment method based on simulated annealing is used to approach the global optimum. In addition, the estimated annual load profile has been utilized for the optimum siting and sizing of FACTS devices to approach a practical solution. The standard IEEE 14-bus test system is used to validate the performance and effectiveness of the proposed method.
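
    A minimal sketch of the simulated-annealing core of such a search, assuming a toy cost landscape in place of the paper's goal-attainment objective; in a real study the cost function would run a load flow and combine the four objectives above, and the `total_cost` function and bus/line counts are hypothetical:

    ```python
    import math
    import random

    # Toy stand-in for the multi-objective goal-attainment cost of placing one
    # SVC (on a bus) and one TCSC (on a line); a real study evaluates load flow,
    # losses, device cost, voltage-collapse margin and voltage deviation here.
    def total_cost(placement):
        svc_bus, tcsc_line = placement
        return (math.sin(0.7 * svc_bus) + 1.2) * (math.cos(0.5 * tcsc_line) + 1.5)

    def neighbour(placement, n_buses=14, n_lines=20):
        svc_bus, tcsc_line = placement
        if random.random() < 0.5:
            svc_bus = random.randrange(1, n_buses + 1)    # move the SVC
        else:
            tcsc_line = random.randrange(1, n_lines + 1)  # move the TCSC
        return (svc_bus, tcsc_line)

    def anneal(t0=10.0, cooling=0.995, steps=2000):
        state = (1, 1)
        cost = total_cost(state)
        best, best_cost, t = state, cost, t0
        for _ in range(steps):
            cand = neighbour(state)
            c = total_cost(cand)
            # Metropolis rule: always accept improvements, sometimes worse moves.
            if c < cost or random.random() < math.exp((cost - c) / t):
                state, cost = cand, c
                if c < best_cost:
                    best, best_cost = cand, c
            t *= cooling
        return best, best_cost

    print(anneal())
    ```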

  11. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, for example in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on the way to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given.
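
    A plain software reference of the filter discussed above, showing the rank-order view: the window is fully sorted and the middle rank selected, so min, max and percentile filters differ only in the rank chosen. This is a straightforward sketch, not the paper's pipeline-processor implementation:

    ```python
    import numpy as np

    def median_filter_2d(image, k=3):
        """Reference 2D median filter: sort the k*k window, take the middle rank."""
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.empty_like(image)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                window = padded[i:i + k, j:j + k].ravel()
                out[i, j] = np.sort(window)[window.size // 2]  # middle rank
        return out

    img = np.array([[10, 10, 10, 10],
                    [10, 255, 10, 10],   # isolated impulse-noise pixel
                    [10, 10, 10, 10],
                    [10, 10, 10, 10]], dtype=np.uint8)
    print(median_filter_2d(img))         # the 255 outlier is suppressed
    ```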

  12. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered as adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  13. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

    The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fracture with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially from the perspective of patients' daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. The hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. Pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 (72.5%) patients experienced improved ankle stiffness, 65 (81.3%) reported less discomfort while walking on uneven ground, and 63 (80.8%) were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  14. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  15. OPTIMUM PROSESSENTRERING

    Directory of Open Access Journals (Sweden)

    K. Adendorff

    2012-01-01

    ENGLISH ABSTRACT: The paper derives an expression for optimum process centring for a given design specification and spoilage and/or rework costs.

    AFRIKAANSE OPSOMMING (in English): The problem of process centring for a given design specification and rework and/or scrap costs is addressed.
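
    A small numeric illustration of the kind of optimum the paper derives, assuming a normally distributed process in which undersize parts are scrapped and oversize parts reworked; all limits and costs below are invented, and the search is a simple grid rather than the paper's closed-form expression:

    ```python
    from math import erf, sqrt

    def phi(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def expected_cost(mu, sigma, lsl, usl, c_scrap, c_rework):
        p_under = phi((lsl - mu) / sigma)        # scrapped fraction
        p_over = 1.0 - phi((usl - mu) / sigma)   # reworked fraction
        return c_scrap * p_under + c_rework * p_over

    def optimum_centre(sigma, lsl, usl, c_scrap, c_rework, steps=10000):
        # One-dimensional grid search over candidate process means.
        candidates = (lsl + i * (usl - lsl) / steps for i in range(steps + 1))
        return min(candidates,
                   key=lambda mu: expected_cost(mu, sigma, lsl, usl,
                                                c_scrap, c_rework))

    # Scrap costs four times rework, so the optimum mean sits above mid-spec.
    print(optimum_centre(sigma=0.1, lsl=9.7, usl=10.3, c_scrap=4.0, c_rework=1.0))
    ```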

  16. CT4 - Cost-Optimum Procedures

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Wittchen, Kim Bjarne

    This report collects the status in European member states regarding implementation of the cost-optimum procedure for setting energy performance requirements for new and existing buildings.

  17. The optimum lead thickness for lead-activation detectors

    International Nuclear Information System (INIS)

    Si Fenni; Hu Qingyuan

    2009-01-01

    The optimum lead thickness for lead-activation detectors has been studied in this paper. First, the existence of the optimum lead thickness is explained theoretically. Then the optimum lead thickness is obtained by two methods, MCNP5 calculation and mathematical estimation. Finally, factors which affect the optimum lead thickness are discussed. It turns out that the optimum lead thickness is independent of the incident neutron energy. A thickness of 2.5 cm is generally recommended.

  18. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) and hardware (open hardware). Open hardware is experiencing significant growth, but its business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine the intellectual property tools that can be applied to open hardware, with a str...

  19. ¹⁸F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

    Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

    Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using ¹⁸F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These patients underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in

  20. On Optimum Safety Levels of Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Sørensen, John Dalsgaard

    2006-01-01

    The paper presents results from numerical simulations performed with the objective of identifying optimum design safety levels of conventional rubble mound and caisson breakwaters, corresponding to the lowest costs over the service life of the structures. The work is related to the PIANC Working...... Group 47 on "Selection of type of breakwater structures". The paper summarizes results given in Burcharth and Sorensen (2005) related to outer rubble mound breakwaters but focuses on optimum safety levels for outer caisson breakwaters on low and high rubble foundations placed on sea beds strong enough...... to resist geotechnical slip failures. Optimum safety levels formulated for use both in deterministic and probabilistic design procedures are given. Results obtained so far indicate that the optimum safety levels for caisson breakwaters are much higher than for rubble mound breakwaters....

  1. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  2. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  3. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used in E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  4. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety-related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with the objective of developing a family of boards general purpose enough to meet the requirements of system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present-day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed, with a brief description. (author)

  5. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  6. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
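
    A toy model of the binding-and-authentication flow described above, with the PUF replaced by a per-device random keyed function and an HMAC challenge-response; this illustrates the idea only, not the patent's actual circuit or protocol, and every name in it is hypothetical:

    ```python
    import hashlib, hmac, os, secrets

    class ToyPUF:
        """Stand-in for a physically unclonable function: a fixed random mapping
        per device (a real PUF derives this from manufacturing variation)."""
        def __init__(self):
            self._key = secrets.token_bytes(32)
        def response(self, challenge: bytes) -> bytes:
            return hmac.new(self._key, challenge, hashlib.sha256).digest()

    class Device:
        def __init__(self):
            self.puf = ToyPUF()
            self.binding = None   # shared secret derived during enrollment

        def enroll(self, peer, enrollment_challenge: bytes):
            # Binding value combines both devices' PUF responses symmetrically.
            mine = self.puf.response(enrollment_challenge)
            theirs = peer.puf.response(enrollment_challenge)
            mixed = bytes(x ^ y for x, y in zip(mine, theirs))
            self.binding = hashlib.sha256(mixed).digest()

        def make_challenge(self):
            nonce = os.urandom(16)
            return nonce, hmac.new(self.binding, nonce, hashlib.sha256).digest()

        def verify(self, nonce, tag):
            expected = hmac.new(self.binding, nonce, hashlib.sha256).digest()
            return hmac.compare_digest(expected, tag)

    a, b = Device(), Device()
    setup = os.urandom(16)
    a.enroll(b, setup); b.enroll(a, setup)    # enrollment phase
    nonce, tag = a.make_challenge()           # authentication phase
    print("B authenticates A:", b.verify(nonce, tag))   # True: bindings match
    ```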

  7. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  8. A portable anaerobic microbioreactor reveals optimum growth conditions for the methanogen Methanosaeta concilii.

    Science.gov (United States)

    Steinhaus, Benjamin; Garcia, Marcelo L; Shen, Amy Q; Angenent, Largus T

    2007-03-01

    Conventional studies of the optimum growth conditions for methanogens (methane-producing, obligate anaerobic archaea) are typically conducted with serum bottles or bioreactors. The use of microfluidics to culture methanogens allows direct microscopic observations of the time-integrated response of growth. Here, we developed a microbioreactor (microBR) with approximately 1-microl microchannels to study some optimum growth conditions for the methanogen Methanosaeta concilii. The microBR is contained in an anaerobic chamber specifically designed to place it directly onto an inverted light microscope stage while maintaining a N2-CO2 environment. The methanogen was cultured for months inside microchannels of different widths. Channel width was manipulated to create various fluid velocities, allowing the direct study of the behavior and responses of M. concilii to various shear stresses and revealing an optimum shear level of approximately 20 to 35 microPa. Gradients in a single microchannel were then used to find an optimum pH level of 7.6 and an optimum total NH4-N concentration of less than 1,100 mg/liter (<47 mg/liter as free NH3-N) for M. concilii under conditions of the previously determined ideal shear stress and pH and at a temperature of 35 degrees C.

  9. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    Science.gov (United States)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criterion, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.
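
    A crude worst-case version of the word-length reasoning above: if filter coefficients are quantized to f fractional bits, each tap contributes at most 2^-(f+1) of coefficient error, so keeping the accumulated output error under half an integer LSB guarantees that rounding recovers the exact pixel value. This is a loose bound for illustration, not the paper's tighter per-filter analysis:

    ```python
    def min_fractional_bits(n_taps, x_max):
        """Smallest f such that n_taps * x_max * 2**-(f+1) < 0.5, i.e. coefficient
        quantization can no longer flip a rounded integer result."""
        f = 0
        while n_taps * x_max * 2.0 ** -(f + 1) >= 0.5:
            f += 1
        return f

    # 8-bit image samples (0..255) through a 9-tap analysis filter:
    print(min_fractional_bits(n_taps=9, x_max=255))   # -> 12 fractional bits
    ```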

  10. The ATLAS FTK Auxiliary Card: A Highly Functional VME Rear Transition Module for a Hardware Track Finding Processing Unit

    CERN Document Server

    Alison, John; The ATLAS collaboration; Bogdan, Mircea; Bryant, Patrick; Cheng, Yangyang; Krizka, Karol; Shochet, Mel; Tompkins, Lauren; Webster, Jordan S

    2014-01-01

    The ATLAS Fast TracKer is a hardware-based charged particle track finder for the High Level Trigger system of the ATLAS Experiment at the LHC. Using a multi-component system, it finds charged particle trajectories with momenta of 1 GeV/c and greater, using data from the full ATLAS silicon tracking detectors at a rate of 100 kHz. Pattern recognition and preliminary track fitting are performed by VME Processing Units consisting of an Associative Memory Board, containing custom associative memory chips for pattern recognition, and the Auxiliary Card (AUX), a powerful rear transition module which formats the data for pattern recognition and performs linearized fits on track candidates. We report on the design and testing of the AUX, which utilizes six FPGAs to process up to 32 Gbps of hit data, as well as fit the helical trajectory of one track candidate per nanosecond through a highly parallel track fitting architecture. Both the board and firmware design will be discussed, as well as the performance observed in tests at CERN ...
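
    A numerical sketch of the linearized-fitting idea named above: track parameters are approximated as an affine function p = C·h + q of the hit coordinates, so each online fit reduces to a handful of dot products, which is what makes one fit per nanosecond plausible in FPGA logic. The geometry and constants below are random stand-ins, not FTK fit constants:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_layers, n_train = 8, 5000

    # Fake training sample: true parameters (e.g. curvature, phi0) -> hits.
    true_params = rng.normal(size=(n_train, 2))
    geometry = rng.normal(size=(n_layers, 2))
    hits = true_params @ geometry.T + 0.01 * rng.normal(size=(n_train, n_layers))

    # Offline: least-squares fit of the affine map (the "fit constants").
    A = np.hstack([hits, np.ones((n_train, 1))])
    coeffs, *_ = np.linalg.lstsq(A, true_params, rcond=None)
    C, q = coeffs[:n_layers].T, coeffs[n_layers]

    # Online: one candidate's parameters are two dot products plus an offset.
    candidate = hits[0]
    print("fitted:", C @ candidate + q, "true:", true_params[0])
    ```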

  11. A methodology for selecting optimum organizations for space communities

    Science.gov (United States)

    Ragusa, J. M.

    1978-01-01

    This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.

  12. The optimum size of rotating qarḍ ḥasan savings and credit associations

    Directory of Open Access Journals (Sweden)

    Seyed Kazem Sadr

    2017-07-01

    Purpose - Several indigenous credit and savings schemes have been accredited recently in developing countries for the benefit of households and entrepreneurs alike. Famous among them are the Rotating Savings and Credit Associations (ROSCAs) that currently exist in almost all continents. The rapid development of ROSCAs and their varied structures in many countries have been the subject of numerous studies. What has not been thoroughly analysed is the optimum size of these associations and the fact that lending and borrowing is without interest. The aim of this paper is to present a model that would determine the optimum size of ROSCAs and deal with the following issues: how the group size varies with changes in the income level of the members, the demand for the loan, the size of the collected loan and its duration. Further, the question of whether or not lending to the association in return for obtaining larger sums is a violation of the qarḍ (loan) contract is dealt with, and several Sharīʿah-compatible formulations are provided. Design/methodology/approach - Economic analysis has been applied to show the optimum size of Qarḍ Ḥasan Associations (QHAs), which are the Sharīʿah-compliant equivalent of ROSCAs, and the Sharīʿah rules of the qarḍ contract to illustrate the legitimacy of group lending. Findings - The major findings of this study are the determination of the optimum size of QHAs, the factors that affect the size, and the suggestion of alternative legal forms for group financing. Research limitations/implications - Inaccessibility to sources of data to test the hypothesis that has been put forth is the main difficulty encountered when conducting research on the subject. Practical implications - The paper concludes that the development of informal interest-free ROSCAs in both Muslim and non-Muslim countries is an efficient informal microfinance scheme and that it is compatible with Sharīʿah rules. Originality/value - The optimum size

  13. Optimum Design of Plasma Focus

    International Nuclear Information System (INIS)

    Ramos, Ruben; Gonzalez, Jose; Clausse, Alejandro

    2000-01-01

    The optimum design of Plasma Focus devices is presented, based on a lumped-parameter model of the MHD equations. Maps in the design parameter space are obtained, which determine the length and deuterium pressure required to produce a given neutron yield. Sensitivity analyses of the main effective numbers (sweeping efficiencies) were performed, and the optimum values were then determined in order to set a basis for the conceptual design.

  14. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  15. Environmental Control and Life Support (ECLS) Hardware Commonality for Exploration Vehicles

    Science.gov (United States)

    Carrasquillo, Robyn; Anderson, Molly

    2012-01-01

    In August 2011, the Environmental Control and Life Support Systems (ECLSS) technical community, along with associated stakeholders, held a workshop to review NASA's plans for Exploration missions and vehicles with two objectives: revisit the Exploration Atmospheres Working Group (EAWG) findings from 2006, and discuss preliminary ECLSS architecture concepts and technology choices for Exploration vehicles, identifying areas for potential common hardware or technologies to be utilized. Key considerations for selection of vehicle design total pressure and percent oxygen include operational concepts for extravehicular activity (EVA) and prebreathe protocols, materials flammability, and controllability within pressure and oxygen ranges. New data for these areas since the 2006 study were presented and discussed, and the community reached consensus on conclusions and recommendations for target design pressures for each Exploration vehicle concept. For the commonality study, the workshop identified many areas of potential commonality across the Exploration vehicles as well as with heritage International Space Station (ISS) and Shuttle hardware. Of the 36 ECLSS functions reviewed, 16 were considered to have strong potential for commonality, 13 were considered to have some potential commonality, and 7 were considered to have limited potential for commonality due to unique requirements or lack of sufficient heritage hardware. These findings, which will be utilized in architecture studies and budget exercises going forward, are presented in detail.

  16. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  17. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce and a Java virtual machine (JVM) without...... an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first class Java objects, where device registers are represented by object fields. Access...... to those registers is safe as Java’s type system regulates it. The access is also fast as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects...

  18. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  19. Optimum Tilt Angle at Tropical Region

    Directory of Open Access Journals (Sweden)

    S Soulayman

    2015-02-01

    One of the important parameters that affect the performance of a solar collector is its tilt angle with the horizon, because varying the tilt angle changes the amount of solar radiation reaching the collector surface. Meanwhile, is the rule of thumb which says that the Equator-facing position is best for a solar collector valid for the tropical region? Thus, it is required to determine the optimum tilt both for Equator-facing and for Pole-oriented collectors. In addition, the question may arise: how many times is it reasonable to adjust the collector tilt angle for a definite value of the surface azimuth angle? A mathematical model was used for estimating the solar radiation on a tilted surface, and to determine the optimum tilt angle and orientation (surface azimuth angle) for the solar collector at any latitude. This model was applied for determining the optimum tilt angle and orientation in the tropical zones, on a daily basis, as well as for a specific period. The optimum angle was computed by searching for the values for which the radiation on the collector surface is a maximum for a particular day or a specific period. The results reveal that changing the tilt angle 12 times a year (i.e., using the monthly optimum tilt angle) maintains the total amount of solar radiation near the maximum value that is found by changing the tilt angle daily to its optimum value. This achieves a yearly gain in solar radiation of 11% to 18% more than the case of a solar collector fixed on a horizontal surface.
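
    A rough sketch of the daily optimum-tilt search described above, using only extraterrestrial beam geometry for an Equator-facing collector (the paper's model also handles diffuse radiation and arbitrary azimuth); the site and days are arbitrary examples:

    ```python
    import numpy as np

    def daily_irradiation(lat_deg, day_of_year, tilt_deg):
        lat, tilt = np.radians(lat_deg), np.radians(tilt_deg)
        decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365)
        omega = np.radians(np.linspace(-180, 180, 1441))   # hour angle, 1-min steps
        cos_zen = (np.sin(lat) * np.sin(decl)
                   + np.cos(lat) * np.cos(decl) * np.cos(omega))
        # An Equator-facing surface tilted by beta at latitude lat sees the sun
        # like a horizontal surface at latitude (lat - beta):
        cos_inc = (np.sin(lat - tilt) * np.sin(decl)
                   + np.cos(lat - tilt) * np.cos(decl) * np.cos(omega))
        daylight = cos_zen > 0                             # sun above the horizon
        return np.sum(np.where(daylight, np.clip(cos_inc, 0.0, None), 0.0))

    def optimum_tilt(lat_deg, day_of_year):
        tilts = np.arange(0.0, 90.5, 0.5)
        gains = [daily_irradiation(lat_deg, day_of_year, t) for t in tilts]
        return tilts[int(np.argmax(gains))]

    # Tropical site at 10 deg N: near the December solstice the optimum tilt is
    # large, while near the June solstice the search saturates at 0 because the
    # unconstrained optimum would face the Pole - the tropical caveat raised above.
    print(optimum_tilt(10, 355), optimum_tilt(10, 172))
    ```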

  20. Impacts of optimum cost effective energy efficiency standards

    International Nuclear Information System (INIS)

    Brancic, A.B.; Peters, J.S.; Arch, M.

    1991-01-01

    Building codes are increasingly required to be responsive to social and economic policy concerns. In 1990 the State of Connecticut passed An Act Concerning Global Warming, Public Act 90-219, which mandates the revision of the state building code to require that buildings and building elements be designed to provide optimum cost-effective energy efficiency over the useful life of the building. Further, such revision must meet the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) Standard 90.1-1989. As the largest electric energy supplier in Connecticut, Northeast Utilities (NU) sponsored a pilot study of the cost effectiveness of alternative building code standards for commercial construction. This paper reports on this study, which analyzed design and construction means, building elements, incremental construction costs, and energy savings to determine the optimum cost-effective building code standard. Findings are that ASHRAE 90.1 results in 21% energy savings and that alternative standards above it result in significant additional savings. Benefit/cost analysis showed that both are cost-effective.

  1. An optimum analysis sequence for environmental gamma-ray spectrometry

    International Nuclear Information System (INIS)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L.

    2010-10-01

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectroscopy by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, and with or without the use of a library -based on evaluated nuclear data- of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectroscopy system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. Particularly, the combination of second derivative peak locate with simple peak area integration algorithms provides the greater accuracy. Lower accuracy comes from the combination of library directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)
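
    A small illustration of the two acceptance criteria named above, applied to peak areas reported by one candidate sequence against certified reference values; the numbers are invented, not the IAEA-2002 spectrum:

    ```python
    import numpy as np

    area = np.array([12050.0, 8430.0, 3310.0])       # areas from the sequence
    sigma = np.array([110.0, 95.0, 60.0])            # their reported uncertainties
    reference = np.array([11980.0, 8510.0, 3295.0])  # certified reference areas

    z = (area - reference) / sigma                   # per-peak z-scores
    chi2_red = np.sum(z ** 2) / len(area)            # reduced chi-squared

    print("z-scores:", z)             # |z| < 2 is a common acceptance threshold
    print("reduced chi2:", chi2_red)  # values near 1 indicate consistent errors
    ```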

  2. WHAT IS THE OPTIMUM SIZE OF GOVERNMENT: A SUGGESTION

    Directory of Open Access Journals (Sweden)

    Aykut Ekinci

    2011-01-01

    What is the optimum size of government? When the rule of law and the establishment of private property rights are taken into consideration, it is clear that the answer will not be 0%. On the other hand, when the experience of the old Soviet Union, East Germany and North Korea is considered, the answer will not be 100% either. Therefore, the extreme points should not be the right answer. This study proposes using the normal distribution to answer this question. The study has revealed the following findings: (i) The total amount of public expenditures as % of GDP (a) is at its minimum level at a rate of 4.55%, (b) is at its optimum level at a rate of 13.4%, and (c) is at its maximum level at 31.7%. (ii) Thus, as a fiscal rule, countries should (a) choose a total amount of public expenditures as % of GDP ≤ 31.7% and (b) target 13.4%. (iii) A three-dimensional (3D) normal distribution demonstrates that a healthy market system could be built upon a healthy government system. (iv) This approach rejects Wagner's law. (v) The UK, the USA and the European countries have been in the Keynesian-Marxist area, which reduces their average growth.

  3. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  4. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    of the ARM Cortex-A9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator......In recent years it has become obvious that the performance of general purpose processors is having trouble meeting the requirements of high performance computing applications of today. This is partly due to the relatively high power consumption, compared to the performance, of general purpose...... processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest supercomputers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard platform. The two accelerators are based on two different

  5. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
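
    A toy rendering of the rerouting idea in the claim above: when a defective link is identified in the first network, traffic between the affected compute nodes is carried by the second, independent network. The topologies are arbitrary examples, not an actual torus or tree network:

    ```python
    from collections import deque

    def bfs_path(adj, src, dst):
        """Shortest path by breadth-first search; None if dst is unreachable."""
        parent, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = [node]
                while parent[node] is not None:
                    node = parent[node]
                    path.append(node)
                return path[::-1]
            for nxt in adj.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None

    net1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # first network (a chain)
    net2 = {0: [2], 2: [0, 3], 3: [2]}              # independent second network

    defective = (1, 2)                               # link identified as defective
    net1[defective[0]].remove(defective[1])          # take the link down
    net1[defective[1]].remove(defective[0])

    print("net1 path 0->3:", bfs_path(net1, 0, 3))   # None: route is broken
    print("net2 path 0->3:", bfs_path(net2, 0, 3))   # rerouted via second network
    ```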

  6. A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm

    Science.gov (United States)

    Rogers, James L.

    2000-01-01

    Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating it as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
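
    A compact genetic-algorithm sketch in the spirit of the study above: a bit string marks which candidate locations carry an actuator, and the fitness rewards pitch/roll/yaw authority while penalizing coupling and actuator count. The per-location moment contributions are made-up numbers, not wing data, and this single-objective scalarization stands in for the paper's multi-objective formulation:

    ```python
    import random

    random.seed(1)
    N = 16                                     # candidate actuator locations
    pitch = [random.uniform(-1, 1) for _ in range(N)]
    roll  = [random.uniform(-1, 1) for _ in range(N)]
    yaw   = [random.uniform(-1, 1) for _ in range(N)]

    def fitness(bits):
        p = sum(b * v for b, v in zip(bits, pitch))
        r = sum(b * v for b, v in zip(bits, roll))
        y = sum(b * v for b, v in zip(bits, yaw))
        coupling = abs(p * r) + abs(p * y) + abs(r * y)   # crude coupling proxy
        return abs(p) + abs(r) + abs(y) - coupling - 0.1 * sum(bits)

    def evolve(pop_size=40, generations=200, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]                 # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N)              # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < p_mut else g
                                 for g in child])         # bit-flip mutation
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("actuators used:", sum(best), "fitness:", round(fitness(best), 3))
    ```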

  7. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

    The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and dimensioning of components. The second part of this project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench which emulates the behavior of the other system components (solar circuit with heat storage, recooling, building and cooling distribution/transfer). The chiller is controlled by a simulation of the system using MATLAB/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, the simulation model of the chiller can then be validated. Further tests are used to optimize the control of the chiller with respect to the current cooling load. In addition, some changes in system configurations (for example, cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  8. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
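
    For reference, the mirror-symmetry task above is small enough to reproduce with plain software backpropagation; the sketch below trains a 16-8-1 sigmoid network on 4×4 binary arrays labelled by vertical-axis symmetry. This is standard BP, not the hardware HBP variant, and the network size, learning rate and epoch count are arbitrary choices that may need tuning to converge fully:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        X, y = [], []
        for _ in range(n):
            if rng.random() < 0.5:                # force a symmetric example
                left = rng.integers(0, 2, size=(4, 2))
                grid = np.concatenate([left, left[:, ::-1]], axis=1)
            else:                                 # random grid, rarely symmetric
                grid = rng.integers(0, 2, size=(4, 4))
            X.append(grid.ravel())
            y.append(float(np.array_equal(grid, grid[:, ::-1])))
        return np.array(X, float), np.array(y).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X, y = make_data(2000)
    W1 = rng.normal(0, 0.5, (16, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros(1)
    lr = 1.0

    for epoch in range(3000):                     # full-batch gradient descent
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)       # MSE loss, sigmoid derivative
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

    print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
    ```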

  9. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand alone computer system based on the Motorola 68000 micro processor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  10. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. Using low pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods - the stainless steel end fittings and the Zircaloy guide tubes and grid spacers - still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating cost estimates, and (4) the status of the NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology.

  11. Optimum Safety Levels for Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard

    2005-01-01

    Optimum design safety levels for rock and cube armoured rubble mound breakwaters without superstructure are investigated by numerical simulations on the basis of minimization of the total costs over the service life of the structure, taking into account typical uncertainties related to wave...... statistics and structure response. The study comprises the influence of interest rate, service lifetime, downtime costs and damage accumulation. Design limit states and safety classes for breakwaters are discussed. The results indicate that optimum safety levels are somewhat higher than the safety levels...

  12. A Practical Introduction to HardwareSoftware Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  13. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh-Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady-state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.

  14. Optimum Assembly Sequence Planning System Using Discrete Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Özkan Özmen

    2018-01-01

    Assembly refers both to the process of combining parts to create a structure and to the product resulting therefrom. The complexity of this process increases with the number of pieces in the assembly. This paper presents the assembly planning system design (APSD) program, a computer program developed based on a matrix-based approach and the discrete artificial bee colony (DABC) algorithm, which determines the optimum assembly sequence among numerous feasible assembly sequences (FAS). Specifically, the assembly sequences of three-dimensional (3D) parts prepared in the computer-aided design (CAD) software AutoCAD are first coded using the matrix-based methodology, then the resulting FAS are assessed and the optimum assembly sequence is selected according to the assembly time optimisation criterion using DABC. The results of comparison of the performance of the proposed method with other methods proposed in the literature verify its superiority in finding the sequence with the lowest overall time. Further, examination of the results of application of APSD to assemblies consisting of parts in different numbers and shapes shows that it can select the optimum sequence from among hundreds of FAS.
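
    A heavily simplified discrete-bee-colony sketch of the search step described above: candidate assembly sequences are permutations, neighbours are produced by swapping two operations, and stagnant food sources are abandoned by scouts. The sequence-dependent time matrix is invented, and the precedence/interference checks derived from the CAD model are omitted:

    ```python
    import random

    random.seed(3)
    N = 8                                        # parts to assemble
    T = [[0.0 if i == j else random.uniform(1, 9) for j in range(N)]
         for i in range(N)]                      # sequence-dependent times

    def time_of(seq):
        return sum(T[a][b] for a, b in zip(seq, seq[1:]))

    def neighbour(seq):
        s = seq[:]
        i, j = random.sample(range(N), 2)        # swap two operations
        s[i], s[j] = s[j], s[i]
        return s

    def dabc(n_sources=10, cycles=300, limit=20):
        sources = [random.sample(range(N), N) for _ in range(n_sources)]
        stalls = [0] * n_sources
        for _ in range(cycles):
            for k in range(n_sources):           # employed/onlooker step, merged
                cand = neighbour(sources[k])
                if time_of(cand) < time_of(sources[k]):
                    sources[k], stalls[k] = cand, 0
                else:
                    stalls[k] += 1
                if stalls[k] > limit:            # scout abandons a stale source
                    sources[k], stalls[k] = random.sample(range(N), N), 0
        return min(sources, key=time_of)

    best = dabc()
    print("best sequence:", best, "time:", round(time_of(best), 2))
    ```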

  15. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  16. An optimum analysis sequence for environmental gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L., E-mail: fta777@hotmail.co [Universidad Autonoma de Zacatecas, Centro Regional de Estudis Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico)

    2010-10-15

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectrometry by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, with or without the use of a library (based on evaluated nuclear data) of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems caused by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma-ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectrometry system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. In particular, the combination of second-derivative peak locate with simple peak area integration algorithms provides the greatest accuracy. Lower accuracy comes from the combination of the library-directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)
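    A minimal sketch of the two acceptance criteria named in the abstract, applied to invented peak areas against reference values:

```python
import numpy as np

# Reference peak areas from a test spectrum (values invented here)
# versus the areas reported by one candidate analysis sequence.
ref = np.array([1500.0, 820.0, 4300.0, 610.0])
ref_unc = np.array([40.0, 25.0, 90.0, 20.0])
measured = np.array([1462.0, 845.0, 4390.0, 598.0])
meas_unc = np.array([38.0, 27.0, 95.0, 21.0])

# z-score per peak: difference relative to the combined uncertainty
z = (measured - ref) / np.sqrt(ref_unc ** 2 + meas_unc ** 2)

# reduced chi-square over all peaks (degrees of freedom = number of peaks)
chi2_red = np.sum(z ** 2) / len(ref)

print("z-scores:", np.round(z, 2))          # |z| <= 2 is typically acceptable
print("reduced chi^2:", round(chi2_red, 2)) # ~1 indicates consistent results
```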

  17. Optimum Parameters for Tuned Mass Damper Using Shuffled Complex Evolution (SCE) Algorithm

    Directory of Open Access Journals (Sweden)

    Hessamoddin Meshkat Razavi

    2015-06-01

    Full Text Available This study investigates the optimum parameters for a tuned mass damper (TMD) under seismic excitation. Shuffled complex evolution (SCE) is a meta-heuristic optimization method which is used to find the optimum damping and tuning frequency ratio for a TMD. The efficiency of the TMD is evaluated by the decrease in the structural displacement dynamic magnification factor (DDMF) and acceleration dynamic magnification factor (ADMF) for a specific vibration mode of the structure. The optimum TMD parameters and the corresponding optimized DDMF and ADMF are achieved for two control levels (displacement control and acceleration control), different structural damping ratios and mass ratios of the TMD system. The optimum TMD parameters are checked for a 10-storey building under earthquake excitations. The maximum storey displacement and acceleration obtained by the SCE method are compared with the results of other existing approaches. The results show that the peak building response decreased by about 20% for displacement and 30% for acceleration of the top floor. To show the efficiency of the adopted algorithm (SCE), a comparison is also made between SCE and other meta-heuristic optimization methods such as the genetic algorithm (GA), particle swarm optimization (PSO), and the harmony search (HS) algorithm in terms of success rate and computational processing time. The results show that the proposed algorithm outperforms the other meta-heuristic optimization methods.
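    A hedged sketch of the underlying evaluation: the peak DDMF of a two-degree-of-freedom structure-TMD model is computed from the complex dynamic-stiffness matrix, with a plain grid search standing in for SCE; all parameter values are illustrative.

```python
import numpy as np

# Peak displacement dynamic magnification factor (DDMF) of a main
# structure fitted with a TMD, from the two-DOF complex dynamic-stiffness
# matrix (main mass and stiffness normalized to 1). A plain grid search
# stands in for the SCE optimizer; all parameter values are assumed.
mu, zeta_s = 0.05, 0.02          # TMD mass ratio, structural damping (assumed)

def peak_ddmf(f, zeta_d, n=400):
    m_d = mu                     # TMD mass
    k_d = m_d * f ** 2           # TMD stiffness for tuning ratio f
    c_d = 2 * m_d * f * zeta_d   # TMD damping
    c_s = 2 * zeta_s             # structural damping (m = 1, omega_s = 1)
    worst = 0.0
    for w in np.linspace(0.5, 1.5, n):          # sweep excitation frequency
        K = np.array([[1 + k_d - w**2 + 1j*w*(c_s + c_d), -(k_d + 1j*w*c_d)],
                      [-(k_d + 1j*w*c_d), k_d - m_d*w**2 + 1j*w*c_d]])
        x = np.linalg.solve(K, np.array([1.0, 0.0]))  # unit force on main mass
        worst = max(worst, abs(x[0]))           # static deflection is 1
    return worst

best = min(((f, z) for f in np.linspace(0.85, 1.00, 16)
            for z in np.linspace(0.03, 0.15, 13)),
           key=lambda p: peak_ddmf(*p))
print("tuning ratio f=%.3f, TMD damping=%.3f, peak DDMF=%.2f"
      % (best[0], best[1], peak_ddmf(*best)))
```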

  18. Optimum gain and phase for stochastic cooling systems

    International Nuclear Information System (INIS)

    Meer, S. van der.

    1984-01-01

    A detailed analysis of optimum gain and phase adjustment in stochastic cooling systems reveals that the result is strongly influenced by the beam feedback effect and that for optimum performance the system phase should change appreciably across each Schottky band. It is shown that the performance is not greatly diminished if a constant phase is adopted instead. On the other hand, the effect of mixing between pick-up and kicker (which produces a phase change similar to the optimum one) is shown to be less perturbing than is usually assumed, provided that the absolute value of the gain is not too far from the optimum value. (orig.)

  19. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.

  20. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  1. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...

  2. Optimum coolant chemistry in BWRs

    International Nuclear Information System (INIS)

    Lin, C.C.; Cowan, R.L.; Kiss, E.

    2004-01-01

    LWR water chemistry parameters are directly or indirectly related to the plant's operational performance and account for a significant amount of Operation and Maintenance (O and M) costs. Obvious impacts are the operational costs associated with water treatment, monitoring and associated radwaste generation. Less obvious is the important role water chemistry plays in the magnitude of drywell shutdown dose rates, fuel corrosion performance and (probably most importantly) materials degradation, such as stress corrosion cracking of piping and Reactor Pressure Vessel (RPV) internal components. To improve the operational excellence of the BWR and to minimize the impact of water chemistry on O and M costs, General Electric has developed the concept of Optimum Water Chemistry (OWC). The 'best practices' and latest technology findings from the U.S., Asia and Europe are integrated into the suggested OWC Specification. This concept, together with cost-effective ways to meet the requirement, is discussed. (author)

  3. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques for the sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and the nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Sterilization chemicals discussed include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  4. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  5. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
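    As a sketch of the preprocessor-based specialization the abstract describes, the snippet below injects #define switches into a generic kernel source string before it would be handed to the platform's JIT compiler; the kernel body and option names are invented, and no OpenCL runtime is invoked.

```python
# Specialize a generic OpenCL kernel source at run time by prepending
# preprocessor definitions, mimicking -D build flags. In a real system
# the resulting string would be passed to the platform's JIT compiler.
# The kernel body and the option names are invented for illustration.
KERNEL_SRC = r"""
__kernel void saxpy(__global const float *x, __global float *y, float a) {
    int i = get_global_id(0);
#if USE_VECTOR_WIDTH > 1
    /* vectorized path chosen for hardware with wide SIMD units */
#endif
    y[i] = a * x[i] + y[i];
}
"""

def specialize(src, **options):
    """Prepend one #define per option before JIT compilation."""
    header = "".join(f"#define {k} {v}\n" for k, v in options.items())
    return header + src

# Per-platform option tables would normally be probed from the device.
gpu_src = specialize(KERNEL_SRC, USE_VECTOR_WIDTH=4, USE_LOCAL_MEM=1)
cpu_src = specialize(KERNEL_SRC, USE_VECTOR_WIDTH=1, USE_LOCAL_MEM=0)
print(gpu_src.splitlines()[0], "|", cpu_src.splitlines()[0])
```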

  6. Optimum condition determination of Rirang uranium ores grinding using ball mill

    International Nuclear Information System (INIS)

    Affandi, Kosim; Waluyo, Sugeng; Sarono, Budi; Sujono; Muhammad

    2002-01-01

    The grinding experiment on Rirang uranium ore has been carried out with the aim of finding the optimum condition of wet grinding using a ball mill to produce particle sizes of -325, -200 and -100 mesh; this material will be used as decomposition feed. The test examined the parameters of ore-to-ball weight ratio and grinding time. The test showed that the product of particle size -325 mesh achieved its optimum condition at an ore:ball weight ratio of 1:3, grinding time of 150 minutes, 60% solids, ball mill rotation speed of 60 rpm, and a grinding recovery of 93.51% at -325 mesh. The product of particle size -200 mesh achieved its optimum condition at an ore:ball weight ratio of 1:2 and grinding time of 60 minutes, with the +200 mesh fraction reground; the grinding recovery was 6.82% at particle size (-200 +250) mesh, 5.75% at (-250 +325) mesh, and 47.93% at -325 mesh. The product of particle size -100 mesh achieved its optimum condition at an ore:ball weight ratio of 1:2 and grinding time of 30 minutes, with the +100 mesh fraction reground using a mortar grinder; the grinding recovery was 30.10% at particle size (-100 +150) mesh, 12.28% at (-150 +200) mesh, 15.92% at (-200 +250) mesh, 12.44% at (-250 +325) mesh, and 29.26% at -325 mesh. The specific gravity of the Rirang uranium ore was determined to be between 4.15 and 4.55 g/cm³.

  7. Optimum Maintenance Strategies for Highway Bridges

    DEFF Research Database (Denmark)

    Frangopol, Dan M.; Thoft-Christensen, Palle; Das, Parag C.

    As bridges become older and maintenance costs become higher, transportation agencies are facing challenges related to implementation of optimal bridge management programs based on life cycle cost considerations. A reliability-based approach is necessary to find optimal solutions based on minimum expected life-cycle costs or maximum life-cycle benefits. This is because many maintenance activities can be associated with significant costs, but their effects on bridge safety can be minor. In this paper, the program of an investigation on optimum maintenance strategies for different bridge types is described. The end result of this investigation will be a general reliability-based framework to be used by the UK Highways Agency in order to plan optimal strategies for the maintenance of its bridge network so as to optimize whole-life costs.

  8. Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing

    OpenAIRE

    Rask, Ulf; Mannestig, Pontus

    2002-01-01

    In the ever-increasing pace of development, circuits and hardware are no exception. Hardware designs grow and circuits get more complex, at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run will lead to delayed time-to-market, bad publicity and a decreasing market sha...

  9. Optimum Stratification of a Skewed Population

    OpenAIRE

    D.K. Rao; M.G.M. Khan; K.G. Reddy

    2014-01-01

    The focus of this paper is to develop a technique for solving the combined problem of determining Optimum Strata Boundaries (OSB) and Optimum Sample Size (OSS) of each stratum, when the population under study is skewed and the study variable has a Pareto frequency distribution. The problem of determining the OSB is formulated as a Mathematical Programming Problem (MPP) which is then solved by the dynamic programming technique. A numerical example is presented to illustrate the compu...
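    A rough sketch of the boundary search on a discretized Pareto population, minimizing a Neyman-type objective (the sum over strata of stratum weight times stratum standard deviation) by dynamic programming over candidate cut points; the distribution parameters and grid are invented, and this is not the paper's formulation.

```python
import numpy as np

# DP stand-in for the OSB search: partition a discretized (truncated)
# Pareto population into L strata so that the Neyman-type objective
# sum_h W_h * S_h is minimized. All parameters are invented.
alpha, x_min, x_max, L = 2.0, 1.0, 20.0, 3
grid = np.linspace(x_min, x_max, 400)
pdf = alpha * x_min ** alpha / grid ** (alpha + 1)
pdf /= np.trapz(pdf, grid)             # normalize over the truncated range

def stratum_term(i, j):
    """W_h * S_h for the stratum spanning grid[i:j]."""
    x, f = grid[i:j], pdf[i:j]
    w = np.trapz(f, x)
    if w <= 0:
        return np.inf
    m = np.trapz(f * x, x) / w
    var = np.trapz(f * (x - m) ** 2, x) / w
    return w * np.sqrt(var)

n = len(grid)
cuts = list(range(20, n, 20)) + [n]    # candidate boundary indices
best = {(1, j): (stratum_term(0, j), [j]) for j in cuts}
for k in range(2, L + 1):              # DP over the number of strata
    for j in cuts:
        cands = [(best[k - 1, i][0] + stratum_term(i, j),
                  best[k - 1, i][1] + [j])
                 for i in cuts if i < j and (k - 1, i) in best]
        if cands:
            best[k, j] = min(cands)

score, idx = best[L, n]
print("boundaries:", [round(float(grid[i - 1]), 2) for i in idx[:-1]],
      "objective: %.4f" % score)
```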

  10. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Science.gov (United States)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  11. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Marcello Benedetti

    2017-11-01

    Full Text Available Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  12. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...

  13. GOSH! A roadmap for open-source science hardware

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  14. Optimum burnup of BAEC TRIGA research reactor

    International Nuclear Information System (INIS)

    Lyric, Zoairia Idris; Mahmood, Mohammad Sayem; Motalab, Mohammad Abdul; Khan, Jahirul Haque

    2013-01-01

    Highlights: ► Optimum loading scheme for the BAEC TRIGA core is out-to-in loading with 10 fuels/cycle, starting with 5 for the first reload. ► The discharge burnup ranges from 17% to 24% of U235 per fuel element for full power (3 MW) operation. ► Optimum extension of operating core life is 100 MWD per reload cycle. - Abstract: The TRIGA Mark II research reactor of BAEC (Bangladesh Atomic Energy Commission) has been operating since 1986 without any reshuffling or reloading yet. An optimum fuel burnup strategy has been investigated for the present BAEC TRIGA core, where three out-to-in loading schemes have been inspected in terms of core life extension, burnup economy and safety. In considering different schemes of fuel loading, optimization has been sought by varying only the number of fuels discharged and loaded. A cost function has been defined and evaluated based on the calculated core life and fuel load and discharge. The optimum loading scheme has been identified for the TRIGA core: outside-to-inside fuel loading with ten fuels for each cycle, starting with five fuels for the first reload. The discharge burnup has been found to range from 17% to 24% of U235 per fuel element, and the optimum extension of core operating life is 100 MWD for each loading cycle. This study will contribute to the in-core fuel management of the TRIGA reactor

  15. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists

  16. Optimum design of dual pressure heat recovery steam generator using non-dimensional parameters based on thermodynamic and thermoeconomic approaches

    International Nuclear Information System (INIS)

    Naemi, Sanaz; Saffar-Avval, Majid; Behboodi Kalhori, Sahand; Mansoori, Zohreh

    2013-01-01

    Thermodynamic and thermoeconomic analyses are carried out to obtain the optimum operating parameters of a dual pressure heat recovery steam generator (HRSG) coupled with a heavy duty gas turbine. In this regard, the thermodynamic objective function, including the exergy waste and the exergy destruction, is defined in such a way as to find the optimum pinch point, and consequently to minimize the objective function by using non-dimensional operating parameters. The results indicated that the optimum pinch point from the thermodynamic viewpoint is 2.5 °C and 2.1 °C for HRSGs with live steam at 75 bar and 90 bar, respectively. Since thermodynamic analysis is not able to consider economic factors, another objective function including annualized installation cost and annual cost of irreversibilities is proposed. To find the irreversibility cost, the electricity price and the fuel price are considered independently. The optimum pinch point from the thermoeconomic viewpoint is 20.6 °C (75 bar) and 19.2 °C (90 bar) on the basis of the electricity price, whereas according to the fuel price it is 25.4 °C and 23.7 °C. Finally, an extensive sensitivity analysis is performed to compare the optimum pinch point for different electricity and fuel prices. -- Highlights: ► Presenting thermodynamic and thermoeconomic optimization of a heat recovery steam generator. ► Defining an objective function consisting of exergy waste and exergy destruction. ► Defining an objective function including capital cost and cost of irreversibilities. ► Obtaining the optimized operating parameters of a dual pressure heat recovery boiler. ► Computing the optimum pinch point using non-dimensional operating parameters

  17. Carbon sequestration, optimum forest rotation and their environmental impact

    International Nuclear Information System (INIS)

    Kula, Erhun; Gunalay, Yavuz

    2012-01-01

    Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost–benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO₂. Consequently this finding must be considered in any carbon accounting calculations. - Highlights: ► Carbon sequestration in forestry is an environmental benefit. ► It moderates the problem of global warming. ► It prolongs the gestation period in harvesting. ► This paper uses British data in less favoured districts for growing Sitka spruce species.

  18. Carbon sequestration, optimum forest rotation and their environmental impact

    Energy Technology Data Exchange (ETDEWEB)

    Kula, Erhun, E-mail: erhun.kula@bahcesehir.edu.tr [Department of Economics, Bahcesehir University, Besiktas, Istanbul (Turkey); Gunalay, Yavuz, E-mail: yavuz.gunalay@bahcesehir.edu.tr [Department of Business Studies, Bahcesehir University, Besiktas, Istanbul (Turkey)

    2012-11-15

    Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost-benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO₂. Consequently this finding must be considered in any carbon accounting calculations. - Highlights: ► Carbon sequestration in forestry is an environmental benefit. ► It moderates the problem of global warming. ► It prolongs the gestation period in harvesting. ► This paper uses British data in less favoured districts for growing Sitka spruce species.

  19. Developed Hybrid Model for Propylene Polymerisation at Optimum Reaction Conditions

    Directory of Open Access Journals (Sweden)

    Mohammad Jakir Hossain Khan

    2016-02-01

    Full Text Available A statistical model combined with the CFD (computational fluid dynamics) method was used to explain the detailed phenomena of the process parameters, and a series of experiments were carried out for propylene polymerisation by varying the feed gas composition, reaction initiation temperature, and system pressure in a fluidised bed catalytic reactor. The propylene polymerisation rate per pass was considered the response for the analysis. Response surface methodology (RSM), with a full factorial central composite experimental design, was applied to develop the model. In this study, analysis of variance (ANOVA) indicated an acceptable value for the coefficient of determination and a suitable estimation of a second-order regression model. For better justification, results were also described through a three-dimensional (3D) response surface and a related two-dimensional (2D) contour plot. These 3D and 2D response analyses provided significant and easy-to-understand findings on the effect of all the considered process variables on the expected findings. To diagnose the model adequacy, the mathematical relationship between the process variables and the extent of polymer conversion was established through the combination of CFD with statistical tools. All the tests showed that the model is an excellent fit with the experimental validation. The maximum extent of polymer conversion per pass was 5.98% at the set time period and with consistent catalyst and co-catalyst feed rates. The optimum conditions for maximum polymerisation were found at a reaction temperature (RT) of 75 °C, system pressure (SP) of 25 bar, and 75% monomer concentration (MC). The hydrogen percentage was kept fixed at all times. The coefficient of correlation for reaction temperature, system pressure, and monomer concentration ratio was found to be 0.932. Thus, the experimental results and model-predicted values were a reliable fit at optimum process conditions. Detailed and adaptable CFD results were capable
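    A minimal sketch of the RSM step: fit a full second-order regression model to invented polymerisation runs and locate the fitted optimum on a grid; the response function, noise level, and variable ranges are made up, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the designed experiment: conversion (%) as a
# noisy quadratic of temperature (C), pressure (bar) and monomer (%).
X = rng.uniform([60, 15, 50], [90, 35, 90], size=(30, 3))
true = lambda t, p, m: (6 - 0.002 * (t - 75) ** 2 - 0.01 * (p - 25) ** 2
                        - 0.001 * (m - 75) ** 2)
y = true(*X.T) + rng.normal(0, 0.05, len(X))

def design(M):
    t, p, m = M.T                      # full second-order model terms
    return np.column_stack([np.ones(len(M)), t, p, m,
                            t * p, t * m, p * m, t ** 2, p ** 2, m ** 2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Locate the fitted optimum on a coarse grid (stand-in for RSM analysis).
ts, ps, ms = np.meshgrid(np.linspace(60, 90, 31), np.linspace(15, 35, 21),
                         np.linspace(50, 90, 41), indexing="ij")
G = np.column_stack([a.ravel() for a in (ts, ps, ms)])
pred = design(G) @ beta
i = int(np.argmax(pred))
print("predicted optimum: RT=%.0f C, SP=%.0f bar, MC=%.0f%%, conversion=%.2f%%"
      % (G[i, 0], G[i, 1], G[i, 2], pred[i]))
```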

  20. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error masking characteristics and error (stemming from soft errors, aging, and process variations) mitigations potential at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  1. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
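    A toy illustration of the binding flow under stated assumptions: fixed byte strings stand in for the internal and external PUF responses (real responses are noisy and need error correction, which is omitted), and an HMAC challenge-response stands in for the cryptographic unit.

```python
import hmac, hashlib, os

# Toy model of the binding flow: internal (device) and external
# (structure) PUF values are combined into a binding PUF value, which
# then keys a challenge-response authentication. Values are invented.
internal_puf = bytes.fromhex("6d9e1a42" * 4)   # device fingerprint (assumed)
external_puf = bytes.fromhex("b37cf055" * 4)   # structure fingerprint (assumed)

binding_puf = bytes(a ^ b for a, b in zip(internal_puf, external_puf))

def respond(challenge: bytes) -> bytes:
    """Device side: prove knowledge of the binding value."""
    return hmac.new(binding_puf, challenge, hashlib.sha256).digest()

# Challenger side: verify against the enrolled binding value.
challenge = os.urandom(16)
expected = hmac.new(binding_puf, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(respond(challenge), expected)
print("binding authenticated")
```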

  2. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  3. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  4. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester

  5. Optimum filters for narrow-band frequency modulation.

    Science.gov (United States)

    Shelton, R. D.

    1972-01-01

    The results of a computer search for the optimum type of bandpass filter for low-index angle-modulated signals are reported. The bandpass filters are discussed in terms of their low-pass prototypes. Only filter functions with constant numerators are considered. The pole locations for the optimum filters of several cases are shown in a table. The results are fairly independent of modulation index and bandwidth.

  6. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications for KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems, focusing on the microprocessor and communication interface, and repeated this for analog systems, focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of the KNICS.

  7. Optimum material gradient composition for the functionally graded ...

    African Journals Online (AJOL)

    This study investigates the relation between the material gradient properties and the optimum sensing/actuation design of the functionally graded piezoelectric beams. Three-dimensional (3D) finite element analysis has been employed for the prediction of an optimum composition profile in these types of sensors and ...

  8. Generic Advertising Optimum Budget for Iran’s Milk Industry

    Directory of Open Access Journals (Sweden)

    H. Shahbazi

    2016-05-01

    Full Text Available Introduction: One of the main targets of planners, decision makers and governments is improving public health through the promotion and production of suitable and healthy food. One of the basic commodities that plays an important role in meeting human food requirements is milk. Accordingly, part of the government and producer health budget is allocated to promoting milk consumption through generic advertising. The greater the effect of the advertising budget on profitability, the more willing producers will be to spend on advertising. Determining the optimal generic advertising budget is an important problem in managerial decision making for producing firms, as it can increase consumption and profit and reduce budget waste and non-optimality. Materials and Methods: In this study, the optimal generic advertising budget intensity index (the advertising budget share of production cost) was estimated under two different scenarios by using an equilibrium replacement model, in which producer surplus is maximized with respect to generic advertising at the retail level. Depending on whether the market has farm and processing levels before retail and whether there is trade at the farm and retail levels, we present different models; the fixed versus variable proportions hypothesis is a further distinction. In total, eight relations are presented for determining the optimum generic advertising budget for milk. We use data from several sources, such as previous studies, national (Iran Statistics Center) and international (FAO) official data, and our own estimations. Because there are several estimates in previous studies, we identify some scenarios (within two general scenarios) for the calculation of the optimum generic advertising budget for milk. Results and Discussion: Estimation of the optimum generic advertising budget for milk in scenario 1 shows that in the case of one market level, fixed supplies and no trade, the optimum budget is 0.4672539 percent. In case of one market level and no trade, optimum

  9. Software-Controlled Dynamically Swappable Hardware Design in Partially Reconfigurable Systems

    Directory of Open Access Journals (Sweden)

    Huang Chun-Hsian

    2008-01-01

    Full Text Available Abstract We propose two basic wrapper designs and an enhanced wrapper design for arbitrary digital hardware circuit designs such that they can be enhanced with the capability for dynamic swapping controlled by software. A hardware design with either of the proposed wrappers can thus be swapped out of the partially reconfigurable logic at runtime in some intermediate state of computation and then swapped in when required to continue from that state. The context data is saved to a buffer in the wrapper at interruptible states, and then the wrapper takes care of saving the hardware context to communication memory through a peripheral bus, and later restoring the hardware context after the design is swapped in. The overheads of the hardware standardization and the wrapper in terms of additional reconfigurable logic resources and the time for context switching are small and generally acceptable. With the capability for dynamic swapping, high priority hardware tasks can interrupt low-priority tasks in real-time embedded systems so that the utilization of hardware space per unit time is increased.
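    A toy model of the wrapper protocol, assuming invented state and context fields: a task may be swapped out only at an interruptible state, where its context is saved to communication memory and later restored after re-placement.

```python
# Toy model of the swappable-wrapper protocol: a hardware task can only
# be swapped out at interruptible states, where its context is copied to
# communication memory and later restored. States and fields are invented.
comm_memory = {}

class WrappedTask:
    def __init__(self, name):
        self.name, self.counter, self.interruptible = name, 0, True

    def step(self):
        self.counter += 1                             # one compute cycle
        self.interruptible = (self.counter % 4 == 0)  # checkpoint states

    def swap_out(self):
        assert self.interruptible, "must reach an interruptible state first"
        comm_memory[self.name] = {"counter": self.counter}

    def swap_in(self):
        self.counter = comm_memory[self.name]["counter"]

low = WrappedTask("low_priority")
for _ in range(8):
    low.step()
low.swap_out()            # high-priority task takes the reconfigurable slot
low2 = WrappedTask("low_priority")
low2.swap_in()            # resume from the saved context after re-placement
print("resumed at cycle", low2.counter)
```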

  10. The optimum decision rules for the oddity task

    NARCIS (Netherlands)

    Versfeld, N.J.; Dai, H.; Green, D.M.

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur

  11. On Optimum Stratification

    OpenAIRE

    M. G. M. Khan; V. D. Prasad; D. K. Rao

    2014-01-01

    In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using the dynamic programming technique.

  12. Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.

    Science.gov (United States)

    Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C

    2016-04-01

    As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7

  13. Optimal Reinsurance Design for Pareto Optimum: From the Perspective of Multiple Reinsurers

    Directory of Open Access Journals (Sweden)

    Xing Rong

    2016-01-01

    Full Text Available This paper investigates optimal reinsurance strategies for an insurer which cedes the insured risk to multiple reinsurers. Assuming that the insurer and every reinsurer apply coherent risk measures, we find the necessary and sufficient conditions for the reinsurance market to achieve Pareto optimum; that is, every ceded-loss function and the retention function are in the form of “multiple layers reinsurance.”

  14. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  15. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  16. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  17. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document, references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  18. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    Science.gov (United States)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.
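    A hedged sketch of the approach: a small genetic algorithm selects four actuator locations to maximize roll moment while penalizing pitch/yaw coupling; the per-location moment contributions are random stand-ins for the aerodynamic data, and the operators are generic, not the paper's implementation.

```python
import random

random.seed(0)

# Toy GA for picking 4 actuator locations that maximize roll moment while
# penalizing pitch/yaw coupling. Per-location moment contributions are
# invented stand-ins for the aerodynamic database used in the paper.
N_LOC, N_PICK, POP, GENS = 20, 4, 40, 60
roll  = [random.uniform(0.2, 1.0) for _ in range(N_LOC)]
pitch = [random.uniform(-0.3, 0.3) for _ in range(N_LOC)]
yaw   = [random.uniform(-0.3, 0.3) for _ in range(N_LOC)]

def fitness(genome):
    r = sum(roll[i] for i in genome)
    coupling = (abs(sum(pitch[i] for i in genome))
                + abs(sum(yaw[i] for i in genome)))
    return r - 5.0 * coupling          # reward pure roll, punish coupling

def crossover(a, b):
    return set(random.sample(sorted(set(a) | set(b)), N_PICK))

def mutate(g):
    g = set(g)
    if random.random() < 0.3:          # drop one location, add a random one
        g.discard(random.choice(sorted(g)))
        while len(g) < N_PICK:
            g.add(random.randrange(N_LOC))
    return g

pop = [set(random.sample(range(N_LOC), N_PICK)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("locations:", sorted(best), "fitness: %.3f" % fitness(best))
```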

  19. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation

  20. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  1. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  2. Is a 4-bit synaptic weight resolution enough? - constraints on enabling spike-timing dependent plasticity in neuromorphic hardware.

    Science.gov (United States)

    Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2012-01-01

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
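    A minimal sketch of the discretization in question, assuming invented sizes: weights are rounded to 16 evenly spaced levels (4 bits) and the effect on a summed synaptic drive is measured.

```python
import numpy as np

rng = np.random.default_rng(42)

# Effect of discretizing synaptic weights to 4 bits (16 levels): compare
# a dense-precision weight vector with its quantized version on a random
# input projection. Sizes and distributions are arbitrary stand-ins.
BITS = 4
LEVELS = 2 ** BITS

w = rng.uniform(0.0, 1.0, size=1000)            # "floating point" weights
step = 1.0 / (LEVELS - 1)
w_q = np.round(w / step) * step                 # 16 evenly spaced levels

x = rng.poisson(2.0, size=w.shape)              # toy presynaptic activity
drive = x @ w
drive_q = x @ w_q
print("quantization levels:", LEVELS)
print("relative error of summed drive: %.4f%%"
      % (100 * abs(drive - drive_q) / drive))
```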

  3. Optimum mobility’ facelift. Part 2 – the technique

    OpenAIRE

    Fanous, Nabil; Karsan, Naznin; Zakhary, Kristina; Tawile, Carolyne

    2006-01-01

    In the first of this two-part article on the ‘optimum mobility’ facelift, facial tissue mobility was analyzed, and three theories or mechanisms emerged: ‘intrinsic mobility’, ‘surgically induced mobility’ and ‘optimum mobility points’.

  4. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  5. Speed challenge: a case for hardware implementation in soft-computing

    Science.gov (United States)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been the creation of a niche that imparts an orders-of-magnitude speed advantage through implementation in parallel processing hardware, with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real-time response capabilities.

  6. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  7. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  8. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  9. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient global sequence alignment algorithm and architecture are presented that accelerate the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
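
The record describes hardware acceleration of both the forward scan and the traceback; as a software reference for what those two phases compute, here is a minimal Needleman-Wunsch global alignment with explicit traceback (the scoring values are arbitrary illustrative choices, not those of the paper):

```python
def global_align(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment: forward scan plus traceback."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    move = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0], move[i][0] = i * gap, 'U'        # up = gap in b
    for j in range(1, m + 1):
        score[0][j], move[0][j] = j * gap, 'L'        # left = gap in a
    for i in range(1, n + 1):                         # forward scan
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            up, left = score[i-1][j] + gap, score[i][j-1] + gap
            score[i][j], move[i][j] = max((diag, 'D'), (up, 'U'), (left, 'L'))
    # Traceback from the bottom-right corner.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        d = move[i][j]
        if d == 'D':
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i - 1, j - 1
        elif d == 'U':
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), score[n][m]

print(global_align("GATTACA", "GCATGCA"))
```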

  10. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  11. Tax Efficiency vs. Tax Equity – Points of View regarding Tax Optimum

    Directory of Open Access Journals (Sweden)

    Stela Aurelia Toader

    2011-10-01

    Full Text Available Objectives. Starting from the idea that tax equity requirements, administration costs and the tendency towards tax evasion determine the design of tax systems, it is important to identify a satisfactory efficiency/equity trade-off in order to build a tax system as close to optimum requirements as possible. Prior Work Previous studies proved that an optimum tax system is one through which a level of tax revenues is collected that satisfies budgetary demands, while losing only a minimum ‘amount’ of welfare. To what degree does the Romanian tax system meet these requirements? Approach We envisage analyzing the possibilities of improving the Romanian tax system so that it comes as near as possible to optimum requirements. Results We conclude that the fiscal system admits of important improvements where tax equity is concerned, resulting in a higher degree of voluntary compliance in tax payment and, implicitly, a higher degree of tax efficiency. Implications Knowing to what extent one can act in the direction of finding that satisfactory efficiency/equity trade-off makes it possible to identify the blueprint of a tax system in which the loss of welfare is kept to a minimum. Value For the Romanian institutions empowered to impose taxes, knowledge of the possibilities of making the tax system more efficient can be important when aiming to reduce the level of the evasion phenomenon.

  12. Optimum energy levels and offsets for organic donor/acceptor binary photovoltaic materials and solar cells

    International Nuclear Information System (INIS)

    Sun, S.-S.

    2005-01-01

    Optimum frontier orbital energy levels and offsets of an organic donor/acceptor binary type photovoltaic material have been analyzed using classic Marcus electron transfer theory in order to achieve the most efficient photo-induced charge separation. This study reveals that an exciton quenching parameter (EQP) yields one optimum donor/acceptor frontier orbital energy offset, equal to the sum of the exciton binding energy and the charge separation reorganization energy, where the photo-generated excitons are converted into charges most efficiently. A recombination quenching parameter (RQP) yields a second optimum donor/acceptor energy offset where the ratio of the charge separation rate constant over the charge recombination rate constant becomes largest. It is desirable that the maximum RQP be coincident with, or close to, the maximum EQP. A third energy offset is also identified where charge recombination becomes most severe. It is desirable that this most severe charge recombination offset lie far away from the maximum-EQP offset. These findings are critical for evaluating and fine-tuning the frontier orbital energy levels of a donor/acceptor pair in order to realize high-efficiency organic photovoltaic materials.
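
For context, the analysis builds on the standard Marcus rate expression, reproduced below in its textbook form; the symbols H_DA (electronic coupling), ΔG⁰ (driving force) and λ (reorganization energy) are generic Marcus-theory notation, not taken from the paper itself.

```latex
% Non-adiabatic Marcus rate for donor-to-acceptor electron transfer:
\[
k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,\lvert H_{DA}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{\left(\Delta G^{0}+\lambda\right)^{2}}{4\lambda k_{B}T}\right]
\]
% The rate peaks when -\Delta G^{0} = \lambda. Writing the driving force for
% exciton dissociation as the orbital offset minus the exciton binding energy
% E_B recovers the optimum offset stated in the abstract:
\[
\Delta E^{\mathrm{opt}} = E_{B} + \lambda
\]
```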

  13. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  14. Design optimum frac jobs using virtual intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Shahab Mohaghegh; Andrei Popa; Sam Ameri [West Virginia University, Morgantown, WV (United States). Petroleum and Natural Gas Engineering

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process is fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters such as fluid and proppant types have discrete possible choices. Other parameters such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterizations and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy that criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These
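
A minimal sketch of the idea follows, with a hypothetical surrogate function standing in for the trained neural-network ensemble: a simple genetic algorithm searches the mixed discrete/continuous design space for a target propped fracture length. All parameter names, ranges and the surrogate itself are illustrative inventions, not the paper's model.

```python
import random

# Hypothetical stand-in for the trained neural-network ensemble: maps a
# candidate design to a predicted propped fracture length in feet.
def surrogate_frac_length(design):
    fluid, proppant, fluid_vol, prop_vol, rate = design
    return (0.004 * fluid_vol + 0.0006 * prop_vol + 1.5 * rate
            + {'gel': 30.0, 'slickwater': 10.0}[fluid]
            + {'sand': 5.0, 'ceramic': 20.0}[proppant])

def random_design():
    return (random.choice(['gel', 'slickwater']),   # discrete parameters
            random.choice(['sand', 'ceramic']),
            random.uniform(1e4, 1e5),               # fluid volume, gal
            random.uniform(5e4, 5e5),               # proppant volume, lb
            random.uniform(10.0, 60.0))             # injection rate, bpm

def fitness(design, target=500.0):
    # Engineer's criterion: hit the target propped frac length.
    return -abs(surrogate_frac_length(design) - target)

def crossover(a, b):
    return tuple(random.choice(genes) for genes in zip(a, b))

def mutate(design, p=0.2):
    fresh = random_design()
    return tuple(f if random.random() < p else g for g, f in zip(design, fresh))

random.seed(42)
pop = [random_design() for _ in range(50)]
for _ in range(200):                                # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                              # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]

best = max(pop, key=fitness)
print(best, surrogate_frac_length(best))
```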

  15. Design optimum frac jobs using virtual intelligence techniques

    Science.gov (United States)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process is fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters such as fluid and proppant types have discrete possible choices. Other parameters such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterizations and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy that criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These

  16. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in such domains as surveillance or smart environments. This paper presents the development of a multiple camera setup with a joint view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, in both software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by a factor of up to 100 compared to the software ORB.

  17. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    Science.gov (United States)

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.
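
A loose software illustration of the synaptic time-multiplexing idea is sketched below: a dense logical connectivity is served by a small pool of physical switches by cycling through time slots. The slot count, the greedy packing policy and the random network are invented for illustration and are not the paper's compiler.

```python
import numpy as np

def compile_to_slots(connections, n_physical):
    """Greedy 'compiler': pack logical synapses into time slots so that no
    slot uses more than n_physical physical switches at once."""
    return [connections[k:k + n_physical]
            for k in range(0, len(connections), n_physical)]

rng = np.random.default_rng(0)
n_neurons = 8
# Logical synapse list (pre, post, weight) for a random network.
connections = [(i, j, rng.uniform(-1, 1))
               for i in range(n_neurons) for j in range(n_neurons)
               if i != j and rng.random() < 0.3]

schedule = compile_to_slots(connections, n_physical=4)
print(f"{len(connections)} logical synapses -> {len(schedule)} time slots")
```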

  18. Optimum target thickness for polarimeters

    International Nuclear Information System (INIS)

    Sitnik, I.M.

    2003-01-01

    Polarimeters with thick targets are a tool to measure proton polarization, but the question of the optimum target thickness is still the subject of discussion. An attempt to calculate the most common parameters concerning this problem, in the few-GeV region, is made.

  19. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for the transmission of the General Machine Timing required by the CERN accelerators. In addition, the paper shows the various jumper options to customise the card, which is compliant with the VMEbus standard.

  20. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the specific processor on-the-fly into the FPGA and transfer execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high-quality audio, and an energy-efficient solution.

  1. Performance characteristics of aerodynamically optimum turbines for wind energy generators

    Science.gov (United States)

    Rohrbach, C.; Worobel, R.

    1975-01-01

    This paper presents a brief discussion of the aerodynamic methodology for wind energy generator turbines, an approach to the design of aerodynamically optimum wind turbines covering a broad range of design parameters, some insight on the effect on performance of nonoptimum blade shapes which may represent lower fabrication costs, the annual wind turbine energy for a family of optimum wind turbines, and areas of needed research. On the basis of the investigation, it is concluded that optimum wind turbines show high performance over a wide range of design velocity ratios; that structural requirements impose constraints on blade geometry; that variable pitch wind turbines provide excellent power regulation and that annual energy output is insensitive to design rpm and solidity of optimum wind turbines.

  2. Optimum strategies for nuclear energy system development (method of synthesis)

    International Nuclear Information System (INIS)

    Belenky, V.Z.

    1983-01-01

    The problem of optimum long-term development of a nuclear energy system is considered. The optimum strategies (i.e. those with minimum total uranium consumption) for the transition phase leading to a stationary regime of development are found. For this purpose the author has developed a new method for solving linear optimal-control problems that can include jumps in the trajectories. The method makes it possible to carry out a complete synthesis of optimum strategies. A key characteristic of the problem is the productivity function of the nuclear energy system, which connects the technological system parameters with its growth rate. There are only two types of optimum strategies, according to whether the productivity function is increasing or decreasing. Both cases are illustrated with numerical examples. (orig.)

  3. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
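
The abstract mentions that tracking is stabilized with a Kalman filter; a minimal constant-velocity Kalman filter for one tracked coordinate might look like the following (the frame rate, noise covariances and 1-D state are assumptions for illustration, not the paper's design):

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked coordinate.
dt = 0.04                                    # frame period, s (assumed 25 fps)
F = np.array([[1, dt], [0, 1]])              # state transition (pos, vel)
H = np.array([[1, 0]])                       # we only measure position
Q = 1e-3 * np.eye(2)                         # process noise covariance
R = np.array([[0.5]])                        # measurement noise covariance

x = np.zeros((2, 1))                         # initial state estimate
P = np.eye(2)                                # initial estimate covariance

def kalman_step(z):
    global x, P
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new position measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (np.array([[z]]) - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    return x[0, 0]                           # smoothed position estimate

rng = np.random.default_rng(1)
truth = np.linspace(0, 10, 250)              # object moving at constant speed
for z in truth + rng.normal(0, 0.7, truth.size):
    smoothed = kalman_step(z)
print(f"final estimate {smoothed:.2f} vs truth {truth[-1]:.2f}")
```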

  4. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
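
A GFSR generator produces each new word as the exclusive-OR of two earlier words, x[n] = x[n−p] XOR x[n−q]. A minimal software model with the classic R250 taps (p = 250, q = 103) is sketched below; the naive seeding here ignores the linear-independence conditions a production generator would enforce, and the paper's hardware is not reproduced.

```python
import random

class GFSR:
    """Generalized feedback shift register: x[n] = x[n-p] XOR x[n-q]."""
    def __init__(self, p=250, q=103, bits=16, seed=12345):
        rng = random.Random(seed)
        self.p, self.q, self.mask = p, q, (1 << bits) - 1
        self.state = [rng.getrandbits(bits) for _ in range(p)]  # naive seeding
        self.idx = 0

    def next(self):
        i = self.idx
        # state[i] currently holds x[n-p]; x[n-q] sits q steps behind n.
        new = self.state[i] ^ self.state[(i + self.p - self.q) % self.p]
        self.state[i] = new
        self.idx = (i + 1) % self.p
        return new & self.mask

g = GFSR()
print([g.next() for _ in range(5)])
```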

  5. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
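
For a flavour of chaos-based stream encryption, the sketch below XORs data with a keystream derived from the discretized logistic map. This is a simpler discrete-map analogue of the thesis's continuous chaotic generators; the map, its parameter and the byte quantization are illustrative assumptions, not the thesis's design.

```python
import numpy as np

def logistic_keystream(x0, r=3.99, n=0, skip=200):
    """Byte keystream from the logistic map x <- r*x*(1-x); chaotic near r=3.99."""
    x = x0
    for _ in range(skip):                 # discard the transient
        x = r * x * (1 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF      # quantize the state to one byte
    return out

def xor_cipher(data: bytes, key_x0: float) -> bytes:
    ks = logistic_keystream(key_x0, n=len(data))
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

msg = b"pixel data ..."
enc = xor_cipher(msg, key_x0=0.61803)
dec = xor_cipher(enc, key_x0=0.61803)     # XOR stream cipher is symmetric
assert dec == msg
```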

  6. Development of reconfigurable hardware for asymmetric cryptography

    Directory of Open Access Journals (Sweden)

    Otávio Souza Martins Gomes

    2015-01-01

    Full Text Available This article presents partial results of the development of a reconfigurable hardware interface for asymmetric cryptography that allows the secure exchange of data. Reconfigurable hardware makes it possible to develop this type of device with security and flexibility, and to change design features quickly and at low cost. Keywords: Cryptography. Hardware. ElGamal. FPGA. Security. Development of an asymmetric cryptography reconfigurable hardware ABSTRACT This paper presents some conclusions and choices about the development of an asymmetric cryptography reconfigurable hardware interface to allow safe data communication. Reconfigurable hardware allows the development of this kind of device with safety and flexibility, and offers the possibility to change some features at low cost and in a fast way. Keywords: Cryptography. Hardware. ElGamal. FPGAs. Security.
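
Since the design is built around ElGamal, a toy version of the scheme over the textbook group (p = 23, g = 5) is sketched below. Real deployments use groups of 2048 bits or more; this illustrates only the arithmetic, not the hardware interface of the paper.

```python
import random

# Toy ElGamal over Z_p with the classic textbook parameters p = 23, g = 5.
p, g = 23, 5

def keygen():
    x = random.randrange(2, p - 1)        # private key
    return x, pow(g, x, p)                # (private x, public y = g^x mod p)

def encrypt(m, y):
    assert 0 < m < p
    k = random.randrange(2, p - 1)        # fresh ephemeral randomness
    return pow(g, k, p), (m * pow(y, k, p)) % p   # ciphertext (c1, c2)

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                     # shared secret c1^x
    return (c2 * pow(s, p - 2, p)) % p    # multiply by s^-1 (Fermat inverse)

x, y = keygen()
c1, c2 = encrypt(7, y)
assert decrypt(c1, c2, x) == 7
```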

  7. MRI monitoring of focused ultrasound sonications near metallic hardware.

    Science.gov (United States)

    Weber, Hans; Ghanouni, Pejman; Pascal-Tenorio, Aurea; Pauly, Kim Butts; Hargreaves, Brian A

    2018-07-01

    To explore the temperature-induced signal change in two-dimensional multi-spectral imaging (2DMSI) for fast thermometry near metallic hardware, to enable MR-guided focused ultrasound surgery (MRgFUS) in patients with implanted metallic hardware. 2DMSI was optimized for temperature sensitivity and applied to monitor focused ultrasound surgery (FUS) sonications near metallic hardware in phantoms and ex vivo porcine muscle tissue. Further, we evaluated its temperature sensitivity for in vivo muscle in patients without metallic hardware. In addition, we performed a comparison of temperature sensitivity between 2DMSI and conventional proton-resonance-frequency-shift (PRFS) thermometry at different distances from metal devices and different signal-to-noise ratios (SNR). 2DMSI thermometry enabled visualization of short ultrasound sonications near metallic hardware. Calibration using in vivo muscle yielded a constant temperature sensitivity for temperatures below 43 °C. For an off-resonance coverage of ± 6 kHz, we achieved a temperature sensitivity of 1.45%/K, resulting in a minimum detectable temperature change of ∼2.5 K for an SNR of 100 with a temporal resolution of 6 s per frame. The proposed 2DMSI thermometry has the potential to allow MR-guided FUS treatments of patients with metallic hardware and therefore expand its reach to a larger patient population. Magn Reson Med 80:259-271, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Parametric Investigation of Optimum Thermal Insulation Thickness for External Walls

    Directory of Open Access Journals (Sweden)

    Omer Kaynakli

    2011-06-01

    Full Text Available Numerous studies have estimated the optimum thickness of thermal insulation materials used in building walls for different climate conditions. The economic parameters (inflation rate, discount rate, lifetime and energy costs, the heating/cooling loads of the building, the wall structure and the properties of the insulation material all affect the optimum insulation thickness. This study focused on the investigation of these parameters that affect the optimum thermal insulation thickness for building walls. To determine the optimum thickness and payback period, an economic model based on life-cycle cost analysis was used. As a result, the optimum thermal insulation thickness increased with increasing the heating and cooling energy requirements, the lifetime of the building, the inflation rate, energy costs and thermal conductivity of insulation. However, the thickness decreased with increasing the discount rate, the insulation material cost, the total wall resistance, the coefficient of performance (COP of the cooling system and the solar radiation incident on a wall. In addition, the effects of these parameters on the total life-cycle cost, payback periods and energy savings were also investigated.
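
The life-cycle cost analysis described here can be sketched numerically: the total cost per unit wall area is insulation cost plus discounted energy cost, and the optimum thickness minimizes it. All parameter values below (climate, prices, wall resistance, present-worth factor) are assumed figures for illustration, not the study's data.

```python
import numpy as np

# Assumed (hypothetical) parameters for a heating-dominated climate.
DD   = 2700 * 24       # annual heating degree-hours, K*h
k    = 0.04            # insulation thermal conductivity, W/(m*K)
Ci   = 120.0           # insulation cost, $/m^3
Ce   = 0.09 / 3.6e6    # energy cost, $/J (0.09 $/kWh)
eta  = 0.90            # heating system efficiency
Rwt  = 0.6             # resistance of the uninsulated wall, m^2*K/W
PWF  = 12.0            # present-worth factor (lifetime, inflation, discount)

x = np.linspace(0.0, 0.25, 500)                 # candidate thicknesses, m
R_total = Rwt + x / k                           # wall resistance with insulation
annual_heat = DD * 3600.0 / R_total             # heat loss, J per m^2 per year
cost = Ci * x + PWF * Ce * annual_heat / eta    # life-cycle cost, $/m^2

i = np.argmin(cost)
print(f"optimum thickness ~ {x[i]*100:.1f} cm, LCC = {cost[i]:.2f} $/m^2")
```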

  9. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  10. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  11. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high-resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  12. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  13. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work, a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS) in a nuclear power plant, considering four software operational profiles. The results show that the software masking effect on hardware faults should be properly taken into account to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile.
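
A minimal software sketch of the two fault models used (single bit-flip and stuck-at-x) and of a masking-rate estimate follows; the 32-bit register width and the toy "sanity check" standing in for the application software are illustrative assumptions.

```python
import random

def flip_bit(word, bit, width=32):
    """Single bit-flip fault model for a register of the given width."""
    return (word ^ (1 << bit)) & ((1 << width) - 1)

def stuck_at(word, bit, value):
    """Stuck-at-0 / stuck-at-1 fault model for a single bit."""
    return word | (1 << bit) if value else word & ~(1 << bit)

# Toy "application": the fault is masked if the program's observable
# behaviour (here a plausibility check) is unchanged by the injection.
def software_masks(word):
    return (word & 0xFF) < 200          # hypothetical plausibility check

random.seed(0)
trials, masked = 10_000, 0
for _ in range(trials):
    good = random.getrandbits(32)
    faulty = flip_bit(good, random.randrange(32))
    masked += software_masks(faulty) == software_masks(good)
print(f"masking rate: {masked / trials:.2%}")
```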

  14. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  15. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  16. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  17. Event-driven processing for hardware-efficient neural spike sorting

    Science.gov (United States)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods, whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
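
A level-crossing encoder emits an event only when the input moves by a fixed step delta, so the data rate tracks signal activity rather than the clock. A minimal sketch is below; the toy spike waveform is invented, and the ~7-bit step merely echoes the abstract's resolution range.

```python
import numpy as np

def level_crossing_encode(signal, delta):
    """Emit (index, direction) events whenever the signal moves by >= delta."""
    events, ref = [], signal[0]
    for i, s in enumerate(signal):
        while s >= ref + delta:          # upward crossings
            ref += delta
            events.append((i, +1))
        while s <= ref - delta:          # downward crossings
            ref -= delta
            events.append((i, -1))
    return events

t = np.linspace(0, 1, 2000)
spike = np.exp(-((t - 0.5) / 0.01) ** 2)                # toy extracellular spike
events = level_crossing_encode(spike, delta=1 / 2**7)   # ~7-bit amplitude step
print(f"{len(events)} events instead of {t.size} uniform samples")
```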

  18. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

    Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation-where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)
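
A skeletal picture of the HiL idea, under invented numbers: the vehicle and drive cycle are simulated in software while the fuel cell is the real component in the loop. Here the "hardware" is a stub function; in a real rig it would be I/O to the physical stack.

```python
import random

def read_fuel_cell_voltage(current_a):
    # Stub for the hardware-in-the-loop component (volts per cell).
    return 0.9 - 0.002 * current_a + random.gauss(0, 0.005)

def drive_cycle_power(t):
    return 20e3 + 15e3 * (t % 60 > 30)        # toy square-wave demand, watts

dt, n_cells = 0.1, 400
for step in range(100):
    t = step * dt
    p_demand = drive_cycle_power(t)           # simulated vehicle side
    i_cmd = p_demand / (n_cells * 0.7)        # crude current command
    v_cell = read_fuel_cell_voltage(i_cmd)    # the real hardware in the loop
    p_actual = v_cell * n_cells * i_cmd
print(f"t={t:.1f}s demand={p_demand/1e3:.1f}kW delivered={p_actual/1e3:.1f}kW")
```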

  19. Multidisciplinary Aerodynamic Design of a Rotor Blade for an Optimum Rotor Speed Helicopter

    Directory of Open Access Journals (Sweden)

    Jiayi Xie

    2017-06-01

    Full Text Available The aerodynamic design of rotor blades is challenging, and is crucial for the development of helicopter technology. Previous aerodynamic optimizations that focused only on limited design points find it difficult to balance flight performance across the entire flight envelope. This study develops a global optimum envelope (GOE method for determining blade parameters—blade twist, taper ratio, tip sweep—for optimum rotor speed helicopters (ORS-helicopters, balancing performance improvements in hover and various freestream velocities. The GOE method implements aerodynamic blade design by a bi-level optimization, composed of a global optimization step and a secondary optimization step. Power loss as a measure of rotor performance is chosen as the objective function, referred to as direct power loss (DPL in this study. A rotorcraft comprehensive code for trim simulation with a prescribed wake method is developed. With the application of the GOE method, a DPL reduction of as high as 16.7% can be achieved in hover, and 24% at high freestream velocity.

  20. A study on optimum conditions for reducing Polonium-210 ion in electrolysis process

    International Nuclear Information System (INIS)

    Yii Mei Wo

    2006-01-01

    Polonium-210 is one of the most important radionuclides to study when investigating radioactive contaminants in marine life. It is usually self-deposited on a pure silver foil and counted using an Alpha Spectrometry System. However, using pure silver foil involves high cost. Therefore, a study was conducted on the suitability of using a stainless steel disc to deposit polonium-210 by electrolysis, and on the optimum conditions for such a process. This was carried out using a pure polonium-210 standard solution, and the prepared disc was counted using a Zinc Sulphide Counter. Results show that the reduction of polonium ions on a stainless steel disc is feasible, but the efficiency of the process is only around 70 percent. The studies also show that, at a constant current of 1.1 ampere and a cathode-to-anode distance of 8 mm, the optimum conditions for reducing polonium ions were pH 2.2-2.3 with an electrolysis time of 5 hours. (Author)

  1. Investigation of earthquake factor for optimum tuned mass dampers

    Science.gov (United States)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. An optimization strategy was carried out using the Harmony Search (HS) algorithm. HS is a metaheuristic method inspired by the nature of musical performance. In addition, the results of the optimization objective are compared with those of another documented method, and the inferior results are eliminated; in this way, the best optimum results are obtained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
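
A minimal Harmony Search loop over two TMD parameters (frequency ratio and damping ratio) might look like the following; the objective is a smooth toy surface standing in for the real structural-response computation, and the HMS/HMCR/PAR/bandwidth values are typical illustrative choices, not the paper's settings.

```python
import random

# Hypothetical stand-in for the structural analysis: peak response of an
# SDOF structure with a TMD of frequency ratio f and damping ratio z.
def peak_response(f, z):
    return (f - 0.95) ** 2 + 4 * (z - 0.10) ** 2 + 1.0   # toy surface

bounds = {'f': (0.8, 1.2), 'z': (0.01, 0.4)}
HMS, HMCR, PAR, BW, iters = 20, 0.9, 0.3, 0.02, 2000

def random_harmony():
    return {k: random.uniform(*b) for k, b in bounds.items()}

memory = sorted((random_harmony() for _ in range(HMS)),
                key=lambda h: peak_response(h['f'], h['z']))

for _ in range(iters):
    new = {}
    for k, (lo, hi) in bounds.items():
        if random.random() < HMCR:                     # memory consideration
            new[k] = random.choice(memory)[k]
            if random.random() < PAR:                  # pitch adjustment
                new[k] = min(hi, max(lo, new[k] + random.uniform(-BW, BW)))
        else:                                          # random selection
            new[k] = random.uniform(lo, hi)
    memory.append(new)
    memory.sort(key=lambda h: peak_response(h['f'], h['z']))
    memory.pop()                                       # drop the worst harmony

best = memory[0]
print(best, peak_response(best['f'], best['z']))
```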

  2. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless

  3. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  4. Sharing open hardware through ROP, the Robotic Open Platform

    NARCIS (Netherlands)

    Lunenburg, J.J.M.; Soetens, R.P.T.; Schoenmakers, Ferry; Metsemakers, P.M.G.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  5. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  6. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10⁶ voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.

  7. Optimum position of isolators within erbium-doped fibers

    DEFF Research Database (Denmark)

    Lumholt, Ole; Schüsler, Kim; Bjarklev, Anders Overgaard

    1992-01-01

    An isolator is used as an amplified spontaneous emission suppressing component within an erbium-doped fiber. The optimum isolator placement is both experimentally and theoretically determined and found to be slightly dependent upon pump power. Improvements of 4 dB in gain and 2 dB in noise figure...... are measured for the optimum isolator location at 25% of the fiber length when the fiber is pumped with 60 mW of pump power at 1.48 μm...

  8. Optimum hub height of a wind turbine for maximizing annual net profit

    International Nuclear Information System (INIS)

    Lee, Jaehwan; Kim, Dong Rip; Lee, Kwan-Soo

    2015-01-01

    Highlights: • Annual Net Profit was proposed to optimize the hub height of a wind turbine. • Procedures of the hub height optimization method were introduced. • Effect of local wind speed characteristics on optimum hub height was illustrated. • Effect of rated power on optimum hub height was negligible in the range 0.75–3 MW. • Rated speed and cut-out speed had great effects on optimum hub height. - Abstract: The optimization method of the hub height, which can ensure the economic feasibility of the wind turbine, is proposed in this study. Annual Net Profit is suggested as an objective function and the optimization procedure is developed. The effects of local wind speed and wind turbine power characteristics on the optimum hub height are investigated. The optimum hub height decreased as the mean wind speed and wind shear exponent increased. Rated power had little effect on optimum hub height; it follows that the economies of scale are negligible in the rated power range of 0.75–3 MW. Among the wind turbine power characteristics, rated speed and cut-out speed most strongly affected the optimum hub height
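
The Annual Net Profit idea can be sketched as a one-dimensional search: energy revenue grows with hub height through the wind shear profile, while tower cost grows faster than linearly. Every number below (site wind, shear exponent, tariff, cost model, capacity scaling) is an assumed illustration, not data from the paper.

```python
import numpy as np

v_ref, h_ref, alpha = 6.0, 10.0, 0.20           # 6 m/s at 10 m, shear exponent
rated_kw, v_rated, price = 2000.0, 12.0, 0.07   # turbine and tariff assumptions

def mean_speed(h):
    return v_ref * (h / h_ref) ** alpha         # power-law vertical wind profile

def annual_energy_kwh(h):
    # Crude: scale rated output by (v/v_rated)^3, capped at rated power.
    frac = min(1.0, (mean_speed(h) / v_rated) ** 3)
    return rated_kw * 8760 * frac

def annual_tower_cost(h):
    return 800.0 * h + 25.0 * h ** 2            # assumed amortized cost, $/yr

heights = np.arange(40.0, 161.0)
profit = [annual_energy_kwh(h) * price - annual_tower_cost(h) for h in heights]
print(f"optimum hub height ~ {heights[int(np.argmax(profit))]:.0f} m")
```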

  9. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limits its ability to deploy large amounts of capacity in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces the cost and power consumption to 62% and 52% of TCAM's, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
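
FPS-RAM itself is a custom hardware design, but the operation it accelerates, longest-prefix match, can be illustrated in software with a binary trie (the routing entries below are hypothetical):

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

class PrefixTable:
    """Binary trie giving longest-prefix match, the lookup TCAMs accelerate."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, length, next_hop):
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            if node.children[bit] is None:
                node.children[bit] = TrieNode()
            node = node.children[bit]
        node.next_hop = next_hop

    def lookup(self, addr):
        node, best = self.root, None
        for i in range(32):
            if node.next_hop is not None:
                best = node.next_hop          # remember longest match so far
            node = node.children[(addr >> (31 - i)) & 1]
            if node is None:
                return best
        return node.next_hop or best

def ip(s):
    a, b, c, d = map(int, s.split('.'))
    return (a << 24) | (b << 16) | (c << 8) | d

table = PrefixTable()
table.insert(ip("10.0.0.0"), 8, "if0")
table.insert(ip("10.1.0.0"), 16, "if1")
print(table.lookup(ip("10.1.2.3")))   # -> if1 (longest prefix wins)
print(table.lookup(ip("10.9.9.9")))   # -> if0
```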

  10. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover the hardware code generated by the HLS tools is usually target-dependant and at a low level of abstraction (i.e. gate-level...

  11. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several 10,000 nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enable QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  12. Optimum dose of 2-hydroxyethyl methacrylate based bonding material on pulp cells toxicity

    OpenAIRE

    Saraswati, Widya

    2010-01-01

    Background: 2-hydroxyethyl methacrylate (HEMA), one type of resins commonly used as bonding base material, is commonly used due to its advantageous chemical characteristics. Several preliminary studies indicated that resin is a material capable to induce damage in dentin-pulp complex. It is necessary to perform further investigation related with its biological safety for hard and soft tissues in oral cavity. Purpose: The author performed an in vitro test to find optimum dose of HEMA resin mon...

  13. Generalized Pareto optimum and semi-classical spinors

    Science.gov (United States)

    Rouleux, M.

    2018-02-01

    In 1971, S. Smale presented a generalization of Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p) ∈ C∞(T*ℝ²) to its normal form near a singular point of the Fresnel surface. Namely we say that ℋ has the Pareto property if it decomposes, locally, up to a conjugation with regular matrices, as ℋ(p) = u′(p)C(p)(u′(p))*, where u : ℝ² → ℝ² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix (“integrating factor”). In particular this applies in certain cases to the matrix Hamiltonian of Elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.

  14. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning boosted the field of artificial intelligence towards unprecedented achievements and application in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To which extent deep learning can take advantage of the existence of a hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  15. Introduction to co-simulation of software and hardware in embedded processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Dreike, P.L.; McCoy, J.A.

    1996-09-01

    From the dawn of the first use of microprocessors and microcontrollers in embedded systems, the software has been blamed for products being late to market. This is due to software being developed after hardware is fabricated. During the past few years, the use of Hardware Description (or Design) Languages (HDLs) and digital simulation have advanced to a point where the concurrent development of software and hardware can be contemplated using simulation environments. This offers the potential of 50% or greater reductions in time-to-market for embedded systems. This paper is a tutorial on the technical issues that underlie software-hardware (sw-hw) co-simulation, and the current state of the art. We review the traditional sequential hardware-software design paradigm, and suggest a paradigm for concurrent design, which is supported by co-simulation of software and hardware. This is followed by sections on HDL modeling and simulation; hardware-assisted approaches to simulation; microprocessor modeling methods; brief descriptions of four commercial products for sw-hw co-simulation; and a description of our own experiments to develop a co-simulation environment.

  16. DETERMINATION OF THE OPTIMUM CONTACT TIME AND pH FOR METHYLENE BLUE ADSORPTION USING RICE HUSK ASH

    Directory of Open Access Journals (Sweden)

    Anung Riapanitra

    2006-11-01

    Full Text Available Dyes are widely used for colouring in textile industries; significant losses occur during the manufacture and processing of the product, and these lost chemicals are discharged in the surrounding effluent. Adsorption of dyes is an effective technology for the treatment of wastewater contaminated by the mismanagement of different types of dyes. In this research, we investigated the potential of rice husk ash for the removal of the methylene blue dyeing agent in an aqueous system. The aim of this research is to find the optimum contact time and pH for the adsorption of methylene blue using rice husk ash. Batch kinetics studies were carried out under varying experimental conditions of contact time and pH. An adsorption equilibrium condition was reached within 10 minutes, and the optimum condition for adsorption was at pH 3. The adsorption of methylene blue decreased with decreasing solution pH.

  17. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  18. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the ²³⁵U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the ²³⁵U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes RRFC hardware, including detectors, electronics, and performance characteristics.

  19. Removal of symptomatic craniofacial titanium hardware following craniotomy: Case series and review

    Directory of Open Access Journals (Sweden)

    Sheri K. Palejwala

    2015-06-01

    Full Text Available Titanium craniofacial hardware has become commonplace for reconstruction and bone flap fixation following craniotomy. Complications of titanium hardware include palpability, visibility, infection, exposure, pain, and hardware malfunction, which can necessitate hardware removal. We describe three patients who underwent craniofacial reconstruction following craniotomies for trauma, with post-operative courses complicated by medically intractable facial pain. All three patients subsequently underwent removal of the symptomatic craniofacial titanium hardware and experienced rapid resolution of their painful paresthesias. Symptomatic plates were found in the region of the frontozygomatic suture or MacCarty keyhole, or in close proximity to the supraorbital nerve. Titanium plates, though relatively safe and low profile, can cause local nerve irritation or neuropathy. Surgeons should be cognizant of the potential complications of titanium craniofacial hardware and of the locations that are at higher risk of becoming symptomatic, necessitating a second surgery for removal.

  20. The optimum circular field size for dental radiography with intraoral films

    International Nuclear Information System (INIS)

    van Straaten, F.J.; van Aken, J.

    1982-01-01

    Intraoral radiographs are often made with circular fields to irradiate the film, and in many instances these fields are much larger than the film. The feasibility of reducing a circular radiation field without increasing the probability of excessive cone cutting was evaluated clinically, and an optimum field size was determined. A circular radiation field of 4.5 cm diameter at the tube end was found to minimize cone cutting and reduce the area of tissue irradiated by at least 44 percent. Findings suggest that current I.C.R.P. recommendations for a 6 to 7.5 cm diameter circular field may be too liberal.

  1. The optimum decision rules for the oddity task.

    Science.gov (United States)

    Versfeld, N J; Dai, H; Green, D M

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur with "roved" or "fixed" experiments (Macmillan & Creelman, 1991, p. 147). It is shown that the commonly used decision rule for an m-interval oddity task corresponds to the special case of highly correlated observations. However, as is also true for the same-different paradigm, there exists a different optimum decision rule when the observations are independent. The relation between the probability of a correct response and d' is derived for the three-interval oddity task. Tables are presented of this relation for the three-, four-, and five-interval oddity task. Finally, an experimental method is proposed that allows one to determine the decision rule used by the observer in an oddity experiment.
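
    As a rough illustration of the decision rules discussed above (our own sketch, not the paper's closed-form tables), the relation between d' and the probability of a correct response in an m-interval oddity task can be approximated by Monte Carlo simulation, assuming independent unit-variance Gaussian observations and the common "pick the most deviant interval" rule:

        import numpy as np

        def oddity_pcorrect(d_prime, m=3, trials=200_000, seed=0):
            """Monte Carlo estimate of P(correct) in an m-interval oddity task.

            Assumes independent, unit-variance Gaussian observations and the
            commonly used decision rule: choose the interval whose observation
            lies farthest from the mean of the remaining observations."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal((trials, m))
            x[:, 0] += d_prime                      # the odd interval is index 0
            total = x.sum(axis=1, keepdims=True)
            mean_others = (total - x) / (m - 1)     # mean of the other m-1 intervals
            deviance = np.abs(x - mean_others)
            return float(np.mean(deviance.argmax(axis=1) == 0))

        for d in (0.5, 1.0, 2.0, 3.0):
            print(f"d' = {d:.1f}: P(correct) ~ {oddity_pcorrect(d):.3f}")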

  2. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  3. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  4. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest and the open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  5. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate the radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure for the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.
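
    For context, the kernel that such a GPU implementation parallelizes can be sketched in a few lines (an illustrative radial-basis interpolation of our own, not the article's code); note that each evaluation point is an independent work item, which is what suits the single instruction multiple data architecture:

        import numpy as np

        def rbf_interpolate(nodes, values, queries, c=1.0):
            """Meshfree radial point interpolation in 1-D: fit multiquadric
            RBF weights on scattered nodes, then evaluate at query points.
            Every query evaluation is independent, hence GPU-friendly."""
            phi = lambda r: np.sqrt(r**2 + c**2)        # multiquadric basis
            A = phi(np.abs(nodes[:, None] - nodes[None, :]))
            w = np.linalg.solve(A, values)              # interpolation weights
            B = phi(np.abs(queries[:, None] - nodes[None, :]))
            return B @ w

        nodes = np.linspace(0.0, np.pi, 12)
        queries = np.linspace(0.0, np.pi, 5)
        approx = rbf_interpolate(nodes, np.sin(nodes), queries)
        print(np.round(approx - np.sin(queries), 4))    # small interpolation error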

  6. No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent

    Science.gov (United States)

    Zaghloul, A. R. M.; Zaghloul, Y. A.

    2014-06-01

    We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable: a single hardware architecture functions as any selected device among all possible devices with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows an attack to take place while rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks, since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point link, a point-to-multipoint link, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
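
    The 4-digit bitstream idea maps directly onto a lookup table: the bitstream is the output column of a truth table, so one fixed structure can behave as any of the 16 possible two-input gates. A minimal software model of this principle (ours, not the patented optical design) is:

        class BitstreamDevice:
            """Two-input digital device whose function is selected by a 4-bit
            bitstream: the bitstream is the truth-table output column, so the
            16 possible bitstreams select among all 16 two-input gates. The
            structure never changes; only its performance does."""

            def __init__(self, bitstream):
                assert len(bitstream) == 4 and set(bitstream) <= {"0", "1"}
                self.table = [int(b) for b in bitstream]

            def __call__(self, a, b):
                return self.table[(a << 1) | b]   # inputs index the table

        and_gate = BitstreamDevice("0001")        # 1 only when a = b = 1
        xor_gate = BitstreamDevice("0110")
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", and_gate(a, b), xor_gate(a, b))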

  7. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the basis dictionary image at a node to determine the branch taken, and the path the feature region image takes through the tree is saved as its descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
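
    The descriptor construction can be pictured as a tree descent driven by Hamming distances; the sketch below is our reading of the idea (node layout, threshold rule and names are assumptions, not the published TreeBASIS implementation):

        import numpy as np

        def hamming(a, b):
            """Hamming distance between two binary (0/1) vectors."""
            return int(np.count_nonzero(a != b))

        def tree_descriptor(region_bits, node):
            """Descend a binary vocabulary tree; at each node the branch is
            chosen by the Hamming distance to that node's binarized basis
            dictionary image, and the sequence of branch bits is the (very
            compact) descriptor. No floating point operations are needed."""
            path = []
            while "left" in node:
                bit = 0 if hamming(region_bits, node["basis"]) < node["thresh"] else 1
                path.append(bit)
                node = node["left"] if bit == 0 else node["right"]
            return path

        rng = np.random.default_rng(0)
        toy_tree = {"basis": rng.integers(0, 2, 64), "thresh": 32,
                    "left": {"basis": rng.integers(0, 2, 64), "thresh": 32,
                             "left": {}, "right": {}},
                    "right": {}}
        print(tree_descriptor(rng.integers(0, 2, 64), toy_tree))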

  8. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high-speed computation. PMID:24189331
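
    For reference, the GHA feature-extraction stage follows Sanger's rule; a plain software rendering (a textbook sketch, not the paper's circuit) looks like this:

        import numpy as np

        def gha(X, n_components=2, lr=0.005, epochs=50, seed=1):
            """Generalized Hebbian algorithm (Sanger's rule): iterates
            W += lr * (y x^T - lower_triangular(y y^T) W) over zero-mean
            samples x, so the rows of W converge to the leading principal
            components used for subsequent clustering."""
            rng = np.random.default_rng(seed)
            W = 0.1 * rng.standard_normal((n_components, X.shape[1]))
            for _ in range(epochs):
                for x in X:
                    y = W @ x
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 8)) * np.array([5, 2, 1, 1, 1, 1, 1, 1])
        X -= X.mean(axis=0)                      # synthetic "waveform" data
        W = gha(X)
        print(np.round(W[0] / np.linalg.norm(W[0]), 2))  # ~ +/- first axis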

  9. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high-speed computation.

  10. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  11. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety and performance should be considered during the design of such systems. A component-based design that enables the migration of components between hardware and software can help achieve these metrics. To enable this, we define hybrid hardware and software components as development artifacts that can be deployed by different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to construct a repository of components that can migrate between the hardware and software domains to meet the design system requirements.

  12. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed by that the system transmits information to the cells that the first cell has...

  13. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code used at CERN for beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with point-to-point simulations of space-charge effects can be sped up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeated computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...

  14. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  15. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-to-digital conversion card developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  16. Optimum analysis of pavement maintenance using multi-objective genetic algorithms

    Directory of Open Access Journals (Sweden)

    Amr A. Elhadidy

    2015-04-01

    Full Text Available Road network expansion in Egypt is considered a vital issue for the development of the country, alongside upgrading current road networks to increase safety and efficiency. A pavement management system (PMS) is a set of tools or methods that assist decision makers in finding optimum strategies for providing and maintaining pavements in a serviceable condition over a given period of time. A multi-objective optimization problem for pavement maintenance and rehabilitation strategies at the network level is discussed in this paper. A two-objective optimization model considers minimum action costs and maximum condition for the road network. In the proposed approach, Markov-chain models are used for predicting the performance of road pavement and for calculating the expected decline at different periods of time. A genetic-algorithm-based procedure is developed for solving the multi-objective optimization problem. The model searches for the optimum maintenance actions, at the appropriate times, to be implemented on the appropriate pavements. Based on the computed results, the Pareto optimal solutions of the two-objective optimization functions are obtained. From the optimal solutions, represented by cost and condition, a decision maker can easily obtain the information of the maintenance and rehabilitation planning with minimum action costs and maximum condition. The developed model has been implemented on a network of roads and showed its ability to derive the optimal solution.
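
    The backbone of such a two-objective search is Pareto dominance. A minimal filter that keeps only the non-dominated cost/condition trade-offs (with made-up plans and numbers, purely for illustration) is:

        def pareto_front(plans):
            """Keep the non-dominated (cost, condition) pairs, where cost is
            minimized and condition maximized; `plans` holds tuples of
            (cost, condition, label)."""
            front = []
            for cost, cond, label in plans:
                dominated = any(c2 <= cost and k2 >= cond and (c2, k2) != (cost, cond)
                                for c2, k2, _ in plans)
                if not dominated:
                    front.append((cost, cond, label))
            return sorted(front)

        candidates = [  # hypothetical network-level maintenance plans
            (120, 62, "do minimum"), (180, 70, "patch and seal"),
            (260, 78, "overlay"), (270, 71, "mixed plan"),
            (400, 79, "reconstruct"),
        ]
        for cost, cond, label in pareto_front(candidates):
            print(f"cost={cost:4d}  condition={cond}  {label}")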

  17. FTK: the hardware Fast TracKer of the ATLAS experiment at CERN

    CERN Document Server

    Maznas, Ioannis; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up of the Large Hadron Collider environment, the trigger systems of the experiments have to be exceedingly sophisticated and fast at the same time, in order to select the relevant physics processes against the background processes. The Fast TracKer (FTK) is a track-finding implementation at the hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). To accomplish this, FTK is a highly parallel system which is currently under installation in ATLAS. It will first provide the trigger system with tracks in the central region of the ATLAS detector, and next year it is expected to cover the whole detector. The system is based on pattern matching between hits coming from the silicon trackers of the ATLAS detector and 1 billion simulated patterns stored in specially designed ASIC chips (Associative Memory – AM06). In a firs...

  18. Replacement Technologies for Precision Cleaning of Aerospace Hardware for Propellant Service

    Science.gov (United States)

    Beeson, Harold; Kirsch, Mike; Hornung, Steven; Biesinger, Paul

    1997-01-01

    The NASA White Sands Test Facility (WSTF) is developing cleaning and verification processes to replace currently used chlorofluorocarbon-l13- (CFC-113-) based processes. The processes being evaluated include both aqueous- and solvent-based techniques. Replacement technologies are being investigated for aerospace hardware and for gauges and instrumentation. This paper includes the findings of investigations of aqueous cleaning and verification of aerospace hardware using known contaminants, such as hydraulic fluid and commonly used oils. The results correlate nonvolatile residue with CFC 113. The studies also include enhancements to aqueous sampling for organic and particulate contamination. Although aqueous alternatives have been identified for several processes, a need still exists for nonaqueous solvent cleaning, such as the cleaning and cleanliness verification of gauges used for oxygen service. The cleaning effectiveness of tetrachloroethylene (PCE), trichloroethylene (TCE), ethanol, hydrochlorofluorocarbon 225 (HCFC 225), HCFC 141b, HFE 7100(R), and Vertrel MCA(R) was evaluated using aerospace gauges and precision instruments and then compared to the cleaning effectiveness of CFC 113. Solvents considered for use in oxygen systems were also tested for oxygen compatibility using high-pressure oxygen autogenous ignition and liquid oxygen mechanical impact testing.

  19. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and circuit aspects of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.
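
    The underlying idea can be modelled in a few lines of software (our sketch, not the thesis's Virtex-4 design): numerically integrate a continuous chaotic system, here the Lorenz system as a common choice, and harvest low-order bits of the state as a keystream to XOR with the payload. The finite word length of such an implementation is exactly where the digital degradations discussed in the thesis arise:

        import numpy as np

        def lorenz_keystream(n_bytes, x0=(0.1, 0.0, 0.0), dt=0.005):
            """Keystream from Euler integration of the Lorenz system.
            Illustrative only: a real design must handle the degradations
            (short cycles, collapsed dynamics) caused by finite precision."""
            sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
            x, y, z = x0
            out = bytearray()
            for _ in range(n_bytes):
                for _ in range(16):                    # several steps per byte
                    dx = sigma * (y - x)
                    dy = x * (rho - z) - y
                    dz = x * y - beta * z
                    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                out.append(int(abs(x) * 1e6) & 0xFF)   # low byte of fixed-point x
            return bytes(out)

        plain = b"MPEG-2 payload bytes..."
        ks = lorenz_keystream(len(plain))
        cipher = bytes(p ^ k for p, k in zip(plain, ks))
        # The receiver regenerates the keystream from the shared initial state.
        assert bytes(c ^ k for c, k in zip(cipher, ks)) == plain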

  20. Analytical Solution for Optimum Design of Furrow Irrigation Systems

    Science.gov (United States)

    Kiwan, M. E.

    1996-05-01

    An analytical solution for the optimum design of furrow irrigation systems is derived. The non-linear calculus optimization method is used to formulate a general form for designing the optimum system elements under circumstances of maximizing the water application efficiency of the system during irrigation. Different system bases and constraints are considered in the solution. A full irrigation water depth is considered to be achieved at the tail of the furrow line. The solution is based on neglecting the recession and depletion times after off-irrigation. This assumption is valid in the case of open-end (free gradient) furrow systems rather than closed-end (closed dike) systems. Illustrative examples for different systems are presented and the results are compared with the output obtained using an iterative numerical solution method. The final derived solution is expressed as a function of the furrow length ratio (the furrow length to the water travelling distance). The function of water travelling developed by Reddy et al. is considered for reaching the optimum solution. As practical results from the study, the optimum furrow elements for free gradient systems can be estimated to achieve the maximum application efficiency, i.e. furrow length, water inflow rate and cutoff irrigation time.

  1. Systematic development of industrial control systems using Software/Hardware Engineering

    NARCIS (Netherlands)

    Voeten, J.P.M.; van der Putten, P.H.A.; Stevens, M.P.J.; Milligan, P.; Corr, P.

    1997-01-01

    SHE (Software/Hardware Engineering) is a new object-oriented analysis, specification and design method for complex reactive hardware/software systems. SHE is based on the formal specification language POOSL and a design framework guiding analysis and design activities. This paper reports on the

  2. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as heartbeats or breath. The use of hardware-implemented devices in such a system has advantages, considering the high speed of information treatment combined with arbitrarily low cost on the market. This article refers to a hardware system based on electronic programmable logic called FPGA, model Cyclone II from ALTERA Corporation. The hardware was implemented on the UP3 ALTERA Kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points in each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0–65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test shows a principal possibility to build CT image reconstruction systems for any reasonable amount of input data by arranging the parallel work of the hardware units as we have tested. However, further studies are necessary for better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)
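
    A network with unitary weights that accumulates each projection sample into every pixel along its ray is, in effect, unfiltered backprojection; a compact software model (assuming parallel-beam geometry, our reading rather than the authors' FPGA design) is:

        import numpy as np

        def backproject(sinogram, angles_deg, size):
            """Unfiltered backprojection of a parallel-beam sinogram: each
            projection value is added with unit weight to all pixels on its
            ray, mirroring a partially connected network with unitary
            weights."""
            recon = np.zeros((size, size))
            c = (size - 1) / 2.0
            ys, xs = np.mgrid[0:size, 0:size]
            for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
                t = (xs - c) * np.cos(ang) + (ys - c) * np.sin(ang) + len(proj) / 2.0
                idx = np.clip(t.astype(int), 0, len(proj) - 1)
                recon += proj[idx]                     # unit-weight accumulation
            return recon / len(angles_deg)

        # Toy case matching the scale above: 60 views, 100 samples per view.
        angles = np.linspace(0.0, 180.0, 60, endpoint=False)
        sino = np.zeros((60, 100))
        sino[:, 50] = 1.0        # a single bright point at the rotation centre
        img = backproject(sino, angles, size=100)
        print(img.shape, float(img.max()))             # peak at the image centre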

  3. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The performance of computer graphics systems is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware development and significant increases in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, formulated, and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
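
    The positional codes mentioned above resemble the classic Cohen-Sutherland region outcodes (an assumption on our part); the combinational stage that the positional code generator unit would evaluate can be sketched as:

        # Cohen-Sutherland style positional codes against a clip rectangle.
        LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

        def outcode(x, y, xmin, ymin, xmax, ymax):
            """4-bit positional code for a point relative to the clip window;
            0 means inside."""
            code = 0
            if x < xmin:   code |= LEFT
            elif x > xmax: code |= RIGHT
            if y < ymin:   code |= BOTTOM
            elif y > ymax: code |= TOP
            return code

        def trivially_decided(p1, p2, win):
            """Fast accept/reject done purely from the two endpoint codes."""
            c1, c2 = outcode(*p1, *win), outcode(*p2, *win)
            if c1 == 0 and c2 == 0:
                return "accept"   # both endpoints inside the window
            if c1 & c2:
                return "reject"   # both outside on the same side
            return "clip"         # must be passed to the clipping unit

        win = (0, 0, 100, 100)    # xmin, ymin, xmax, ymax
        print(trivially_decided((10, 10), (90, 50), win))    # accept
        print(trivially_decided((-20, 5), (-5, 80), win))    # reject
        print(trivially_decided((-20, 50), (50, 50), win))   # clip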

  4. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions, focusing on the performance that can be achieved by these systems and giving measured figures for different configurations. The report is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers over a Gigabit Ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  5. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.

  6. Determination of optimum insulation thickness in pipe for exergetic life cycle assessment

    International Nuclear Information System (INIS)

    Keçebaş, Ali

    2015-01-01

    Highlights: • It is aimed to determine the optimum insulation thickness in a pipe. • A new methodology, exergetic life cycle assessment, is used for this purpose. • It is evaluated for various fuels, different pipe diameters and some combustion parameters. • This methodology is not suitable for determining the optimum insulation thickness of a pipe. • There are benefits to our understanding of the need for insulation use in pipes. - Abstract: Energy saving and the reduction of environmental impacts in the world building sector have gained great importance, and great efforts have been invested to create energy-saving green buildings. To do so, one of the many things to be done is the insulation of cylindrical pipes, canals and tanks. In the current study, the main focus is on the determination of the optimum insulation thickness of pipes with varying diameters when different fuels are used. Therefore, through a new method combining exergy analysis and life cycle assessment, the optimum insulation thickness of the pipes, the total exergetic environmental impact, the net saving and the payback period were calculated. The effects of the insulation thickness on environmental and combustion parameters were analyzed in detail. The results revealed that the optimum insulation thickness was affected by the temperature of the fuel when it enters the combustion chamber, the temperature of the stack gas and the temperature of the combustion chamber. Under these optimum effects, the optimum insulation thickness of a 100 mm pipe was determined to be 55.7 cm, 57.2 cm and 59.3 cm for coal, natural gas and fuel-oil, respectively, with ratios of 76.32%, 81.84% and 84.04% net savings in the exergetic environmental impact. As the environmental impacts of the fuels and their products are bigger than those of the insulation material, the optimum insulation thickness values found by the method used in this study were greater. Moreover, in the pipes with greater

  7. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be run in automation mode, one tends to forget the hardware of NMR spectrometers. It would be good to understand the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  8. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  9. How stem defects affect the capability of optimum bucking method?

    Directory of Open Access Journals (Sweden)

    Abdullah Emin Akay

    2015-07-01

    Full Text Available In forest harvesting activities, the computer-assisted optimum bucking method increases the economic value of harvested trees. The bucking decision highly depends on the log quality grades, which mainly vary with surface characteristics such as stem defects and the form of the stems. In this study, the effect of stem defects on the optimum bucking method was investigated by comparing bucking applications conducted during logging operations in two different Brutian Pine (Pinus brutia Ten.) stands. In the applications, the first stand contained stems with relatively more stem defects than the stems in the second stand. The average number of defects per log for sample trees in the first and the second stand was recorded as 3.64 and 2.70, respectively. The results indicated that the optimum bucking method increased the average economic value of harvested trees by 15.45% and 8.26% in the two stands, respectively. Therefore, the computer-assisted optimum bucking method potentially provides better results than the traditional bucking method, especially for harvested trees with more stem defects.
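
    Computer-assisted bucking is commonly cast as a dynamic program over cut positions along the stem; the miniature below (hypothetical log lengths and prices on a 1 m grid, with defect-dependent grading omitted) conveys the mechanics:

        def optimum_bucking(stem_length, products):
            """Dynamic program: value[i] is the best total value of the first
            i metres of stem; `products` maps log length (m) to unit value.
            Stem defects would enter by degrading the value of logs spanning
            a defective section, which is omitted in this sketch."""
            value = [0.0] * (stem_length + 1)
            choice = [None] * (stem_length + 1)
            for i in range(1, stem_length + 1):
                for log_len, price in products.items():
                    if log_len <= i and value[i - log_len] + price > value[i]:
                        value[i] = value[i - log_len] + price
                        choice[i] = log_len
            pattern, i = [], stem_length
            while i > 0 and choice[i]:
                pattern.append(choice[i])
                i -= choice[i]
            return value[stem_length], pattern

        prices = {3: 10.0, 4: 16.0, 5: 23.0}   # value per log by length (m)
        best, cuts = optimum_bucking(19, prices)
        print(best, cuts)                      # 85.0 [4, 5, 5, 5]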

  10. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component of any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories, which leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it does not seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place while nevertheless conveying a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, that are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  11. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detections with at least three outputs suffice for optimum extraction of information, regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.
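
    For the trine case, the optimum three-output strategy can be checked numerically: the POVM built from states orthogonal to the signals attains the accessible information log2(3) - 1 ≈ 0.585 bits. The snippet below is a hedged numerical check of that limit (our own, not the optical setup):

        import numpy as np

        # Trine: three real qubit states 60 degrees apart in Hilbert space,
        # sent with equal prior probability 1/3.
        kets = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)])
                for k in range(3)]

        # Three-output POVM with elements proportional to projectors onto
        # states ORTHOGONAL to the signals (not a von Neumann measurement).
        perp = [np.array([-k[1], k[0]]) for k in kets]
        povm = [(2.0 / 3.0) * np.outer(p, p) for p in perp]
        assert np.allclose(sum(povm), np.eye(2))      # completeness

        P = np.array([[ket @ E @ ket for E in povm] for ket in kets])
        px = np.full(3, 1.0 / 3.0)
        pj = px @ P
        h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        info = h(pj) - np.sum(px * np.array([h(row) for row in P]))
        print(f"I = {info:.4f} bits; log2(3) - 1 = {np.log2(3) - 1:.4f}")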

  12. Optimum community energy storage system for demand load shifting

    International Nuclear Information System (INIS)

    Parra, David; Norman, Stuart A.; Walker, Gavin S.; Gillott, Mark

    2016-01-01

    Highlights: • Lead-acid and lithium-ion batteries are optimised up to a 100-home community. • A 4-period real-time pricing and Economy 7 (2-period time-of-use) are compared. • Li-ion batteries perform worse with Economy 7 for small communities and vice versa. • The community approach reduced the levelised cost by 56% compared to a single home. • Heat pumps reduced the levelised cost and increased the profitability of batteries. - Abstract: Community energy storage (CES) is becoming an attractive technological option to facilitate the use of distributed renewable energy generation, manage demand loads and decarbonise the residential sector. There is strong interest in understanding the techno-economic benefits of using CES systems, which energy storage technology is more suitable and the optimum CES size. In this study, the performance, including equivalent full cycles and round-trip efficiency, of lead-acid (PbA) and lithium-ion (Li-ion) batteries performing demand load shifting is quantified as a function of the size of the community using simulation-based optimisation. Two different retail tariffs are compared: a time-of-use tariff (Economy 7) and a real-time-pricing tariff including four periods based on the electricity prices on the wholesale market. Additionally, the economic benefits are quantified when projected to two different years: 2020 and a hypothetical zero-carbon year. The findings indicate that the optimum PbA capacity was approximately twice the optimum Li-ion capacity in the case of the real-time-pricing tariff and around 1.6 times for Economy 7 for any community size except a single home. The levelised cost followed a negative logarithmic trend while the internal rate of return followed a positive logarithmic trend as a function of the size of the community. PbA technology reduced the levelised cost down to 0.14 £/kW h when projected to the year 2020 for the retail tariff Economy 7. CES systems were sized according to the demand load and
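
    The load-shifting mechanism itself is easy to sketch: under a two-period tariff the device charges in the cheap hours and discharges in the expensive ones, limited by its energy capacity, power rating and round-trip efficiency. The numbers below are illustrative, not the paper's tariffs or battery data:

        def daily_arbitrage(prices, capacity_kwh, power_kw, rt_eff):
            """Greedy one-day load-shifting sketch: charge in the cheapest
            hours, discharge in the dearest, applying the round-trip
            efficiency to the energy sold."""
            hours = sorted(range(len(prices)), key=lambda h: prices[h])
            n = int(capacity_kwh / power_kw)          # hours to fill or empty
            cost = sum(prices[h] * power_kw for h in hours[:n])
            revenue = sum(prices[h] * power_kw * rt_eff for h in hours[-n:])
            return revenue - cost

        # Economy 7-like tariff: 7 cheap night hours, 17 day hours (GBP/kWh).
        tariff = [0.08] * 7 + [0.20] * 17
        profit = daily_arbitrage(tariff, capacity_kwh=12.0, power_kw=3.0,
                                 rt_eff=0.85)
        print(f"daily profit: {profit:.2f} GBP")      # 1.08 GBP in this toy case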

  13. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS). VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR.

  14. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique, so automated, fast and accurate motion compensation is required. To meet this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, and the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
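
    The block-matching stage can be sketched compactly (simplified to a sum-of-absolute-differences score; the paper itself evaluates a histogram-based measure and runs the search on the GPU rather than in NumPy):

        import numpy as np

        def best_displacement(ref, live, block, search=8):
            """Exhaustive block matching: slide a block of the live image
            over the reference mask and return the displacement (dy, dx)
            minimizing the mismatch, to be used for motion-compensated
            subtraction."""
            y, x, h, w = block
            patch = live[y:y + h, x:x + w].astype(np.int32)
            best_score, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = ref[y + dy:y + dy + h, x + dx:x + dx + w].astype(np.int32)
                    score = np.abs(cand - patch).sum()
                    if best_score is None or score < best_score:
                        best_score, best_d = score, (dy, dx)
            return best_d

        rng = np.random.default_rng(0)
        mask = rng.integers(0, 255, (64, 64))
        live = np.roll(mask, (2, -3), axis=(0, 1))    # simulated patient motion
        print(best_displacement(mask, live, (16, 16, 24, 24)))   # (-2, 3)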

  15. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  16. Programming languages and compiler design for realistic quantum hardware.

    Science.gov (United States)

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  17. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
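
    In software terms, the mechanism amounts to a byte-counting token scheme; the toy model below is our reading of the abstract, not the patented DMA engine:

        class PacedInjector:
            """Packet pacing with a token counter: the counter tracks bytes
            outstanding on the network from remote get operations, and a new
            packet is injected only while the counter stays under the cap."""

            def __init__(self, max_in_flight_bytes):
                self.limit = max_in_flight_bytes
                self.in_flight = 0                    # the token counter

            def try_send(self, nbytes):
                if self.in_flight + nbytes > self.limit:
                    return False                      # pace: hold the packet
                self.in_flight += nbytes
                return True

            def on_complete(self, nbytes):
                self.in_flight -= nbytes              # tokens returned

        pacer = PacedInjector(max_in_flight_bytes=4096)
        print([pacer.try_send(1500) for _ in range(4)])
        # [True, True, False, False]: the third packet would exceed the cap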

  18. The priority queue as an example of hardware/software codesign

    DEFF Research Database (Denmark)

    Høeg, Flemming; Mellergaard, Niels; Staunstrup, Jørgen

    1994-01-01

    The paper identifies a number of issues that are believed to be important for hardware/software codesign. The issues are illustrated by a small comprehensible example: a priority queue. Based on simulations of a real application, we suggest a combined hardware/software realization of the priority...

  19. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book cover: the system board, including memory, performance, the system timer, the system clock and specifications; the coprocessor, including the programming interface and hardware interface; the power supply, including input and output, protection for DC output, and the Power Good signal; the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and, regarding application direction, the communication and compatibility of the IBM personal computer, multitasking, and code for distinguishing between systems.

  20. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    The image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. Firstly, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Eventually, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result proves that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
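
    As a concrete instance of what the FFT/IFFT and complex-arithmetic modules compute, frequency-domain restoration with a regularized inverse (Wiener-style) filter can be sketched as follows; the PSF, test image and regularization constant are illustrative choices of ours:

        import numpy as np

        def tfdc_restore(blurred, psf, k=0.01):
            """Frequency-domain image restoration: forward 2-D FFT, complex
            multiply by a regularized inverse filter, inverse 2-D FFT (the
            same primitives the proposed architecture accelerates)."""
            H = np.fft.fft2(psf, s=blurred.shape)     # blur transfer function
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + k)     # regularized inverse
            return np.real(np.fft.ifft2(W * G))

        img = np.zeros((32, 32)); img[16, 16] = 1.0   # impulse test image
        psf = np.ones((3, 3)) / 9.0                   # 3x3 box blur
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                                       * np.fft.fft2(psf, s=img.shape)))
        restored = tfdc_restore(blurred, psf)
        peak = int(np.abs(restored).argmax())
        print(divmod(peak, 32))                       # (16, 16): impulse recovered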

  1. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  2. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    Science.gov (United States)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  3. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  4. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. The trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is summarized.

  5. An optimum silica flour-bentonite mixture for an engineered barrier

    International Nuclear Information System (INIS)

    Walker, J.N.; Daffern, D.D.; Emer, D.F.

    1991-01-01

    To dispose of low-level and mixed wastes (MAR) by burial, it is necessary to design an impermeable closure, which limits water infiltration through the waste. Bentonite has very low permeability to water but can be subject to volume alterations. Over time, these alterations can lead to channeling and subsequent permeability increases. The fluid conductivity and bulk properties of silica flour and bentonite mixtures were tested to find a mixture that would retain the low conductivity of the bentonite while maintaining volumetric stability. Silica flour was chosen for its small grain size and spherical shape, and its similarity to silty soil. Test results indicate that a 90% silica flour and 10% bentonite mixture will provide the optimum properties for this application. 5 refs., 2 figs., 2 tabs

  6. Optimum Operational Parameters for Yawed Wind Turbines

    Directory of Open Access Journals (Sweden)

    David A. Peters

    2011-01-01

    Full Text Available A set of systematic optimum operational parameters for wind turbines under various wind directions is derived by using combined momentum-energy and blade-element-energy concepts. The derivations are solved numerically by fixing some parameters at practical values. The interactions between the produced power and its influential factors are then presented in figures. It is shown that the maximum power produced is strongly affected by the wind direction, the tip speed, the pitch angle of the rotor, and the drag coefficient. It also turns out that the maximum power can occur at two different optimum tip speeds in some cases. The equations derived herein can also be used in the modeling of tethered wind turbines which can stay aloft and deliver energy.

  7. Development of the optimum rotor theories

    DEFF Research Database (Denmark)

    Okulov, Valery; Sørensen, Jens Nørkær; van Kuik, Gijs A.M.

    The purpose of this study is to examine optimum rotor theories with ideal load distributions along the blades, to analyze some of the underlying ideas and concepts, and to illuminate them. The book gives the historical background of the issue and presents the analysis of the problems

  8. Predicting optimum crop designs using crop models and seasonal climate forecasts.

    Science.gov (United States)

    Rodriguez, D; de Voil, P; Hudson, D; Brown, J N; Hayman, P; Marrou, H; Meinke, H

    2018-02-02

    Expected increases in food demand and the need to limit the incorporation of new lands into agriculture to curtail emissions highlight the urgency to bridge productivity gaps, increase farmers' profits and manage risks in dryland cropping. A way to bridge those gaps is to identify optimum combinations of genetics (G) and agronomic management (M), i.e. crop designs (GxM), for the prevailing and expected growing environment (E). Our understanding of crop stress physiology indicates that, in hindsight, those optimum crop designs should be known, while the main problem is to predict relevant attributes of E at the time of sowing, so that optimum GxM combinations can be informed. Here we test our capacity to inform that "hindsight" by linking a tested crop model (APSIM) with a skillful seasonal climate forecasting system, to answer "What is the value of the skill in seasonal climate forecasting, to inform crop designs?" Results showed that the GCM POAMA-2 was reliable and skillful, and that when linked with APSIM, optimum crop designs could be informed. We conclude that reliable and skillful GCMs that are easily interfaced with crop simulation models can be used to inform optimum crop designs, increase farmers' profits and reduce risks.

  9. Optimum Antenna Downtilt Angles for Macrocellular WCDMA Network

    Directory of Open Access Journals (Sweden)

    Niemelä Jarno

    2005-01-01

    Full Text Available The impact of antenna downtilt on the performance of a cellular WCDMA network has been studied by using a radio network planning tool. An optimum downtilt angle has been evaluated for numerous practical macrocellular site and antenna configurations for electrical and mechanical antenna downtilt concepts. This massive simulation campaign was expected to provide an answer to two questions: firstly, how should the downtilt angle of a macrocellular base station antenna be selected? Secondly, what is the impact of antenna downtilt on system capacity and network coverage? Optimum downtilt angles were observed to vary with the network configuration, as did the corresponding downlink capacity gains. Antenna vertical beamwidth most clearly affects the required optimum downtilt angle; on the other hand, with a wider antenna vertical beamwidth, the impact of downtilt on system performance is less pronounced. In addition, the antenna height together with the size of the dominance area affects the required downtilt angle. Finally, the simulation results revealed how the importance of antenna downtilt becomes more significant in dense networks, where the capacity requirements are typically also higher.

  10. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  11. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  12. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  13. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Science.gov (United States)

    Zhang, Zheng

    2017-10-01

    Radio direction finding systems are based on digital signal processing algorithms, which make the system capable of locating and tracking signals. The performance of radio direction finding depends significantly on the effectiveness of the digital signal processing algorithms. Direction of Arrival (DOA) algorithms estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of a DOA algorithm (MUSIC) on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm resolved the radio directions well.
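
    For reference, the MUSIC pseudospectrum for a uniform linear array can be computed in a few lines (our NumPy sketch of the standard algorithm, not the investigated hardware design); peaks of the spectrum estimate the angles of incidence:

        import numpy as np

        def music_spectrum(X, n_sources, scan_deg, d=0.5):
            """MUSIC pseudospectrum for a uniform linear array with element
            spacing d in wavelengths; X is the (antennas x snapshots) data."""
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]           # sample covariance
            _, eigvec = np.linalg.eigh(R)             # eigenvalues ascending
            En = eigvec[:, : m - n_sources]           # noise subspace
            n = np.arange(m)
            spec = np.empty(len(scan_deg))
            for i, th in enumerate(np.deg2rad(scan_deg)):
                a = np.exp(2j * np.pi * d * n * np.sin(th))   # steering vector
                spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
            return spec

        rng = np.random.default_rng(0)
        m, snaps = 8, 200
        doas = np.deg2rad([-20.0, 35.0])              # two incident plane waves
        A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(doas))
        S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
        X = A @ S + 0.1 * (rng.standard_normal((m, snaps))
                           + 1j * rng.standard_normal((m, snaps)))
        scan = np.arange(-90.0, 90.0, 0.5)
        spec = music_spectrum(X, 2, scan)
        peaks = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
        best = scan[1:-1][peaks][np.argsort(spec[1:-1][peaks])[-2:]]
        print(np.sort(best))                          # close to [-20. 35.]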

  14. Optimum tilt angle and orientation for solar collectors in Syria

    International Nuclear Information System (INIS)

    Skeiker, Kamal

    2009-01-01

    One of the important parameters that affect the performance of a solar collector is its tilt angle with respect to the horizon, because varying the tilt angle changes the amount of solar radiation reaching the collector surface. A mathematical model was used for estimating the solar radiation on a tilted surface, and for determining the optimum tilt angle and orientation (surface azimuth angle) of the solar collector in the main Syrian zones, on a daily basis as well as for specific periods. The optimum angle was computed by searching for the values for which the radiation on the collector surface is a maximum for a particular day or a specific period. The results reveal that changing the tilt angle 12 times a year (i.e., using the monthly optimum tilt angle) keeps the total collected solar radiation close to the maximum obtained by adjusting the tilt angle daily to its optimum value. This yields a gain in solar radiation of approximately 30% over a solar collector fixed on a horizontal surface.
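
    As an illustration of the monthly search described above, the following sketch uses Cooper's declination formula and the noon incidence angle on a south-facing collector to estimate monthly optimum tilt angles. The latitude is an assumed example value, and a full treatment would integrate a beam-plus-diffuse radiation model over each day rather than optimizing the noon geometry alone.

```python
import numpy as np

def declination(n):
    """Solar declination (degrees) on day-of-year n, Cooper's formula."""
    return 23.45 * np.sin(np.deg2rad(360.0 * (284 + n) / 365.0))

def monthly_optimum_tilt(latitude_deg):
    """Tilt (degrees from horizontal) that makes the sun's rays normal to a
    south-facing collector at solar noon, averaged over each month."""
    month_days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    tilts, n = [], 0
    for days in month_days:
        doy = np.arange(n + 1, n + days + 1)
        tilts.append(np.clip(latitude_deg - declination(doy).mean(), 0, 90))
        n += days
    return tilts

# Assumed example latitude (roughly that of Damascus, ~33.5 N).
for month, beta in zip(range(1, 13), monthly_optimum_tilt(33.5)):
    print(f"month {month:2d}: optimum tilt ~ {beta:5.1f} deg")
```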

  15. Optimum supervision intervals and order of supervision in nuclear reactor protective systems

    International Nuclear Information System (INIS)

    Kontoleon, J.M.

    1978-01-01

    The optimum inspection strategy of an m-out-of-n:G nuclear reactor protective system with nonidentical units is analyzed. A 2-out-of-4:G system is used to formulate a multi-variable optimization problem to determine (a) the optimum order of supervision of the units and (b) the optimum supervision intervals between units. The case of systems with identical units is a special case of the above. Numerical results are derived using a computer algorithm

  16. OER Approach for Specific Student Groups in Hardware-Based Courses

    Science.gov (United States)

    Ackovska, Nevena; Ristov, Sasko

    2014-01-01

    Hardware-based courses in computer science studies require much effort from both students and teachers. The most important part of students' learning is attending in person and actively working on laboratory exercises on hardware equipment. This paper deals with a specific group of students, those who are marginalized by not being able to…

  17. 49 CFR 238.105 - Train electronic hardware and software safety.

    Science.gov (United States)

    2010-10-01

    This regulation requires demonstration of hardware and software system safety as part of the pre-revenue service testing of the equipment, and requires that train equipment respond safely, by initiating a full service brake application, in the event of a hardware or software failure (49 CFR 238.105, Transportation, revised as of 2010-10-01).

  18. Evaluation of a binary optimization approach to find the optimum locations of energy storage devices in a power grid with stochastically varying loads and wind generation

    Science.gov (United States)

    Dar, Zamiyad

    The prices in the electricity market change every five minutes, and prices in peak demand hours can be four or five times higher than prices in normal off-peak hours. Renewable energy such as wind power has zero marginal cost, and a large share of wind energy in a power grid can reduce the price significantly. The variability of wind power prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high-price hours. In a large and complex power grid, there are many candidate locations for installing a storage device, and storage device owners prefer locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, the power grid, competing generators, or the transmission system; the publicly available data from the system operator usually consist of Locational Marginal Prices (LMP), load, reserve prices, and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using grid, transmission, or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only publicly available market pricing data such as LMPs and reserve prices. We consider constraints arising from storage device operation limitations in our objective function. We use binary optimization and the branch and bound method to optimize the operation of a storage device at a given location to earn maximum profit. We use two different versions of our method and optimize the profitability of a storage unit at each location in a 36-bus model of the northeastern United States and southeastern Canada for four representative days representing the four seasons of the year. Finally, we compare our results from the two versions of our method.
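
    A toy dynamic-programming sketch of the single-location subproblem — scheduling charge/discharge against a known hourly price series to maximize arbitrage profit — is shown below. The thesis uses binary optimization with branch and bound over market data; treat this as an illustrative simplification with assumed capacity, power, and efficiency values.

```python
import numpy as np

def storage_profit(prices, capacity=4, power=1, efficiency=0.9):
    """Maximize arbitrage profit for one storage device over an hourly price
    series via dynamic programming on a discretized state of charge (MWh).
    Each hour the device may charge, discharge, or idle at `power` MW."""
    levels = capacity + 1                        # SOC levels 0..capacity MWh
    NEG = -1e18
    best = np.full(levels, NEG); best[0] = 0.0   # start empty
    for p in prices:
        nxt = np.full(levels, NEG)
        for s in range(levels):
            if best[s] == NEG:
                continue
            nxt[s] = max(nxt[s], best[s])                      # idle
            if s + power < levels:                             # charge (buy)
                nxt[s + power] = max(nxt[s + power], best[s] - p * power)
            if s - power >= 0:                                 # discharge (sell)
                nxt[s - power] = max(nxt[s - power],
                                     best[s] + p * power * efficiency)
        best = nxt
    return best.max()

# Illustrative day: cheap off-peak nights, expensive evening peak ($/MWh).
prices = [20, 18, 15, 15, 18, 25, 40, 55, 60, 50, 45, 40,
          38, 35, 40, 50, 70, 90, 85, 60, 45, 35, 28, 22]
print(f"optimal arbitrage profit: ${storage_profit(prices):.2f}")
```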

  19. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  20. Technoeconomical Assessment of Optimum Design for Photovoltaic Water Pumping System for Rural Area in Oman

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2015-01-01

    Full Text Available Photovoltaic (PV) systems have long been used globally to supply electricity for water pumping systems for irrigation. System costs have dropped over time as PV technology, efficiency, and design methodology have improved, and the cost per watt has fallen dramatically in the last decade. In the present paper an optimum PV system design for a water pumping system is proposed for Oman. Intuitive and numerical methods were used to design the system: HOMER software was used as a numerical method to arrive at an optimum design for Oman, and REPS.OM software was used to find the optimum design based on hourly meteorological data. The daily solar energy in Sohar was found to be 6.182 kWh/m2·day. The system annual yield factor was found to be 2024.66 kWh/kWp, and the capacity factor was found to be 23.05%, which is promising. The cost of energy and the system capital cost have been compared with those of a diesel generator and of systems in the literature. The comparison shows that the cost of energy is 0.180, 0.309, and 0.790 USD/kWh for the PV-REPS.OM, PV-HOMER, and diesel systems, respectively, which shows that PV water pumping systems are promising in Oman.
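
    The quoted capacity factor follows directly from the annual yield factor, since a 1 kWp array running at nameplate output for a full year would produce 8,760 kWh. A two-line check using the paper's own figure:

```python
annual_yield_kwh_per_kwp = 2024.66                    # yield factor from the study
capacity_factor = annual_yield_kwh_per_kwp / 8760.0   # hours in a year
print(f"capacity factor ~ {100 * capacity_factor:.1f}%")  # ~23.1%, close to the 23.05% reported
```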

  1. Determination of optimum filter in inferolateral view of myocardial SPECT

    International Nuclear Information System (INIS)

    Takavar; Eftekhari, M.; Fallahi, B.; Shamsipour, Gh.; Sohrabi, M.; Saghari, M.

    2004-01-01

    Background: In myocardial perfusion SPECT imaging, images are degraded by photon attenuation, distance-dependent collimator and detector response, and photon scattering. As filters greatly affect the quality of nuclear medicine images, the determination of the optimum filter for the inferolateral view is the prime objective of this study. Materials and Methods: A phantom simulating the left ventricle of the heart was built, about 1 mCi of 99m Tc was injected into the phantom, and images were acquired. Parzen, Hamming, Hanning, Butterworth, and Gaussian filters were applied to the images obtained from the phantom. By defining criteria such as contrast, signal-to-noise ratio, and defect detectability, the best filter was determined for the ADAC SPECT system at our nuclear medicine center. In this study, 27 patients who had previously undergone coronary angiography were included; all of these patients showed significant stenosis in the left circumflex artery, and their myocardial SPECT images had inferolateral defects. The images of these patients were processed with 12 filters, including the optimum filters obtained from the phantom study and some other non-optimum filters. A nuclear medicine physician quantified the results by assigning a mark from 0 to 4 to every image: 0 for images that did not show the defect properly and 4 for the best. The patient data were analyzed with the non-parametric Friedman test. Results: Nyquist frequencies of 0.325 and 0.5 were obtained as the optimum cut-off frequencies for the Hamming and Hanning filters, respectively. Order 11 with a cut-off frequency of 0.45, and order 20 with a cut-off frequency of 0.5, were found to be optimum for the Butterworth and Gaussian filters, respectively. In the patient studies it was found that the Butterworth filter with a cut-off frequency of 0.45 and order 11 produced the best quality images. Conclusion: In this study, the Butterworth filter with a cut-off frequency of 0.45 and order 11 was the optimum filter for inferolateral-view myocardial SPECT images.
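
    The Butterworth low-pass filter referred to above has the transfer function H(f) = 1 / (1 + (f/f_c)^(2n)); a sketch of applying it to a reconstructed slice in the frequency domain follows. Nuclear-medicine software vendors differ in how they define "order", so the exponent convention here is an assumption, and the test image is synthetic.

```python
import numpy as np

def butterworth_2d(shape, cutoff=0.45, order=11):
    """2D Butterworth low-pass filter, H(f) = 1 / (1 + (f/fc)**(2*order)),
    with frequency in cycles/pixel (Nyquist = 0.5)."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)   # radial frequency
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

def filter_slice(image, cutoff=0.45, order=11):
    """Apply the Butterworth filter to an image via the FFT."""
    H = butterworth_2d(image.shape, cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Illustrative use on a noisy synthetic slice.
rng = np.random.default_rng(1)
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
smoothed = filter_slice(img + 0.3 * rng.standard_normal(img.shape))
```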

  2. A Framework for Hardware-Accelerated Services Using Partially Reconfigurable SoCs

    Directory of Open Access Journals (Sweden)

    MACHIDON, O. M.

    2016-05-01

    Full Text Available The current trend towards "Everything as a Service" fosters a new approach to reconfigurable hardware resources. This innovative, service-oriented approach has the potential of bringing a series of benefits for both the reconfigurable and distributed computing fields by favoring a hardware-based acceleration of web services and increasing service performance. This paper proposes a framework for accelerating web services by offloading the compute-intensive tasks to reconfigurable System-on-Chip (SoC) devices, as integrated IP (Intellectual Property) cores. The framework provides a scalable, dynamic management of the tasks and hardware processing cores, based on dynamic partial reconfiguration of the SoC. We have enhanced the security of the entire system by making use of the built-in detection features of the hardware device and also by implementing active counter-measures that protect the sensitive data.

  3. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full-scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is proof-of-concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods across the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization taking place at DTP2. There will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea is to use common Real Time Operating Systems (RTOS), measurement and control IO cards, etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both the Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach: the simplified hardware solution enables the use of a single software development environment and a single communication protocol, which will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators, and hence reduced development costs for ITER RH.

  4. Security challenges and opportunities in adaptive and reconfigurable hardware

    OpenAIRE

    Costan, Victor Marius; Devadas, Srinivas

    2011-01-01

    We present a novel approach to building hardware support for providing strong security guarantees for computations running in the cloud (shared hardware in massive data centers), while maintaining the high performance and low cost that make cloud computing attractive in the first place. We propose augmenting regular cloud servers with a Trusted Computation Base (TCB) that can securely perform high-performance computations. Our TCB achieves cost savings by spreading functionality across two pa...

  5. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  6. System-level protection and hardware Trojan detection using weighted voting.

    Science.gov (United States)

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious, especially with the spread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research detects Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially as there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the outputs of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.
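
    A minimal sketch of the weighted-voting idea — comparing the outputs of several untrusted implementations of the same IP core and gradually increasing the weight of cores that agree with the consensus — is shown below. The weight-update rule, learning rate, and Trojan trigger model are illustrative assumptions, not the authors' exact scheme.

```python
def weighted_vote(outputs, weights):
    """Return the output value with the largest total weight."""
    tally = {}
    for out, w in zip(outputs, weights):
        tally[out] = tally.get(out, 0.0) + w
    return max(tally, key=tally.get)

def build_trust(cores, test_vectors, lr=0.1):
    """Gradually build trust: cores agreeing with the weighted consensus
    gain weight, disagreeing cores (possible Trojans) lose weight."""
    weights = [1.0] * len(cores)
    for vec in test_vectors:
        outputs = [core(vec) for core in cores]
        consensus = weighted_vote(outputs, weights)
        for i, out in enumerate(outputs):
            weights[i] = max(0.0, weights[i] + (lr if out == consensus else -lr))
    return weights

# Illustrative use: three adder IPs, one with a Trojan triggered by input 0xFF.
clean = lambda x: x + 1
trojan = lambda x: (x + 1) ^ 0x80 if x == 0xFF else x + 1
weights = build_trust([clean, clean, trojan], list(range(250, 260)))
print(weights)   # the Trojaned core loses weight on the triggering vector
```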

  7. System-level protection and hardware Trojan detection using weighted voting

    Directory of Open Access Journals (Sweden)

    Hany A.M. Amin

    2014-07-01

    Full Text Available The problem of hardware Trojans is becoming more serious, especially with the spread of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research detects Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially as there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the outputs of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.

  8. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    Science.gov (United States)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink® library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  9. Hardware/software co-design and optimization for cyberphysical integration in digital microfluidic biochips

    CERN Document Server

    Luo, Yan; Ho, Tsung-Yi

    2015-01-01

    This book describes a comprehensive framework for hardware/software co-design, optimization, and use of robust, low-cost, and cyberphysical digital microfluidic systems. Readers with a background in electronic design automation will find this book to be a valuable reference for leveraging conventional VLSI CAD techniques for emerging technologies, e.g., biochips or bioMEMS. Readers from the circuit/system design community will benefit from methods presented to extend design and testing techniques from microelectronics to mixed-technology microsystems. For readers from the microfluidics domain,

  10. The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-01-01

    Full Text Available There is growing interest in finding a global optimal path in transportation networks, particularly when the network suffers unexpected disturbances. This paper studies the problem of finding a global optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which travel time is stochastic rather than deterministic. Traditional path finding methods based on least expected travel time cannot capture the network user's risk-taking behavior in path finding. To overcome this limitation, reliable path finding algorithms have been proposed, but convergence to the global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into a backtracking method to propose a new path finding algorithm under uncertainty. The global optimum of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
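
    A sketch of the underlying idea using NetworkX: enumerate candidate paths in order of expected travel time, then keep the one maximizing the probability of arriving within a time budget, treating each link's travel time as an independent normal variable. The normality assumption, the network, and the fixed candidate count k are illustrative; the paper's algorithm is a backtracking search with a guaranteed global optimum, which this sketch does not reproduce.

```python
import math
import networkx as nx

def on_time_probability(G, path, budget):
    """P(total travel time <= budget), assuming independent normal link times."""
    mu = sum(G[u][v]["mean"] for u, v in zip(path, path[1:]))
    var = sum(G[u][v]["var"] for u, v in zip(path, path[1:]))
    return 0.5 * (1 + math.erf((budget - mu) / math.sqrt(2 * var)))

def most_reliable_path(G, s, t, budget, k=10):
    """Scan the k shortest (by mean) simple paths and keep the most reliable."""
    best, best_p = None, -1.0
    for i, path in enumerate(nx.shortest_simple_paths(G, s, t, weight="mean")):
        p = on_time_probability(G, path, budget)
        if p > best_p:
            best, best_p = path, p
        if i + 1 >= k:
            break
    return best, best_p

# Illustrative network: the shorter-mean path is riskier (high variance).
G = nx.DiGraph()
G.add_edge("A", "B", mean=10, var=64)   # fast but unreliable
G.add_edge("B", "D", mean=10, var=64)
G.add_edge("A", "C", mean=12, var=1)    # slower but dependable
G.add_edge("C", "D", mean=12, var=1)
print(most_reliable_path(G, "A", "D", budget=26))   # picks A-C-D
```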

  11. Test Program for Stirling Radioisotope Generator Hardware at NASA Glenn Research Center

    Science.gov (United States)

    Lewandowski, Edward J.; Bolotin, Gary S.; Oriti, Salvatore M.

    2015-01-01

    Stirling-based energy conversion technology has demonstrated the potential for high-efficiency, low-mass power systems for future space missions. This capability is beneficial, if not essential, to making certain deep space missions possible. Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-W radioisotope power system. A variety of flight-like hardware, including Stirling convertors, controllers, and housings, was designed and built under the ASRG flight development project. To support future Stirling-based power system development, NASA has proposals that, if funded, will allow this hardware to go on test at the NASA Glenn Research Center. While future flight hardware may not be identical to the hardware developed under the ASRG flight development project, many components will likely be similar, and system architectures may have heritage to ASRG. Thus, the importance of testing the ASRG hardware to the development of future Stirling-based power systems cannot be overstated. This proposed testing will include performance testing, extended operation to establish an extensive reliability database, and characterization testing to quantify subsystem and system performance and better understand system interfaces. This paper details the proposed test program for Stirling radioisotope generator hardware at NASA Glenn. It explains the rationale behind the proposed tests and how these tests will meet the stated objectives.

  12. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection, and how they were solved, are narrated. (M.G.B.)

  13. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,
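
    The core correlator operation is simple to express in software. A toy FX-style sketch (FFT-channelize both receiver streams, multiply one by the conjugate of the other, and integrate over time) follows; the channel count, stream length, and noise levels are arbitrary illustrative choices.

```python
import numpy as np

def fx_correlate(x, y, nchan=64):
    """Toy FX correlator: FFT-channelize two receiver streams, then
    cross-multiply and integrate over time to get a cross-spectrum."""
    nspec = min(len(x), len(y)) // nchan
    X = np.fft.fft(x[:nspec * nchan].reshape(nspec, nchan), axis=1)
    Y = np.fft.fft(y[:nspec * nchan].reshape(nspec, nchan), axis=1)
    return (X * np.conj(Y)).mean(axis=0)      # integrated cross-spectrum

# Illustrative use: a common sky signal buried in independent receiver noise.
rng = np.random.default_rng(2)
sky = rng.standard_normal(1 << 16)
x = sky + 5 * rng.standard_normal(sky.size)   # receiver 1
y = sky + 5 * rng.standard_normal(sky.size)   # receiver 2
vis = fx_correlate(x, y)
print(abs(vis).mean())   # correlated sky power survives the integration
```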

  14. Subgrouping Automata: automatic sequence subgrouping using phylogenetic tree-based optimum subgrouping algorithm.

    Science.gov (United States)

    Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee

    2014-02-01

    Sequence subgrouping for a given sequence set can enable various informative tasks, such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' was constructed. First, a tree analysis module analyzes the structure of the tree and calculates all possible subgroups at each node. A sequence similarity analysis module calculates the average sequence similarity for all subgroups at each node. A representative sequence generation module finds a representative sequence using profile analysis and self-scoring for each subgroup. Average sequence similarities are calculated for all nodes, and 'Subgrouping Automata' searches for the node showing the statistically maximum increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents both under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.

  15. Optimum Condition for Plutonium Electrodeposition Process in Radiochemistry and Environment Laboratory, Nuclear Malaysia

    International Nuclear Information System (INIS)

    Yii, Mei-Wo; Abdullah Siddiqi Ismail

    2014-01-01

    Determination of the concentrations of alpha-emitting plutonium radionuclides such as Pu-238, Pu-239, and Pu-240 in a sample requires extensive radiochemical purification to separate them from other interfering alpha emitters. The purified isotopes are then electrodeposited onto a stainless steel disc and quantified using an alpha spectrometry counter. In the Radiochemistry and Environment Laboratory (RAS), Nuclear Malaysia, quantification is done by comparing these isotopes with the recovery of a known amount of plutonium tracer, Pu-242, added into the sample prior to analysis. This study was conducted to find the optimum conditions for the electrolysis process used at RAS. Four variable parameters that may affect the percentage recovery of the tracer, namely the current, the cathode-to-anode distance, the pH, and the electrolysis duration, were identified and studied. The study was carried out using a Pu-242 standard solution, and the deposition disc was counted using a zinc sulphide (silver) counter. The study outcomes suggest that the optimum conditions for reducing plutonium ions are a current of 1-1.1 A, an electrode distance of 3-5 mm, a pH of 2.2-2.5, and a minimum electrolysis duration of 2 hours. (author)

  16. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  17. Optimum structure of Whipple shield against hypervelocity impact

    International Nuclear Information System (INIS)

    Lee, M

    2014-01-01

    Hypervelocity impact of a spherical aluminum projectile onto two spaced aluminum plates (a Whipple shield) was simulated to estimate an optimum structure. The Smooth Particle Hydrodynamics (SPH) code, which has a unique migration scheme from a rectangular coordinate system to an axisymmetric one, was used. The ratio of the front plate thickness to the sphere diameter varied from 0.06 to 0.48, and the impact velocity considered was 6.7 km/s. The procedure was as follows. To validate the early-stage simulation, the shapes of the debris clouds were first compared with previously published experimental pictures, showing good agreement. Next, the debris cloud expansion angle was predicted; it shows a maximum value of 23 degrees for a front-bumper-thickness-to-sphere-diameter ratio of 0.23. The critical sphere diameter causing failure of the rear wall was also examined while keeping the total thickness of the two plates constant. There exists an optimum thickness ratio of front bumper to rear wall, which is identified as a function of the size combination of the impacting body and the front and rear plates. The debris-cloud-expansion-correlated optimum thickness ratio study provides good insight into hypervelocity impact on spaced target systems.

  18. Optimum structure of Whipple shield against hypervelocity impact

    Science.gov (United States)

    Lee, M.

    2014-05-01

    Hypervelocity impact of a spherical aluminum projectile onto two spaced aluminum plates (a Whipple shield) was simulated to estimate an optimum structure. The Smooth Particle Hydrodynamics (SPH) code, which has a unique migration scheme from a rectangular coordinate system to an axisymmetric one, was used. The ratio of the front plate thickness to the sphere diameter varied from 0.06 to 0.48, and the impact velocity considered was 6.7 km/s. The procedure was as follows. To validate the early-stage simulation, the shapes of the debris clouds were first compared with previously published experimental pictures, showing good agreement. Next, the debris cloud expansion angle was predicted; it shows a maximum value of 23 degrees for a front-bumper-thickness-to-sphere-diameter ratio of 0.23. The critical sphere diameter causing failure of the rear wall was also examined while keeping the total thickness of the two plates constant. There exists an optimum thickness ratio of front bumper to rear wall, which is identified as a function of the size combination of the impacting body and the front and rear plates. The debris-cloud-expansion-correlated optimum thickness ratio study provides good insight into hypervelocity impact on spaced target systems.

  19. Optimum design for rotor-bearing system using advanced genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Young Chan; Choi, Seong Pil; Yang, Bo Suk

    2001-01-01

    This paper describes a combinational method to compute the global and local solutions of optimization problems. The present hybrid algorithm uses both a genetic algorithm and a local concentrated search algorithm (e.g., the simplex method). The hybrid algorithm is not only faster than the standard genetic algorithm but also supplies a more accurate solution; in addition, it can find both global and local optimum solutions. The algorithm can be applied to minimize the resonance response (Q factor) and to place the critical speeds as far from the operating speed as possible. These factors play very important roles in designing a rotor-bearing system under dynamic behavior constraints. In the present work, the shaft diameter, the bearing length, and the clearance are used as the design variables.
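
    A generic sketch of such a hybrid — a small genetic algorithm for global exploration followed by Nelder-Mead simplex refinement of the best individual — is shown below on a stand-in objective. The `objective` function, population size, and operator choices are illustrative assumptions; the actual rotor-bearing cost (Q factor plus critical-speed constraints over shaft diameter, bearing length, and clearance) would replace it.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Stand-in for the rotor-bearing cost (e.g., resonance Q factor plus
    penalties for critical speeds near the operating speed)."""
    return np.sum((x - 1.7) ** 2) + 0.3 * np.sin(5 * x).sum()

def hybrid_ga(obj, bounds, pop_size=30, generations=60,
              rng=np.random.default_rng(3)):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([obj(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(bounds)) < 0.5, a, b)  # uniform crossover
            child += rng.normal(0, 0.05 * (hi - lo))               # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([obj(p) for p in pop])]
    # Local refinement: polish the GA result with the simplex method.
    return minimize(obj, best, method="Nelder-Mead").x

x_opt = hybrid_ga(objective, bounds=[(0, 5)] * 3)
print(x_opt, objective(x_opt))
```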

  20. Optimum Arrangement of Reactive Power Sources While Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    A. M. Gashimov

    2010-01-01

    Full Text Available Reduction of total losses in a distribution electricity supply network is an important measure for improving the efficiency of electric power supply systems. This objective can be achieved by optimum placement of reactive power sources at proper places in the distribution network. The proposed methodology is based on the application of a genetic algorithm. Total expenses for the installation and operation of capacitor banks, together with expenses related to electric power losses, form the efficiency function used to determine the locations and optimum power values of the capacitor banks. The methodology is most efficient for selecting optimum places in the network where capacitor banks should be installed, with due account of their power control depending on the switched-on load value at the nodes.

  1. Open source hardware and software platform for robotics and artificial intelligence applications

    Science.gov (United States)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems creates a flexible and expandable platform for further developments in the future, in both the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots.

  2. Open source hardware and software platform for robotics and artificial intelligence applications

    International Nuclear Information System (INIS)

    Liang, S Ng; Tan, K O; Clement, T H Lai; Ng, S K; Mohammed, A H Ali; Mailah, Musa; Yussof, Wan Azhar; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-01-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems creates a flexible and expandable platform for further developments in the future, in both the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots. (paper)

  3. X-Window for process control in a mixed hardware environment

    International Nuclear Information System (INIS)

    Clausen, M.; Rehlich, K.

    1992-01-01

    X-Window is a common standard for display purposes on current workstations. The ability to create more than one window on a single screen enables operators to obtain more information about the process. Handling multiple windows from different control systems running on mixed hardware is one of the problems this paper describes. Experience shows that X-Window is a standard by definition, but not in every implementation; nevertheless, it is an excellent tool for separating data acquisition and display from each other over long distances, using different types of hardware and software for communication and display. Our experience with X-Window displays for the cryogenic control system and the vacuum control system at HERA on DEC and SUN hardware is described. (author)

  4. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  5. Current trends in hardware and software for brain-computer interfaces (BCIs).

    Science.gov (United States)

    Brunner, P; Bianchi, L; Guger, C; Cincotti, F; Schalk, G

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  6. Current trends in hardware and software for brain-computer interfaces (BCIs)

    Science.gov (United States)

    Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  7. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering, and the techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
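
    The warping view of FBP can be imitated in a few lines of NumPy/SciPy: ramp-filter each projection, smear it across a 2D buffer, and rotate-accumulate, with the rotation standing in for the texture-mapping warp the hardware performs. This sketch assumes a parallel-beam sinogram and a synthetic disk phantom; sizes and angle sampling are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def filtered_backprojection(sinogram, angles_deg):
    """Toy parallel-beam FBP. sinogram: (n_angles, n_det); each projection is
    ramp-filtered, smeared into a square image, and rotate-accumulated --
    the rotation plays the role of the texture-mapping warp in hardware."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))              # ideal ramp filter
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(sinogram, angles_deg):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        smear = np.tile(filtered, (n_det, 1))         # constant along rays
        recon += rotate(smear, ang, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# Illustrative use with a synthetic sinogram of a centered disk.
n_det, angles = 128, np.arange(0.0, 180.0, 1.0)
r = np.linspace(-1, 1, n_det)
proj = 2 * np.sqrt(np.clip(0.25 - r ** 2, 0, None))  # analytic disk projection
sino = np.tile(proj, (len(angles), 1))               # same at every angle
image = filtered_backprojection(sino, angles)
```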

  8. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state-function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), which must drive a large number of power transistors (18 transistors and 18 independent PWM outputs) and handle "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular since computed operations may be executed much faster and more efficiently owing to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described with the use of the Verilog hardware description language. Preliminary results of a logic implementation oriented toward Xilinx FPGAs (particularly, a low-cost device from Xilinx's Artix-7 family) are also presented.
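
    For readers unfamiliar with CORDIC, the kernel is shift-and-add only: each iteration rotates the vector by arctan(2^-i) in whichever direction drives the residual angle toward zero, and the constant gain is compensated at the end. A minimal floating-point model follows (an FPGA version would use fixed-point arithmetic and hard-wired shifts):

```python
import math

def cordic_sin_cos(angle, iterations=24):
    """Compute (cos, sin) of `angle` (radians, |angle| <= pi/2) with the
    CORDIC rotation mode: only additions and halvings (shifts in hardware)."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain 1/An
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                   # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * K, y * K

c, s = cordic_sin_cos(math.pi / 5)
print(c, math.cos(math.pi / 5))   # agree to roughly 7 decimal places
print(s, math.sin(math.pi / 5))
```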

  9. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations for the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support-weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
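
    A compact sketch of the edge-directed SAD idea: detect edges with a gradient threshold, then run the window-based SAD disparity search only at edge pixels. The window size, disparity range, threshold, and test images below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def edge_mask(img, thresh=30.0):
    """Simple gradient-magnitude edge detector."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def sad_stereo_on_edges(left, right, max_disp=32, win=3, thresh=30.0):
    """Fixed-support SAD stereo restricted to edge pixels of the left image;
    non-edge pixels keep disparity 0 (they would be filled in later)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    edges = edge_mask(left, thresh)
    L, R = left.astype(float), right.astype(float)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            if not edges[y, x]:
                continue          # skip non-edge pixels: reduced search space
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - R[y - win:y + win + 1,
                                      x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Illustrative use: a right image shifted by 5 pixels plays the left image.
rng = np.random.default_rng(4)
right = rng.integers(0, 255, (64, 96)).astype(float)
left = np.roll(right, 5, axis=1)                      # true disparity of 5
d = sad_stereo_on_edges(left, right)
print(np.bincount(d[10:50, 40:90].ravel()).argmax())  # 5, the true shift
```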

  10. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range, including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a 'kill switch' to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  11. Optimum profit model considering production, quality and sale problem

    Science.gov (United States)

    Chen, Chung-Ho; Lu, Chih-Lun

    2011-12-01

    Chen and Liu ['Procurement Strategies in the Presence of the Spot Market - an Analytical Framework', Production Planning and Control, 18, 297-309] presented an optimum profit model between producers and purchasers for a supply chain system with a pure procurement policy. However, their model, with a simple manufacturing cost, did not consider the customer's cost of use. In this study, a modified Chen and Liu model is addressed for determining the optimum product and process parameters. The authors propose a modified Chen and Liu model under a two-stage screening procedure: a surrogate variable having a high correlation with the measurable quality characteristic is measured directly in the first stage, and the measurable quality characteristic itself is measured in the second stage when the product decision cannot be made in the first stage. The customer's cost of use is measured by adopting Taguchi's quadratic quality loss function. The optimum purchaser's order quantity, the producer's product price, and the process quality level are jointly determined by maximizing the expected profit between them.
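
    Taguchi's quadratic loss assigns the customer cost L(y) = k (y − m)² to a unit whose quality characteristic is y when the target is m; in expectation over production this becomes k(σ² + (μ − m)²), so both variance and off-target bias are penalized. A two-line illustration with assumed numbers:

```python
k, m = 2.5, 10.0        # loss coefficient and target value (assumed)
mu, sigma = 10.2, 0.4   # process mean and standard deviation (assumed)
expected_loss = k * (sigma ** 2 + (mu - m) ** 2)
print(f"expected quality loss per unit: {expected_loss:.3f}")  # 2.5*(0.16+0.04) = 0.5
```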

  12. Connection of optimum temporal exponents with a principle of least action

    Science.gov (United States)

    Sergeev, E. V.; Karzanov, A. V.; Tremaskin, A. V.

    2008-06-01

    The principle of least action states that motion along optimum trajectories corresponds to the least expenditure of action. In the canonical approach this statement reduces to searching for the extremum of the action. For a direct proof of the least expenditure of action on optimum trajectories, a mathematical algorithm is offered, at the basis of which lies the concept of optimum time exponents. Using this algorithm, various modes of motion of charged particles are explored: harmonic motion, motion in a homogeneous force field, motion in a central force field, and motion by inertia. A minimum in the rate of expenditure of action is detected for the harmonic motions.

  13. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  14. Derivative load voltage and particle swarm optimization to determine optimum sizing and placement of shunt capacitor in improving line losses

    Directory of Open Access Journals (Sweden)

    Mohamed Milad Baiek

    2016-12-01

    Full Text Available The purpose of this research is to study the optimal size and placement of a shunt capacitor in order to minimize line loss. The derivative of the load bus voltage was calculated to determine the sensitive load buses, which are then the candidates for shunt capacitor placement. Particle swarm optimization (PSO) was demonstrated on the IEEE 14-bus power system to find the optimum size of the shunt capacitor for reducing line loss. The objective function was applied to determine the proper placement of the capacitor and to obtain solutions satisfying the constraints. The simulation was run in Matlab under two scenarios, namely a base case and a 100% load increase. The derivative of the load bus voltage was computed to determine the most sensitive load bus, and PSO was carried out to determine the optimum sizing of the shunt capacitor at that bus. The results show that the most sensitive bus was bus number 14 for both the base case and the 100% load increase. The optimum sizing was 8.17 Mvar for the base case and 23.98 Mvar for the 100% load increase. Line losses were reduced by approximately 0.98% for the base case and by about 3.16% with the increased load. The proposed method also outperformed the harmony search algorithm (HSA): HSA recorded loss reduction ratios of about 0.44% for the base case and 2.67% when the load was increased by 100%, while PSO achieved loss reduction ratios of about 1.12% and 4.02%, respectively. The results of this study support the previous work, and it is concluded that PSO can solve such engineering problems, determining shunt capacitor sizing on a power system simply and accurately compared with other evolutionary optimization methods.
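
    A bare-bones PSO sketch for the one-dimensional sizing subproblem: particles explore candidate Mvar ratings and converge on the value minimizing a loss function. A real study would evaluate each candidate with a load-flow solver at the sensitive bus; the quadratic stand-in below (centered, for illustration, on the paper's 8.17 Mvar figure) only keeps the example self-contained.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 rng=np.random.default_rng(5)):
    """Minimal particle swarm optimization on a scalar decision variable."""
    x = rng.uniform(lo, hi, n_particles)            # positions (candidate Mvar)
    v = np.zeros(n_particles)                       # velocities
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pbest_f)]                   # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, f(g)

# Stand-in for line loss vs. capacitor size: a load-flow run would go here.
line_loss = lambda q_mvar: 0.002 * (q_mvar - 8.17) ** 2 + 13.0
q_opt, loss = pso_minimize(line_loss, 0.0, 30.0)
print(f"optimum capacitor size ~ {q_opt:.2f} Mvar, loss {loss:.3f} MW")
```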

  15. Utilizing IXP1200 hardware and software for packet filtering

    OpenAIRE

    Lindholm, Jeffery L.

    2004-01-01

    As network processors have advanced in speed and efficiency they have become more and more complex in both hardware and software configurations. Intel's IXP1200 is one of these new network processors that has been given to different universities worldwide to conduct research on. The goal of this thesis is to take the first step in starting that research by providing a stable system that can provide a reliable platform for further research. This thesis introduces the fundamental hardware of In...

  16. The hardware control system for WEAVE at the William Herschel telescope

    NARCIS (Netherlands)

    Delgado Hernandez, Jose M.; Rodríguez-Ramos, Luis F.; Cano Infantes, Diego; Martin, Carlos; Bevil, Craige; Picó, Sergio; Dee, Kevin M.; Abrams, Don Carlos; Lewis, Ian J.; Pragt, Johan; Stuik, Remko; Tromp, Niels; Dalton, Gavin; L. Aguerri, J. Alfonso; Bonifacio, Piercarlo; Middleton, Kevin F.; Trager, Scott C.

    This work describes the hardware control system of the Prime Focus Corrector (PFC) and the Spectrograph, two of the main parts of WEAVE, a multi-object fiber spectrograph for the WHT telescope. The PFC and Spectrograph control system hardware is based on Allen-Bradley Programmable Automation Controllers.

  17. A Message-Passing Hardware/Software Cosimulation Environment for Reconfigurable Computing Systems

    Directory of Open Access Journals (Sweden)

    Manuel Saldaña

    2009-01-01

    Full Text Available High-performance reconfigurable computers (HPRCs) provide a mix of standard processors and FPGAs to collectively accelerate applications. This introduces new design challenges, such as the need for portable programming models across HPRCs and system-level verification tools. To address the need for cosimulating a complete heterogeneous application using both software and hardware in an HPRC, we have created a tool called the Message-passing Simulation Framework (MSF). We have used it to simulate and develop an interface enabling an MPI-based approach to exchanging data between X86 processors and hardware engines inside FPGAs. The MSF can also be used as an application development tool that enables multiple FPGAs in simulation to exchange messages amongst themselves and with X86 processors. As an example, we simulate a LINPACK benchmark hardware core using an Intel-FSB-Xilinx-FPGA platform to quickly prototype the hardware, to test the communications, and to verify the benchmark results.

  18. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    Science.gov (United States)

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  19. Optimum position for wells producing at constant wellbore pressure

    Energy Technology Data Exchange (ETDEWEB)

    Camacho-Velazquez, R.; Rodriguez de la Garza, F. [Univ. Nacional Autonoma de Mexico, Mexico City (Mexico); Galindo-Nava, A. [Inst. Mexicanos del Petroleo, Mexico City (Mexico)]|[Univ. Nacional de Mexico, Mexico City (Mexico); Prats, M.

    1994-12-31

    This paper deals with the determination of the optimum position of several wells, producing at different constant wellbore pressures from a two-dimensional closed-boundary reservoir, to maximize the cumulative production or the total flow rate. To achieve this objective the authors use an improved version of the analytical solution recently proposed by Rodriguez and Cinco-Ley, and an optimization algorithm based on a quasi-Newton procedure with line search. At each iteration the algorithm approximates the negative of the objective function by a quadratic relation derived from a Taylor series. The improvement of Rodriguez and Cinco's solution is attained in four ways. First, an approximation is obtained which works better at early times (before the boundary-dominated period starts) than the previous solution. Second, the infinite sums that are present in the solution are expressed in a condensed form, which is relevant for reducing the computer time when the optimization algorithm is used. Third, the solution is modified to take into account the possibility of wells starting to produce at different times, which allows the authors to deal with the problem of finding the optimum positions for an infill drilling program. Last, the solution is extended to include the possibility of changing the value of the wellbore pressure, or of stimulating any of the wells, at any time. When the wells are producing at different wellbore pressures, it is found that the optimum position is a function of time; otherwise the optimum position is fixed.

  20. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  1. Uniform and Non-Uniform Optimum Scalar Quantizers Performances: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Fendy Santoso

    2008-05-01

    Full Text Available The aim of this research is to investigate source coding, the representation of information source output by finite R bits/symbol. The performance of optimum quantisers subject to an entropy constraint has been studied. The definitive work in this area is best summarised by Shannon’s source coding theorem, that is, a source with entropy H can be encoded with arbitrarily small error probability at any rate R (bits/source output) as long as R > H. Conversely, if R < H, the error probability will be driven away from zero, independent of the complexity of the encoder and the decoder employed. In this context, the main objective of engineers is to design the optimum code. Unfortunately, the rate-distortion theorem does not provide the recipe for such a design. The theorem does, however, provide the theoretical limit so that we know how close we are to the optimum. A full understanding of the theorem also helps in setting the direction to achieve such an optimum. In this research, we have investigated the performances of two practical scalar quantisers, i.e., a Lloyd-Max quantiser and a uniformly defined one, and also a well-known entropy coding scheme, i.e., Huffman coding, against their theoretically attainable optimum performance due to Shannon’s limit. It has been shown that our uniformly defined quantiser could demonstrate superior performance. The performance improvements, in fact, are more noticeable at higher bit rates.
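
    As a rough illustration of the two quantiser families compared above, the sketch below trains a Lloyd-Max quantiser with the classic centroid/boundary iteration and contrasts its mean squared error with a uniform quantiser at the same rate. The Gaussian source, R = 3 bits and sample count are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 100_000)    # illustrative Gaussian source
    levels = 2 ** 3                      # R = 3 bits/sample

    # Uniform quantiser over the sample range
    edges = np.linspace(x.min(), x.max(), levels + 1)
    codebook_u = 0.5 * (edges[:-1] + edges[1:])
    xq_u = codebook_u[np.clip(np.digitize(x, edges[1:-1]), 0, levels - 1)]

    # Lloyd-Max iteration: boundaries at codeword midpoints, codewords at centroids
    codebook = codebook_u.copy()
    for _ in range(100):
        bounds = 0.5 * (codebook[:-1] + codebook[1:])
        idx = np.digitize(x, bounds)
        codebook = np.array([x[idx == k].mean() if np.any(idx == k) else codebook[k]
                             for k in range(levels)])
    xq_lm = codebook[idx]

    print("uniform   MSE: %.5f" % np.mean((x - xq_u) ** 2))
    print("Lloyd-Max MSE: %.5f" % np.mean((x - xq_lm) ** 2))
    ```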

  2. A first course in optimum design of yacht sails

    Science.gov (United States)

    Sugimoto, Takeshi

    1993-03-01

    The optimum sail geometry is analytically obtained for the case of maximizing the thrust under equality and inequality constraints on the lift and the heeling moment. A single mainsail is assumed to be set close-hauled in uniform wind and upright on the flat sea surface. The governing parameters are the mast height and the gap between the sail foot and the sea surface. The lifting line theory is applied to analyze the aerodynamic forces acting on a sail. The design method consists of the variational principle and a feasibility study. Almost triangular sails are found to be optimum. Their advantages are discussed.

  3. Deposition uniformity, particle nucleation and the optimum conditions for CVD in multi-wafer furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Griffiths, S.K.; Nilson, R.H.

    1996-06-01

    A second-order perturbation solution describing the radial transport of a reactive species and concurrent deposition on wafer surfaces is derived for use in optimizing CVD process conditions. The result is applicable to a variety of deposition reactions and accounts for both diffusive and advective transport, as well as both ordinary and Knudsen diffusion. Based on the first-order approximation, the deposition rate is maximized subject to a constraint on the radial uniformity of the deposition rate. For a fixed reactant mole fraction, the optimum pressure and optimum temperature are obtained using the method of Lagrange multipliers. This yields a weak one-sided maximum; deposition rates fall as pressures are reduced but remain nearly constant at all pressures above the optimum value. The deposition rate is also maximized subject to dual constraints on the uniformity and particle nucleation rate. In this case, the optimum pressure, optimum temperature and optimum reactant fraction are similarly obtained, and the resulting maximum deposition rate is well defined. These results are also applicable to CVI processes used in composites manufacturing.

  4. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    Science.gov (United States)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware-dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware-in-the-loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate that the goal of the effort was met: the new HIL configurations have functionality and performance similar to those of the baseline C-MAPSS40k system.

  5. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system

    Directory of Open Access Journals (Sweden)

    Daniel Brüderle

    2009-06-01

    Full Text Available Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  6. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  7. Integrated conception of hardware/software mixed systems used in nuclear instrumentation

    International Nuclear Information System (INIS)

    Dias, Ailton F.; Sorel, Yves; Akil, Mohamed

    2002-01-01

    Hardware/software codesign carries out the design of systems composed of a hardware portion, with specific components, and a software portion, with a microprocessor-based architecture. This paper describes the Algorithm Architecture Adequation (AAA) design methodology - originally oriented to programmable multicomponent architectures - its extension to reconfigurable circuits, and its application to the design and development of nuclear instrumentation systems composed of programmable and configurable circuits. The AAA methodology uses a unified model, based on graph theory, to describe the algorithm, the architecture and the implementation. The great advantage of the AAA methodology is the use of the same model from the specification to the implementation of hardware/software systems, reducing the complexity and the design time. (author)

  8. Method for Estimating Optimum Free Resonant Frequencies in Overcoupled WPT System

    Directory of Open Access Journals (Sweden)

    Dong-Wook Seo

    2017-01-01

    Full Text Available In our previous work, we proposed a method to maximize the output power of a wireless power transfer (WPT) system even in the overcoupled state by controlling the free resonant frequencies, and we derived closed-form expressions for the optimum free resonant frequencies of the primary and secondary resonators. In this paper, we propose a mutual coupling approach to derive the optimum free resonant frequencies and show the measured power transfer efficiency (PTE) in terms of the transmission efficiency as well as the system energy efficiency. The results of the proposed approach exactly coincide with those of the previous work, and the fabricated prototype achieves a transmission efficiency of about 80% by tuning the free resonant frequencies to the optimum values in the overcoupled state.
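
    The effect described above can be reproduced numerically with a plain two-mesh circuit model, without the paper's closed-form expressions. In the sketch below, two series RLC resonators are coupled by a mutual inductance and driven at a fixed frequency; sweeping the tuning capacitors (i.e., the free resonant frequencies) recovers far more load power than leaving both resonators tuned to the drive frequency. All component values are invented for illustration.

    ```python
    import numpy as np

    f0 = 6.78e6                       # drive frequency, Hz (assumed)
    w = 2 * np.pi * f0
    L1 = L2 = 10e-6                   # coil inductances, H (assumed)
    R1 = R2 = 0.5                     # coil losses, ohm (assumed)
    Rs, RL = 1.0, 10.0                # source and load resistances, ohm (assumed)
    k = 0.4                           # strong coupling -> overcoupled state
    M = k * np.sqrt(L1 * L2)

    def load_power(C1, C2, Vs=1.0):
        """Solve the two coupled meshes and return the power delivered to RL."""
        Z11 = Rs + R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
        Z22 = RL + R2 + 1j * w * L2 + 1.0 / (1j * w * C2)
        Zm = 1j * w * M
        I2 = -Zm * Vs / (Z11 * Z22 - Zm ** 2)   # mesh current in the secondary
        return 0.5 * RL * abs(I2) ** 2

    C_res = 1.0 / (w ** 2 * L1)       # capacitance tuning a coil to f0
    best = max((load_power(a * C_res, b * C_res), a, b)
               for a in np.linspace(0.5, 1.5, 201)
               for b in np.linspace(0.5, 1.5, 201))

    print("power with both tuned to f0 : %.3e W" % load_power(C_res, C_res))
    print("best power %.3e W at C1 = %.2f*C_res, C2 = %.2f*C_res" % best)
    ```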

  9. Calculations enable optimum design of magnetic brake

    Science.gov (United States)

    Kosmahl, H. G.

    1966-01-01

    Mathematical analysis and computations determine optimum magnetic coil configurations for a magnetic brake which controllably decelerates a free falling load to a soft stop. Calculations on unconventionally wound coils determine the required parameters for the desired deceleration with minimum electrical energy supplied to the stationary coil.

  10. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track-finding at 100 kHz and based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGA) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  11. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

    Full Text Available This paper presents the implementation by students of a complex calculator in hardware. The project meets hardware design goals and also strongly motivates students to use competences learned in other subjects. The learning process associated with system design is hard enough, because the students have to deal with parallel execution, signal delay, synchronization, and so on. To strengthen the knowledge of hardware design, a project-based learning (PBL) methodology is proposed. Moreover, it is also used to reinforce cross subjects like math and software programming. This methodology creates a course dynamic that is closer to a professional environment, where students will work with software and mathematics to resolve hardware design problems. The students design the functionality of the calculator from scratch: they decide which math operations it is able to perform, the format of the operands, and how a complex equation is entered into the calculator. This increases the students' intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to program in software the decisions about how to implement the selected mathematical algorithm. Although math and hardware design are two tough subjects for students, the perception that they get at the end of the course is quite positive.

  12. Optimum concrete compression strength using bio-enzyme

    Directory of Open Access Journals (Sweden)

    Bagio Tony Hartono

    2017-01-01

    Full Text Available To make concrete with high compressive strength that meets particular specifications, quality control of the concrete mix and added materials are needed in addition to the main concrete constituents, in line with current concrete-mix technology that produces concrete with specific characteristics. Bio-enzyme was added to five concrete mixtures, which were compared with normal concrete in order to determine the optimum bio-enzyme content for increasing concrete strength. The mixtures contained bio-enzyme at 200 ml/m3, 400 ml/m3, 600 ml/m3, 800 ml/m3 and 1000 ml/m3, alongside normal concrete. The crushing test results tend to follow a fourth-degree (least-squares quartic) polynomial regression model, as represented in the attached data series; for the design mix fc′ = 25 MPa this generates an optimum value of 33.98 MPa at a bio-enzyme dosage of 509 ml/m3.
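
    The curve-fitting step described above is easy to reproduce: fit a least-squares quartic to strength-versus-dosage data and read off the maximum. The data points in this sketch are invented for illustration; only the method (fourth-degree polynomial regression) mirrors the abstract.

    ```python
    import numpy as np

    dosage = np.array([0, 200, 400, 600, 800, 1000], dtype=float)      # ml/m3
    strength = np.array([25.0, 28.5, 31.5, 33.0, 30.0, 27.0])          # MPa, hypothetical

    coeffs = np.polyfit(dosage, strength, deg=4)   # least-squares quartic fit
    poly = np.poly1d(coeffs)

    grid = np.linspace(dosage.min(), dosage.max(), 2001)
    best = grid[np.argmax(poly(grid))]             # dosage maximizing the fit
    print("optimum dosage ~ %.0f ml/m3, predicted strength ~ %.2f MPa"
          % (best, poly(best)))
    ```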

  13. The theory of an ‘optimum currency area’

    Directory of Open Access Journals (Sweden)

    Jarosław Kundera

    2012-12-01

    Full Text Available The main goal of this paper is to analyse and distinguish the main components of the theory of an ‘Optimum Currency Area’. The theory of an optimum currency area indicates some essential elements as preconditions for the successful introduction of a common currency: high mobility of labour, openness of the economy defined as a high proportion of tradable to non-tradable goods, and high diversification of domestic production before joining the union. The article’s analysis helps in better understanding the reasons for the current crisis in the euro zone. The main problem with a common currency area is the adjustment to imbalances, which cannot take place through exchange rates under a common currency. The missing elements of the theory are the role of capital mobility in correcting interregional balance-of-payments disequilibria and the lack of a common budget with sufficient own resources during debt crises in member countries. The theory of an optimum currency area has noticed the importance of coordination between fiscal and monetary policy and the necessity of redistribution of resources among partners. However, it does not say much about the methods applied, how to deal with debt crises, or what the cost of a potential breakup of a monetary union would be.

  14. Semianalytical and Seminumerical Calculations of Optimum Material Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Gunnar

    1963-06-15

    Perturbation theory applied to the multigroup diffusion equations gives a general condition for the optimum distribution of reactor materials. A certain function of the material densities and the fluxes, here called the W(eight) function, must thus be constant wherever the variable material density is larger than zero, provided changes in this density affect only the group constants where the changes occur. The weight function is, however, generally a complicated function, and complete solutions have therefore previously been presented only for the special case where a constant weight function implies a constant thermal flux. It is demonstrated that the condition of a constant weight function can be used together with well-known methods for the numerical solution of the multigroup diffusion equations to obtain optimum material distributions also when the thermal flux varies over the core. Solution of the minimum-fuel-mass problem for two reflected reactors thus shows that an effective reflector such as D2O gives a peak in the optimum fuel distribution at the core-reflector interface, while an ineffective reflector such as a breeder blanket or a steel tank wall 'pushes' the fuel away from the strongly absorbing zone. It is also interesting to compare the effective-reflector case with analytically obtained solutions corresponding to flat power density, flat thermal flux and flat fuel density.

  15. Optimum Wing Shape of Highly Flexible Morphing Aircraft for Improved Flight Performance

    Science.gov (United States)

    Su, Weihua; Swei, Sean Shan-Min; Zhu, Guoming G.

    2016-01-01

    In this paper, optimum wing bending and torsion deformations are explored for a mission adaptive, highly flexible morphing aircraft. The complete highly flexible aircraft is modeled using a strain-based geometrically nonlinear beam formulation, coupled with unsteady aerodynamics and six-degrees-of-freedom rigid-body motions. Since there are no conventional discrete control surfaces for trimming the flexible aircraft, the design space for searching the optimum wing geometries is enlarged. To achieve high performance flight, the wing geometry is best tailored according to the specific flight mission needs. In this study, the steady level flight and the coordinated turn flight are considered, and the optimum wing deformations with the minimum drag at these flight conditions are searched by utilizing a modal-based optimization procedure, subject to the trim and other constraints. The numerical study verifies the feasibility of the modal-based optimization approach, and shows the resulting optimum wing configuration and its sensitivity under different flight profiles.

  16. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Experimental validation of optimum resistance moment of concrete slabs reinforced ... other solutions to combat corrosion problems in steel reinforced concrete. ... Eight specimens of two-way spanning slabs reinforced with CFRP bars were ...

  17. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) The CUI structure is composed of nine semantic fields that aid the user in recognizing its purpose.

  18. Total knee arthroplasty using patient-specific blocks after prior femoral fracture without hardware removal

    Directory of Open Access Journals (Sweden)

    Raju Vaishya

    2018-01-01

    Full Text Available Background: The options for performing total knee arthroplasty (TKA) with retained hardware in the femur are mainly removal of the hardware, use of an extramedullary guide, or computer-assisted surgery. Patient-specific blocks (PSBs) have been introduced with many potential advantages, but their use with retained hardware has not been adequately explored. The purpose of the present study was to outline and assess the usefulness of PSBs in performing TKA in patients with retained femoral hardware. Materials and Methods: Nine patients with retained femoral hardware underwent TKA using PSBs. All the surgeries were performed by the same surgeon using the same implants. Nine cases (7 males and 2 females) out of a total of 120 primary TKAs had retained hardware. The average age of the patients was 60.55 years. The retained hardware comprised nails in 6 patients, plates in 2, and screws in 1. Of the nine cases, only one patient needed removal of a screw, which was hindering placement of a pin for the PSB. Results: All the patients had significant improvement in their Knee Society Score (KSS), which improved from 47.0 preoperatively to 86.77 postoperatively (P < 0.001). The mechanical axis was significantly improved (P < 0.03) after surgery. No patient required blood transfusion, and the average tourniquet time was 41 min. Conclusion: TKA using PSBs is useful in patients with retained hardware, with good functional and radiological outcomes.

  19. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
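
    In software, the same event-management role is usually played by a binary heap priority queue; the sketch below is a software analogue of that function (it does not model the paper's pipelined, structured hardware queue). Delays and neuron identifiers are illustrative.

    ```python
    import heapq

    class EventQueue:
        """Deliver spike events in timestamp order (software stand-in)."""
        def __init__(self):
            self._heap = []
            self._seq = 0                       # tie-breaker for equal timestamps

        def push(self, time, neuron_id):
            heapq.heappush(self._heap, (time, self._seq, neuron_id))
            self._seq += 1

        def pop(self):
            time, _, neuron_id = heapq.heappop(self._heap)
            return time, neuron_id

        def empty(self):
            return not self._heap

    q = EventQueue()
    for t_spike, src in [(1.0, 3), (0.2, 7), (0.7, 1)]:
        q.push(t_spike + 0.5, src)              # assumed 0.5 ms synaptic delay
    while not q.empty():
        t, nid = q.pop()
        print("deliver spike from neuron %d at t = %.1f ms" % (nid, t))
    ```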

  20. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and mesh interconnection structures.

  1. Optimum Combining for Rapidly Fading Channels in Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Sonia Furman

    2003-10-01

    Full Text Available Research and technology in wireless communication systems such as radar and cellular networks have successfully implemented alternative design approaches that utilize antenna array techniques, such as optimum combining, to mitigate the degradation effects of multipath in rapidly fading channels. In ad hoc networks, these methods have not yet been exploited, primarily due to the complexity inherent in the network's architecture. With the high demand for improved signal link quality, devices configured with omnidirectional antennas can no longer meet the growing need for link quality and spectrum efficiency. This study takes an empirical approach to determine an optimum combining antenna array based on 3 variants of interelement spacing. For rapid fading channels, the simulation results show that the performance of networked devices retrofitted with our antenna arrays consistently exceeded that of devices with an omnidirectional antenna. Further, with the optimum combiner, the performance increased by over 60% compared to that of an omnidirectional antenna in a rapid fading channel.
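
    The combining step itself is compact enough to sketch. Below, optimum (MMSE) weights w = R⁻¹h are computed for one desired signal and one interferer in flat Rayleigh fading, and the output SINR is compared with a single omnidirectional element. The channel vectors, powers and four-element array are assumptions made for illustration, not the study's simulation setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    M = 4                                  # array elements (assumed)
    # i.i.d. Rayleigh channel vectors for the desired user and one interferer
    h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
    g = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
    noise_var, interf_power = 0.1, 1.0     # assumed powers

    # Interference-plus-noise covariance; optimum weights are w = R^{-1} h
    R = interf_power * np.outer(g, g.conj()) + noise_var * np.eye(M)
    w = np.linalg.solve(R, h)

    sinr_opt = np.real(np.vdot(h, w))      # h^H R^{-1} h: maximum output SINR
    sinr_omni = np.abs(h[0]) ** 2 / (interf_power * np.abs(g[0]) ** 2 + noise_var)
    print("single omnidirectional element SINR: %.2f" % sinr_omni)
    print("optimum combining output SINR      : %.2f" % sinr_opt)
    ```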

  2. An integrated expert system for optimum in core fuel management

    International Nuclear Information System (INIS)

    Abd Elmoatty, Mona S.; Nagy, M.S.; Aly, Mohamed N.; Shaat, M.K.

    2011-01-01

    Highlights: → An integrated expert system constructed for optimum in-core fuel management. → Brief discussion of the ESOIFM Package modules, inputs and outputs. → The Package was applied to the DALAT Nuclear Research Reactor (0.5 MW). → The Package verification showed good agreement. - Abstract: An integrated expert system called Efficient and Safe Optimum In-core Fuel Management (ESOIFM Package) has been constructed to achieve optimum in-core fuel management and to automate the process of data analysis. The Package combines the constructed mathematical models with the adopted artificial intelligence techniques. The paper gives a brief discussion of the ESOIFM Package modules, inputs and outputs. The Package was applied to the DALAT Nuclear Research Reactor (0.5 MW), and the DNRR data have been used as a case study for testing and evaluation of the ESOIFM Package. This paper shows the comparison between the ESOIFM Package burn-up results, the DNRR experimental burn-up data, and the burn-up results of other DNRR codes. The results showed good agreement.

  3. Determination of optimum filter in myocardial SPECT: A phantom study

    International Nuclear Information System (INIS)

    Takavar, A.; Shamsipour, Gh.; Sohrabi, M.; Eftekhari, M.

    2004-01-01

    Background: In myocardial perfusion SPECT, images are degraded by photon attenuation, the distance-dependent collimator-detector response, and photon scatter. Filters greatly affect the quality of nuclear medicine images. Materials and Methods: A phantom simulating the left ventricle of the heart was built, and about 1 mCi of 99mTc was injected into it. Images were taken from this phantom, and several filters, including Parzen, Hamming, Hanning, Butterworth and Gaussian, were applied to the phantom images. By defining criteria such as contrast, signal-to-noise ratio, and defect-size detectability, the best filter can be determined. Results: Cut-off frequencies of 0.325 and 0.5 of the Nyquist frequency were found to be optimum for the Hamming and Hanning filters, respectively. An order of 11 with a cut-off of 0.45 Nq, and an order of 20 with a cut-off of 0.5 Nq, were optimum for the Butterworth and Gaussian filters, respectively. Conclusion: The optimum member of each filter family was obtained
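
    For reference, this is how a Butterworth low-pass with its cut-off expressed as a fraction of the Nyquist frequency can be applied to an image in the frequency domain; the order and cut-off values come from the results above, while the synthetic phantom image is an assumption for illustration.

    ```python
    import numpy as np

    def butterworth_lowpass(image, cutoff_nyquist, order):
        """Apply |H(f)| = 1/sqrt(1 + (f/fc)^(2*order)) in the frequency domain."""
        fy = np.fft.fftfreq(image.shape[0])[:, None]     # cycles/pixel
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        f = np.hypot(fx, fy)
        fc = cutoff_nyquist * 0.5                        # Nyquist = 0.5 cycles/pixel
        H = 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))
        return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0    # toy "phantom"
    noisy = img + 0.3 * rng.normal(size=img.shape)
    smooth = butterworth_lowpass(noisy, cutoff_nyquist=0.45, order=11)
    for name, im in (("noisy", noisy), ("filtered", smooth)):
        print("%s RMSE: %.3f" % (name, np.sqrt(np.mean((im - img) ** 2))))
    ```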

  4. Planning of optimum production from a natural gas field

    Energy Technology Data Exchange (ETDEWEB)

    Van Dam, J

    1968-03-01

    The design of an optimum development plan for a natural gas field always depends on the typical characteristics of the producing field, as well as those of the market to be served by this field. Therefore, a good knowledge of the field parameters, such as the total natural gas reserves, the well productivity, and the dependence of production rates on pipeline pressure and depletion of natural gas reserves, is required prior to designing the development scheme of the field, which in fact depends on the gas-sales contract to be concluded in order to commit the natural gas reserves to the market. In this paper these various technical parameters are discussed in some detail, and on this basis a theoretical/economical analysis of natural gas production is given. For this purpose a simplified economical/mathematical model for the field is proposed, from which optimum production rates at various future dates can be calculated. The results of these calculations are represented in a dimensionless diagram which may serve as an aid in designing optimum development plans for a natural gas field. The use of these graphs is illustrated in a few examples.

  5. Optimum Temperature and Thermal Stability of Crude Polyphenol ...

    African Journals Online (AJOL)

    The optimum temperature was found to be 30°C for the enzyme extracted from guava, ... processing industries because during the processing ... enhance the brown colour produced (Valero et al., ... considerable economic and nutritional loss.

  6. Thermal Comfort and Optimum Humidity Part 1

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    Full Text Available The hydrothermal microclimate is the main component of indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls, and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum, or at least an acceptable, globe temperature; an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature); and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes, which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  7. Thermal Comfort and Optimum Humidity Part 2

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    Full Text Available The hydrothermal microclimate is the main component of indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls, and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum, or at least an acceptable, globe temperature; an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature); and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes, which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  8. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  9. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
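
    As a worked example of that geometry-based capacity calculation (with invented but plausible numbers), multiplying sides, tracks, sectors and bytes per sector gives the drive's raw capacity:

    ```python
    sides = 4                  # two platters, both surfaces used (assumed)
    tracks_per_side = 16_383   # assumed geometry
    sectors_per_track = 63
    bytes_per_sector = 512

    capacity = sides * tracks_per_side * sectors_per_track * bytes_per_sector
    print("raw capacity: %d bytes (~%.1f GB)" % (capacity, capacity / 1e9))  # ~2.1 GB
    ```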

  10. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  11. Hardware and multiagent systems: a study on hybrid architectures

    Directory of Open Access Journals (Sweden)

    Rafhael Rodrigues Cunha

    2015-05-01

    Full Text Available This article presents three case studies on the applicability of reconfigurable hardware architectures, such as FPGAs, for use in multiagent systems. From an analysis aimed at elucidating the results and the contributions that the studies provided to the authors, it is observed that the development of intelligent systems increasingly depends on programming that exploits the hardware to the fullest. This outcome makes reconfigurable hardware the most advisable choice when complex computational problems demand fast and efficient responses, as in the cases studied.

  12. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  13. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test rig used to test hardware concepts designed to meet those requirements is described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  14. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  15. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  16. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve two purposes. They provide the boundary resource threshold verification to the first control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which results in a change of the utilization threshold. Second, they provide a platform for response-time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.
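
    One plausible reading of the regression-based supervision step is sketched below: fit expected TPC-H query latency against scale factor, then flag runs whose latency deviates from the fitted line. The data, the single-feature linear model and the tolerance are all assumptions for illustration; the paper's actual control system combines more techniques than this.

    ```python
    import numpy as np

    scale = np.array([1, 2, 4, 8, 16], dtype=float)      # TPC-H scale factors
    latency = np.array([1.1, 2.0, 4.3, 8.1, 16.4])       # seconds, hypothetical

    slope, intercept = np.polyfit(scale, latency, deg=1)  # linear regression

    def consistent(scale_factor, observed, tol=0.15):
        """True if observed latency is within tol of the regression prediction."""
        expected = slope * scale_factor + intercept
        return abs(observed - expected) / expected <= tol

    print(consistent(8, 8.3))     # healthy run -> True
    print(consistent(8, 12.9))    # degraded hardware -> False, flagged
    ```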

  17. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, relating the area and the delay to the neurons' fan-in and the weights' precision. Based on all of these, it will be shown why hardware implementations cannot match their biological inspiration with respect to computational power: the mapping onto silicon lacks the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g. using optical interconnections.

  18. Procedure for determining the optimum rate of increasing shaft depth

    Energy Technology Data Exchange (ETDEWEB)

    Durov, E.M.

    1983-03-01

    Presented is an economic analysis of increasing shaft depth during mine modernization. Investigations carried out by the Yuzhgiproshakht Institute are analyzed. The investigations are aimed at determining the optimum shaft sinking rate (the rate which reduces investment to the minimum). The following factors are considered: coal output of a mine (0.9, 1.2, 1.5 and 1.8 Mt/year), depth at which the new mining level is situated (600, 800, 1200, 1400 and 1600 m), four schemes of increasing depth of 2 central shafts (rock hoisting to ground surface, rock hoisting to the existing level, rock haulage to the developed level, rock haulage to the level being developed using a large diameter borehole drilled from the new level to the shaft bottom and enlarged from shaft bottom to the new level), shaft sinking rate (10, 20, 30, 40, 50 and 60 m/month), range of increasing shaft depth (the difference between depth of the shaft before and after increasing its depth by 100, 200, 300 and 400 m). Comparative evaluations show that the optimum shaft sinking rate depends on the scheme for rock hoisting (one of 4 analyzed), range of increasing shaft depth and gas content in coal seams. The optimum shaft sinking rate ranges from 20 to 40 m/month in coal mines with low methane content and from 20 to 30 m/month in gassy coal mines. The planned coal output of a mine does not influence the optimum shaft sinking rate.

  19. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. How to create successful Open Hardware projects - About White Rabbits and open fields

    CERN Document Server

    van der Bij, E; Lewis, J; Stana, T; Wlostowski, T; Gousiou, E; Serrano, J; Arruat, M; Lipinski, M M; Daniluk, G; Voumard, N; Cattin, M

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation was controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs that find their way into new fields.

  1. Study on optimum aseismic design of complex structure system focusing on damping effect

    International Nuclear Information System (INIS)

    Takahashi, Yoshitaka; Suzuki, Kohei

    1995-01-01

    An optimum design technique for the aseismic design of complex plant structures, such as piping and boiler structures, is proposed. Particular attention is focused on the evaluation of the optimum damping and stiffness of the structures and components. A pseudo least-squares algorithm is introduced to determine the optimum design parameters. Under the requirement of a certain allowable maximum response to a given earthquake excitation, optimum stiffness and damping values of the structure can be calculated simultaneously by the proposed method. The applicability of the method is demonstrated through three structural models: (1) a linear multi-storied building model in which the stiffness and damping constant of each floor are optimized; (2) a nonlinear multi-storied building model with an isolated floor, in which the hysteretic energy absorber of the isolator is optimized; (3) a combined boiler-supporting structure model whose components are connected to each other by inelastic seismic ties. In this last model, the optimum damping characteristics of the seismic ties are evaluated. This work is particularly important for the aseismic design of complex plant structures like the integrated boiler-supporting structure in a thermal power plant and the piping-containment vessel structure in a nuclear power plant

  2. A photovoltaic source I/U model suitable for hardware in the loop application

    Directory of Open Access Journals (Sweden)

    Stala Robert

    2017-12-01

    Full Text Available This paper presents a novel, low-complexity method of simulating PV source characteristics suitable for real-time modeling and hardware implementation. Applying a suitable model of the PV source, together with models of all the other PV system components, in real-time hardware gives a safe, fast and low-cost method of testing PV systems. The paper demonstrates the concept of the PV array model and the hardware implementation in FPGAs of a system which combines two PV arrays. The obtained results confirm that the proposed model is of low complexity and is suitable for hardware-in-the-loop (HIL) tests of complex PV system control, with various arrays operating under different conditions.
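
    A common way to realize such a PV source I/U model is the single-diode equation, solved pointwise for the current; the sketch below uses a bracketed root finder for robustness. All parameter values are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Illustrative single-diode parameters (assumed)
    Iph, I0, n, Ns = 8.0, 1e-9, 1.0, 60    # photo/saturation current, ideality, cells
    Rs, Rsh, Vt = 0.2, 300.0, 0.02585      # series/shunt resistance, thermal voltage
    a = Ns * n * Vt                        # modified diode voltage

    def pv_current(V):
        """Root of Iph - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh - I = 0."""
        f = lambda I: (Iph - I0 * (np.exp((V + I * Rs) / a) - 1.0)
                       - (V + I * Rs) / Rsh - I)
        return brentq(f, -5.0, Iph + 1.0)  # f is monotone in I, bracket is safe

    V = np.linspace(0.0, 36.0, 361)
    I = np.array([max(pv_current(v), 0.0) for v in V])
    P = V * I
    print("Isc ~ %.2f A, MPP ~ %.0f W at %.1f V" % (I[0], P.max(), V[np.argmax(P)]))
    ```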

  3. Common Core: Teaching Optimum Topic Exploration (TOTE)

    Science.gov (United States)

    Karge, Belinda Dunnick; Moore, Roxane Kushner

    2015-01-01

    The Common Core has become a household term and yet many educators do not understand what it means. This article explains the historical perspectives of the Common Core and gives guidance to teachers in application of Teaching Optimum Topic Exploration (TOTE) necessary for full implementation of the Common Core State Standards. An effective…

  4. Optimum fiber distribution in singlewall corrugated fiberboard

    Science.gov (United States)

    Millard W. Johnson; Thomas J. Urbanik; William E. Denniston

    1979-01-01

    Determining optimum distribution of fiber through rational design of corrugated fiberboard could result in significant reductions in fiber required to meet end-use conditions, with subsequent reductions in price pressure and extension of the softwood timber supply. A theory of thin plates under large deformations is developed that is both kinematically and physically...

  5. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Full Text Available Abstract Background Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high-order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale, creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price-to-performance ratio of available solutions. Findings We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective
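
    The core MDR step lends itself to a compact sketch: pool samples into two-locus genotype cells, label each cell high- or low-risk by comparing its case/control ratio with the overall ratio, and score the pair by how well those labels classify. The simulated genotypes, the XOR-like penetrance, and the omission of cross-validation (and of the GPU implementation) are all simplifications for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    snp1, snp2 = rng.integers(0, 3, n), rng.integers(0, 3, n)   # genotypes 0/1/2
    # Simulated epistatic phenotype: risk depends on an XOR-like combination
    p_case = np.where((snp1 + snp2) % 2 == 1, 0.7, 0.3)
    case = rng.random(n) < p_case

    threshold = case.sum() / (~case).sum()    # overall case/control ratio
    cell = 3 * snp1 + snp2                    # 9 two-locus genotype cells

    high_risk = np.zeros(9, dtype=bool)
    for c in range(9):
        cases = np.sum(case & (cell == c))
        controls = np.sum(~case & (cell == c))
        high_risk[c] = cases > threshold * max(controls, 1)

    pred = high_risk[cell]                    # MDR's one-dimensional attribute
    print("training accuracy: %.3f" % np.mean(pred == case))
    ```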

  6. Optimum heat power cycles for specified boundary conditions

    International Nuclear Information System (INIS)

    Ibrahim, O.M.; Klein, S.A.; Mitchell, J.W.

    1991-01-01

    In this paper optimization of the power output of Carnot and closed Brayton cycles is considered for both finite and infinite thermal capacitance rates of the external fluid streams. The method of Lagrange multipliers is used to solve for working fluid temperatures that yield maximum power. Analytical expressions for the maximum power and the cycle efficiency at maximum power are obtained. A comparison of the maximum power from the two cycles for the same boundary conditions, i.e., the same heat source/sink inlet temperatures, thermal capacitance rates, and heat exchanger conductances, shows that the Brayton cycle can produce more power than the Carnot cycle. This comparison illustrates that cycles exist that can produce more power than the Carnot cycle. The optimum heat power cycle, which will provide the upper limit of power obtained from any thermodynamic cycle for specified boundary conditions and heat exchanger conductances is considered. The optimum heat power cycle is identified by optimizing the sum of the power output from a sequence of Carnot cycles. The shape of the optimum heat power cycle, the power output, and corresponding efficiency are presented. The efficiency at maximum power of all cycles investigated in this study is found to be equal to (or well approximated by) η = 1 − √(T_L,in/(φ T_H,in)), where φ is a factor relating the entropy changes during heat rejection and heat addition.
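
    As a quick numeric check of that closing expression (with φ = 1 it reduces to the familiar Curzon-Ahlborn form), the snippet below compares the efficiency at maximum power with the Carnot limit for assumed inlet temperatures:

        import math

        T_L_IN = 300.0   # heat-sink inlet temperature, K (assumed)
        T_H_IN = 900.0   # heat-source inlet temperature, K (assumed)
        PHI = 1.0        # entropy-change factor from the paper; 1.0 assumed here

        eta_max_power = 1.0 - math.sqrt(T_L_IN / (PHI * T_H_IN))
        eta_carnot = 1.0 - T_L_IN / T_H_IN

        print(f"efficiency at maximum power: {eta_max_power:.3f}")  # about 0.423
        print(f"Carnot (reversible) limit:   {eta_carnot:.3f}")     # about 0.667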

  7. Dietary energy level for optimum productivity and carcass ...

    African Journals Online (AJOL)

    user

    2013-08-05

    Aug 5, 2013 ... optimum weights at dietary energy levels of 13.81, 13.23, 13.43 and ... Tadelle & Ogle (2000) reported that the energy requirement of ... The authors would like to acknowledge the National Research Foundation (NRF) and VLIR ...

  8. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to the hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then to wrap conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify, its chassis design so that it would have a better chance of meeting the new project's radiated emissions requirements.

  9. Determination of optimum pressurizer level for kori unit 1

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong Soo; Lee, Chang Sup; Lee, Jae Yong; Kim, Yo Han; Lee, Dong Hyuk [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    To determine the optimum pressurizer water level during normal operation for Kori unit 1, performance and safety analyses are performed. The methodology is developed by evaluating 'decrease in secondary heat removal' events such as the Loss of Normal Feedwater accident. To demonstrate the optimum pressurizer level setpoint, the RETRAN-03 code is used for performance analysis. RETRAN analysis results following reactor trip are compared with actual plant data to justify the RETRAN code modelling. The results of the performance and safety analyses show that the newly established level setpoints not only improve the performance of the pressurizer during transients, including reactor trip, but also meet the design bases of the pressurizer volume and pressure. 6 refs., 5 figs. (Author)

  11. Optimum Conditions for Uricase Enzyme Production by Gliomastix gueg

    Directory of Open Access Journals (Sweden)

    Atalla, M. M.

    2009-01-01

    Nineteen strains of microorganisms were screened for uricase production. Gliomastix gueg was recognized to produce high levels of the enzyme. The optimum fermentation conditions for uricase production by Gliomastix gueg were examined. Results showed that uric acid medium was the most favorable one, the optimum temperature was 30 °C, and the incubation period required for maximum production was 8 days, with aeration at 150 rpm and pH 8.0. Sucrose proved to be the best carbon source, and uric acid was found to be the best nitrogen source. Both dipotassium hydrogen phosphate and ferrous chloride, as well as some vitamins, gave the highest amounts of uricase from Gliomastix gueg.

  12. Hardware detection and parameter tuning method for speed control system of PMSM

    Science.gov (United States)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    In this paper, the development of a permanent magnet synchronous motor (PMSM) AC speed control system is taken as an example to expound the principles and parameter-tuning methods of the system hardware, and software- and hardware-based methods for eliminating common problems are put forward.

  13. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, it is received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  14. Determination of Optimum Condition of Leucine Content in Beef Protein Hydrolysate using Response Surface Methodology

    International Nuclear Information System (INIS)

    Siti Roha Ab Mutalib; Zainal Samicho; Noriham Abdullah

    2016-01-01

    The aim of this study is to determine the optimum condition for leucine content in beef hydrolysate. Beef hydrolysate was prepared by enzymatic hydrolysis using bromelain enzyme produced from pineapple peel. Parameter conditions such as bromelain concentration, hydrolysis temperature and hydrolysis time were assessed to obtain the optimum leucine content of beef hydrolysate according to the experimental design recommended by response surface methodology (RSM). Leucine content in beef hydrolysate was determined using the AccQ-Tag amino acid analysis method with high performance liquid chromatography (HPLC). The condition of optimum leucine content was at a bromelain concentration of 1.38 %, a hydrolysis temperature of 42.5 °C and a hydrolysis time of 31.59 hours, with a predicted leucine content of 26.57 %. The optimum condition was verified, with an obtained leucine value of 26.25 %. Since there was no significant difference (p>0.05) between the predicted and verified leucine values, the optimum condition predicted by RSM can be accepted for predicting the optimum leucine content in beef hydrolysate. (author)
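
    The record reports only the optimum point, not the fitted model. As a generic illustration of the RSM workflow it describes, the sketch below fits a full quadratic response surface to invented (bromelain concentration, temperature, time) -> leucine data and searches it for the maximum; every number here is an assumption for demonstration.

        import numpy as np
        from scipy.optimize import minimize

        # Invented design points: (bromelain %, temperature degC, time h) -> leucine %
        X = np.array([[1.0, 40.0, 24.0], [1.5, 40.0, 24.0], [1.0, 45.0, 24.0],
                      [1.5, 45.0, 24.0], [1.0, 40.0, 36.0], [1.5, 40.0, 36.0],
                      [1.0, 45.0, 36.0], [1.5, 45.0, 36.0], [1.25, 42.5, 30.0]])
        y = np.array([22.1, 23.0, 22.8, 23.5, 23.9, 24.6, 24.2, 24.9, 26.1])

        def design_matrix(pts):
            """Full second-order model: intercept, linear, cross and squared terms."""
            cols = [np.ones(len(pts))] + [pts[:, i] for i in range(3)]
            cols += [pts[:, i] * pts[:, j] for i in range(3) for j in range(i, 3)]
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

        def predicted(x):
            return float(design_matrix(np.asarray(x).reshape(1, 3)) @ beta)

        res = minimize(lambda x: -predicted(x), x0=[1.25, 42.5, 30.0],
                       bounds=[(1.0, 1.5), (40.0, 45.0), (24.0, 36.0)])
        print("predicted optimum (conc %, degC, h):", np.round(res.x, 2),
              "-> leucine", round(predicted(res.x), 2), "%")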

  15. Determination of the Optimum Thickness of Approximately ...

    African Journals Online (AJOL)

    In an attempt to conserve the world's scarce energy and material resources, a balance between the cost of heating a material and the optimum thickness of the material becomes essential. One such material is the local cast aluminium pot commonly used as cookware in Nigeria. This paper therefore sets up a ...

  16. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; Marin, A. Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  17. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, using as an example the accelerator design for LDPC decoding.

  18. Introduction to 6800/6802 microprocessor systems hardware, software and experimentation

    CERN Document Server

    Simpson, Robert J

    1987-01-01

    Introduction to 6800/6802 Microprocessor Systems: Hardware, Software and Experimentation introduces the reader to the features, characteristics, operation, and applications of the 6800/6802 microprocessor and associated family of devices. Many worked examples are included to illustrate the theoretical and practical aspects of the 6800/6802 microprocessor. Comprised of six chapters, this book begins by presenting several aspects of digital systems before introducing the concepts of fetching and execution of a microprocessor instruction. Details and descriptions of hardware elements (MPU, RAM, ROM ...

  19. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  20. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations

  1. Genotype x environment interaction and optimum resource ...

    African Journals Online (AJOL)

    ... x E) interaction and to determine the optimum resource allocation for cassava yield trials. The effects of environment, genotype and G x E interaction were highly significant for all yield traits. Variations due to G x E interaction were greater than those due to genotypic differences for all yield traits. Genotype x location x year ...

  2. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2017-01-01

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of the improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed-form expressions for both PGS- and IGS-based transmission schemes. HWD systems that employ IGS are proven to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation caused by non-ideal hardware transceivers and the compensation acquired by the IGS scheme are quantified through suitable numerical results.

  3. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication system, including orthogonal frequency-division multiplexing (OFDM), single-carrier cyclic-prefix (SCCP) systems, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after the block-despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which are required to perform the reconfiguration and realize the transceiver are explained.
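
    The hardware-sharing argument rests on every CP-based system reusing one FFT-based frequency-domain equalizer: an OFDM receiver stops after the per-subcarrier taps, while an SCCP receiver adds an IFFT back to the time domain. A minimal numpy sketch of that shared structure, with an invented channel and a BPSK block (illustrative only, not the paper's platform):

        import numpy as np

        rng = np.random.default_rng(1)
        N = 64                           # block size (assumed)
        h = np.array([1.0, 0.5, 0.2])    # toy channel impulse response
        H = np.fft.fft(h, N)             # channel frequency response
        NOISE_VAR = 1e-3

        def mmse_fde(block):
            """Shared one-tap MMSE frequency-domain equalizer for CP-based systems."""
            W = np.conj(H) / (np.abs(H) ** 2 + NOISE_VAR)  # per-subcarrier tap
            return np.fft.fft(block) * W

        x = rng.choice([-1.0, 1.0], N)             # BPSK block (SCCP-style, time domain)
        cp = len(h) - 1
        tx = np.concatenate([x[-cp:], x])          # prepend cyclic prefix
        rx = np.convolve(tx, h)[cp:cp + N]         # channel, then CP removal

        eq = mmse_fde(rx)                          # an OFDM receiver would stop here
        sccp_symbols = np.fft.ifft(eq)             # SCCP: extra IFFT to time domain
        errors = int(np.sum(np.sign(sccp_symbols.real) != x))
        print("SCCP symbol errors:", errors)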

  5. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of software feasible for microprocessors. Hence, qualification makes great demands on the inspecting expert. (DG)

  6. Hardware dependencies of GPU-accelerated beamformer performances for microwave breast cancer detection

    Directory of Open Access Journals (Sweden)

    Salomon Christoph J.

    2016-09-01

    UWB microwave imaging has proven to be a promising technique for early-stage breast cancer detection. The extensive image reconstruction time can be accelerated by parallelizing the execution of the underlying beamforming algorithms. However, the efficiency of the parallelization will most likely depend on the degree of parallelism of the imaging algorithm and of the utilized hardware. This paper investigates the dependencies of two different beamforming algorithms on the hardware specifications of several graphics boards. The parallel implementation is realized using NVIDIA's CUDA. Three conclusions are drawn about the behavior of the parallel implementation and how to efficiently use the accessible hardware.
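
    For context on what is being parallelized: confocal delay-and-sum (DAS) beamforming evaluates, for every focal point, a sum of time-aligned samples over all antenna channels, and the focal points are independent of each other. A serial numpy sketch of that inner loop (geometry, sample rate and data are invented assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        FS = 50e9        # sample rate, Hz (assumed)
        C = 2e8          # assumed in-tissue propagation speed, m/s
        antennas = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
        signals = rng.standard_normal((len(antennas), 2048))  # recorded traces

        def das_pixel(point):
            """Delay-and-sum intensity for one focal point."""
            acc = 0.0
            for ant, trace in zip(antennas, signals):
                delay = 2.0 * np.linalg.norm(point - ant) / C  # round trip, s
                idx = int(round(delay * FS))
                if idx < trace.size:
                    acc += trace[idx]
            return acc ** 2

        # image formation: one independent evaluation per pixel, hence GPU-friendly
        image = np.array([[das_pixel(np.array([x, y]))
                           for x in np.linspace(0.02, 0.08, 32)]
                          for y in np.linspace(0.02, 0.08, 32)])
        print("image shape:", image.shape)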

  7. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high-level (Java-based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages: the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, the latter two sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid dead-locks in the core module.
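
    The reply-splitting scheme is easy to picture in code: a missing acknowledge points at the communication path, while a missing or late done message points at the device itself. The sketch below shows that dispatch logic in Python; the timeouts, message format and device interface are invented for illustration and are not STELLA's actual protocol.

        import queue

        ACK_TIMEOUT = 0.5    # s, assumed
        DONE_TIMEOUT = 30.0  # s, assumed

        class CommunicationError(Exception): pass
        class HardwareError(Exception): pass

        def send_command(device, command, replies: queue.Queue):
            """Send one command and classify failures via the ack/done reply split."""
            device.write(command)
            try:
                msg = replies.get(timeout=ACK_TIMEOUT)
            except queue.Empty:
                raise CommunicationError(f"no acknowledge for {command!r}")  # bus/link fault
            if msg != ("ack", command):
                raise CommunicationError(f"unexpected reply {msg!r}")
            try:
                msg = replies.get(timeout=DONE_TIMEOUT)
            except queue.Empty:
                raise HardwareError(f"no done message for {command!r}")  # device fault
            if msg != ("done", command):
                raise HardwareError(f"command failed: {msg!r}")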

  8. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed a screening criterion designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination.

  9. Improving boiler unit performance using an optimum robust minimum-order observer

    International Nuclear Information System (INIS)

    Moradi, Hamed; Bakhtiari-Nejad, Firooz

    2011-01-01

    Research highlights: → Multivariable model of a boiler unit with uncertainty. → Design of a robust minimum-order observer. → Development of an optimal functional code in the MATLAB environment. → Finding the optimum region of observer-based controller poles. → Guarantee of robust performance in the presence of parametric uncertainties. - Abstract: To achieve good performance of a utility boiler, dynamic variables such as drum pressure, steam temperature and drum water level must be controlled. In this paper, a linear time-invariant (LTI) model of a boiler system is considered in which the input variables are feed-water and fuel mass flow rates. Due to the inaccessibility of some state variables of the boiler system, a minimum-order observer is designed based on Luenberger's model to obtain an estimate x̃ of the true state x. Low design cost and high accuracy of state estimation are the main advantages of the minimum-order observer in comparison with previously designed full-order observers. By applying the observer to the closed-loop system, a regulator system is designed. Using an optimal functional code developed in the MATLAB environment, desired observer poles are found such that suitable time response specifications of the boiler system are achieved and the gain and phase margin values are adjusted within an acceptable range. However, the real dynamic model may be associated with parametric uncertainties. In that case, the optimum region of the observer-based controller poles is found such that robust performance of the boiler system against model uncertainties is guaranteed.
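
    As a generic illustration of the Luenberger idea behind this design (a full-order variant for brevity; the paper's minimum-order observer estimates only the unmeasured states, and this toy plant is not the boiler model), the sketch below places observer poles for a two-state system by applying pole placement to the dual system (Aᵀ, Cᵀ):

        import numpy as np
        from scipy.signal import place_poles

        # Toy LTI plant (assumed): x' = A x + B u, y = C x
        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])          # only the first state is measured

        # Observer poles chosen well to the left of the plant poles (-1, -2)
        L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

        # Luenberger observer: x_hat' = A x_hat + B u + L (y - C x_hat)
        A_obs = A - L @ C
        print("observer gain L:", L.ravel())
        print("observer eigenvalues:", np.linalg.eigvals(A_obs))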

  10. Optimum welding condition of 2017 aluminum similar alloy friction welded joints

    Energy Technology Data Exchange (ETDEWEB)

    Tsujino, R.; Ochi, H. [Osaka Inst. of Tech., Osaka (Japan); Morikawa, K. [Osaka Sangyo Univ., Osaka (Japan); Yamaguchi, H.; Ogawa, K. [Osaka Prefecture Univ., Osaka (Japan); Fujishiro, Y.; Yoshida, M. [Sumitomo Metal Technology Ltd., Hyogo (Japan)

    2002-07-01

    The usefulness of statistical analysis for judging the optimization of friction welding conditions was investigated using 2017 aluminum similar-alloy joints: many samples were friction welded under fixed welding conditions and analyzed statistically. In general, selection of the optimum friction welding conditions for similar materials is easy. However, this was not always the case for 2017 aluminum alloy. For optimum friction welding conditions of this material, it is necessary to apply relatively large upset pressure to obtain high friction heating. Joint efficiencies obtained under the optimum friction welding conditions showed a large shape parameter (m value) of the Weibull distribution, as also observed for the dissimilar materials previously reported. The m value calculated from a small number of data points can be substituted for the m value based on 30 data points. Therefore, the m value is useful in factory practice for assessing the suitability of the friction welding conditions. (orig.)

  11. Optimum electron temperature and density for short-wavelength plasma-lasing from nickel-like ions

    International Nuclear Information System (INIS)

    Masoudnia, Leili; Bleiner, Davide

    2014-01-01

    Soft X-ray lasing across a Ni-like plasma gain medium requires optimum electron temperature and density for attaining the Ni-like ion stage and for population inversion in the 3d⁹4d¹ (J=0) → 3d⁹4p¹ (J=1) laser transition. Various scaling laws, as functions of operating parameters, were compared with respect to their predictions for optimum temperatures and densities. It is shown that the widely adopted local thermodynamic equilibrium (LTE) model underestimates the optimum plasma-lasing conditions. On the other hand, non-LTE models, especially when complemented with dielectronic recombination, provide accurate predictions of the optimum plasma-lasing conditions. It is further shown that, for targets with Z equal to or greater than that of the rare-earth elements (e.g. Sm), the optimum electron density for plasma-lasing is not accessible for pump pulses at 1ω (λ = 1 μm). This observation explains a fundamental difficulty in saturating plasma-based X-ray lasers at wavelengths below 6.8 nm unless 2ω pumping is used.

  12. Deuterium–tritium catalytic reaction in fast ignition: Optimum ...

    Indian Academy of Sciences (India)

    proton beam, the corresponding optimum interval values are proton average energy 3 ... contributions, into the study of the ignition and burn dynamics in a fast ignition framework ... choice of proton beam energy would fall in 3 ≤ Ep ≤ 10 MeV.

  13. Radiographic testing - optimum radiographs of plastics and composite materials with dosimeter control

    International Nuclear Information System (INIS)

    Kuster, J.

    1978-01-01

    In view of great differences in X-ray transmission, it is more difficult to obtain optimum radiographs of plastics, and especially of reinforced plastics, than of metals, for example. A procedure is reported for obtaining optimum radiographs with little effort, especially in the long-wavelength range corresponding to 10 to 25 kVp. (orig.)

  14. An optimum design of R-C oscillatory De-Qing circuit

    International Nuclear Information System (INIS)

    Cao Dezhang; Pan Linghe; Yang Tianlu

    1990-01-01

    An optimum design of an R-C oscillatory De-Qing circuit has been developed for voltage regulation of the pulse modulator. When a new coefficient T₃/T is introduced, the selection of De-Qing circuit parameters becomes quite simple and the optimum parameters can be calculated directly. The calculated De-Qing circuit parameters are effective over the whole range of the percentage regulation η, from zero to the maximum design value. The theoretical limit value of η is 0.36 or 0.29 when the time constant of the De-Qing circuit is 3RC or 5RC, respectively.

  15. Determination of optimum oven cooking procedures for lean beef products

    OpenAIRE

    Rodas-González, Argenis; Larsen, Ivy L.; Uttaro, Bethany; Juárez, Manuel; Parslow, Joyce; Aalhus, Jennifer L.

    2015-01-01

    Abstract In order to determine optimum oven cooking procedures for lean beef, the effects of searing at 232 or 260 °C for 0, 10, 20 or 30 min, and roasting at 160 or 135 °C, on semimembranosus (SM) and longissimus lumborum (LL) muscles were evaluated. In addition, the determined optimum cooking method (oven-seared for 10 min at 232 °C and roasted at 135 °C) was applied to SM roasts varying in weight from 0.5 to 2.5 kg. Mainly, SM muscles seared for 0 or 10 min at 232 °C followed by roast at 135 °C h...

  16. Simulation of optimum parameters for GaN MSM UV photodetector

    Energy Technology Data Exchange (ETDEWEB)

    Alhelfi, Mohanad A., E-mail: mhad12344@gmail.com; Ahmed, Naser M., E-mail: nas-tiji@yahoo.com; Hashim, M. R., E-mail: roslan@usm.my; Hassan, Z., E-mail: zai@usm.my [Institue of Nano-Optoelectronics Research and Technology (INOR), School of Physics, Universiti Sains Malaysia 11800 Penang (Malaysia); Al-Rawi, Ali Amer, E-mail: aliamer@unimap.edu.my [School of Computer and Communication Eng. 3st Floor, Pauh Putra Main Campus 02600 Arau, Perlis Malaysia (Malaysia)

    2016-07-06

    In this study the optimum parameters of a GaN M-S-M photodetector are discussed. The performance of the photodetector depends on many parameters, the most important being the quality of the GaN film and the geometry of the interdigitated electrodes. In this simulation work, using MATLAB software and taking into consideration the reflection and absorption at the metal contacts, a detailed study involving various electrode spacings (S) and widths (W) reveals conclusive results in device design. The optimum interelectrode design for the interdigitated MSM-PD has been specified and evaluated by its effect on quantum efficiency and responsivity.
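
    To make the geometry trade-off concrete: in an interdigitated MSM detector the metal fingers shadow part of the active area, so to first order the external quantum efficiency scales with the exposed fraction S/(S+W), and responsivity follows R = η·q·λ/(h·c). The sketch below scans spacing and width under that simplified first-order view; the record's MATLAB model additionally treats reflection and absorption at the contacts, and all values here are assumed.

        import numpy as np

        Q = 1.602e-19         # elementary charge, C
        H_PLANCK = 6.626e-34  # Planck constant, J*s
        C0 = 3e8              # speed of light, m/s
        WAVELENGTH = 360e-9   # near the GaN band edge, assumed
        ETA_INT = 0.8         # assumed internal quantum efficiency

        def responsivity(spacing_um, width_um):
            """First-order MSM-PD responsivity with finger shadowing only."""
            exposed = spacing_um / (spacing_um + width_um)  # unshadowed fraction
            eta_ext = ETA_INT * exposed
            return eta_ext * Q * WAVELENGTH / (H_PLANCK * C0)  # A/W

        for s in (1.0, 2.0, 4.0):
            for w in (1.0, 2.0):
                print(f"S={s:.0f} um, W={w:.0f} um -> R = {responsivity(s, w):.3f} A/W")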

  17. Multi-loop PWR modeling and hardware-in-the-loop testing using ACSL

    International Nuclear Information System (INIS)

    Thomas, V.M.; Heibel, M.D.; Catullo, W.J.

    1989-01-01

    Westinghouse has developed an Advanced Digital Feedwater Control System (ADFCS) which is aimed at reducing feedwater-related reactor trips through improved control performance for pressurized water reactor (PWR) power plants. To support control system setpoint studies and functional design efforts for the ADFCS, an ACSL-based model of the nuclear steam supply system (NSSS) of a Westinghouse PWR was generated. Use of this plant model has been extended from system design to system testing through integration of the model into a Hardware-in-Loop test environment for the ADFCS. This integration includes appropriate interfacing between a Gould SEL 32/87 computer, upon which the plant model executes in real time, and the Westinghouse Distributed Processing Family (WDPF) test hardware. A development program has been undertaken to expand the existing ACSL model to include the capability to explicitly model multiple plant loops, steam generators, and corresponding feedwater systems. Furthermore, the program expands the ADFCS Hardware-in-Loop testing to include the multi-loop plant model. This paper provides an overview of the testing approach utilized for the ADFCS, with a focus on the role of Hardware-in-Loop testing. Background on the plant model, methodology and test environment is also provided. Finally, an overview is presented of the program to expand the model and associated Hardware-in-Loop test environment to handle multiple loops.

  18. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology, as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
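
    As a software reference for what such an architecture computes, the sketch below wraps a minimal RANSAC loop around a DLT fit of the 3x3 projective model: sample four correspondences, solve, count inliers, keep the best. It is a plain floating-point numpy illustration, not the paper's fixed-point design; the test data are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)

        def fit_homography(src, dst):
            """Direct linear transform for the projective model (>= 4 points)."""
            rows = []
            for (x, y), (u, v) in zip(src, dst):
                rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
                rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
            _, _, vt = np.linalg.svd(np.asarray(rows))
            return vt[-1].reshape(3, 3)

        def apply_h(H, pts):
            p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
            return p[:, :2] / p[:, 2:3]

        def ransac_homography(src, dst, iters=500, thresh=2.0):
            best_h, best_inliers = None, 0
            for _ in range(iters):
                idx = rng.choice(len(src), 4, replace=False)
                H = fit_homography(src[idx], dst[idx])
                err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
                n = int(np.sum(err < thresh))       # false matches fall outside thresh
                if n > best_inliers:
                    best_h, best_inliers = H, n
            return best_h, best_inliers

        # synthetic test: a known projective map plus 30% false matches
        H_true = np.array([[1.02, 0.03, 5.0], [-0.02, 0.98, -3.0], [1e-5, 2e-5, 1.0]])
        src = rng.uniform(0, 640, (100, 2))
        dst = apply_h(H_true, src)
        dst[:30] = rng.uniform(0, 640, (30, 2))     # corrupt 30 correspondences
        _, n_inliers = ransac_homography(src, dst)
        print("inliers found:", n_inliers)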

  19. A Cost-Effective Approach to Hardware-in-the-Loop Simulation

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Melters; Hansen, M. R.; Ballebye, M.

    2012-01-01

    This paper presents an approach for developing cost-effective hardware-in-the-loop (HIL) simulation platforms for use in controller software test and development. The approach is aimed at the many smaller manufacturers of e.g. mobile hydraulic machinery, which often do not have very advanced... testing facilities at their disposal. A case study is presented where a HIL simulation platform is developed for the controller of a truck-mounted loader crane. The total expense in hardware and software is less than $10,000.

  20. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  1. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion rates of the stainless steel current collector hardware, nickel-clad bipolar plate and aluminized wet seal are within acceptable limits. The electrolyte loss rate to the current collector surface has been minimized by reducing the exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of the manifold seals.

  3. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    Directory of Open Access Journals (Sweden)

    Wong Weng-Fai

    2011-01-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, the OpenGL ES application programming interface (API), the device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  4. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  5. The impact of feedstock cost on technology selection and optimum size

    International Nuclear Information System (INIS)

    Cameron, Jay B.; Kumar, Amit; Flynn, Peter C.

    2007-01-01

    Development of biomass projects at optimum size and technology enhances the contribution that biomass can make to mitigating greenhouse gases. Optimum-sized plants can be built when biomass resources are sufficient to meet feedstock demand; examples include wood and forest harvest residues from extensive forests, and grain straw and corn stover from large agricultural regions. The impact of feedstock cost on technology selection is evaluated by comparing the cost of power from the gasification and direct combustion of boreal forest wood chips. Optimum size is a function of plant cost and the distance variable cost (DVC, $ dry tonne⁻¹ km⁻¹) of the biomass fuel; distance fixed costs (DFC, $ dry tonne⁻¹) such as acquisition, harvesting, loading and unloading do not impact optimum size. At low values of DVC and DFC, as occur with wood chips sourced from the boreal forest, direct combustion has a lower power cost than gasification. At higher values of DVC and DFC, gasification has a lower power cost than direct combustion. This crossover in the most economic technology will always arise when a more efficient technology with a higher capital cost per unit of output is compared to a less efficient technology with a lower capital cost per unit of output. In such cases technology selection cannot be separated from an analysis of feedstock cost.
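
    The scale trade-off is easy to sketch: capital cost per unit of output falls with size, while the average haul distance, and hence the DVC term, grows with the draw area needed to feed the plant. The toy model below (all coefficients are assumptions, not the paper's data) locates the optimum capacity numerically; changing DFC shifts the cost curve up or down without moving the optimizing size, as the text notes.

        import numpy as np
        from scipy.optimize import minimize_scalar

        SCALE_FACTOR = 0.75  # capital cost ~ capacity^0.75 (assumed)
        CAPITAL_COEF = 1600  # annualized $ per (dry tonne/yr)^0.75 (assumed)
        YIELD = 30.0         # recoverable biomass, dry tonne per km^2 (assumed)
        DVC = 0.5            # $ per dry tonne per km (assumed)
        DFC = 15.0           # $ per dry tonne, distance-fixed costs (assumed)

        def unit_cost(capacity):
            """Levelized $ per dry tonne for a plant of a given capacity (t/yr)."""
            capital = CAPITAL_COEF * capacity ** SCALE_FACTOR / capacity
            radius = np.sqrt(capacity / (np.pi * YIELD))   # circular draw area, km
            avg_haul = (2.0 / 3.0) * radius                # mean haul on a uniform disc
            return capital + DFC + DVC * avg_haul

        res = minimize_scalar(unit_cost, bounds=(1e4, 5e6), method="bounded")
        print(f"optimum capacity: {res.x:,.0f} dry tonne/yr")
        print(f"unit cost there:  ${res.fun:,.2f} per dry tonne")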

  6. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-01-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally

  7. Control/interlock/display system for EBT-P using commercially-available hardware and firmware

    International Nuclear Information System (INIS)

    Schmitt, R.J.

    1983-01-01

    For the EBT-P project, alternative commercially-available hardware, software and firmware have been employed for control, interlock and data display functions. This paper describes the criteria and rationale used to select that commercial equipment and discusses the important features of the equipment chosen, especially programmable controllers. Additional discussion is centered on interface problems which are encountered upon attempts to integrate equipment from several vendors. Some solutions to these problems are discussed. Details of software and hardware performance during tests are presented. The extent to which the EBT-P hardware and software configuration addresses and resolves various issues is discussed. Several areas have been uncovered in which relatively slight improvements/modifications of commercial programmable controller firmware would significantly improve the capability of this type of hardware in fusion control applications. These improvements are discussed in detail

  8. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  9. Development of a hardware-in-loop attitude control simulator for a CubeSat satellite

    Science.gov (United States)

    Tapsawat, Wittawat; Sangpet, Teerawat; Kuntanapreeda, Suwat

    2018-01-01

    Attitude control is an important part of satellite on-orbit operation and greatly affects satellite performance. Testing an attitude determination and control subsystem (ADCS) is very challenging, since it requires reproducing the attitude dynamics and space environment of the orbit. This paper develops a low-cost hardware-in-loop (HIL) simulator for testing the ADCS of a CubeSat satellite. The simulator consists of a numerical simulation part, a hardware part, and a HIL interface hardware unit. The numerical simulation part includes orbital dynamics, attitude dynamics and the Earth's magnetic field. The hardware part is the real ADCS board of the satellite. The simulation part outputs the satellite's angular velocity and geomagnetic field information to the HIL interface hardware. Then, based on this information, the HIL interface hardware generates I2C signals mimicking the signals of the on-board rate-gyros and magnetometers, and outputs these signals to the ADCS board. The ADCS board reads the rate-gyro and magnetometer signals, calculates control signals, and drives the attitude actuators, which are three magnetic torquers (MTQs). The responses of the MTQs, sensed by a separate magnetometer, are fed back to the numerical simulation part, completing the HIL simulation loop. Experimental studies are conducted to demonstrate the feasibility and effectiveness of the simulator.
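
    Reduced to software, each tick of such a loop has four stages: propagate the dynamics, synthesize sensor readings, run the controller, and feed the actuator response back. The sketch below mimics that structure with a plain B-dot detumbling law standing in for the ADCS firmware; in a real HIL rig the adcs_step function is replaced by the actual board behind the I2C bridge, and every model and gain here is a toy assumption.

        import numpy as np

        DT = 0.1                               # integration step, s (assumed)
        INERTIA = np.diag([0.02, 0.02, 0.01])  # CubeSat inertia, kg*m^2 (assumed)
        K_BDOT = 5e4                           # B-dot gain (assumed)

        def skew(w):
            return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

        def environment(t):
            """Crude stand-in for the orbital geomagnetic field model (inertial, T)."""
            return 3e-5 * np.array([np.sin(1e-3 * t), np.cos(1e-3 * t), 0.5])

        def adcs_step(b_body, b_body_prev):
            """Controller under test: B-dot law m = -k dB/dt (the real board in HIL)."""
            return -K_BDOT * (b_body - b_body_prev) / DT

        omega = np.array([0.2, -0.15, 0.1])    # initial tumble rate, rad/s
        R = np.eye(3)                          # body-to-inertial attitude
        b_prev = R.T @ environment(0.0)
        for step in range(1, 20001):
            b_body = R.T @ environment(step * DT)  # synthesized magnetometer reading
            dipole = adcs_step(b_body, b_prev)     # MTQ command from the "board"
            torque = np.cross(dipole, b_body)      # actuator response fed back
            omega += DT * np.linalg.solve(INERTIA,
                                          torque - np.cross(omega, INERTIA @ omega))
            R = R @ (np.eye(3) + DT * skew(omega))  # first-order attitude update
            u, _, vt = np.linalg.svd(R)             # re-orthonormalize to limit drift
            R = u @ vt
            b_prev = b_body
        print(f"rate norm after {20000 * DT:.0f} s: {np.linalg.norm(omega):.4f} rad/s")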

  10. Applied orthogonal experiment design for the optimum microwave ...

    African Journals Online (AJOL)

    An experiment on the extraction of polysaccharides from Rhodiolae Radix (PRR) was carried out using the microwave-assisted extraction (MAE) method, with the objective of establishing the optimum MAE conditions for PRR. Single-factor experiments were performed to determine the appropriate range of extraction conditions, and the ...

  11. On convergence of differential evolution over a class of continuous functions with unique global optimum.

    Science.gov (United States)

    Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik

    2012-02-01

    Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
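
    For readers unfamiliar with the canonical variant analyzed in the paper, DE/rand/1/bin combines a difference mutation on three random population members with binomial crossover and greedy selection. A compact sketch on a test function with a unique global optimum (the control parameters are common illustrative choices, not the paper's):

        import numpy as np

        rng = np.random.default_rng(4)

        def sphere(x):
            """Continuous objective with a unique global optimum at the origin."""
            return float(np.sum(x ** 2))

        def de_rand_1_bin(f, dim=10, pop_size=40, F=0.5, CR=0.9, generations=300):
            pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
            fitness = np.array([f(x) for x in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    # mutation: three distinct members, none equal to the target
                    r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                            3, replace=False)
                    mutant = pop[r1] + F * (pop[r2] - pop[r3])
                    # binomial crossover with one guaranteed mutant coordinate
                    mask = rng.random(dim) < CR
                    mask[rng.integers(dim)] = True
                    trial = np.where(mask, mutant, pop[i])
                    trial_fit = f(trial)
                    if trial_fit <= fitness[i]:        # greedy selection
                        pop[i], fitness[i] = trial, trial_fit
            return pop[np.argmin(fitness)], float(fitness.min())

        _, best_value = de_rand_1_bin(sphere)
        print(f"best objective after 300 generations: {best_value:.2e}")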

  12. Choosing the optimum burnup

    International Nuclear Information System (INIS)

    Geller, L.; Goldstein, L.; Franks, W.A.

    1986-01-01

    This paper reviews some of the considerations utilities must evaluate when going to higher discharge burnups. The advantages and disadvantages of higher discharge burnups are described, as well as a consistent approach for evaluating optimum discharge burnup and its comparison to current practice. When an analysis is performed over the life of the plant, the design of the terminal cycles has significant impact on the lifetime savings from higher burnups. Designs for high burnup cycles have a greater average inventory value in the core. As one goes to higher burnup, there is a greater likelihood of discarding a larger value in unused fuel unless the terminal cycles are designed carefully. This effect can be large enough in some cases to wipe out the lifetime cost savings relative to operating with a higher discharge burnup cycle

  13. Evaluating the scalability of HEP software and multi-core hardware

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A

    2011-01-01

    As researchers have reached the practical limits of processor performance improvements by frequency scaling, it is clear that the future of computing lies in the effective utilization of parallel and multi-core architectures. Since this significant change in computing is well underway, it is vital for HEP programmers to understand the scalability of their software on modern hardware and the opportunities for potential improvements. This work aims to quantify the benefit of new mainstream architectures to the HEP community through practical benchmarking on recent hardware solutions, including the usage of parallelized HEP applications.

  14. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
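
    A hedged sketch of the general mechanism at work here: a multi-hypothesis sequential test keeps observing until one hypothesis' log-likelihood dominates every alternative by a threshold tied to the allowed error probability, so the number of observations is itself random and, on average, small. This illustrates only the statistic-comparison idea, with Gaussian observations assumed; it is not the paper's mean-weighted likelihood-ratio construction.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        MEANS = np.array([-1.0, 0.0, 1.0])   # candidate process means (assumed)
        THRESHOLD = np.log(1.0 / 1e-3)       # margin tied to target error probability

        def sequential_discernment(stream):
            """Sample until one hypothesis leads all others by THRESHOLD."""
            loglik = np.zeros(len(MEANS))
            n = 0
            for x in stream:
                n += 1
                loglik += norm.logpdf(x, loc=MEANS)
                order = np.argsort(loglik)
                if loglik[order[-1]] - loglik[order[-2]] > THRESHOLD:
                    return int(order[-1]), n
            return None, n

        stream = (rng.normal(1.0) for _ in range(10_000))   # true mean = 1.0
        decision, n_obs = sequential_discernment(stream)
        print(f"decided hypothesis {decision} after {n_obs} observations")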

  15. Water system hardware and management rehabilitation: Qualitative evidence from Ghana, Kenya, and Zambia.

    Science.gov (United States)

    Klug, Tori; Shields, Katherine F; Cronk, Ryan; Kelly, Emma; Behnke, Nikki; Lee, Kristen; Bartram, Jamie

    2017-05-01

    Sufficient, safe, continuously available drinking water is important for human health and development, yet one in three handpumps in sub-Saharan Africa are non-functional at any given time. Community management, coupled with access to external technical expertise and spare parts, is a widely promoted model for rural water supply management. However, there is limited evidence describing how community management can address common hardware and management failures of rural water systems in sub-Saharan Africa. We identified hardware and management rehabilitation pathways using qualitative data from 267 interviews and 57 focus group discussions in Ghana, Kenya, and Zambia. Study participants were water committee members, community members, and local leaders in 18 communities (six in each study country) with water systems managed by a water committee and supported by World Vision (WV), an international non-governmental organization (NGO). Government, WV or private sector employees engaged in supporting the water systems were also interviewed. Inductive analysis was used to allow for pathways to emerge from the data, based on the perspectives and experiences of study participants. Four hardware rehabilitation pathways were identified, based on the types of support used in rehabilitation. Types of support were differentiated as community or external. External support includes financial and/or technical support from government or WV employees. Community actor understanding of who to contact when a hardware breakdown occurs and easy access to technical experts were consistent reasons for rapid rehabilitation for all hardware rehabilitation pathways. Three management rehabilitation pathways were identified. All require the involvement of community leaders and were best carried out when the action was participatory. The rehabilitation pathways show how available resources can be leveraged to restore hardware breakdowns and management failures for rural water systems in sub-Saharan Africa.

  16. Trainable hardware for dynamical computing using error backpropagation through physical media.

    Science.gov (United States)

    Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-24

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation, a crucial step for tuning such systems towards a specific task, can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  17. How to create successful Open Hardware projects — About White Rabbits and open fields

    International Nuclear Information System (INIS)

    Bij, E van der; Arruat, M; Cattin, M; Daniluk, G; Cobas, J D Gonzalez; Gousiou, E; Lewis, J; Lipinski, M M; Serrano, J; Stana, T; Voumard, N; Wlostowski, T

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  18. Optimum moisture levels for biodegradation of mortality composting envelope materials.

    Science.gov (United States)

    Ahn, H K; Richard, T L; Glanville, T D

    2008-01-01

    Moisture affects the physical and biological properties of compost and other solid-state fermentation matrices. Aerobic microbial systems experience different respiration rates (oxygen uptake and CO2 evolution) as a function of moisture content and material type. In this study the microbial respiration rates of 12 mortality composting envelope materials were measured by a pressure sensor method at six different moisture levels. A wide range of respiration rates (1.6-94.2 mg O2/g VS·day) was observed for different materials, with alfalfa hay, silage, oat straw, and turkey litter having the highest values. These four envelope materials may be particularly suitable for improving internal temperature and pathogen destruction rates for disease-related mortality composting. Optimum moisture content was determined based on measurements across a range that spans the maximum respiration rate. The optimum moisture content of each material was observed near water holding capacity, which ranged from near 60% to over 80% on a wet basis for all materials except a highly stabilized soil compost blend (optimum around 25% w.b.). The implications of the results for moisture management and process control strategies during mortality composting are discussed.

  19. Optimum Tower Crane Selection and Supporting Design Management

    Directory of Open Access Journals (Sweden)

    Hyo Won Sohn

    2014-08-01

    Full Text Available To optimize tower crane selection and supporting design, lifting requirements as well as stability should be examined, followed by a review of economic feasibility. However, construction engineers establish plans based on data provided by equipment suppliers, since there are no tools with which to thoroughly examine a support design's suitability for various crane types, and such plans lack the necessary supporting data. In such cases it is impossible to select a tower crane that satisfies the lifting requirements at minimum cost, or to perform lateral support and foundation design. Thus, this study develops an optimum tower crane selection and supporting design management method based on stability. All feasible cases, approximately 3,000 to 15,000 combinations, are calculated to identify the candidate cranes with minimized cost, which are then examined. The optimization method developed in the study is expected to support engineers in determining the optimum lifting equipment management.
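
    As a rough illustration of the exhaustive search the abstract describes, the sketch below enumerates hypothetical crane/height/support combinations, filters out unstable or under-capacity options, and keeps the cheapest feasible one. All model names, costs, and the stability rule are invented placeholders, not data from the study.

    ```python
    from itertools import product

    # Hypothetical candidate space: (capacity t, max radius m, rent/week) per model.
    MODELS = {"TC-140": (6.0, 55.0, 900), "TC-180": (8.0, 60.0, 1200)}
    HEIGHTS = [30.0, 45.0, 60.0]                                  # mast heights (m)
    SUPPORTS = {"free-standing": 0, "wall-tie": 150, "guy-wire": 100}  # extra cost/week

    def is_stable(height, support):
        # Toy stability rule: tall masts need some form of lateral support.
        return height <= 45.0 or support != "free-standing"

    def select(need_load, need_radius, weeks):
        best = None
        for (model, (cap, radius, rent)), h, (sup, extra) in product(
                MODELS.items(), HEIGHTS, SUPPORTS.items()):
            if not is_stable(h, sup):
                continue
            if cap < need_load or radius < need_radius:           # lifting requirements
                continue
            cost = (rent + extra) * weeks
            if best is None or cost < best[0]:
                best = (cost, model, h, sup)
        return best

    print(select(need_load=5.0, need_radius=50.0, weeks=40))
    ```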

  20. RECENT APPROACHES IN THE OPTIMUM CURRENCY AREAS THEORY

    Directory of Open Access Journals (Sweden)

    AURA SOCOL

    2011-04-01

    Full Text Available This study deals with the endogenous character of the OCA criteria, starting from the idea that a higher conformity of the business cycles will result in a better timing of the economic cycles and, thus, in getting closer to the quality of an optimum currency area. While the classical theory is focused on a static approach to the problem, the new theories assert that these conditions are dynamic and can be positively affected even by the establishment of the Economic and Monetary Union. The consequences are far-reaching, as the endogenous approach shows that a monetary union can be achieved even if all the conditions mentioned in Mundell's optimum currency areas theory are not met, since some of them may also be met subsequent to the unification. Thus, a country joining a monetary union, although it does not meet the criteria for an optimum currency area, will ex post experience an increased degree of integration and business cycle correlation.

  1. Developing an optimum protocol for thermoluminescence dosimetry with gr-200 chips using Taguchi method

    International Nuclear Information System (INIS)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-01-01

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The annealing, storage and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF:Mg,Cu,P) by optimizing the effective parameters, to increase the reliability of the TLD response using the Taguchi method. The Taguchi method has been used for optimization of the annealing, storage and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved and the dosimetry can be performed with great accuracy. (authors)
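
    For readers unfamiliar with the Taguchi comparison step, the sketch below computes the larger-is-better signal-to-noise ratio, S/N = -10·log10(mean(1/y²)), and averages it per factor level; the level with the highest mean S/N would be chosen. The factors, levels, and readings here are hypothetical, not the study's data.

    ```python
    import math
    from collections import defaultdict

    def sn_larger_is_better(ys):
        """Taguchi larger-is-better signal-to-noise ratio in dB."""
        return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

    # Hypothetical orthogonal-array runs: factor levels -> replicate readings.
    runs = [
        ({"anneal_T": 230, "preheat_T": 50}, [9.8, 10.1, 10.0, 9.9]),
        ({"anneal_T": 240, "preheat_T": 50}, [10.4, 10.6, 10.5, 10.3]),
        ({"anneal_T": 250, "preheat_T": 60}, [9.5, 9.7, 9.6, 9.4]),
    ]

    # Average S/N per factor level; the best level of each factor maximizes it.
    tally = defaultdict(list)
    for levels, ys in runs:
        sn = sn_larger_is_better(ys)
        for factor, level in levels.items():
            tally[(factor, level)].append(sn)

    for (factor, level), sns in sorted(tally.items()):
        print(factor, level, round(sum(sns) / len(sns), 2))
    ```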

  2. 48 CFR 1812.7000 - Prohibition on guaranteed customer bases for new commercial space hardware or services.

    Science.gov (United States)

    2010-10-01

    ... customer bases for new commercial space hardware or services. 1812.7000 Section 1812.7000 Federal... PLANNING ACQUISITION OF COMMERCIAL ITEMS Commercial Space Hardware or Services 1812.7000 Prohibition on guaranteed customer bases for new commercial space hardware or services. Public Law 102-139, title III...

  3. Optimum energies for dual-energy computed tomography

    International Nuclear Information System (INIS)

    Talbert, A.J.; Brooks, R.A.; Morgenthaler, D.G.

    1980-01-01

    By performing a dual-energy scan, separate information can be obtained on the Compton and photoelectric components of attenuation for an unknown material. This procedure has been analysed for the optimum energies and for the optimum dose distribution between the two scans. It was found that an equal dose at both energies was a good compromise, compared with optimising the dose distribution for either the Compton or photoelectric component individually. For monoenergetic beams, a low energy of 40 keV produced minimum noise when combined with high-energy beams of 80 to 100 keV. This was true whether one maintained constant integral dose or constant surface dose. A low energy of 50 keV, which is more nearly attainable in practice, produced almost as good a degree of accuracy. The analysis can be extended to polyenergetic beams by the inclusion of a noise factor. The above results were qualitatively unchanged, although the noise was increased by about 20% with integral dose equivalence and 50% with surface dose equivalence. It is very important to make the spectra as narrow as possible, especially at the low energy, in order to minimise the noise. (author)

  4. Optimum Performance-Based Seismic Design Using a Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    S. Talatahari

    2014-01-01

    Full Text Available A hybrid optimization method is presented for the optimum seismic design of steel frames considering four performance levels. These performance levels are used to determine the optimum design of structures and reduce the structural cost. A pushover analysis of steel building frameworks subject to equivalent-static earthquake loading is utilized. The algorithm is based on the concepts of the charged system search, in which each agent is affected by the local and global best positions stored in the charged memory, considering the governing laws of electrical physics. Comparison of the results of the hybrid algorithm with those of other metaheuristic algorithms shows the efficiency of the hybrid algorithm.

  5. Growth in spaceflight hardware results in alterations to the transcriptome and proteome

    Science.gov (United States)

    Basu, Proma; Kruse, Colin P. S.; Luesse, Darron R.; Wyatt, Sarah E.

    2017-11-01

    The Biological Research in Canisters (BRIC) hardware has been used to house many biology experiments on both the Space Transport System (STS, commonly known as the space shuttle) and the International Space Station (ISS). However, microscopic examination of Arabidopsis seedlings by Johnson et al. (2015) indicated the hardware itself may affect cell morphology. The experiment herein was designed to assess the effects of the BRIC-Petri Dish Fixation Units (BRIC-PDFU) hardware on the transcriptome and proteome of Arabidopsis seedlings. To our knowledge, this is the first transcriptomic and proteomic comparison of Arabidopsis seedlings grown with and without hardware. Arabidopsis thaliana wild-type Columbia (Col-0) seeds were sterilized and bulk plated on forty-four 60 mm Petri plates, of which 22 were integrated into the BRIC-PDFU hardware and 22 were maintained in closed containers at Ohio University. Seedlings were grown for approximately 3 days, fixed with RNAlater® and stored at -80 °C prior to RNA and protein extraction, with proteins separated into membrane and soluble fractions prior to analysis. The RNAseq analysis identified 1651 differentially expressed genes; MS/MS analysis identified 598 soluble and 589 membrane proteins differentially abundant both at p < .05. Fold enrichment analysis of gene ontology terms related to differentially expressed transcripts and proteins highlighted a variety of stress responses. Some of these genes and proteins have been previously identified in spaceflight experiments, indicating that these genes and proteins may be perturbed by both conditions.

  6. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  7. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as-low-as-reasonably-achievable (ALARA) review provides a description of the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. High exposure rate hardware (HERH) waste consists of hardware found in the N-Fuel Storage Basin that has a contact dose rate greater than 1 R/hr, as well as used filters. This waste will be collected in the fuel baskets at various locations in the basins.

  8. Hardware-in-the-Loop Simulation for the Automatic Power Control System of Research Reactors

    International Nuclear Information System (INIS)

    Fikry, R.M.; Shehata, S.A.; Elaraby, S.M.; Mahmoud, M.I.; Elbardini, M.M.

    2009-01-01

    Designing and testing a digital control system for any nuclear research reactor can be costly and time consuming. In this paper, a rapid, low-cost prototyping and testing procedure for digital controller design is proposed using the concept of hardware-in-the-loop (HIL). Some of the control loop components are real hardware components and the others are simulated. First, the whole system is modeled and tested by real-time simulation (RTS) using conventional simulation techniques such as MATLAB/SIMULINK. Second, the hardware-in-the-loop simulation is tested using Real-Time Windows Target in MATLAB and Visual C++. The control parts included as hardware components are the reactor control rod and its drivers. Two kinds of controllers are studied: proportional-derivative (PD) and fuzzy. An experimental setup for the hardware used in the HIL-based control of the nuclear research reactor has been realized. Experimental results are obtained and compared with the simulation results, and they indicate the validity of the HIL method in this domain.

  9. Increased power generation from primary sludge by a submersible microbial fuel cell and optimum operational conditions

    DEFF Research Database (Denmark)

    Vologni, Valentina; Kakarla, Ramesh; Angelidaki, Irini

    2013-01-01

    Microbial fuel cells (MFCs) have received attention as a promising renewable energy technology for waste treatment and energy recovery. We tested a submersible MFC with an innovative design capable of generating a stable voltage of 0.250 ± 0.008 V (with a fixed 470 Ω resistor) directly from primary sludge. ... prolonged the current generation and increased the power density by 7 and 1.5 times, respectively, in comparison with raw primary sludge. These findings suggest that energy recovery from primary sludge can be maximized using an advanced MFC system under optimum conditions.

  10. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
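
    A software model of the idea, under the assumption that "accuracy" here means the narrowest standard integer width that still represents an observed operand: the hypothetical min_int_width helper stands in for the patent's bit analysis, and a Counter stands in for the hardware counter circuit. None of these names come from the patent itself.

    ```python
    from collections import Counter

    def min_int_width(value: int) -> int:
        """Smallest standard signed-integer width (8/16/32/64 bits) holding value."""
        for width in (8, 16, 32, 64):
            lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
            if lo <= value <= hi:
                return width
        raise OverflowError(value)

    # Model of the accuracy counters: bump a bucket per observed minimum width,
    # giving the precision/quality feedback a runtime could act on.
    counters = Counter()
    for observed in (3, -700, 1 << 20, 42):
        counters[min_int_width(observed)] += 1
    print(counters)  # e.g. Counter({8: 2, 16: 1, 32: 1})
    ```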

  11. Optimum tilt angle for flat plate collectors all over the World – A declination dependence formula and comparisons of three solar radiation models

    International Nuclear Information System (INIS)

    Stanciu, Camelia; Stanciu, Dorin

    2014-01-01

    consists in finding this global optimum instead of “local” ones for which monthly adjustment is required. The results are compared to the approximation given in the technical literature according to which the optimum tilt angle should be local latitude plus or minus (for winter or summer seasons, respectively) 10° or 15° (with small variations as it is further presented)

  12. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Full Text Available Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
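
    The object-space ray-triangle test that such a geometry shader performs is commonly the Möller–Trumbore algorithm; a plain-Python reference version is sketched below (the GPU version would evaluate this per triangle in parallel). This is a generic textbook formulation, not the paper's exact shader code.

    ```python
    def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
        """Möller-Trumbore ray/triangle intersection; returns hit distance t or None."""
        def sub(a, b): return [a[i] - b[i] for i in range(3)]
        def dot(a, b): return sum(a[i] * b[i] for i in range(3))
        def cross(a, b):
            return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

        e1, e2 = sub(v1, v0), sub(v2, v0)
        p = cross(d, e2)
        det = dot(e1, p)
        if abs(det) < eps:              # ray parallel to the triangle plane
            return None
        inv = 1.0 / det
        t_vec = sub(o, v0)
        u = dot(t_vec, p) * inv         # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        q = cross(t_vec, e1)
        v = dot(d, q) * inv             # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = dot(e2, q) * inv
        return t if t > eps else None

    # Pick test: a ray from the camera straight at a triangle in the z=0 plane.
    print(ray_triangle([0, 0, 5], [0, 0, -1], [-1, -1, 0], [1, -1, 0], [0, 1, 0]))  # 5.0
    ```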

  13. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  14. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform-independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of selected system commands and tasks, execution of test procedures, and remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog-to-digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions.

  15. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the required processor into the FPGA on the fly and transfer execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up can be obtained by the proposed acceleration framework on systems-on-chip where reconfigurable fabric is placed next to the CPUs. The speed-up is due both to the intrinsic acceleration in the application-specific processors and to the increased parallelism.

  16. Optimum development temperature and duration for nuclear plate

    International Nuclear Information System (INIS)

    Nagoshi, Chieko.

    1975-01-01

    Sakura 100 μm thick nuclear plates have been employed to determine the optimum temperature and duration of Amidol development for low energy protons (Ep ...). Temperatures of ...°C were tried for periods of 15–35 min. For Ep ... the optimum was ...°C and a development time of less than 30 min. (auth.)

  17. Optimum workforce-size model using dynamic programming approach

    African Journals Online (AJOL)

    This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...
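
    The recurrence behind such a model can be illustrated with a toy dynamic program: choose a staffing level each period at or above demand, paying hypothetical recruitment and overstaffing costs, and memoize the minimum cost-to-go. All figures and the specific cost structure below are invented for illustration, not taken from the paper.

    ```python
    from functools import lru_cache

    demand = [5, 7, 6, 8]        # hypothetical required staff per period
    HIRE_COST = 400              # cost per worker recruited
    EXCESS_COST = 150            # cost per surplus worker per period
    MAX_STAFF = 12

    @lru_cache(maxsize=None)
    def best(t, staff):
        """Minimum total cost from period t onward, given the current staff level."""
        if t == len(demand):
            return 0
        cost = float("inf")
        for level in range(demand[t], MAX_STAFF + 1):   # never staff below demand
            hire = max(0, level - staff)                # (downsizing is free in this toy)
            stage = HIRE_COST * hire + EXCESS_COST * (level - demand[t])
            cost = min(cost, stage + best(t + 1, level))
        return cost

    print(best(0, 5))            # minimum cost over the whole planning horizon
    ```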

  18. Dynamic modelling and hardware-in-the-loop testing of PEMFC

    Energy Technology Data Exchange (ETDEWEB)

    Vath, Andreas; Soehn, Matthias; Nicoloso, Norbert; Hartkopf, Thomas [Technische Universitaet Darmstadt/Institut fuer Elektrische Energie wand lung, Landgraf-Georg-Str. 4, D-64283 Darmstadt (Germany); Lemes, Zijad; Maencher, Hubert [MAGNUM Automatisierungstechnik GmbH, Bunsenstr. 22, D-64293 Darmstadt (Germany)

    2006-07-03

    Modelling and hardware-in-the-loop (HIL) testing of fuel cell components and entire systems open new ways for the design and advanced development of fuel cells. In this work, proton exchange membrane fuel cells (PEMFC) are dynamically modelled within MATLAB-Simulink at various operating conditions in order to establish a comprehensive description of their dynamic behaviour, as well as to explore the modelling facility as a diagnostic tool. The set-up of a hardware-in-the-loop (HIL) system enables real-time interaction between the selected hardware and the model. The transport of hydrogen, nitrogen, oxygen, water vapour and liquid water in the gas diffusion and catalyst layers of the stack is incorporated into the model according to its physical and electrochemical characteristics. Other processes investigated include, e.g., the membrane resistance as a function of the water content during fast load changes. Cells are modelled three-dimensionally and dynamically. In the case of system simulations, a one-dimensional model is preferred to reduce computation time. The model has been verified by experiments with a water-cooled stack. (author)

  19. Wireless Energy Harvesting Two-Way Relay Networks with Hardware Impairments.

    Science.gov (United States)

    Peng, Chunling; Li, Fangwei; Liu, Huaping

    2017-11-13

    This paper considers a wireless energy harvesting two-way relay (TWR) network where the relay has energy-harvesting abilities and the effects of practical hardware impairments are taken into consideration. In particular, a power splitting (PS) receiver is adopted at the relay to harvest, from the signals transmitted by the source nodes, the power it needs for relaying information between them, and hardware impairments are assumed at each node. We analyze the effect of hardware impairments on both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks. By utilizing the newly obtained expressions for the signal-to-noise-plus-distortion ratios, exact analytical expressions for the achievable sum rate and ergodic capacities of both DF and AF relaying protocols are derived. Additionally, the optimal power splitting (OPS) ratio that maximizes the instantaneous achievable sum rate is formulated and solved for both protocols. The performances of the DF and AF protocols are evaluated via numerical results, which also show the effects of various network parameters on the system performance and on the OPS ratio design.

  20. Hardware implementation of on -chip learning using re configurable FPGAS

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is widely applied to diverse problems. Supervised training is necessary before the neural network can be used, and a highly popular learning algorithm called back-propagation is used to train this model. Once trained, the MLP can be used to solve classification problems. An interesting way to increase the performance of the model is to use hardware implementations, since hardware can perform the arithmetic operations much faster than software. In this paper, a design and implementation of the sequential (stochastic) mode of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGA) is presented, and a pipelined adaptation of the on-line back-propagation (BP) algorithm is shown. The hardware implementation of the forward stage, backward stage and weight update of the back-propagation algorithm is also presented. The implementation is based on a SIMD parallel architecture for the forward propagation. The diagnosis of accidents at Egypt's multi-purpose research reactor is used to test the proposed system.
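
    The forward, backward, and weight-update stages mentioned above can be sketched in plain Python for a tiny MLP trained sample-by-sample (sequential/stochastic mode). This is only a software reference for the computation an FPGA pipeline would implement; network sizes, seed, and learning rate are arbitrary choices, not the paper's configuration.

    ```python
    import math, random
    random.seed(1)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    N_IN, N_HID = 2, 3
    w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
    b1 = [0.0] * N_HID
    w2 = [random.uniform(-1, 1) for _ in range(N_HID)]
    b2 = 0.0
    LR = 0.5

    def forward(x):
        h = [sigmoid(sum(w1[j][i] * x[i] for i in range(N_IN)) + b1[j]) for j in range(N_HID)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(N_HID)) + b2)
        return h, y

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
    for _ in range(20000):
        for x, t in data:
            h, y = forward(x)                          # forward stage
            dy = (y - t) * y * (1 - y)                 # backward stage: output delta
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(N_HID)]
            b2 -= LR * dy                              # update-weight stage
            for j in range(N_HID):
                w2[j] -= LR * dy * h[j]
                b1[j] -= LR * dh[j]
                for i in range(N_IN):
                    w1[j][i] -= LR * dh[j] * x[i]

    print([round(forward(x)[1], 2) for x, _ in data])  # should approach [0, 1, 1, 0]
    ```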

  1. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware

    International Nuclear Information System (INIS)

    Kole, J S; Beekman, F J

    2006-01-01

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping approach was hampered by limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration on the accuracy of statistically reconstructed images and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.

  2. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
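
    As a software reference for the connection updates such FPGA engines parallelize, the sketch below performs CD-1 (one-step contrastive divergence) training steps for a small RBM. Sizes, seed, and learning rate are arbitrary illustrative choices, not the paper's 256 × 256 configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid = 6, 4
    W = rng.normal(0, 0.1, size=(n_vis, n_hid))
    bv, bh = np.zeros(n_vis), np.zeros(n_hid)
    LR = 0.1

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0):
        """One CD-1 step: the connection updates the hardware engines compute."""
        global W, bv, bh
        ph0 = sigmoid(v0 @ W + bh)                    # positive phase
        h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden units
        pv1 = sigmoid(h0 @ W.T + bv)                  # reconstruction
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + bh)                    # negative phase
        W += LR * (np.outer(v0, ph0) - np.outer(v1, ph1))
        bv += LR * (v0 - v1)
        bh += LR * (ph0 - ph1)

    for _ in range(100):
        cd1_update(np.array([1, 0, 1, 0, 1, 0], dtype=float))
    ```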

  3. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL's) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. The effort demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprising commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully showed the transfer and routing of application data packets between multiple nodes, and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  4. A Framework for Dynamically-Loaded Hardware Library (HLL) in FPGA Acceleration

    DEFF Research Database (Denmark)

    Cardarilli, Gian Carlo; Di Carlo, Leonardo; Nannarelli, Alberto

    2016-01-01

    Hardware acceleration is often used to address the need for speed and computing power in embedded systems. FPGAs have always represented a good solution for HW acceleration and, recently, new SoC platforms have extended the flexibility of FPGAs by combining on a single chip both high-performance CPUs and FPGA fabric. The aim of this work is the implementation of hardware accelerators for these new SoCs. The innovative feature of these accelerators is the on-the-fly reconfiguration of the hardware to dynamically adapt the accelerator's functionalities to the current CPU workload. The realization of the accelerators preliminarily requires the profiling of both the SW (ARM CPU + NEON units) and HW (FPGA) performance, an evaluation of the partial reconfiguration times, and the development of an application-specific IP-core library. This paper focuses on the profiling aspect of both the SW and HW.

  5. Efficient Hardware Implementation For Fingerprint Image Enhancement Using Anisotropic Gaussian Filter.

    Science.gov (United States)

    Khan, Tariq Mahmood; Bailey, Donald G; Khan, Mohammad A U; Kong, Yinan

    2017-05-01

    A real-time image filtering technique is proposed which results in a faster implementation of fingerprint image enhancement. One major hurdle associated with fingerprint filtering techniques is the expensive nature of their hardware implementations. To circumvent this, a modified anisotropic Gaussian filter is efficiently adopted in hardware by decomposing the filter into two orthogonal Gaussians and an oriented line Gaussian. An architecture is developed for dynamically controlling the orientation of the line Gaussian filter. To further improve the performance of the filter, the input image is homogenized by a local image normalization. The proposed structure meets both parallel compute-intensive and real-time demands on a mid-range reconfigurable FPGA. We efficiently speed up the image-processing time and improve the resource utilization of the FPGA. Test results show an improved speed for the hardware architecture while maintaining reasonable enhancement benchmarks.
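
    The axis-aligned part of the decomposition (two orthogonal 1-D Gaussians) can be modeled in a few lines with SciPy; the oriented line Gaussian stage is omitted here, so this is only a partial software sketch of the filter structure, with arbitrary sigmas and a random test image.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def separable_gaussian(img, sigma_x, sigma_y):
        """Two orthogonal 1-D Gaussian passes instead of one full 2-D convolution,
        the decomposition that keeps the hardware implementation cheap."""
        tmp = gaussian_filter1d(img, sigma_x, axis=1)   # horizontal pass
        return gaussian_filter1d(tmp, sigma_y, axis=0)  # vertical pass

    img = np.random.default_rng(0).random((64, 64))
    out = separable_gaussian(img, sigma_x=2.0, sigma_y=4.0)
    print(out.shape)
    ```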

  6. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  7. Dietary energy level for optimum productivity and carcass ...

    African Journals Online (AJOL)

    A study was conducted to determine dietary energy levels for optimum productivity and carcass characteristics of indigenous Venda chickens raised in closed confinement. Four dietary treatments were considered in the first phase (1 to 7 weeks) on two hundred day-old unsexed indigenous Venda chicks indicated as EVS1, ...

  8. Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.

    Science.gov (United States)

    Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.

    2002-01-01

    Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.

  9. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    Science.gov (United States)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-06-15

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The annealing, storage and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF:Mg,Cu,P) by optimizing the effective parameters, to increase the reliability of the TLD response using the Taguchi method. The Taguchi method has been used for optimization of the annealing, storage and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved and the dosimetry can be performed with great accuracy.

  10. Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.

    Science.gov (United States)

    Kabus, Irwin

    1990-01-01

    The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package, was used. All data originated from the CAT scan of an actual patient who was treated for a malignant pancreatic tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor, and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans allowed for 216 external radiation beams, accounting for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney, versus 59% for the plan the hospital actually administered. The clinician on the case found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, of the possible benefits of generating treatment plans from objective functions made up of convex combinations of two objective functions; however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy, a conclusion supported by the clinician. Finally, several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.
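
    A miniature version of this LP formulation, using SciPy in place of MPSX: beam weights are the variables, tumor points carry minimum-dose constraints, organ-at-risk points carry maximum-dose constraints, and the objective is the integral healthy-tissue dose. The dose matrix, point counts, and dose limits below are random placeholders, not patient data or the study's actual model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical dose-deposition matrices D[p, b]: dose at point p per unit beam weight.
    rng = np.random.default_rng(0)
    n_beams = 12
    tumor = rng.uniform(0.5, 1.0, size=(5, n_beams))     # tumor sample points
    kidney = rng.uniform(0.0, 0.4, size=(4, n_beams))    # organ-at-risk points
    healthy = rng.uniform(0.0, 0.6, size=(30, n_beams))  # healthy-tissue grid points

    # Minimize integral healthy dose s.t. tumor >= 60 (prescription), kidney <= 25.
    c = healthy.sum(axis=0)
    A_ub = np.vstack([-tumor, kidney])                   # -Dw <= -60  and  Dw <= 25
    b_ub = np.concatenate([-60.0 * np.ones(len(tumor)), 25.0 * np.ones(len(kidney))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_beams)
    if res.success:
        print("optimal beam weights:", res.x.round(2))
    ```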

  11. Contamination Examples and Lessons from Low Earth Orbit Experiments and Operational Hardware

    Science.gov (United States)

    Pippin, Gary; Finckenor, Miria M.

    2009-01-01

    Flight experiments flown on the Space Shuttle, the International Space Station, Mir, Skylab, and free flyers such as the Long Duration Exposure Facility, the European Retrievable Carrier, and the EFFU provide multiple opportunities for the investigation of molecular contamination effects. Retrieved hardware from the Solar Maximum Mission satellite, Mir, and the Hubble Space Telescope has also provided the means of gaining insight into contamination processes. Images from the above-mentioned hardware show contamination effects due to materials processing, hardware storage, and pre-flight cleaning, as well as on-orbit events such as outgassing, mechanical failure of hardware in close proximity, impacts from man-made debris, and changes due to natural environment factors. Contamination effects include significant changes to the thermal and electrical properties of thermal control surfaces, optics, and power systems. Data from several flights have been used to develop a rudimentary estimate of asymptotic values for absorptance changes due to long-term solar exposure (4000–6000 equivalent sun hours) of silicone-based molecular contamination deposits of varying thickness. Recommendations and suggestions for processing changes and constraints based on the on-orbit observed results will be presented.

  12. SEnviro: A Sensorized Platform Proposal Using Open Hardware and Open Standards

    Directory of Open Access Journals (Sweden)

    Sergio Trilles

    2015-03-01

    Full Text Available The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented.

  13. SEnviro: a sensorized platform proposal using open hardware and open standards.

    Science.gov (United States)

    Trilles, Sergio; Luján, Alejandro; Belmonte, Óscar; Montoliu, Raúl; Torres-Sospedra, Joaquín; Huerta, Joaquín

    2015-03-06

    The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented.

  14. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  15. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  16. Optimum design for 12 MeV linear induction accelerator diode

    International Nuclear Information System (INIS)

    Yu Haijun; Shi Jinshui; Li Qin; He Guorong; Ma Bing; Wang Jingsheng; Wang Liping

    2001-01-01

    A series of optimization designs of the electron diode in the 12 MeV linear induction accelerator are studied using the numerical simulation code MAGIC and experiments, in order to improve the electron beam quality. The MAGIC code solves the Maxwell equations in the presence of charged particles; the electric field distribution on the cathode surface, which influences electron emission, is computed, and the optimum diode is obtained by comparing the results with experiments on the 12 MeV linear induction accelerator. The author also gives an SEM analysis and an experimental comparison of velvet emission. Finally, with the optimum diode, an emitted current I_e = 8.52 kA, a beam current I_b ≥ 3.0 kA and a target current I_0 ≥ 2.30 kA are obtained.

  17. Optimum design of exploding pusher target to produce maximum neutrons

    International Nuclear Information System (INIS)

    Kitagawa, Y.; Miyanaga, N.; Kato, Y.; Nakatsuka, M.; Nishiguchi, A.; Yabe, T.; Yamanaka, C.

    1985-03-01

    Exploding pusher target experiments have been conducted with the 1.052-μm GEKKO MII two-beam glass laser system to design an optimum target, which couples to the incident laser light most effectively to produce the maximum neutron yield. Since hot electrons preheat the shell entirely in spite of strongly nonuniform irradiation, a simple model can design the optimum target, in which the shell/fuel interface is accelerated to 0.5 to 0.7 times the initial radius within a laser pulse. A 2-dimensional computer simulation supports this target design. The scaling of the neutron yield N with the laser power P is N ∝ P^(2.4±0.4). (author)

  18. An interactive audio-visual installation using ubiquitous hardware and web-based software deployment

    Directory of Open Access Journals (Sweden)

    Tiago Fernandes Tavares

    2015-05-01

    Full Text Available This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware pieces that are built into most modern personal computers. This scenario implies specific technical restrictions, which lead to solutions combining both the technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.

  19. Ultrasound gel minimizes third body debris with partial hardware removal in joint arthroplasty

    Directory of Open Access Journals (Sweden)

    Aidan C. McGrory

    2017-03-01

    Full Text Available Hundreds of thousands of revision surgeries for hip, knee, and shoulder joint arthroplasties are now performed worldwide annually. Partial removal of hardware during some types of revision surgeries may create significant amounts of third body metal, polymer, or bone cement debris. Retained debris may lead to a variety of negative health effects including damage to the joint replacement. We describe a novel technique for the better containment and easier removal of third body debris during partial hardware removal. We demonstrate hardware removal on a hip joint model in the presence and absence of water-soluble gel to depict the reduction in metal debris volume and area of spread.

  20. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  1. Interfacing Hardware Accelerators to a Time-Division Multiplexing Network-on-Chip

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Sørensen, Rasmus Bo; Schoeberl, Martin

    2015-01-01

    This paper addresses the integration of stateless hardware accelerators into time-predictable multi-core platforms based on time-division multiplexing networks-on-chip. Stateless hardware accelerators, like floating-point units, are typically attached as co-processors to individual processors in ... implementation. The design evaluation is carried out using the open source T-CREST multi-core platform implemented on an Altera Cyclone IV FPGA. The size of the proposed design, including a floating-point accelerator, is about two-thirds of a processor.

  2. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like the very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  3. Another way of doing RSA cryptography in hardware

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Honary, B.

    2001-01-01

    In this paper we describe an efficient and secure hardware implementation of the RSA cryptosystem. Modular exponentiation is based on Montgomery's method without any final modular reduction, achieving the optimal bound. The presented systolic array architecture is scalable in several parameters, which makes
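
    Montgomery's method replaces per-step modular reductions with the REDC operation; a small software model (square-and-multiply on top of REDC) is sketched below. The systolic-array structure and the exact bound-avoidance trick of the paper are not modeled here, and the modulus is an illustrative choice.

    ```python
    def redc(t, n, r_bits, n_inv):
        """Montgomery reduction: returns t * r^-1 mod n, for t < n * r."""
        r_mask = (1 << r_bits) - 1
        m = ((t & r_mask) * n_inv) & r_mask
        u = (t + m * n) >> r_bits
        return u - n if u >= n else u

    def mont_exp(base, exp, n, r_bits=64):
        """Left-to-right square-and-multiply; every product is reduced with REDC."""
        r = 1 << r_bits
        n_inv = pow(-n, -1, r)          # n' with n * n' = -1 (mod r); n must be odd
        r2 = (r * r) % n
        x = redc((base % n) * r2, n, r_bits, n_inv)   # base in Montgomery form
        acc = redc(r2, n, r_bits, n_inv)              # 1 in Montgomery form (r mod n)
        for bit in bin(exp)[2:]:
            acc = redc(acc * acc, n, r_bits, n_inv)
            if bit == "1":
                acc = redc(acc * x, n, r_bits, n_inv)
        return redc(acc, n, r_bits, n_inv)            # convert back out

    n = (1 << 61) - 1                                 # small odd modulus for testing
    assert mont_exp(5, 117, n) == pow(5, 117, n)
    ```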

  4. Hardware-Oblivious Parallelism for In-Memory Column-Stores

    NARCIS (Netherlands)

    M. Heimel; M. Saecker; H. Pirk (Holger); S. Manegold (Stefan); V. Markl

    2013-01-01

    The multi-core architectures of today's computer systems make parallelism a necessity for performance critical applications. Writing such applications in a generic, hardware-oblivious manner is a challenging problem: Current database systems thus rely on labor-intensive and error-prone

  5. Generalized Distance Transforms and Skeletons in Graphics Hardware

    NARCIS (Netherlands)

    Strzodka, R.; Telea, A.

    2004-01-01

    We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the

  6. OPTIMUM PROGRAMMABLE CONTROL OF UNMANNED FLYING VEHICLE

    Directory of Open Access Journals (Sweden)

    A. А. Lobaty

    2012-01-01

    Full Text Available The paper considers an analytical synthesis problem pertaining to the programmable control of an unmanned flying vehicle while steering it to a fixed point in space. The problem has been solved by applying the maximum principle, which takes into account the final control purpose and the integral control expenses. The paper presents an analytically obtained optimum law for controlling the overload variation of a flying vehicle.

  7. Techniques for evaluating optimum data center operation

    Science.gov (United States)

    Hamann, Hendrik F.; Rodriguez, Sergio Adolfo Bermudez; Wehle, Hans-Dieter

    2017-06-14

    Techniques for modeling a data center are provided. In one aspect, a method for determining data center efficiency is provided. The method includes the following steps. Target parameters for the data center are obtained. Technology pre-requisite parameters for the data center are obtained. An optimum data center efficiency is determined given the target parameters for the data center and the technology pre-requisite parameters for the data center.

  8. Probabilistic studies for safety at optimum cost

    International Nuclear Information System (INIS)

    Pitner, P.

    1999-01-01

    By definition, the risk of failure of very reliable components is difficult to evaluate. How can the best strategies for in-service inspection and maintenance be defined so as to limit this risk to an acceptable level at optimum cost? It is not sufficient to design structures with margins; it is also essential to understand how they age. The probabilistic approach has made it possible to develop well-proven concepts. (author)

  9. Determining optimum aging time using novel core flooding equipment

    DEFF Research Database (Denmark)

    Ahkami, Mehrdad; Chakravarty, Krishna Hara; Xiarchos, Ioannis

    2016-01-01

    New methods for enhanced oil recovery are typically developed using core flooding techniques. Establishing reservoir conditions is essential before the experimental campaign commences. The realistic oil-rock wettability can be obtained through optimum aging of the core. Aging time is affected ... the optimum aging time regardless of variations in crude oil, rock, and brine properties. State-of-the-art core flooding equipment has been developed that can be used for consistently determining the resistivity of the coreplug during aging and waterflooding using advanced data acquisition software. In the proposed equipment, independent axial and sleeve pressure can be applied to mimic stresses at reservoir conditions. Ten coreplugs (four sandstones and six chalk samples) from the North Sea have been aged for more than 408 days in total, and more than 29,000 resistivity data points have been measured

  10. 34 CFR 464.42 - What limit applies to purchasing computer hardware and software?

    Science.gov (United States)

    2010-07-01

    ... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...

  11. Infected hardware after surgical stabilization of rib fractures: Outcomes and management experience.

    Science.gov (United States)

    Thiels, Cornelius A; Aho, Johnathon M; Naik, Nimesh D; Zielinski, Martin D; Schiller, Henry J; Morris, David S; Kim, Brian D

    2016-05-01

    Surgical stabilization of rib fractures (SSRF) is increasingly used for the treatment of rib fractures. There are few data on the incidence, risk factors, outcomes, and optimal management strategy for hardware infection in these patients. We aimed to develop and propose a management algorithm to help others treat this potentially morbid complication. We retrospectively searched a prospectively collected rib fracture database for the records of all patients who underwent SSRF from August 2009 through March 2014 at our institution, and analyzed the subsequent development of hardware infection among these patients. Standard descriptive analyses were performed. Among 122 patients who underwent SSRF, most (73%) were men; the mean (SD) age was 59.5 (16.4) years, and the median (interquartile range [IQR]) Injury Severity Score was 17 (13-22). The median number of rib fractures was 7 (5-9), and 48% of the patients had flail chest. Mortality at 30 days was 0.8%. Five patients (4.1%) had a hardware infection, on mean (SD) postoperative day 12.0 (6.6). The median Injury Severity Score (17 [13-42]) and hospital length of stay (9 days [6-37 days]) in these patients were similar to the values for those without infection (17 [13-22] and 9 days [6-12 days], respectively). Patients with infection underwent a median (IQR) of 2 (2-3) additional operations, which included wound debridement (n = 5), negative-pressure wound therapy (n = 3), and antibiotic beads (n = 4). Hardware was removed in 3 patients at 140, 190, and 192 days after the index operation. Cultures grew only gram-positive organisms. No patients required reintervention after hardware removal, and all achieved bony union and were taking no narcotics or antibiotics at the latest follow-up. Although uncommon, hardware infection after SSRF carries considerable morbidity. With the use of an aggressive multimodal management strategy, however, bony union and favorable long-term outcomes can be achieved.

  12. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is a discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth with...

  13. Evaluation of mathematical methods for predicting optimum dose of gamma radiation in sugarcane (Saccharum sp.)

    International Nuclear Information System (INIS)

    Wu, K.K.; Siddiqui, S.H.; Heinz, D.J.; Ladd, S.L.

    1978-01-01

    Two mathematical methods - the reversed logarithmic method and the regression method - were used to compare the predicted and the observed optimum gamma radiation dose (OD50) in vegetative propagules of sugarcane. The reversed logarithmic method, usually used in sexually propagated crops, showed the largest difference between the predicted and observed optimum dose. The regression method resulted in a better prediction of the observed values and is suggested as a better method for the prediction of optimum dose for vegetatively propagated crops. (author)
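
    The record does not reproduce either method's formula. As a rough illustration of the regression approach only, a linear fit of survival percentage against dose can be inverted at the 50% level to estimate OD50; the data below are invented for the example.

        import numpy as np

        # Illustrative (invented) survival percentages at increasing gamma doses (Gy)
        dose = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
        survival = np.array([92.0, 74.0, 55.0, 38.0, 21.0])

        # Fit survival = a + b * dose by least squares
        b, a = np.polyfit(dose, survival, 1)

        # Invert the fit to predict the dose giving 50% survival (OD50)
        od50 = (50.0 - a) / b
        print(f"predicted OD50 ~= {od50:.1f} Gy")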

  14. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    OpenAIRE

    Syed Ejaz Husain Rizvi; Jaj P. Gupta; Manoj Bhargava

    2007-01-01

    The problem of optimum stratification on an auxiliary variable, when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR), was considered by Singh (1975) for the univariate case. In this paper we have extended the same problem, for proportional allocation, to the case when two variates are under study. A cum. ∛R3(x) rule for obtaining approximately optimum strata boundaries has been provided. It has been shown theoretically as well as empiricall...

  15. DETERMINATION of OPTIMUM CONDITION of PAPAIN ENZYME FROM PAPAYA VAR JAVA (Carica papaya)

    Directory of Open Access Journals (Sweden)

    Aline Puspita Kusumadjaja

    2010-06-01

    A study to investigate the optimum condition of papain enzyme has been carried out. The conditions investigated are pH and temperature, based on measurement of enzyme activity, which is defined as the mmol of tyrosine released per minute in the reaction between papain and casein as substrate. In this research, the papain enzyme was isolated from papaya burung variety Java. The enzyme was partially purified by a precipitation method using 30%-50% saturated acetone. The results showed that the optimum conditions of papain enzyme are pH 6, with activity 2.606 U/mL, and a temperature of 50 °C, with activity 2.469 U/mL. Keywords: Papaya var Java, papain, optimum condition, enzymatic activity

  16. Co-verification of hardware and software for ARM SoC design

    CERN Document Server

    Andrews, Jason

    2004-01-01

    Hardware/software co-verification is how to make sure that embedded system software works correctly with the hardware, and that the hardware has been properly designed to run the software successfully - before large sums are spent on prototypes or manufacturing. This is the first book to apply this verification technique to the rapidly growing field of embedded systems-on-a-chip (SoC). As traditional embedded system design evolves into single-chip design, embedded engineers must be armed with the necessary information to make educated decisions about which tools and methodology to deploy. SoC verification requires a mix of expertise from the disciplines of microprocessor and computer architecture, logic design and simulation, and C and Assembly language embedded software. Until now, the relevant information on how it all fits together has not been available. Andrews, a recognized expert, provides in-depth information about how co-verification really works, how to be successful using it, and pitfalls to avoid. H...

  17. Digital Hardware Realization of Forward and Inverse Kinematics for a Five-Axis Articulated Robot Arm

    Directory of Open Access Journals (Sweden)

    Bui Thi Hai Linh

    2015-01-01

    When a robot arm performs motion control, it must compute complicated forward and inverse kinematics algorithms, which consume much CPU time and certainly slow down the motion speed of the robot arm. To solve this issue, the development of a hardware realization of forward and inverse kinematics for an articulated robot arm is investigated. In this paper, the formulation of the forward and inverse kinematics for a five-axis articulated robot arm is derived first. Then, the computation algorithm and its hardware implementation are described. Further, very high speed integrated circuit hardware description language (VHDL) is applied to describe the overall hardware behavior of the forward and inverse kinematics. Additionally, a finite state machine (FSM) is applied to reduce hardware resource usage. Finally, to verify the correctness of the forward and inverse kinematics for the five-axis articulated robot arm, a co-simulation is constructed with ModelSim and Simulink. The hardware of the forward and inverse kinematics is run by ModelSim, and a test bench, which generates stimulus to ModelSim and displays the output response, is run in Simulink. Under this design, the forward and inverse kinematics algorithms can be completed within one microsecond.
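
    The derivation itself is not reproduced in the record. As a minimal illustration of the forward-kinematics half of the computation, the sketch below chains Denavit-Hartenberg (DH) transforms for a five-joint arm; the DH parameters are invented, not those of the paper's robot, and the paper's FPGA pipeline is replaced here by plain software.

        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """Homogeneous transform for one joint from classic DH parameters."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([[ct, -st * ca,  st * sa, a * ct],
                             [st,  ct * ca, -ct * sa, a * st],
                             [0.0,      sa,       ca,      d],
                             [0.0,     0.0,      0.0,    1.0]])

        def forward_kinematics(joint_angles, dh_table):
            """Chain the per-joint transforms; returns the end-effector pose."""
            T = np.eye(4)
            for theta, (d, a, alpha) in zip(joint_angles, dh_table):
                T = T @ dh_transform(theta, d, a, alpha)
            return T

        # Illustrative DH table (d, a, alpha) for a 5-axis arm -- not the paper's values
        dh_table = [(0.2, 0.0, np.pi / 2),
                    (0.0, 0.3, 0.0),
                    (0.0, 0.25, 0.0),
                    (0.0, 0.0, np.pi / 2),
                    (0.1, 0.0, 0.0)]

        pose = forward_kinematics(np.deg2rad([10, 20, -30, 15, 5]), dh_table)
        print("end-effector position:", pose[:3, 3])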

  18. Using Innovative Techniques for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subject to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  19. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    International Nuclear Information System (INIS)

    Arcidiacono, R.; Brigljevic, V.; Bruno, G.; Cano, E.; Cittolin, S.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Gulmini, M.; Gutleber, J.; Jacobs, C.; Kreuzer, P.; Lo Presti, G.; Magrans, I.; Marinelli, N.; Maron, G.; Meijers, F.; Meschi, E.; Murray, S.; Nafria, M.; Oh, A.; Orsini, L.; Pieri, M.; Pollet, L.; Racz, A.; Rosinsky, P.; Schwick, C.; Sphicas, P.; Varela, J.

    2005-01-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules and the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface
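
    The paper's actual XML schema is not given in the record; the sketch below only illustrates the general idea of describing a hardware module's configuration and tests as XML, with invented element and attribute names.

        import xml.etree.ElementTree as ET

        # Build a (hypothetical) configuration description for one DAQ module
        module = ET.Element("HardwareDevice", {"id": "adc0", "type": "ADC"})
        ET.SubElement(module, "Register", {"name": "threshold", "value": "0x1F"})
        ET.SubElement(module, "Register", {"name": "mode", "value": "triggered"})
        test = ET.SubElement(module, "Test", {"name": "pedestal_run"})
        test.text = "run(events=1000)"

        print(ET.tostring(module, encoding="unicode"))

        # Parsing the same description back into configuration actions
        for reg in module.iter("Register"):
            print("write", reg.get("name"), "<-", reg.get("value"))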

  20. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    Energy Technology Data Exchange (ETDEWEB)

    Arcidiacono, R. [Massachusetts Institute of Technology, Cambridge, MA (United States); Brigljevic, V. [CERN, Geneva (Switzerland); Rudjer Boskovic Institute, Zagreb (Croatia); Bruno, G. [CERN, Geneva (Switzerland); Cano, E. [CERN, Geneva (Switzerland); Cittolin, S. [CERN, Geneva (Switzerland); Erhan, S. [University of California, Los Angeles, Los Angeles, CA (United States); Gigi, D. [CERN, Geneva (Switzerland); Glege, F. [CERN, Geneva (Switzerland); Gomez-Reino, R. [CERN, Geneva (Switzerland); Gulmini, M. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); CERN, Geneva (Switzerland); Gutleber, J. [CERN, Geneva (Switzerland); Jacobs, C. [CERN, Geneva (Switzerland); Kreuzer, P. [University of Athens, Athens (Greece); Lo Presti, G. [CERN, Geneva (Switzerland); Magrans, I. [CERN, Geneva (Switzerland) and Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain)]. E-mail: ildefons.magrans@cern.ch; Marinelli, N. [Institute of Accelerating Systems and Applications, Athens (Greece); Maron, G. [INFN-Laboratori Nazionali di Legnaro, Legnaro (Italy); Meijers, F. [CERN, Geneva (Switzerland); Meschi, E. [CERN, Geneva (Switzerland); Murray, S. [CERN, Geneva (Switzerland); Nafria, M. [Electronic Engineering Department, Universidad Autonoma de Barcelona, Barcelona (Spain); Oh, A. [CERN, Geneva (Switzerland); Orsini, L. [CERN, Geneva (Switzerland); Pieri, M. [University of California, San Diego, San Diego, CA (United States); Pollet, L. [CERN, Geneva (Switzerland); Racz, A. [CERN, Geneva (Switzerland); Rosinsky, P. [CERN, Geneva (Switzerland); Schwick, C. [CERN, Geneva (Switzerland); Sphicas, P. [University of Athens, Athens (Greece); CERN, Geneva (Switzerland); Varela, J. [LIP, Lisbon (Portugal); CERN, Geneva (Switzerland)

    2005-07-01

    A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules and the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.

  1. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    We combine hardware and simulation to illustrate the scalability and performance of datacenter networks. We simulate a datacenter network and interconnect it with real-world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is at the order...

  2. REGARDING "TRAGIC ECONOMIC OPTIMUM" FROM HOLISTIC+ PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Constantin Popescu

    2010-12-01

    This communication aims to discuss the new scientific vision of "the entire integrated" as it follows the recent achievements of quantum physics, psychology and biology. From this perspective, the economy is seen as a living organism, part of the social organism and of the wider living ecology. The optimum of the economy as a living organism is based on dynamic compatibilities with all common living requirements. The evolution of economic life is organically linked to the unavoidable circumstances contained in V. Frankl's tragic triad of pain, guilt and death. In interaction with the holistic triad circumscribed by limitations, uncertainties and open interdependencies, the tragic economic optimum (TEO) is formed. It can be understood as that state of economic life in which freedom of choice over scarce resources under uncertainty finds, in the compatibility of rationality and hope, the development criteria of MEANING. TEO means saying YES to economic life even under conditions of resource limitations, bankruptcies and unemployment, negative externalities, stress, etc., by a respiritualization of responsibility using scientific knowledge. TEO involves multicriteria modeling of economic life by integrating human, community, environmental, spiritual and business development demands in the assessment, predicting human GDP as a variable wave aggregate.

  3. Diseño hardware de una tarjeta de control y comunicaciones

    OpenAIRE

    Sánchez Salvador, David

    2017-01-01

    Nowadays we live surrounded by countless electronic systems, from small wearables such as smart wristbands to large equipment such as radars, as well as devices with no programmed logic at all, such as a radio. These electronic devices are generally developed around software, setting aside the differences in complexity between applications, but all of them are built on top of hardware. That hardware may be a simple PCB with a few components or a set of ...

  4. Performance Estimation for Hardware/Software codesign using Hierarchical Colored Petri Nets

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan; Jerraya, Ahmed-Amine

    1998-01-01

    This paper presents an approach for abstract modeling of the functional behavior of hardware architectures using Hierarchical Colored Petri Nets (HCPNs). Using HCPNs as architectural models has several advantages, such as higher estimation accuracy, higher flexibility, and the need for only one estimation tool. This makes the approach very useful for designing component models used for performance estimation in Hardware/Software Codesign frameworks such as the LYCOS system. The paper presents the methodology and rules for designing component models using HCPNs. Two examples of architectural models...
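
    The component-model rules are not reproduced in the record. As a loose illustration of estimating latency from a Petri-net-style architectural model, the toy net below fires timed transitions until none are enabled; the places, transitions and delays are invented, and hierarchy and colored tokens are omitted.

        # Toy timed Petri net: places hold tokens; each enabled transition
        # consumes a token, produces one downstream, and adds its delay to
        # a (purely sequential) latency estimate.
        places = {"fetch": 1, "decode": 0, "execute": 0, "done": 0}
        transitions = [
            ("fetch", "decode", 2),    # (input place, output place, cycles)
            ("decode", "execute", 1),
            ("execute", "done", 3),
        ]

        clock, fired = 0, True
        while fired:
            fired = False
            for src, dst, delay in transitions:
                if places[src] > 0:        # transition is enabled
                    places[src] -= 1
                    places[dst] += 1
                    clock += delay
                    fired = True

        print("estimated latency:", clock, "cycles")   # 6 for this toy net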

  5. Trustworthy reconfigurable systems enhancing the security capabilities of reconfigurable hardware architectures

    CERN Document Server

    Feller, Thomas

    2014-01-01

    Thomas Feller sheds some light on trust anchor architectures for trustworthy reconfigurable systems. He presents novel concepts enhancing the security capabilities of reconfigurable hardware. Almost invisible to the user, many computer systems are embedded into everyday artifacts, such as cars, ATMs, and pacemakers. The significant growth of this market segment within recent years has enforced a rethinking with respect to the security properties and the trustworthiness of these systems. The trustworthiness of a system in general equates to the integrity of its system components. Hardware-b...

  6. Optimum Image Formation for Spaceborne Microwave Radiometer Products.

    Science.gov (United States)

    Long, David G; Brodzik, Mary J

    2016-05-01

    This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert (BG) approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of historic sensor data, for which the MRF is not known as well as it is for modern sensors. Simulation tradeoff results are confirmed using actual data.
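
    For reference, the conventional drop-in-the-bucket (DIB) baseline used in the comparison simply averages every measurement whose center falls in a grid cell; a minimal sketch on synthetic measurements follows.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic brightness-temperature measurements at scattered locations
        lon = rng.uniform(0.0, 10.0, 5000)
        lat = rng.uniform(0.0, 10.0, 5000)
        tb = 200.0 + 5.0 * np.sin(lon) + rng.normal(0.0, 1.0, lon.size)

        # Drop-in-the-bucket: average every measurement falling in each cell
        cell = 1.0                                   # grid resolution in degrees
        ix = (lon // cell).astype(int)
        iy = (lat // cell).astype(int)
        grid_sum = np.zeros((10, 10))
        grid_cnt = np.zeros((10, 10))
        np.add.at(grid_sum, (iy, ix), tb)
        np.add.at(grid_cnt, (iy, ix), 1)
        dib = grid_sum / np.maximum(grid_cnt, 1)     # gridded image
        print("gridded mean Tb:", dib.mean().round(2), "K")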

  7. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.
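
    The record does not spell out GBSA's grouping criterion, so the sketch below shows only the generic idea behind group-based search: rules are bucketed in a preprocessing step by a cheap key (here the protocol field, an assumption made for illustration), and classification scans only the packet's bucket.

        from collections import defaultdict

        # Each rule: (protocol, destination-port range, action)
        rules = [
            ("tcp", (80, 80), "allow-http"),
            ("tcp", (443, 443), "allow-https"),
            ("udp", (53, 53), "allow-dns"),
            ("tcp", (0, 1023), "log-privileged"),
        ]

        # Preprocessing: bucket the rules by protocol so classification only
        # scans the (much smaller) matching group.
        groups = defaultdict(list)
        for proto, port_range, action in rules:
            groups[proto].append((port_range, action))

        def classify(packet):
            proto, dst_port = packet
            for (lo, hi), action in groups.get(proto, []):
                if lo <= dst_port <= hi:
                    return action          # first matching rule in the group wins
            return "default-deny"

        print(classify(("tcp", 443)))      # -> allow-https
        print(classify(("udp", 123)))      # -> default-deny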

  8. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  9. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of delivering the hardware-averaging noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
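
    The 1/√N scaling is easy to check numerically. The sketch below simulates N parallel amplifier channels seeing the same signal but independent input-referred noise, i.e. the limit in which the full 1/√N reduction applies; all signal and noise figures are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        samples, signal_rms, noise_rms = 100_000, 1.0, 4.0
        signal = signal_rms * np.sin(np.linspace(0.0, 200.0, samples))

        for n in (1, 2, 4, 8):
            # n parallel amplifiers: identical input, independent amplifier noise
            chans = signal + noise_rms * rng.standard_normal((n, samples))
            avg = chans.mean(axis=0)                  # hardware averaging
            resid_rms = np.std(avg - signal)
            print(f"N={n}: residual noise {resid_rms:.2f} "
                  f"(expect {noise_rms / np.sqrt(n):.2f})")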

  10. Optimum autonomous stand-alone photovoltaic system design on the basis of energy pay-back analysis

    International Nuclear Information System (INIS)

    Kaldellis, J.K.; Zafirakis, D.; Kondili, E.

    2009-01-01

    Stand-alone photovoltaic (PV) systems comprise one of the most promising electrification solutions for covering the demand of remote consumers. However, such systems are strongly questioned due to extreme life-cycle (LC) energy requirements. For similar installations to be considered environmentally sustainable, their LC energy content must be compensated by the respective useful energy production, i.e. their energy pay-back period (EPBP) should be less than their service period. In this context, an optimum sizing methodology is developed, based on the criterion of minimum embodied energy. Various energy-autonomous stand-alone PV-lead-acid battery systems are examined, and two different cases are investigated: a high solar potential area and a medium solar potential area. By considering that the PV-battery (PV-Bat) system's useful energy production is equal to the remote consumer's electricity consumption, optimum cadmium telluride (CdTe) based systems yield the minimum EPBP (15 years). If the net PV energy production is exploited, however, the EPBP is found to be less than 20 years for all PV types. Finally, the most interesting finding is that in all cases examined the contribution of the battery component exceeds 27% of the system LC energy requirements, reflecting the difference between grid-connected and stand-alone configurations.
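
    The sizing criterion reduces to requiring the energy pay-back period, i.e. embodied life-cycle energy divided by annual useful energy, to be shorter than the service period. A minimal calculation with invented figures follows; the battery share it prints echoes the kind of >27% contribution reported above.

        # Illustrative (invented) life-cycle figures for a stand-alone PV-battery system
        pv_embodied_kwh = 12_000.0        # LC energy content of the PV array
        battery_embodied_kwh = 6_000.0    # LC energy content of the lead-acid bank
        annual_useful_kwh = 1_500.0       # consumer demand covered per year

        embodied = pv_embodied_kwh + battery_embodied_kwh
        epbp_years = embodied / annual_useful_kwh
        print(f"EPBP = {epbp_years:.1f} years")
        print(f"battery share of LC energy: {battery_embodied_kwh / embodied:.0%}")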

  11. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  12. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second-stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on the hardware aspects, in particular modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  13. Optimum dietary protein requirement of genetically male tilapia ...

    African Journals Online (AJOL)

    The study was conducted to investigate the optimum dietary protein level needed for growing genetically male tilapia, Oreochromis niloticus. Diets containing crude protein levels 40, 42.5, 45, 47.5 and 50% were formulated and tried in triplicates. Test diets were fed to 20 fish/1m3 floating hapa at 5% of fish body weight daily ...

  14. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require the transport of large amounts of data (TB) at high transfer rates (Gb/s) and high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time, and provide status reports on the experiment itself and the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of occurred events, decisions taken, and implemented changes. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases of hardware system components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency scenarios.

  15. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require the transport of large amounts of data (TB) at high transfer rates (Gb/s) and high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time, and provide status reports on the experiment itself and the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of occurred events, decisions taken, and implemented changes. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases of hardware system components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency scenarios.

  16. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    Science.gov (United States)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments; specifically, conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
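
    A stripped-down version of the pattern described, a context manager guaranteeing cleanup combined with a multiprocessing watchdog that raises a safety event, might look like the following; the device name and the check are invented, not HiCAT's actual API.

        import multiprocessing
        import time
        from contextlib import contextmanager

        @contextmanager
        def open_device(name):
            """Hypothetical device handle; cleanup runs even on errors."""
            print(f"opening {name}")
            try:
                yield name
            finally:
                print(f"closing {name}")

        def watchdog(alarm):
            """Stand-in for the humidity/power monitoring child process."""
            time.sleep(0.1)          # pretend a sensor check found a problem
            alarm.set()

        if __name__ == "__main__":
            alarm = multiprocessing.Event()
            multiprocessing.Process(target=watchdog, args=(alarm,),
                                    daemon=True).start()
            with open_device("deformable_mirror"):
                while not alarm.is_set():   # the experiment loop
                    time.sleep(0.01)
                print("safety event detected, shutting down gracefully")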

  17. Test Hardware Design for Flight-Like Operation of Advanced Stirling Convertors

    Science.gov (United States)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  18. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  19. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    Lock-based synchronization introduces overhead that may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  20. Comparison of optimum tilt angles of solar collectors determined at yearly, seasonal and monthly levels

    International Nuclear Information System (INIS)

    Despotovic, Milan; Nedic, Vladimir

    2015-01-01

    Highlights: • Optimum yearly, biannual, seasonal, monthly, and daily tilt angles were found. • Energy collected per square meter is compared for ten different scenarios. • Four seasonal scenarios and two biannual scenarios were considered. • It is sufficient to adjust tilt angles only twice per year. - Abstract: The amount of energy transformed in a solar collector depends on the collector's tilt angle with respect to the horizontal plane and on its orientation. In this article the optimum tilt angle of solar collectors for Belgrade, located at latitude 44°47′N, is determined. The optimum tilt angle was found by searching for the values for which the solar radiation on the collector surface is maximum for a particular day or a specific period. In that manner the yearly, biannual, seasonal, monthly, fortnightly, and daily optimum tilt angles are determined. Annually collected energy per square meter of tilted surface is compared for ten different scenarios. In addition, these optimum tilt angles are used to calculate the amount of energy on the surface of PV panels that could be installed on the roof of the building. The results show that for the observed case study, placing the panels at the yearly, seasonal and monthly optimum tilt angles would increase the yearly amount of collected energy by 5.98%, 13.55%, and 15.42%, respectively, compared to the energy that could be collected by placing the panels at the current roof surface angles
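
    Once irradiance on a tilted plane can be evaluated, the optimum tilt is a one-dimensional search. The sketch below scans tilt angles for a south-facing panel at Belgrade's latitude using a deliberately crude beam-only, no-atmosphere model, so the resulting angle is only indicative of the method, not of the paper's values.

        import numpy as np

        lat = np.deg2rad(44.78)                      # Belgrade latitude
        days = np.arange(1, 366)
        decl = np.deg2rad(23.45) * np.sin(2 * np.pi * (284 + days) / 365)
        hours = np.deg2rad(np.arange(-87.5, 90, 5))  # hour angles across the day

        def annual_irradiation(tilt):
            """Relative annual beam irradiation on a south-facing tilted plane."""
            total = 0.0
            for d in decl:
                # solar elevation and incidence angle on the tilted surface
                sin_elev = (np.sin(lat) * np.sin(d)
                            + np.cos(lat) * np.cos(d) * np.cos(hours))
                cos_inc = (np.sin(lat - tilt) * np.sin(d)
                           + np.cos(lat - tilt) * np.cos(d) * np.cos(hours))
                up = sin_elev > 0.0                  # sun above the horizon
                total += np.sum(np.clip(cos_inc[up], 0.0, None))
            return total

        tilts = np.deg2rad(np.arange(0, 91))
        best = tilts[np.argmax([annual_irradiation(t) for t in tilts])]
        print(f"optimum yearly tilt ~= {np.degrees(best):.0f} deg")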

  1. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses considerably more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
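
    The BCPNN equations are not reproduced in the record. A common discrete-time reading of the rule keeps exponentially decaying pre-, post- and co-activity probability traces and takes the weight as the log of the normalized co-activation; the single-synapse sketch below uses that formulation with invented rates and time constants.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, tau, eps = 1.0, 50.0, 1e-4
        decay = np.exp(-dt / tau)

        p_i = p_j = p_ij = eps                 # low-pass probability traces
        for _ in range(10_000):
            s_i = float(rng.random() < 0.05)   # pre-synaptic spike (5% rate)
            s_j = float(rng.random() < 0.05)   # post-synaptic spike (5% rate)
            # each trace relaxes toward the instantaneous (co-)activity
            p_i = decay * p_i + (1 - decay) * s_i
            p_j = decay * p_j + (1 - decay) * s_j
            p_ij = decay * p_ij + (1 - decay) * s_i * s_j

        # BCPNN-style weight: log of normalized co-activation
        w = np.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))
        print(f"weight for uncorrelated spiking: {w:+.2f}")   # close to 0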

  2. Application of customer-interruption costs for optimum distribution planning

    International Nuclear Information System (INIS)

    Mok, Y.L.; Chung, T.S.

    1996-01-01

    We present a new methodology for obtaining optimum values of the integrated cost of utility investment and customer interruption in distribution planning for electric power systems, by determining the reliability cost and worth of the distribution system. Reliability cost refers to the investment cost to the utility of achieving a defined level of reliability. Reliability worth is the benefit gained by the utility customer from an increase in reliability. A computer program has been developed to determine comparative reliability indices for a typical distribution network. With the average interruption cost, outage duration, average disconnected load, cost data for distribution equipment, etc. known, the relation between reliability cost, reliability worth and reliability at the specified load point is obtained. The optimum reliability of the distribution system is then determined from the minimum combined cost to the utility and of customer interruption. The applicability of this approach is demonstrated on several practical networks. (Author)
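
    The optimum sits where the sum of utility investment cost and customer interruption cost is smallest. A toy scan over reliability levels, with invented cost curves, illustrates the shape of the trade-off.

        import numpy as np

        r = np.linspace(0.90, 0.99999, 500)        # candidate reliability levels

        # Invented cost models: investment grows steeply as reliability -> 1,
        # while expected customer interruption cost falls with reliability.
        investment = 1e4 / (1.0 - r)               # $ per year, utility side
        interruption = 5e7 * (1.0 - r)             # $ per year, customer side
        total = investment + interruption

        i = np.argmin(total)
        print(f"optimum reliability ~= {r[i]:.4f}, "
              f"total cost ~= ${total[i]:,.0f}/yr")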

  3. Dependence of optimum baseline setting on scatter fraction and detector response function

    International Nuclear Information System (INIS)

    Atkins, F.B.; Beck, R.N.; Hoffer, P.B.; Palmer, D.

    1977-01-01

    A theoretical and experimental investigation has been undertaken to determine the dependence of the optimum baseline setting on the amount of scattered radiation recorded in a spectrum and on the energy resolution of the detector. In particular, baseline settings were established for clinical examinations that differ greatly in the amount of scattered radiation, namely liver and brain scans, for which individual variations were found to produce only minimal fluctuations in the optimum baseline settings. This analysis resulted in an optimum baseline setting of 125.0 keV for brain scans and 127.2 keV for liver scans for the scintillation camera used in these studies. The criterion used is based on statistical considerations of the measurement of an unscattered component in the presence of a background due to scattered photons. The limitations of such a criterion are discussed, and phantom images are presented to illustrate these effects at various baseline settings. (author)
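
    The paper's exact statistical criterion is not given in the record. One common figure of merit maximizes unscattered counts against the Poisson noise of all counts above the baseline; the sketch below scans baselines over synthetic photopeak and scatter spectra (both invented).

        import numpy as np

        e = np.arange(80.0, 161.0)                       # energy bins in keV
        peak = np.exp(-0.5 * ((e - 140.0) / 8.0) ** 2)   # unscattered photopeak
        scatter = np.exp(-(e - 80.0) / 30.0)             # falling scatter tail
        scatter *= 1.5 * peak.sum() / scatter.sum()      # scatter fraction 60%

        def figure_of_merit(baseline):
            sel = e >= baseline                          # counts above baseline
            p, s = peak[sel].sum(), scatter[sel].sum()
            return p / np.sqrt(p + s)                    # signal vs Poisson noise

        baselines = np.arange(100.0, 140.0)
        best = baselines[np.argmax([figure_of_merit(b) for b in baselines])]
        print(f"optimum baseline ~= {best:.0f} keV")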

  4. Comparison between coasting and bunched beams on optimum stochastic cooling and signal suppression

    International Nuclear Information System (INIS)

    Wei, J.

    1991-01-01

    A comparison has been performed between coasting and bunched particle beams pertaining to the mechanism of stochastic cooling. In the case that particles occupy the entire sinusoidal rf bucket, the optimum cooling rate for the bunched beam is shown to be the same as that predicted from the coasting-beam theory using local particle density. However, in the case that particles occupy only the center of the bucket, the optimum rate decreases in proportion to the ratio of the bunch area to the bucket area. Furthermore, it has been shown for both coasting and bunched beams that particle motion is stable upon signal suppression if the amplitude of the gain is less than twice the optimum value over the entire frequency bandwidth of the cooling system. 7 refs., 1 fig

  5. CASIS Fact Sheet: Hardware and Facilities

    Science.gov (United States)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).

  6. Recycling Flight Hardware Components and Systems to Reduce Next Generation Research Costs

    Science.gov (United States)

    Turner, Walt

    2011-01-01

    With the recent 'new direction' put forth by President Obama identifying NASA's new focus on research rather than continuing on a path to return to the Moon and Mars, the focus of work at Kennedy Space Center (KSC) may be changing dramatically. Research opportunities within the micro-gravity community potentially stand at the threshold of resurgence when the new direction of the agency takes hold for the next generation of experimenters. This presentation defines a strategy for recycling flight experiment components or part numbers in order to reduce research project costs, not just in component selection and fabrication, but in expediting qualification of hardware for flight. A key component of the strategy is effective communication of relevant flight hardware information and available flight hardware components to researchers, with the goal of 'short-circuiting' the design process for flight experiments.

  7. Comparative analysis of diffused solar radiation models for optimum tilt angle determination for Indian locations

    International Nuclear Information System (INIS)

    Yadav, P.; Chandel, S.S.

    2014-01-01

    Tilt angle and orientation greatly influence the performance of solar photovoltaic panels. The tilt angle of solar photovoltaic panels is one of the important parameters for the optimum sizing of solar photovoltaic systems. This paper analyses six different isotropic and anisotropic diffuse solar radiation models for optimum tilt angle determination. The predicted optimum tilt angles are compared with experimentally measured values for the summer season under outdoor conditions. The Liu and Jordan model is found to exhibit the lowest error compared to the other models for the location. (author)

  8. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests, including a self-avoiding random walk test and simulation of the order parameter and energy of the 2D Ising model, give no evidence of correlation in the pseudo-random sequences. Because the new random number generator suppresses the correlation between neighboring cells that is usually observed in cellular automaton implementations, it is applicable to extended-time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long-time simulations of spatially resolved molecular evolution.
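
    The paper's connection structure is not detailed in the record, so the sketch below only conveys the generic flavor of a cell-array generator: each cell updates from its own state combined with a shifted neighbor to break inter-cell correlation; the update rule and parameters are invented.

        import numpy as np

        N, MASK = 64, (1 << 32) - 1
        state = np.arange(1, N + 1, dtype=np.uint64)     # per-cell 32-bit states

        def step(state):
            """One parallel update: per-cell LCG mixed with the left neighbor."""
            lcg = (state * np.uint64(1664525)
                   + np.uint64(1013904223)) & np.uint64(MASK)
            neighbor = np.roll(lcg, 1)                   # "smart connection" stand-in
            return (lcg ^ (neighbor >> np.uint64(16))) & np.uint64(MASK)

        for _ in range(10):
            state = step(state)

        # Crude check: neighboring cells should be essentially uncorrelated
        vals = state.astype(np.float64)
        corr = np.corrcoef(vals[:-1], vals[1:])[0, 1]
        print(f"lag-1 inter-cell correlation: {corr:+.3f}")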

  9. Hardware-in-the-loop vehicle system including dynamic fuel cell model

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Z.; Lenhart, T.; Braun, M.; Maencher, H. [MAGNUM Automatisierungstechnik GmbH, Darmstadt (Germany)

    2005-07-01

    In order to reduce costs and accelerate the development of fuel cells and systems, the use of hardware-in-the-loop (HIL) testing and dynamic modelling opens new possibilities. The dynamic model of a proton exchange membrane fuel cell (PEMFC), together with a vehicle model, is used to carry out a comprehensive system investigation, which allows designing and optimising the behaviour of the components and the entire fuel cell system. The set-up of a HIL system enables real-time interaction between the selected hardware and the model. (orig.)

  10. Accelerating the Non-equispaced Fast Fourier Transform on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2008-01-01

    We present a fast parallel algorithm to compute the Non-equispaced fast Fourier transform on commodity graphics hardware (the GPU). We focus particularly on a novel implementation of the convolution step in the transform, which was previously its most time consuming part. We describe the performa...
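
    In essence, the convolution (gridding) step spreads each non-equispaced sample onto an oversampled uniform grid with a window function, after which an ordinary FFT applies; a scalar CPU sketch with a Gaussian window follows (kernel width and oversampling factor chosen arbitrarily, and the final deconvolution step omitted).

        import numpy as np

        rng = np.random.default_rng(3)
        M, n, sigma, width = 200, 64, 2, 6          # samples, grid, oversampling, kernel
        x = rng.uniform(-0.5, 0.5, M)               # non-equispaced sample positions
        f = np.exp(2j * np.pi * 3 * x)              # samples of a single harmonic

        grid = np.zeros(sigma * n, dtype=complex)
        tau = width / (2 * np.pi * n)               # ad hoc Gaussian kernel width
        for xj, fj in zip(x, f):
            center = int(round((xj + 0.5) * sigma * n))
            for k in range(center - width, center + width + 1):
                u = (k / (sigma * n)) - (xj + 0.5)   # continuous distance to node
                grid[k % (sigma * n)] += fj * np.exp(-u * u / (2 * tau * tau))

        spectrum = np.fft.fft(grid)                  # equispaced FFT of gridded data
        print("dominant bin:", np.argmax(np.abs(spectrum[:n])))   # ~3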

  11. Environmental Friendly Coatings and Corrosion Prevention For Flight Hardware Project

    Science.gov (United States)

    Calle, Luz

    2014-01-01

    Identify, test and develop qualification criteria for environmentally friendly corrosion-protective coatings and corrosion preventative compounds (CPCs) for flight hardware and ground support equipment.

  12. Hardware and Software Integration in Project Development of Automated Controller System Using LABVIEW FPGA

    International Nuclear Information System (INIS)

    Mohd Khairulezwan Abd Manan; Mohd Sabri Minhat; Izhar Abu Hussin

    2014-01-01

    The Field-Programmable Gate Array (FPGA) is a semiconductor device that can be programmed after manufacturing. Instead of being restricted to any predetermined hardware function, an FPGA allows the user to program product features and functions, adapt to new standards, and reconfigure hardware for specific applications even after the product has been installed in the field; hence the name field-programmable. This project developed a control system using LabVIEW FPGA. LabVIEW FPGA is easier to work with because it is programmed by dragging and dropping icons, which are then integrated with the hardware inputs and outputs. (author)

  13. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate from the nominal bunch crossing rate of 40 MHz to about 1 kHz at the design LHC luminosity of 10^34 cm^-2 s^-1. After a very successful data-taking run, the LHC is expected to resume in 2015 with much higher instantaneous luminosities, which will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  14. Hiding State in CλaSH Hardware Descriptions

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Baaij, C.P.R.; Kuper, Jan; Kooijman, Matthijs

    Synchronous hardware can be modelled as a mapping from input and state to output and a new state; such mappings are referred to as transition functions. It is natural to use a functional language to implement transition functions. The CλaSH compiler is capable of translating transition functions to ...
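
    The transition-function view is language-agnostic; for consistency with the other sketches in this collection it is shown below in Python rather than CλaSH's Haskell: a function from (state, input) to (new state, output), folded over an input stream the way a clock folds it over time.

        def mac(state, inp):
            """Multiply-accumulate: a canonical synchronous transition function."""
            x, y = inp
            new_state = state + x * y     # next register value
            output = new_state            # output visible this cycle
            return new_state, output

        def simulate(transition, state, inputs):
            """Fold the transition function over a stream, like clocked hardware."""
            outputs = []
            for inp in inputs:
                state, out = transition(state, inp)
                outputs.append(out)
            return outputs

        print(simulate(mac, 0, [(1, 2), (3, 4), (5, 6)]))   # [2, 14, 44]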

  15. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  16. Beyond Open Source Software: Solving Common Library Problems Using the Open Source Hardware Arduino Platform

    Directory of Open Access Journals (Sweden)

    Jonathan Younker

    2013-06-01

    Using open source hardware platforms like the Arduino, libraries have the ability to quickly and inexpensively prototype custom hardware solutions to common library problems. The authors present the Arduino environment: what it is, what it does, and how it was used at the James A. Gibson Library at Brock University to create a production portable barcode-scanning utility for in-house use statistics collection, as well as a prototype for a service desk statistics tabulation program's hardware interface.

  17. Using Innovative Technologies for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, E. M.; Eddleman, D. E.; Reynolds, D. C.; Hardin, N. A.

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As the United States enters into the next space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, rapid manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on NASA's Space Launch System (SLS) upper stage engine, J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator (GG) discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using a workhorse gas generator (WHGG) test fixture at MSFC's East Test Area, the duct was subjected to extreme J-2X hot gas environments during 7 tests for a total of 537 seconds of hot-fire time. The duct underwent extensive post-test evaluation and showed no signs of degradation. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  18. High-precision optical systems with inexpensive hardware: a unified alignment and structural design approach

    Science.gov (United States)

    Winrow, Edward G.; Chavez, Victor H.

    2011-09-01

    High-precision opto-mechanical structures have historically been plagued by high costs for both hardware and the associated alignment and assembly process. This problem is especially true for space applications where only a few production units are produced. A methodology for optical alignment and optical structure design is presented which shifts the mechanism of maintaining precision from tightly toleranced, machined flight hardware to reusable, modular tooling. Using the proposed methodology, optical alignment error sources are reduced by the direct alignment of optics through their surface retroreflections (pips) as seen through a theodolite. Optical alignment adjustments are actualized through motorized, sub-micron precision actuators in 5 degrees of freedom. Optical structure hardware costs are reduced through the use of simple shapes (tubes, plates) and repeated components. This approach produces significantly cheaper hardware and more efficient assembly without sacrificing alignment precision or optical structure stability. The design, alignment plan and assembly of a 4" aperture, carbon fiber composite, Schmidt-Cassegrain concept telescope is presented.

  19. Calculation of optimum control rod operation programme for boiling water reactor

    International Nuclear Information System (INIS)

    Fehr, L.

    1978-01-01

    Control rod operation programmes are calculated based on a three-dimensional boiling water reactor simulation model. The positions of the control rods at various burn-ups are chosen by an optimisation such that the sum of the squared deviations of the power density distribution from an optimum distribution ('Haling' distribution) is minimised. Further constraints are remaining critical and observing the thermal limits for central fuel element melting and critical heat surface loading. As an example, an optimum control rod operation programme for the first cycle of the Lingen nuclear power station is calculated and compared with the programme actually used. (orig.)
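
    The optimisation criterion, minimising the squared deviation of the power density distribution from a Haling-style target subject to constraints, can be mimicked with a toy brute-force search over discrete rod patterns; the two-line core model below is invented and ignores criticality and thermal limits.

        import itertools
        import numpy as np

        target = np.array([1.0, 1.0, 1.0, 1.0])        # normalized Haling-like shape

        def power_shape(pattern):
            """Toy core model: each inserted rod depresses power in its node."""
            p = np.array([1.2, 1.1, 1.0, 0.9]) - 0.3 * np.array(pattern)
            return p / p.mean()                         # keep total power normalized

        # Search all discrete insertion patterns for the least-squares optimum
        best = min(itertools.product([0.0, 0.5, 1.0], repeat=4),
                   key=lambda pat: np.sum((power_shape(pat) - target) ** 2))
        print("best rod pattern:", best)
        print("resulting shape:", power_shape(best).round(3))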

  20. Applicability Problem in Optimum Reinforced Concrete Structures Design

    Directory of Open Access Journals (Sweden)

    Ashara Assedeq

    2016-01-01

    Full Text Available Optimum reinforced concrete structures design is very complex problem, not only considering exactness of calculus but also because of questionable applicability of existing methods in practice. This paper presents the main theoretical mathematical and physical features of the problem formulation as well as the review and analysis of existing methods and solutions considering their exactness and applicability.