WorldWideScience

Sample records for computer components

  1. Computers as components: principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  2. Gamma dose effects valuation on micro computing components

    International Nuclear Information System (INIS)

    Joffre, F.

    1995-01-01

    Robotics in hostile environments raises the problem of the resistance of microcomputing components to the cumulative gamma radiation dose. The current aim is to reach a dose of 3000 grays with industrial components. A methodology and an instrumentation adapted to testing this type of component have been developed. The aim of this work is to present the advantages and disadvantages associated with the use of industrial components in the presence of gamma radiation. After an analysis of the criteria used to justify the technological choices, the different steps that characterize the selection and assessment methodology are explained. The irradiation and measurement facilities now operational are described. Moreover, the supply aspects of the components chosen for the design of an industrialized system are taken into account. This selection and assessment work contributes to the development and design of computers for civil nuclear robotics. (O.M.)

  3. 77 FR 20047 - Certain Computer and Computer Peripheral Devices and Components Thereof and Products Containing...

    Science.gov (United States)

    2012-04-03

    ... INTERNATIONAL TRADE COMMISSION [DN 2889] Certain Computer and Computer Peripheral Devices and... Certain Computer and Computer Peripheral Devices and Components Thereof and Products Containing the Same... importation, and the sale within the United States after importation of certain computer and computer...

  4. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  5. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  6. 77 FR 26041 - Certain Computers and Computer Peripheral Devices and Components Thereof and Products Containing...

    Science.gov (United States)

    2012-05-02

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-841] Certain Computers and Computer Peripheral... after importation of certain computers and computer peripheral devices and components thereof and... industry in the United States exists as required by subsection (a)(2) of section 337. The complainant...

  7. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In analytical studies of cascades, the use of virtual components is a very useful method. For complicated cascades, numerical analysis has to be employed. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restrained at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)

  8. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
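
    As an illustration of the technique this record describes, the sketch below applies spatial ICA to a dynamic CT series using scikit-learn's FastICA. It is not the authors' code; the array `frames`, the function name and the choice of three components are assumptions made for the example.

      # Illustrative sketch (not the authors' code): spatial ICA on a dynamic
      # contrast-enhanced CT series with scikit-learn's FastICA.  `frames` is a
      # hypothetical (n_timepoints, ny, nx) array of co-registered slices.
      import numpy as np
      from sklearn.decomposition import FastICA

      def spatial_component_maps(frames, n_components=3):
          n_t, ny, nx = frames.shape
          # Rows = voxels, columns = time points, so the independent sources
          # recovered by ICA are spatial maps and the mixing matrix holds the
          # corresponding enhancement time courses.
          X = frames.reshape(n_t, ny * nx).T
          ica = FastICA(n_components=n_components, random_state=0)
          sources = ica.fit_transform(X)          # (n_voxels, n_components)
          maps = sources.T.reshape(n_components, ny, nx)
          time_courses = ica.mixing_              # (n_timepoints, n_components)
          return maps, time_courses

    Thresholding the map whose time course resembles tumour enhancement would then give a rough tumour outline, in the spirit of the segmentation mentioned in the note.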

  9. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig.

    Science.gov (United States)

    Morison, Zachary; Mehra, Akshay; Olsen, Michael; Donnelly, Michael; Schemitsch, Emil

    2013-11-01

    The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using the imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  10. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. The program uses the Lorentzian curve as the theoretical line shape of NMR spectra. The coefficients of the Lorentzians are determined by the method of least squares. Systematic errors such as baseline and phase distortion are compensated, and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. The program was applied to determining the abundance of 13C and the degree of saponification of PVA.
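
    A minimal sketch of the kind of least-squares Lorentzian fitting described above, using scipy.optimize.curve_fit; the two-line model, the helper names and the use of peak areas as abundance measures are assumptions for illustration, not the original program.

      # Illustrative sketch (not the original program): least-squares fit of two
      # overlapping Lorentzian lines plus a constant baseline to an NMR spectrum.
      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(x, amp, x0, width):
          # Single Lorentzian line: amp * w^2 / ((x - x0)^2 + w^2)
          return amp * width**2 / ((x - x0)**2 + width**2)

      def two_lines(x, a1, c1, w1, a2, c2, w2, baseline):
          return lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2) + baseline

      def fit_overlapping(x, y, p0):
          # x: chemical-shift axis, y: (moving-averaged) intensities,
          # p0: rough initial guesses for amplitudes, centres, widths, baseline.
          popt, _ = curve_fit(two_lines, x, y, p0=p0)
          # Area of each Lorentzian is pi * amplitude * width; areas are
          # proportional to the abundance of each component.
          areas = [np.pi * popt[i] * popt[i + 2] for i in (0, 3)]
          return popt, areas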

  11. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig

    Directory of Open Access Journals (Sweden)

    Zachary Morison

    2013-01-01

    Background: The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Materials and Methods: Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. Results: All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Conclusions: Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  12. Computation of X-ray powder diffractograms of cement components ...

    Indian Academy of Sciences (India)

    Computation of X-ray powder diffractograms of cement components and its application to phase analysis and hydration performance of OPC cement. Rohan Jadhav; N C Debnath. Volume 34 Issue 5 August 2011 pp 1137- ... Keywords. Portland cement; X-ray diffraction; crystal structure; characterization; Rietveld method.

  13. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on strain energy density (SED) theory, a fatigue crack growth model is developed to predict the fatigue crack growth lifetime for single-mode or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable-amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is convenient for engineering applications since it does not require any additional determination of fatigue parameters (those would need to be determined separately for the fatigue crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods, and both are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]

  14. Fuzzy cluster quantitative computations of component mass transfer in rocks or minerals

    International Nuclear Information System (INIS)

    Liu Dezheng

    2000-01-01

    The author advances a new quantitative computation method for component mass transfer, based on the closure property of the mass percentages of components in rocks or minerals. Using fuzzy dynamic cluster analysis, calculating the restored closure difference and determining the type of difference, and assisted by relevant diagnostic parameters, the method gradually screens out the true constant component. The true mass percentages and mass transfer quantities of the components of metasomatic rocks or minerals are then calculated by applying the true constant component fixed coefficient. This method is called the true constant component fixed method (TCF method).

  15. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty that has long puzzled engineers and designers. At present, several computational design-by-analysis methods for calculating and categorizing the stress field of pressure vessel components have been developed and applied, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method and the GLOSS R-Node method. Moreover, the ASME code also gives an inelastic design-by-analysis method for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results obtained with the different calculation and analysis methods mentioned above. This is the main reason limiting wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by analyzing the various failure mechanisms of pressure vessel structures directly, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational design-by-analysis methods are summarized to aid selection of the proper computational method when designing pressure vessel components by analysis. (authors)

  16. 77 FR 27078 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-05-08

    ... Phones and Tablet Computers, and Components Thereof; Notice of Receipt of Complaint; Solicitation of... entitled Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof... the United States after importation of certain electronic devices, including mobile phones and tablet...

  17. 77 FR 34063 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-06-08

    ... Phones and Tablet Computers, and Components Thereof Institution of Investigation AGENCY: U.S... the United States after importation of certain electronic devices, including mobile phones and tablet... mobile phones and tablet computers, and components thereof that infringe one or more of claims 1-3 and 5...

  18. Development of high performance scientific components for interoperability of computing packages

    Energy Technology Data Exchange (ETDEWEB)

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard to develop and time consuming to implement; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  19. Computed tomography (CT) as a nondestructive test method used for composite helicopter components

    Science.gov (United States)

    Oster, Reinhold

    1991-09-01

    The first components of primary helicopter structures to be made of glass fiber reinforced plastics were the main and tail rotor blades of the Bo105 and BK 117 helicopters. These blades are now successfully produced in series. New developments in rotor components, e.g., the rotor blade technology of the Bo108 and PAH2 programs, make use of very complex fiber reinforced structures to achieve simplicity and strength. Computed tomography was found to be an outstanding nondestructive test method for examining the internal structure of components. A CT scanner generates x-ray attenuation measurements which are used to produce computer-reconstructed images of any desired part of an object. The system images a range of flaws in composites in a number of views and planes. Several CT investigations and their results are reported, taking composite helicopter components as an example.

  20. 75 FR 8399 - In the Matter of Certain Mobile Communications and Computer Devices and Components Thereof...

    Science.gov (United States)

    2010-02-24

    ... States after importation of certain mobile communications and computer devices and components thereof by... importation of certain mobile communications or computer devices or components thereof that infringe one or... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-704] In the Matter of Certain Mobile...

  1. Dynamic leaching test of personal computer components.

    Science.gov (United States)

    Li, Yadong; Richardson, Jay B; Niu, Xiaojun; Jackson, Ollie J; Laster, Jeremy D; Walker, Aaron K

    2009-11-15

    A dynamic leaching test (DLT) was developed and used to evaluate the leaching of toxic substances from electronic waste in the environment. The major components in personal computers (PCs), including motherboards, hard disc drives, floppy disc drives, and compact disc drives, were tested. The tests lasted 2 years for motherboards and 1.5 years for the disc drives. The extraction fluids for the standard toxicity characteristic leaching procedure (TCLP) and synthetic precipitation leaching procedure (SPLP) were used as the DLT leaching solutions. A total of 18 elements, including Ag, Al, As, Au, Ba, Be, Cd, Cr, Cu, Fe, Ga, Ni, Pd, Pb, Sb, Se, Sn, and Zn, were analyzed in the DLT leachates. Only Al, Cu, Fe, Ni, Pb, and Zn were commonly found in the DLT leachates of the PC components. Their leaching levels were much higher in TCLP extraction fluid than in SPLP extraction fluid. The toxic heavy metal Pb was found to leach continuously out of the components over the entire test periods. The cumulative amount of Pb leached from the motherboards in TCLP extraction fluid reached 2.0 g per motherboard over the 2-year test period, and the amounts in SPLP extraction fluid were 75-90% less. The leaching rates and levels of Pb were largely affected by the content of galvanized steel in the PC components: the higher the steel content, the lower the Pb leaching rate. The findings suggest that obsolete PCs disposed of in landfills or discarded in the environment continuously release Pb for years when subjected to landfill leachate or rain.

  2. Development and application of computer codes for multidimensional thermalhydraulic analyses of nuclear reactor components

    International Nuclear Information System (INIS)

    Carver, M.B.

    1983-01-01

    Components of reactor systems and related equipment are identified in which multidimensional computational thermal hydraulics can be used to advantage to assess and improve design. Models of single- and two-phase flow are reviewed, and the governing equations for multidimensional analysis are discussed. Suitable computational algorithms are introduced, and sample results from the application of particular multidimensional computer codes are given

  3. Femoral Component External Rotation Affects Knee Biomechanics: A Computational Model of Posterior-stabilized TKA.

    Science.gov (United States)

    Kia, Mohammad; Wright, Timothy M; Cross, Michael B; Mayman, David J; Pearle, Andrew D; Sculco, Peter K; Westrich, Geoffrey H; Imhauser, Carl W

    2018-01-01

    The correct amount of external rotation of the femoral component during TKA is controversial because the resulting changes in biomechanical knee function associated with varying degrees of femoral component rotation are not well understood. We addressed this question using a computational model, which allowed us to isolate the biomechanical impact of geometric factors including bony shapes, location of ligament insertions, and implant size across three different knees after posterior-stabilized (PS) TKA. Using a computational model of the tibiofemoral joint, we asked: (1) Does external rotation unload the medial collateral ligament (MCL) and what is the effect on lateral collateral ligament tension? (2) How does external rotation alter tibiofemoral contact loads and kinematics? (3) Does 3° external rotation relative to the posterior condylar axis align the component to the surgical transepicondylar axis (sTEA) and what anatomic factors of the femoral condyle explain variations in maximum MCL tension among knees? We incorporated a PS TKA into a previously developed computational knee model applied to three neutrally aligned, nonarthritic, male cadaveric knees. The computational knee model was previously shown to corroborate coupled motions and ligament loading patterns of the native knee through a range of flexion. Implant geometries were virtually installed using hip-to-ankle CT scans through measured resection and anterior referencing surgical techniques. Collateral ligament properties were standardized across each knee model by defining stiffness and slack lengths based on the healthy population. The femoral component was externally rotated from 0° to 9° relative to the posterior condylar axis in 3° increments. At each increment, the knee was flexed under 500 N compression from 0° to 90° simulating an intraoperative examination. The computational model predicted collateral ligament forces, compartmental contact forces, and tibiofemoral internal/external and

  4. Separation of electron ion ring components (computational simulation and experimental results)

    International Nuclear Information System (INIS)

    Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.

    1978-01-01

    The problems of the available polarization of electron-ion rings during acceleration and of the separation of the ring components at the final stage of acceleration are studied. The results of computational simulation using the macroparticle method and of experiments on ring acceleration and separation are given. A comparison of the calculation results with experiment is presented.

  5. FAILPROB - A Computer Program to Compute the Probability of Failure of a Brittle Component

    International Nuclear Information System (INIS)

    WELLMAN, GERALD W.

    2002-01-01

    FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material, along with the stress field resulting from a finite element analysis, to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast-fracture prediction (but not the coding) from the NASA CARES/Life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/Life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities, such as glass-to-metal seals and ceramic-to-metal braze joints, is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation.
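
    For readers unfamiliar with the weakest-link Weibull calculation this record refers to, the sketch below shows the generic two-parameter form accumulated over element stresses and volumes. It is not FAILPROB itself; the function name, arguments and sample numbers are placeholders.

      # Illustrative sketch (not FAILPROB): weakest-link Weibull failure
      # probability accumulated over finite elements.  sigma and volume are
      # per-element arrays; m and sigma_0 are the Weibull modulus and scale.
      import numpy as np

      def weibull_failure_probability(sigma, volume, m, sigma_0, v_0=1.0):
          sigma = np.clip(sigma, 0.0, None)   # only tensile stress contributes
          # Risk-of-rupture integral approximated as a sum over elements.
          risk = np.sum((volume / v_0) * (sigma / sigma_0) ** m)
          return 1.0 - np.exp(-risk)

      # Example with made-up numbers (MPa, m^3):
      # p_f = weibull_failure_probability(np.array([80.0, 120.0, 60.0]),
      #                                   np.array([1e-6, 1e-6, 1e-6]),
      #                                   m=10.0, sigma_0=150.0)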

  6. A computer-based feedback only intervention with and without a moderation skills component

    OpenAIRE

    Weaver, Cameron C.; Leffingwell, Thad R.; Lombardi, Nathaniel J.; Claborn, Kasey R.; Miller, Mary E.; Martens, Matthew P.

    2013-01-01

    Research on the efficacy of computer-delivered feedback-only interventions (FOIs) for college alcohol misuse has been mixed. Limitations to these FOIs include participant engagement and variation in the use of a moderation skills component. The current investigation sought to address these limitations using a novel computer-delivered FOI, the Drinkers Assessment and Feedback Tool for College Students (DrAFT-CS). Heavy drinking college students (N = 176) were randomly assigned to DrAFT-CS, DrA...

  7. 76 FR 41523 - In the Matter of Certain Mobile Communications and Computer Devices and Components Thereof...

    Science.gov (United States)

    2011-07-14

    ... in its entirety Inv. No. 337-TA-704, Certain Mobile Communications and Computer Devices and... importation of certain mobile communications and computer devices and components thereof by reason of... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-704] In the Matter of Certain Mobile...

  8. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high-temperature, high-pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady-state creep response and the total creep life of a material are related to the operational component temperature through, respectively, exponential and inverse-exponential relationships. Minor increases in the component temperature can thus have serious consequences for the creep life and dimensional stability of a plant component. In high-temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is the growth of a thermally insulating oxide film on the steam-side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam-side oxide film growth. Furthermore, fabrication of high-temperature power plant components involves extensive use of welding as the fabrication process of choice, so issues related to the creep life of weldments have to be seriously addressed for safe and continued operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones with distinct meso-scale and micro-scale morphology of the phases and even chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean-field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)
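
    To make the exponential temperature dependence mentioned above concrete, here is a minimal sketch combining an Arrhenius-type rupture-life law with Robinson's life-fraction rule. This is a generic textbook formulation, not the authors' model, and the constants are placeholders rather than material data.

      # Illustrative sketch (not the authors' model): an Arrhenius-type rupture
      # life law plus Robinson's life-fraction rule, showing how a slow metal
      # temperature rise (e.g. from steam-side oxide growth) consumes creep life.
      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def rupture_time_h(temp_K, stress_MPa, A=2.6e-5, n=-5.0, Q=3.0e5):
          # t_r = A * sigma^n * exp(Q / (R T)): life falls steeply as T rises.
          # A, n and Q are placeholder constants, not real material data.
          return A * stress_MPa**n * np.exp(Q / (R * temp_K))

      def life_fraction_consumed(temps_K, stress_MPa, dt_h):
          # Robinson's rule: sum over intervals of (time at T_i) / (life at T_i).
          return sum(dt_h / rupture_time_h(T, stress_MPa) for T in temps_K)

    Feeding a sequence of slowly rising yearly metal temperatures into life_fraction_consumed illustrates how even a few degrees of oxide-driven temperature increase can markedly shorten the remaining life.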

  9. Computation of the mechanical behaviour of nuclear reactor components

    International Nuclear Information System (INIS)

    Brosi, S.; Niffenegger, M.; Roesel, R.; Reichlin, K.; Duijvestijn, A.

    1994-01-01

    A possible limiting factor of the service life of a reactor is the mechanical load-carrying margin, i.e. the excess of the load-carrying capacity over the actual loading, of the central, heavy-section components. This margin decreases during service but, for safety reasons, may not fall below a critical value. Therefore, it is essential to check and control continuously the factors which cause the decrease. The reasons for the decrease are shown at length and in detail in an example relating to a test that brought close to failure a pipe emanating from a reactor pressure vessel, weakened by an artificial crack and subjected to a water-hammer loading. The latter was caused by a sudden valve closure assumed to follow a break far downstream. The computational and experimental difficulties associated with the simultaneous occurrence of an extreme weakening and an extreme loading in an already rather complicated geometry are explained. It is concluded that the available computational tools and present know-how are sufficient to simulate the behaviour under such conditions as would prevail in normal service, and even to analyse departures from them, as long as not all difficulties arise simultaneously. (author) figs., tabs., refs.

  10. Experimental investigation of surface determination process on multi-material components for dimensional computed tomography

    DEFF Research Database (Denmark)

    Borges de Oliveira, Fabrício; Stolfi, Alessandro; Bartscher, Markus

    2016-01-01

    The possibility of measuring multi-material components, while assessing inner and outer features simultaneously makes X-ray computed tomography (CT) the latest evolution in the field of coordinate measurement systems (CMSs). However, the difficulty in selecting suitable scanning parameters and su...

  11. GRAPHIC COMPETENCE AS A COMPONENT OF TRAINING FUTURE ENGINEERING TEACHERS OF COMPUTER PROFILE

    Directory of Open Access Journals (Sweden)

    Yuliya Kozak

    2016-06-01

    The article analyses the system of professional training of future engineering teachers of computer profile at pedagogical universities, including its graphic content. It is established that modernization of this system of training engineering teachers of computer profile is extremely important because of the increasing demands for general graphics education, which, given mass communication, the need to compress significant amounts of information, and the opportunities provided by new information technologies, is becoming as important as a second literacy. The article reveals the essential characteristics of the concept of graphic competence as an important component of the modernization of the education system, and attempts to find promising directions for further work on the effective formation of the graphic competence of engineering teachers of computer profile.

  12. 75 FR 8400 - In the Matter of Certain Notebook Computer Products and Components Thereof; Notice of Investigation

    Science.gov (United States)

    2010-02-24

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-705] In the Matter of Certain Notebook Computer... United States after importation of certain notebook computer products and components thereof by reason of... an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  13. A computer-based feedback only intervention with and without a moderation skills component.

    Science.gov (United States)

    Weaver, Cameron C; Leffingwell, Thad R; Lombardi, Nathaniel J; Claborn, Kasey R; Miller, Mary E; Martens, Matthew P

    2014-01-01

    Research on the efficacy of computer-delivered feedback-only interventions (FOIs) for college alcohol misuse has been mixed. Limitations to these FOIs include participant engagement and variation in the use of a moderation skills component. The current investigation sought to address these limitations using a novel computer-delivered FOI, the Drinkers Assessment and Feedback Tool for College Students (DrAFT-CS). Heavy drinking college students (N = 176) were randomly assigned to the DrAFT-CS, DrAFT-CS plus moderation skills (DrAFT-CS+), moderation skills only (MSO), or assessment only (AO) group, and were assessed at 1-month follow-up (N = 157). Participants in the DrAFT-CS and DrAFT-CS+ groups reported significantly lower estimated blood alcohol concentrations (eBACs) on their typical heaviest drinking day than participants in the AO group. The data also supported the incorporation of a moderation skills component within FOIs, such that participants in the DrAFT-CS+ group reported significantly fewer drinks per week and drinks per heaviest drinking occasion than participants in the AO group.

  14. A computer-controlled electronic system for the ultrasonic NDT of components for nuclear power stations

    International Nuclear Information System (INIS)

    Rehrmann, M.; Harbecke, D.

    1987-01-01

    The paper describes an automatic ultrasonic testing system combined with a computer-controlled electronics system, called IMPULS I, for the non-destructive testing of components of nuclear reactors. The system can be used both for in-service inspection and for inspection during the manufacturing process. IMPULS I has more functions and fewer components than conventional ultrasonic systems, gives readily reproducible test results, and is easy to operate. (U.K.)

  15. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    Energy Technology Data Exchange (ETDEWEB)

    Simunovic, S.; Aramayo, G.A.; Zacharia, T. [Oak Ridge National Lab., TN (United States); Toridis, T.G. [George Washington Univ., Washington, DC (United States); Bandak, F.; Ragland, C.L. [Dept. of Transportation, Washington, DC (United States)

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  16. 78 FR 63492 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2013-10-24

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-847] Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof; Notice of Request for Statements on the Public Interest AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is...

  17. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Tamaki, Hitoshi [Department of Safety Research Technical Support, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kanai, Shigeru [Fuji Research Institute Corporation, Tokyo (Japan)

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be a grace time between an initiating event and a resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. First, the basic methods of the component Monte Carlo simulation used to obtain an accident occurrence frequency are introduced, and then the basic performance, such as precision, convergence and parallelization of the calculation, is shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red-oil explosion in a German reprocessing plant model is simulated, showing that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show further aspects of its performance, and a proposal is made for a new input-data format adapted to the component Monte Carlo simulation. The present paper describes the calculational method, performance, applicability to a real scale, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method is shown, which avoids complex and laborious calculation, to obtain a strict solution of the accident occurrence frequency for comparison with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)

  18. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Tamaki, Hitoshi; Kanai, Shigeru

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be a grace time between an initiating event and a resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. First, the basic methods of the component Monte Carlo simulation used to obtain an accident occurrence frequency are introduced, and then the basic performance, such as precision, convergence and parallelization of the calculation, is shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red-oil explosion in a German reprocessing plant model is simulated, showing that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show further aspects of its performance, and a proposal is made for a new input-data format adapted to the component Monte Carlo simulation. The present paper describes the calculational method, performance, applicability to a real scale, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method is shown, which avoids complex and laborious calculation, to obtain a strict solution of the accident occurrence frequency for comparison with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)
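
    The component Monte Carlo idea with a grace time, as described in both TITAN records above, can be sketched as follows. This is not the TITAN code; the rates, the recovery-time distribution and the parameter names are invented placeholders.

      # Illustrative sketch (not TITAN): component-level Monte Carlo estimate of
      # accident frequency when operators have a grace time to recover after an
      # initiating event.  All rates and times are made-up placeholders.
      import random

      def accident_frequency_per_year(n_trials=200_000,
                                      mission_time_h=8760.0,    # one year
                                      init_rate_per_h=1e-4,     # initiating events
                                      mean_recovery_h=2.0,
                                      grace_time_h=8.0):
          accidents = 0
          for _ in range(n_trials):
              t_init = random.expovariate(init_rate_per_h)   # initiating event time
              if t_init > mission_time_h:
                  continue                                   # no initiator this year
              t_recover = random.expovariate(1.0 / mean_recovery_h)
              if t_recover > grace_time_h:                   # recovery too slow
                  accidents += 1
          return accidents / n_trials                        # events per year

      # print(accident_frequency_per_year())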

  19. Gamma dose effects valuation on micro computing components; Evaluation des effets de la dose gamma sur les composants micro-informatiques

    Energy Technology Data Exchange (ETDEWEB)

    Joffre, F

    1996-12-31

    Robotics in hostile environments raises the problem of the resistance of microcomputing components to the cumulative gamma radiation dose. The current aim is to reach a dose of 3000 grays with industrial components. A methodology and an instrumentation adapted to testing this type of component have been developed. The aim of this work is to present the advantages and disadvantages associated with the use of industrial components in the presence of gamma radiation. After an analysis of the criteria used to justify the technological choices, the different steps that characterize the selection and assessment methodology are explained. The irradiation and measurement facilities now operational are described. Moreover, the supply aspects of the components chosen for the design of an industrialized system are taken into account. This selection and assessment work contributes to the development and design of computers for civil nuclear robotics. (O.M.). 7 refs.

  20. 78 FR 75942 - Certain Mobile Phones and Tablet Computers, and Components Thereof; Commission Determination To...

    Science.gov (United States)

    2013-12-13

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-847] Certain Mobile Phones and Tablet Computers, and Components Thereof; Commission Determination To Review in Part a Final Initial Determination... Qualcomm Magellan and Odyssey transceiver chips have become a de facto standard in the mobile devices...

  1. Computer Simulation of Robotic Device Components in 3D Printer Manufacturing

    Directory of Open Access Journals (Sweden)

    M. A. Kiselev

    2016-01-01

    The paper considers the relevant problem of computer simulation of robotic device components manufactured on a 3D printer, and highlights the problem of computer simulation of such components based on cognitive programming technology. The subject is urgent because computer simulation of the force-torque and accuracy characteristics of robot components, in terms of their manufacturing properties and the conditions of production from polymeric and metallic materials, is of paramount importance for programming and manufacturing on 3D printers. Two types of additive manufacturing technologies were used: 1. FDM (fused deposition modeling) - layered growth of products from molten plastic strands; 2. SLM (selective laser melting) - selective laser sintering of metal powders. These, in turn, create: conditions for reducing the use of expensive equipment; reduced weight and increased strength through optimization of lattice structures when using bionic design; a capability to implement mathematical modelling of individual components of robotic and other devices in terms of the appropriate characteristics; and a 3D printing capability to create unique items that cannot be made by other known methods. The aim of the paper was to confirm the possibility of ensuring the strength and accuracy characteristics of cases when printing from polymeric and metallic materials on a 3D printer. The investigation emphasis is on mathematical modelling based on cognitive programming technology using additive technologies, since it is generally impossible to make the optimized structures obtained on modern CNC machines. The latter allows a program code to be created that is clear to other developers without extra cost or additional time for development, adaptation and implementation. Year by year, Russian companies increasingly use 3D-print systems in mechanical engineering, the aerospace industry, and for scientific purposes. Machines for the additive

  2. Can a dual-energy computed tomography predict unsuitable stone components for extracorporeal shock wave lithotripsy?

    Science.gov (United States)

    Ahn, Sung Hoon; Oh, Tae Hoon; Seo, Ill Young

    2015-09-01

    To assess the potential of dual-energy computed tomography (DECT) to identify urinary stone components, particularly uric acid and calcium oxalate monohydrate, which are unsuitable for extracorporeal shock wave lithotripsy (ESWL). This clinical study included 246 patients who underwent removal of urinary stones and an analysis of stone components between November 2009 and August 2013. All patients received preoperative DECT using two energy values (80 kVp and 140 kVp). Hounsfield units (HU) were measured and matched to the stone components. Significant differences in HU values were observed between uric acid and non-uric acid stones at both the 80 and 140 kVp energy values (p<0.001). DECT improved the characterization of urinary stone components and was a useful method for identifying uric acid and calcium oxalate monohydrate stones, which are unsuitable for ESWL.
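
    A minimal sketch of the dual-energy idea behind this record: comparing attenuation at 80 kVp and 140 kVp to flag stone types. The rule and thresholds below are hypothetical illustrations, not values from the study.

      # Illustrative sketch (not the study's algorithm): a simple dual-energy
      # rule using the 80/140 kVp attenuation ratio.  Thresholds are hypothetical.
      def classify_stone(hu_80kvp: float, hu_140kvp: float) -> str:
          ratio = hu_80kvp / hu_140kvp
          if ratio < 1.1:
              return "likely uric acid (poor ESWL candidate)"
          if ratio > 1.4 and hu_140kvp > 800:
              return "likely calcium oxalate monohydrate (ESWL-resistant)"
          return "other / indeterminate"

      # Example: classify_stone(420.0, 400.0)
      #          -> "likely uric acid (poor ESWL candidate)"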

  3. A framework for the computer-aided planning and optimisation of manufacturing processes for components with functional graded properties

    Science.gov (United States)

    Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.

    2014-05-01

    In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model with a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, and the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.

  4. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of a communicability methodology for graphics and animation components in interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

  5. Decreasing Transition Times in Elementary School Classrooms: Using Computer-Assisted Instruction to Automate Intervention Components

    Science.gov (United States)

    Hine, Jeffrey F.; Ardoin, Scott P.; Foster, Tori E.

    2015-01-01

    Research suggests that students spend a substantial amount of time transitioning between classroom activities, which may reduce time spent academically engaged. This study used an ABAB design to evaluate the effects of a computer-assisted intervention that automated intervention components previously shown to decrease transition times. We examined…

  6. Computational/experimental studies of isolated, single component droplet combustion

    Science.gov (United States)

    Dryer, Frederick L.

    1993-01-01

    Isolated droplet combustion processes have been the subject of extensive experimental and theoretical investigations for nearly 40 years. The gross features of droplet burning are qualitatively embodied by simple theories and are relatively well understood. However, there remain significant aspects of droplet burning, particularly its dynamics, for which additional basic knowledge is needed for thorough interpretations and quantitative explanations of transient phenomena. Spherically symmetric droplet combustion, which can only be approximated under conditions of both low Reynolds and low Grashof numbers, represents the simplest geometrical configuration in which to study the coupled chemical/transport processes inherent in non-premixed flames. The research summarized here concerns recent results on isolated, single component droplet combustion under microgravity conditions, a program pursued jointly with F.A. Williams of the University of California, San Diego. The overall program involves developing and applying experimental methods to study the burning of isolated, single component droplets in various atmospheres, primarily at atmospheric pressure and below, in both drop towers and aboard space-based platforms such as the Space Shuttle or Space Station. Both computational methods and asymptotic methods, the latter pursued mainly at UCSD, are used in developing the experimental test matrix, in analyzing results, and in extending theoretical understanding. Methanol and the normal alkanes n-heptane and n-decane have been selected as test fuels to study time-dependent droplet burning phenomena. The following sections summarize the Princeton efforts on this program, describe work in progress, and briefly delineate future research directions.

  7. Computer software program for monitoring the availability of systems and components of electric power generating systems

    International Nuclear Information System (INIS)

    Petersen, T.A.; Hilsmeier, T.A.; Kapinus, D.M.

    1994-01-01

    As the availability of electric power generating station systems and components becomes more and more important from a financial, personnel safety, and regulatory requirements standpoint, it is evident that a comprehensive, yet simple and user-friendly program for system and component tracking and monitoring is needed to assist in effectively managing the large volume of systems and components with their large numbers of associated maintenance/availability records. A user-friendly computer software program for system and component availability monitoring has been developed that calculates, displays and monitors selected component and system availabilities. This is a Windows™-based (graphical user interface) program that utilizes a system flow diagram for the data input screen, which also provides a visual representation of availability values and limits for the individual components and associated systems. The program can be customized to the user's plant-specific system and component selections and configurations. As discussed herein, this software program is well suited for availability monitoring and ultimately provides valuable information for improving plant performance and reducing operating costs.
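
    As a sketch of the availability bookkeeping such a program performs, the snippet below computes component availability from outage records and checks it against a limit; the record layout, the names and the 0.95 limit are assumptions for illustration, not features of the described software.

      # Illustrative sketch (not the described program): component availability
      # from outage records, with a simple limit check.
      from dataclasses import dataclass

      @dataclass
      class Outage:
          component: str
          hours_down: float

      def availability(period_hours, outages, component):
          down = sum(o.hours_down for o in outages if o.component == component)
          return max(0.0, (period_hours - down) / period_hours)

      def check_limit(period_hours, outages, component, limit=0.95):
          a = availability(period_hours, outages, component)
          status = "OK" if a >= limit else "BELOW LIMIT"
          return f"{component}: availability {a:.3f} ({status})"

      # Example: check_limit(720.0, [Outage("EDG-A", 36.0)], "EDG-A")
      #          -> "EDG-A: availability 0.950 (OK)"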

  8. EXAFS Phase Retrieval Solution Tracking for Complex Multi-Component System: Synthesized Topological Inverse Computation

    International Nuclear Information System (INIS)

    Lee, Jay Min; Yang, Dong-Seok; Bunker, Grant B

    2013-01-01

    Using the FEFF kernel A(k,r), we describe the inverse computation from χ(k) data to the g(r) solution in terms of a singularity regularization method based on a complete Bayesian statistics process. In this work, we topologically decompose the system-matched invariant projection operators into two distinct types, (A⁺AA⁺A) and (AA⁺AA⁺), and achieve Synthesized Topological Inversion Computation (STIC) by employing a 12-operator closed-loop emulator of the symplectic transformation. This leads to a numerically self-consistent solution as the optimal near-singular regularization parameters are sought, dramatically suppressing the instability problems connected with finite-precision arithmetic in ill-posed systems. By statistically correlating a pair of measured data, it was feasible to compute an optimal EXAFS phase retrieval solution expressed in terms of the complex-valued χ(k), and this approach was successfully used to determine the optimal g(r) for a complex multi-component system.

  9. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    Science.gov (United States)

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245

  10. Teaching Computer Security with a Hands-On Component

    OpenAIRE

    Murthy , Narayan

    2011-01-01

    Part 2: WISE 7; International audience; To address national needs for computer security education, many universities have incorporated computer and security courses into their undergraduate and graduate curricula. Our department has introduced computer security courses at both the undergraduate and the graduate level. This paper describes our approach, our experiences, and lessons learned in teaching a Computer Security Overview course. There are two key elements in the course: Studying comput...

  11. Bridge between control science and technology. Volume 5 Manufacturing man-machine systems, computers, components, traffic control, space applications

    Energy Technology Data Exchange (ETDEWEB)

    Rembold, U; Kempf, K G; Towill, D R; Johannsen, G; Paul, M

    1985-01-01

    Among the topics discussed are: robotics; CAD/CAM applications; and man-machine systems. Consideration is also given to: tools and software for system design and integration; communication systems for real-time computer control; fail-safe design of real-time computer systems; and microcomputer-based control systems. Additional topics discussed include: programmable and intelligent components and instruments in automatic control; transportation systems; and space applications of automatic control systems.

  12. Knowledge-based geographic information systems on the Macintosh computer: a component of the GypsES project

    Science.gov (United States)

    Gregory Elmes; Thomas Millette; Charles B. Yuill

    1991-01-01

    GypsES, a decision-support and expert system for the management of Gypsy Moth addresses five related research problems in a modular, computer-based project. The modules are hazard rating, monitoring, prediction, treatment decision and treatment implementation. One common component is a geographic information system designed to function intelligently. We refer to this...

  13. Incompressible viscous flow computations for the pump components and the artificial heart

    Science.gov (United States)

    Kiris, Cetin

    1992-01-01

    A finite-difference, three-dimensional incompressible Navier-Stokes formulation is used to calculate the flow through turbopump components. The solution method is based on the pseudo-compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed with the current algorithm. Here, the equations are solved in steadily rotating reference frames using the steady-state formulation in order to simulate the flow through a turbopump inducer. Eddy viscosity is computed using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements and good agreement is found between the two.

  14. Computer-aided stress analysis system for nuclear plant primary components

    International Nuclear Information System (INIS)

    Murai, Tsutomu; Tokumaru, Yoshio; Yamazaki, Junko.

    1980-06-01

    Generally, a vast amount of calculation is needed to prepare the stress analysis reports for nuclear plant primary components. In Japan, especially, stress analysis reports must be prepared for each plant. At Mitsubishi Heavy Industries, Ltd., we have been making great efforts to rationalize the analysis process for about ten years. As a result of this rationalization, a computer-aided stress analysis system using a graphic display, graphic tablet, data file, etc. has been completed that requires only minimal manual work. In addition, we developed a fracture safety analysis system, and we plan to develop an input generator system for 3-dimensional FEM analysis via graphics terminals in the near future. We expect that when the above-mentioned input generator system is completed, it will be possible to solve any problem case almost instantly. (author)

  15. Surgical planning of total hip arthroplasty: accuracy of computer-assisted EndoMap software in predicting component size

    International Nuclear Information System (INIS)

    Davila, Jesse A.; Kransdorf, Mark J.; Duffy, Gavan P.

    2006-01-01

    The purpose of our study was to assess the accuracy of computer-assisted templating in the surgical planning of patients undergoing total hip arthroplasty, utilizing EndoMap software (Siemens AG, Medical Solutions, Erlangen, Germany). EndoMap software is an electronic program that uses DICOM images to analyze standard anteroposterior radiographs for determination of optimal prosthesis component size. We retrospectively reviewed the preoperative radiographs of 36 patients undergoing uncomplicated primary total hip arthroplasty, utilizing EndoMap software, Version VA20. DICOM anteroposterior radiographs were analyzed using standard manufacturer-supplied electronic templates to determine acetabular and femoral component sizes. No additional clinical information was reviewed. Acetabular and femoral component sizes were assessed by an orthopedic surgeon and two radiologists. The mean estimated component size was compared with the component size documented in operative reports. The mean estimated acetabular component size was 53 mm (range 48-60 mm), 1 mm larger than the mean implanted size of 52 mm (range 48-62 mm). Thirty-one of 36 acetabular component sizes (86%) were accurate within one size. The mean calculated femoral component size was 4 (range 2-7), 1 size smaller than the actual mean component size of 5 (range 2-9). Twenty-six of 36 femoral component sizes (72%) were accurate within one size, and accurate within two sizes in all but four cases (94%). EndoMap software predicted femoral component size well, with 72% within one component size of that used, and 94% within two sizes. Acetabular component size was predicted slightly better, with 86% within one component size and 94% within two component sizes. (orig.)

  16. Software Components and Formal Methods from a Computational Viewpoint

    OpenAIRE

    Lambertz, Christian

    2012-01-01

    Software components and the methodology of component-based development offer a promising approach to master the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic a...

  17. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  18. Mitigating component performance variation

    Science.gov (United States)

    Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.

    2018-01-09

    Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component and storing characterization data in a database and allocating non-uniform power to each similar component based at least in part on the characterization data in the database to substantially equalize performance of the components.

  19. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in the future in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  20. Use of EPICS and Python technology for the development of a computational toolkit for high heat flux testing of plasma facing components

    Energy Technology Data Exchange (ETDEWEB)

    Sugandhi, Ritesh, E-mail: ritesh@ipr.res.in; Swamy, Rajamannar, E-mail: rajamannar@ipr.res.in; Khirwadkar, Samir, E-mail: sameer@ipr.res.in

    2016-11-15

    Highlights: • An integrated approach to software development for computational processing and experimental control. • Use of open source, cross platform, robust and advanced tools for computational code development. • Prediction of optimized process parameters for critical heat flux model. • Virtual experimentation for high heat flux testing of plasma facing components. - Abstract: The high heat flux testing and characterization of the divertor and first wall components are a challenging engineering problem of a tokamak. These components are subject to steady state and transient heat load of high magnitude. Therefore, the accurate prediction and control of the cooling parameters is crucial to prevent burnout. The prediction of the cooling parameters is based on the numerical solution of the critical heat flux (CHF) model. In a test facility for high heat flux testing of plasma facing components (PFC), the integration of computations and experimental control is an essential requirement. Experimental physics and industrial control system (EPICS) provides powerful tools for steering controls, data simulation, hardware interfacing and wider usability. Python provides an open source alternative for numerical computations and scripting. We have integrated these two open source technologies to develop a graphical software for a typical high heat flux experiment. The implementation uses EPICS based tools namely IOC (I/O controller) server, control system studio (CSS) and Python based tools namely Numpy, Scipy, Matplotlib and NOSE. EPICS and Python are integrated using PyEpics library. This toolkit is currently under operation at high heat flux test facility at Institute for Plasma Research (IPR) and is also useful for the experimental labs working in the similar research areas. The paper reports the software architectural design, implementation tools and rationale for their selection, test and validation.
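
    As a brief illustration of the EPICS/Python bridge the record describes, the fragment below uses the PyEpics PV interface together with NumPy. The process variable names and the "margin" calculation are purely hypothetical stand-ins; the toolkit's actual CHF model and PV layout are not reproduced here, and a live IOC is assumed.

```python
import numpy as np
from epics import PV  # PyEpics: Python binding to EPICS Channel Access

# Hypothetical process variables exposed by a test-facility IOC (names are invented)
flow_pv = PV("HHF:COOLANT:FLOW")      # coolant flow rate
temp_pv = PV("HHF:COOLANT:TEMP_IN")   # inlet temperature

def read_samples(pv, n=10):
    """Collect a few readings from a PV into a NumPy array (assumes the PV is reachable)."""
    return np.array([pv.get() for _ in range(n)], dtype=float)

flow = read_samples(flow_pv)
temp = read_samples(temp_pv)
# Placeholder figure of merit; the real cooling-parameter prediction solves a CHF model numerically
margin = flow.mean() / max(temp.mean(), 1e-9)
print(f"mean flow={flow.mean():.2f}, mean inlet temp={temp.mean():.2f}, margin={margin:.3f}")
```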

  1. Use of EPICS and Python technology for the development of a computational toolkit for high heat flux testing of plasma facing components

    International Nuclear Information System (INIS)

    Sugandhi, Ritesh; Swamy, Rajamannar; Khirwadkar, Samir

    2016-01-01

    Highlights: • An integrated approach to software development for computational processing and experimental control. • Use of open source, cross platform, robust and advanced tools for computational code development. • Prediction of optimized process parameters for critical heat flux model. • Virtual experimentation for high heat flux testing of plasma facing components. - Abstract: The high heat flux testing and characterization of the divertor and first wall components are a challenging engineering problem of a tokamak. These components are subject to steady state and transient heat load of high magnitude. Therefore, the accurate prediction and control of the cooling parameters is crucial to prevent burnout. The prediction of the cooling parameters is based on the numerical solution of the critical heat flux (CHF) model. In a test facility for high heat flux testing of plasma facing components (PFC), the integration of computations and experimental control is an essential requirement. Experimental physics and industrial control system (EPICS) provides powerful tools for steering controls, data simulation, hardware interfacing and wider usability. Python provides an open source alternative for numerical computations and scripting. We have integrated these two open source technologies to develop a graphical software for a typical high heat flux experiment. The implementation uses EPICS based tools namely IOC (I/O controller) server, control system studio (CSS) and Python based tools namely Numpy, Scipy, Matplotlib and NOSE. EPICS and Python are integrated using PyEpics library. This toolkit is currently under operation at high heat flux test facility at Institute for Plasma Research (IPR) and is also useful for the experimental labs working in the similar research areas. The paper reports the software architectural design, implementation tools and rationale for their selection, test and validation.

  2. Fabrication of optical fiber micro(and nano)-optical and photonic devices and components, using computer controlled spark thermo-pulling system

    International Nuclear Information System (INIS)

    Fatemi, H.; Mosleh, A.; Pashmkar, M.; Khaksar Kalati, A.

    2007-01-01

    Fabrication of optical fiber micro (and nano)-optical components and devices, as well as those applicable for photonic purposes, is described. The aim is to demonstrate the practical capabilities and characterization of the previously reported computer-controlled spark thermo-pulling fabrication system.

  3. Passive Microwave Components and Antennas

    DEFF Research Database (Denmark)

    State-of-the-art microwave systems always require higher performance and lower cost microwave components. Constantly growing demands and performance requirements of industrial and scientific applications often make employing traditionally designed components impractical. For that reason, the design...... and development process remains a great challenge today. This problem motivated intensive research efforts in microwave design and technology, which is responsible for a great number of recently appeared alternative approaches to analysis and design of microwave components and antennas. This book highlights...... techniques. Modelling and computations in electromagnetics is a quite fast-growing research area. The recent interest in this field is caused by the increased demand for designing complex microwave components, modeling electromagnetic materials, and rapid increase in computational power for calculation...

  4. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    Full Text Available In the past few years, insufficient technology made good integration between die components in design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components. These three steps are data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic-part die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large design of a progressive die.

  5. Rotational and Translational Components of Motion Parallax: Observers' Sensitivity and Implications for Three-Dimensional Computer Graphics

    Science.gov (United States)

    Kaiser, Mary K.; Montegut, Michael J.; Proffitt, Dennis R.

    1995-01-01

    The motion of objects during motion parallax can be decomposed into 2 observer-relative components: translation and rotation. The depth ratio of objects in the visual field is specified by the inverse ratio of their angular displacement (from translation) or equivalently by the inverse ratio of their rotations. Despite the equal mathematical status of these 2 information sources, it was predicted that observers would be far more sensitive to the translational than rotational component. Such a differential sensitivity is implicitly assumed by the computer graphics technique billboarding, in which 3-dimensional (3-D) objects are drawn as planar forms (i.e., billboards) maintained normal to the line of sight. In 3 experiments, observers were found to be consistently less sensitive to rotational anomalies. The implications of these findings for kinetic depth effect displays and billboarding techniques are discussed.
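
    A small numerical check of the geometric relation stated above, using a small-angle approximation and made-up viewing distances; it is only an illustration of why the ratio of translational angular displacements specifies the depth ratio, not a reproduction of the experiments.

```python
import math

def angular_displacement(depth, observer_translation):
    """Parallax angle of a point at distance `depth` (roughly straight ahead)
    when the observer translates laterally by `observer_translation`."""
    return math.atan2(observer_translation, depth)

# Two objects at 2 m and 4 m; the observer translates 0.1 m laterally
d_near, d_far, dx = 2.0, 4.0, 0.1
theta_near = angular_displacement(d_near, dx)
theta_far = angular_displacement(d_far, dx)

# The depth ratio is (approximately) the inverse ratio of the angular displacements
print("depth ratio d_far/d_near        =", d_far / d_near)
print("inverse displacement ratio      =", round(theta_near / theta_far, 4))
```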

  6. A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter

    DEFF Research Database (Denmark)

    Salonitis, Konstantinos; Chantzis, Dimitrios; Kappatos, Vasileios

    2017-01-01

    approaches or with the use of topology optimization methodologies. An optimization approach utilizing multipurpose optimization algorithms has not been proposed yet. This paper presents a novel user-friendly method for the design optimization of lattice components towards weight minimization, which combines...... finite element analysis and evolutionary computation. The proposed method utilizes the cell homogenization technique in order to reduce the computational cost of the finite element analysis and a genetic algorithm in order to search for the most lightweight lattice configuration. A bracket consisting...
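
    Neither the homogenized finite element evaluation nor the actual genetic operators are given in the record, so the sketch below substitutes a deliberately toy analytic "stress" surrogate simply to show the overall search pattern: minimize mass over the strut diameter while penalizing constraint violations. All parameter values are invented.

```python
import random

# Toy surrogate in place of the homogenized finite element analysis:
# mass grows with strut diameter, while a made-up stress constraint
# penalizes diameters that are too thin to carry the load.
def mass(d_mm):
    return d_mm ** 2                      # proportional to cross-sectional area

def max_stress(d_mm, load=1000.0):
    return load / max(d_mm ** 2, 1e-9)

def fitness(d_mm, allowable=80.0):
    penalty = 1e6 if max_stress(d_mm) > allowable else 0.0
    return mass(d_mm) + penalty           # objective to minimize

def genetic_search(pop_size=30, generations=60, d_min=1.0, d_max=10.0):
    pop = [random.uniform(d_min, d_max) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]    # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0.0, 0.2)   # crossover + mutation
            children.append(min(max(child, d_min), d_max))
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_search()
print(f"lightest feasible strut diameter ≈ {best:.2f} mm, stress {max_stress(best):.1f}")
```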

  7. A study on the optimal replacement periods of digital control computer's components of Wolsung nuclear power plant unit 1

    International Nuclear Information System (INIS)

    Mok, Jin Il; Seong, Poong Hyun

    1993-01-01

    Due to failures of the instrumentation and control devices of nuclear power plants caused by aging, nuclear power plants occasionally trip. Even a single trip of a nuclear power plant (NPP) causes a substantial economic loss and deteriorates public acceptance of nuclear power plants. Therefore, replacement of the instrumentation and control devices with proper consideration of the aging effect is necessary in order to prevent inadvertent trips. In this paper we investigated the optimal replacement periods of the control computer's components of Wolsung nuclear power plant Unit 1. We first derived mathematical models of the optimal replacement periods for the digital control computer's components of Wolsung NPP Unit 1 and calculated the optimal replacement periods analytically. We compared these periods with the replacement periods currently used at Wolsung NPP Unit 1, which are based on empirical knowledge rather than mathematical analysis. As a consequence, the optimal replacement periods obtained analytically and those used in the field show only a small difference. (Author)
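
    The paper's actual models are not reproduced in the record. The following is a generic age-replacement illustration (Weibull aging, planned versus unplanned replacement cost) of the kind of cost-rate minimization such an analysis involves, with invented parameter values.

```python
import math

# Generic age-replacement illustration (not the paper's model): find the planned
# replacement age T that minimizes the long-run cost rate for a Weibull-aging component.
beta, eta = 2.5, 5.0                 # illustrative Weibull shape and scale (years)
c_planned, c_failure = 1.0, 10.0     # planned vs. unplanned replacement cost

def reliability(t):
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, n=2000):
    # Expected cost per cycle divided by expected cycle length (trapezoidal integral of R)
    dt = T / n
    integral = sum(0.5 * (reliability(i * dt) + reliability((i + 1) * dt)) * dt for i in range(n))
    expected_cost = c_planned * reliability(T) + c_failure * (1.0 - reliability(T))
    return expected_cost / integral

candidates = [0.25 * k for k in range(1, 61)]        # 0.25 .. 15 years
best_T = min(candidates, key=cost_rate)
print(f"optimal planned replacement age ≈ {best_T:.2f} years, cost rate {cost_rate(best_T):.3f}")
```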

  8. Software, component, and service deployment in computational Grids

    International Nuclear Information System (INIS)

    von Laszewski, G.; Blau, E.; Bletzinger, M.; Gawor, J.; Lane, P.; Martin, S.; Russell, M.

    2002-01-01

    Grids comprise an infrastructure that enables scientists to use a diverse set of distributed remote services and resources as part of complex scientific problem-solving processes. We analyze some of the challenges involved in deploying software and components transparently in Grids. We report on three practical solutions used by the Globus Project. Lessons learned from this experience lead us to believe that it is necessary to support a variety of software and component deployment strategies. These strategies are based on the hosting environment

  9. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    International Nuclear Information System (INIS)

    Keney, G.S.

    1981-08-01

    A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV
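
    ACDOS1 itself is a FORTRAN IV code and its nuclear data are not given here. The fragment below only illustrates, with an invented nuclide inventory and made-up dose conversion factors, the basic "sum of exponentially decaying activities" evaluation that underlies a dose-rate-versus-time-after-shutdown calculation.

```python
import math

# Illustrative post-shutdown estimate (not ACDOS1): each activated nuclide decays
# exponentially from its shutdown activity; dose rate is taken as proportional to
# activity through a made-up per-nuclide conversion factor.
nuclides = {
    # name: (shutdown activity [Bq], half-life [s], illustrative factor [(uSv/h)/Bq])
    "Co-60": (4.0e8, 5.27 * 3.156e7, 3.0e-10),
    "Mn-54": (1.5e8, 312.0 * 86400.0, 1.2e-10),
    "Fe-59": (6.0e7, 44.5 * 86400.0, 1.6e-10),
}

def dose_rate(t_seconds):
    total = 0.0
    for activity0, half_life, factor in nuclides.values():
        decay = math.exp(-math.log(2.0) * t_seconds / half_life)
        total += activity0 * decay * factor
    return total

for days in (0, 7, 30, 365):
    print(f"{days:4d} d after shutdown: ~{dose_rate(days * 86400.0):.3f} uSv/h")
```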

  10. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    Science.gov (United States)

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...

  11. Report on the thermal-hydraulics computational component

    International Nuclear Information System (INIS)

    Laughton, T.; Jones, B.G.

    1996-01-01

    The nodal methods computer code utilizing hexagonal geometry, which is being developed as part of this DOE contract, is called THMZ. The computational objective of the code is to calculate the steady-state thermal-hydraulic conditions in a hexagonal geometry reactor core given the appropriate initial conditions and the axial neutron flux profile. The latter is given by a companion nodal neutronics code which was developed in an earlier part of the contract. The joining of these two codes to provide a coupled analysis tool for hexagonal lattice cores is the ultimate objective of the contract and its follow-on work. The remaining part of this report presents the current status of the development and the results which have been obtained to date. These will appear in the MS thesis of Mr. Terrill Laughton in the Department of Nuclear Engineering which is currently in preparation

  12. IPython: components for interactive and parallel computing across disciplines. (Invited)

    Science.gov (United States)

    Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.

    2013-12-01

    Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
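
    As a brief illustration of point 3 (easy-to-use parallel tools), the snippet below uses the parallel client API of the IPython generation described in the record (IPython.parallel; in later releases the same interface lives in the separate ipyparallel package). It assumes a local cluster of engines has already been started.

```python
# Requires a running cluster, e.g.:  ipcluster start -n 4
from IPython.parallel import Client

def slow_square(x):
    import time
    time.sleep(0.1)          # stand-in for a real computation
    return x * x

rc = Client()                # connect to the running engines
view = rc[:]                 # a DirectView over all engines
results = view.map_sync(slow_square, range(16))   # parallel map, blocking
print(results)
```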

  13. Annotating the structure and components of a nanoparticle formulation using computable string expressions.

    Science.gov (United States)

    Thomas, Dennis G; Chikkagoudar, Satish; Chappell, Alan R; Baker, Nathan A

    2012-12-31

    Nanoparticle formulations that are being developed and tested for various medical applications are typically multi-component systems that vary in their structure, chemical composition, and function. It is difficult to compare and understand the differences between the structural and chemical descriptions of hundreds and thousands of nanoparticle formulations found in text documents. We have developed a string nomenclature to create computable string expressions that identify and enumerate the different high-level types of material parts of a nanoparticle formulation and represent the spatial order of their connectivity to each other. The string expressions are intended to be used as IDs, along with terms that describe a nanoparticle formulation and its material parts, in data sharing documents and nanomaterial research databases. The strings can be parsed and represented as a directed acyclic graph. The nodes of the graph can be used to display the string ID, name and other text descriptions of the nanoparticle formulation or its material part, while the edges represent the connectivity between the material parts with respect to the whole nanoparticle formulation. The different patterns in the string expressions can be searched for and used to compare the structure and chemical components of different nanoparticle formulations. The proposed string nomenclature is extensible and can be applied along with ontology terms to annotate the complete description of nanoparticles formulations.
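
    The concrete nomenclature is not reproduced in the record, so the sketch below invents a simple '>'-delimited expression purely to illustrate the idea of parsing a formulation string into a directed graph of material parts that can then be searched for structural patterns.

```python
# Hypothetical expression syntax (the record does not spell out the real nomenclature):
# parts are separated by '>' to indicate "is coated by / is attached to", giving a
# simple chain stored as a directed acyclic graph (adjacency lists).
def parse_formulation(expr):
    parts = [p.strip() for p in expr.split(">")]
    graph = {part: [] for part in parts}
    for inner, outer in zip(parts, parts[1:]):
        graph[inner].append(outer)       # edge: inner part -> part connected to it
    return graph

formulation = "core(Fe3O4) > shell(SiO2) > linker(PEG) > antibody(anti-EGFR)"
dag = parse_formulation(formulation)
for node, neighbours in dag.items():
    print(f"{node} -> {neighbours}")

# Pattern search across formulations, e.g. "does this formulation contain a SiO2 shell?"
print(any("shell(SiO2)" in node for node in dag))
```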

  14. Methodology for the computational simulation of the components in photovoltaic systems; Desarrollo de herramientas para la prediccion del comportamiento de sistemas fotovoltaicos

    Energy Technology Data Exchange (ETDEWEB)

    Galimberti, P.; Arcuri, G.; Manno, R.; Fasulo, A. J.

    2004-07-01

    This work presents a methodology for the computational simulation of the components that comprise photovoltaic systems, in order to study the behavior of each component and its relevance to the operation of the whole system; this allows decisions to be made in the process of selecting these components and improving them. As a result of the simulation, files are obtained with the values of the different variables that characterize the behaviour of the components. Different kinds of plots can be drawn to show the information in summarized form. Finally, the results are discussed in comparison with actual data for the city of Rio Cuarto in Argentina (33.1 degrees South latitude) and some advantages of the proposed method are mentioned. (Author)

  15. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  16. The Component-Based Application for GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Fang [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, and this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  17. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs

  18. Computing the influences of different Intraocular Pressures on the human eye components using computational fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Navidbakhsh, Mahdi; Sera, Toshihiro; Kudo, Susumu

    2017-01-01

    Intraocular Pressure (IOP) is defined as the pressure of the aqueous humor in the eye. It has been reported that the normal range of IOP should be within 10-20 mmHg, with an average of 15.50 mmHg, among ophthalmologists. Keratoconus is a non-inflammatory eye disorder in which the weakened cornea is unable to preserve its normal structure against the IOP in the eye. Consequently, the cornea bulges outward and assumes a conical shape, leading to distorted vision. In addition, it is known that any alteration in the structure and composition of the lens and cornea would induce a change in the eyeball as well as in the mechanical and optical properties of the eye. Understanding the precise alteration of the eye components' stresses and deformations due to different IOPs could help elucidate etiology and pathogenesis and support the development of treatments not only for keratoconus but also for other diseases of the eye. In this study, at three different IOPs, namely 10, 20, and 30 mmHg, the stresses and deformations of the human eye components were quantified using a three-dimensional (3D) computational fluid-structure interaction (FSI) model of the human eye. The results revealed the highest von Mises stress in the bulged region of the cornea, 245 kPa at an IOP of 30 mmHg. The lens also showed a von Mises stress of 19.38 kPa at an IOP of 30 mmHg. In addition, by increasing the IOP from 10 to 30 mmHg, the radius of curvature of the cornea and lens increased accordingly. In contrast, the sclera showed its highest stress at the IOP of 10 mmHg due to an overpressure phenomenon. The variation of IOP showed little influence on the amount of stress as well as on the resultant displacement of the optic nerve. These results can be used to understand the stresses and deformations in the human eye components due to different IOPs, as well as to clarify the significant role of IOP in the radius of curvature of the cornea and the lens.

  19. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes assessed on the present and future pattern of computing in particle physics. The place of central computing facilities is particularly examined, to answer the important question as to what, if anything, should be their future role. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  20. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  1. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Full Text Available Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) are computationally demanding, which makes their implementation on high-performance computer architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and, hence, reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
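
    As a point of reference for what the GPU, manycore and FPGA implementations accelerate, here is a plain NumPy sketch of PCA-based dimensionality reduction on a synthetic hyperspectral cube; the array sizes and the number of retained components are arbitrary.

```python
import numpy as np

# Plain NumPy PCA on a synthetic hyperspectral cube (rows x cols x bands); the
# record's implementations target a GPU and a Kalray manycore, this is only a
# reference sketch of the computation itself.
rows, cols, bands, n_components = 64, 64, 128, 10
cube = np.random.rand(rows, cols, bands).astype(np.float32)

pixels = cube.reshape(-1, bands)                 # one spectrum per row
centered = pixels - pixels.mean(axis=0)          # band-wise mean removal
cov = (centered.T @ centered) / (pixels.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:n_components]
projection = centered @ eigvecs[:, order]        # reduced representation
reduced_cube = projection.reshape(rows, cols, n_components)
print(reduced_cube.shape)                        # (64, 64, 10)
```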

  2. Information Sharing for Computing Trust Metrics on COTS Electronic Components

    National Research Council Canada - National Science Library

    McMillon, William J

    2008-01-01

    .... It is challenging for the DoD to determine whether and how much to trust in COTS components, given uncertainty and incomplete information about the developers and suppliers of COTS components as well...

  3. Research regarding reverse engineering for aircraft components

    Directory of Open Access Journals (Sweden)

    Udroiu Razvan

    2017-01-01

    Full Text Available Reverse engineering is a useful technique in the manufacturing and design process of new components. In the aerospace industry, new components can be developed based on existing components without technical Computer Aided Design (CAD) data, in order to reduce the development cycle of new products. This paper proposes a methodology wherein the CAD model of a turbine blade can be built using a computer-aided reverse engineering technique utilising a 5-axis Coordinate Measuring Machine (CMM). The proposed methodology uses a scanning strategy by features, followed by a design methodology for 3D modelling of complex shapes.

  4. FY05 LDRD Final Report A Computational Design Tool for Microdevices and Components in Pathogen Detection Systems

    Energy Technology Data Exchange (ETDEWEB)

    Trebotich, D

    2006-02-07

    We have developed new algorithms to model complex biological flows in integrated biodetection microdevice components. The proposed work is important because the design strategy for the next-generation Autonomous Pathogen Detection System at LLNL is the microfluidic-based Biobriefcase, being developed under the Chemical and Biological Countermeasures Program in the Homeland Security Organization. This miniaturization strategy introduces a new flow regime to systems where biological flow is already complex and not well understood. Also, design and fabrication of MEMS devices is time-consuming and costly due to the current trial-and-error approach. Furthermore, existing devices, in general, are not optimized. There are several MEMS CAD capabilities currently available, but their computational fluid dynamics modeling capabilities are rudimentary at best. Therefore, we proposed a collaboration to develop computational tools at LLNL which will (1) provide critical understanding of the fundamental flow physics involved in bioMEMS devices, (2) shorten the design and fabrication process, and thus reduce costs, (3) optimize current prototypes and (4) provide a prediction capability for the design of new, more advanced microfluidic systems. Computational expertise was provided by Comp-CASC and UC Davis-DAS. The simulation work was supported by key experiments for guidance and validation at UC Berkeley-BioE.

  5. Component effects in mixture experiments

    International Nuclear Information System (INIS)

    Piepel, G.F.

    1980-01-01

    In a mixture experiment, the response to a mixture of q components is a function of the proportions x1, x2, ..., xq of components in the mixture. Experimental regions for mixture experiments are often defined by constraints on the proportions of the components forming the mixture. The usual (orthogonal direction) definition of a factor effect does not apply because of the dependence imposed by the mixture restriction x1 + x2 + ... + xq = 1. A direction within the experimental region in which to compute a mixture component effect is presented and compared to previously suggested directions. This new direction has none of the inadequacies or errors of previous suggestions while having a more meaningful interpretation. The distinction between partial and total effects is made. The uses of partial and total effects (computed using the new direction) in modification and interpretation of mixture response prediction equations are considered. The suggestions of the paper are illustrated in an example from a glass development study in a waste vitrification program. 5 figures, 3 tables

  6. Runtime Concepts of Hierarchical Software Components

    Czech Academy of Sciences Publication Activity Database

    Bureš, Tomáš; Hnětynka, P.; Plášil, František

    2007-01-01

    Roč. 8, special (2007), s. 454-463 ISSN 1525-9293 R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords: component-based development * hierarchical components * connectors * controllers * runtime environment Subject RIV: JC - Computer Hardware ; Software

  7. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    Science.gov (United States)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  8. Development of software for computing forming information using a component based approach

    Directory of Open Access Journals (Sweden)

    Kwang Hee Ko

    2009-12-01

    Full Text Available In the shipbuilding industry, manufacturing technology has advanced at an unprecedented pace over the last decade. As a result, many automatic systems for cutting, welding, etc. have been developed and employed in the manufacturing process, and the productivity has accordingly increased drastically. Despite such improvement in manufacturing technology, however, the development of an automatic system for fabricating curved hull plates remains at the beginning stage, since the hardware and software for automation of the curved hull fabrication process must be developed differently depending on the dimensions of the plates, the forming methods and the manufacturing processes of each shipyard. To deal with this problem, it is necessary to create a "plug-in" framework which can adopt various kinds of hardware and software to construct a fully automatic fabrication system. In this paper, a framework for automatic fabrication of curved hull plates is proposed, which consists of four components and related software. In particular, the software module for computing forming information is developed using the ooCBD development methodology, which can interface with other hardware and software with minimum effort. Examples of the proposed framework applied to medium and large shipyards are presented.

  9. Reliability parameters of distribution networks components

    Energy Technology Data Exchange (ETDEWEB)

    Gono, R.; Kratky, M.; Rusek, S.; Kral, V. [Technical Univ. of Ostrava (Czech Republic)

    2009-03-11

    This paper presented a framework for the retrieval of parameters from various heterogeneous power system databases. The framework was designed to transform the heterogeneous outage data into a common relational scheme. The framework was used to retrieve outage data parameters from the Czech and Slovak Republics in order to demonstrate its scalability. The reliability computation of the system was performed in 2 phases, representing the retrieval of component reliability parameters and the reliability computation. Reliability rates were determined using component reliability and global reliability indices. Input data for the reliability computation were retrieved from data on equipment operating under similar conditions, while the probability of failure-free operation was evaluated by determining component status. Anomalies in the distribution outage data were described as scheme, attribute, and term differences. Input types consisted of input relations, transformation programs, codebooks, and translation tables. The system was used to successfully retrieve data from 7 distributors in the Czech Republic and Slovak Republic between 2000 and 2007. The database included 301,555 records. Data were queried using SQL. 29 refs., 2 tabs., 2 figs.
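
    A minimal illustration of the kind of component reliability parameters derived from such outage records: a failure rate per unit-year and the resulting unavailability. The field names and figures below are invented for illustration only.

```python
# Toy computation of per-component-type failure rates and unavailability from
# outage-style records (not the paper's actual schema or data).
records = [
    {"component": "transformer", "failures": 14, "unit_years": 5200.0, "mean_repair_h": 9.5},
    {"component": "overhead_line_km", "failures": 310, "unit_years": 41000.0, "mean_repair_h": 4.2},
    {"component": "circuit_breaker", "failures": 22, "unit_years": 8800.0, "mean_repair_h": 6.0},
]

for r in records:
    rate = r["failures"] / r["unit_years"]               # failures per unit-year
    unavailability = rate * r["mean_repair_h"] / 8760.0  # expected fraction of time out
    print(f"{r['component']:>18}: lambda={rate:.5f} /yr, U={unavailability:.6f}")
```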

  10. Acetabular component positioning in total hip arthroplasty with and without a computer-assisted system: a prospective, randomized and controlled study.

    Science.gov (United States)

    Gurgel, Henrique M C; Croci, Alberto T; Cabrita, Henrique A B A; Vicente, José Ricardo N; Leonhardt, Marcos C; Rodrigues, João Carlos

    2014-01-01

    In a study of the acetabular component in total hip arthroplasty, 20 hips were operated on using imageless navigation and 20 hips were operated on using the conventional method. The position of the acetabular component was evaluated with computed tomography, measuring the operative anteversion and the operative inclination and determining the cases inside Lewinnek's safe zone. The results were similar in all the analyses: a mean anteversion of 17.4° in the navigated group and 14.5° in the control group (P=.215); a mean inclination of 41.7° and 42.2° (P=.633); a mean deviation from the desired anteversion (15°) of 5.5° and 6.6° (P=.429); a mean deviation from the desired inclination of 3° and 3.2° (P=.783); and location inside the safe zone of 90% and 80% (P=.661). The tomographic analyses of the acetabular component position were similar whether imageless navigation or the conventional technique was used. Published by Elsevier Inc.

  11. Computer-Aided College Algebra: Learning Components that Students Find Beneficial

    Science.gov (United States)

    Aichele, Douglas B.; Francisco, Cynthia; Utley, Juliana; Wescoatt, Benjamin

    2011-01-01

    A mixed-method study was conducted during the Fall 2008 semester to better understand the experiences of students participating in computer-aided instruction of College Algebra using the software MyMathLab. The learning environment included a computer learning system for the majority of the instruction, a support system via focus groups (weekly…

  12. T-stresses for internally cracked components

    International Nuclear Information System (INIS)

    Fett, T.

    1997-12-01

    The failure of cracked components is governed by the stresses in the vicinity of the crack tip. The singular stress contribution is characterised by the stress intensity factor K, and the first regular stress term is represented by the so-called T-stress. T-stress solutions for components containing an internal crack were computed by application of the Boundary Collocation Method (BCM). The results are compiled in the form of tables or approximative relations. In addition, a Green's function for the T-stress is proposed for internal cracks, which enables T-stress terms to be computed for any given stress distribution in the uncracked body. (orig.) [de

  13. Feasibility Study of Cryogenic Cutting Technology by Using a Computer Simulation and Manufacture of Main Components for Cryogenic Cutting System

    International Nuclear Information System (INIS)

    Kim, Sung Kyun; Lee, Dong Gyu; Lee, Kune Woo; Song, Oh Seop

    2009-01-01

    Cryogenic cutting technology is one of the most suitable technologies for dismantling nuclear facilities, because no secondary waste is generated during the cutting process. In this paper, the feasibility of cryogenic cutting technology was investigated by using a computer simulation. In the computer simulation, a hybrid method combining the SPH (smoothed particle hydrodynamics) method and the FE (finite element) method was used. In addition, a penetration depth equation was used for the design of the cryogenic cutting system, and the design variables and operation conditions needed to cut 10 mm thick steel were determined. Finally, the main components of the cryogenic cutting system were manufactured on the basis of the obtained design variables and operation conditions.

  14. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
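
    The record names a recursive least squares (RLS) estimator for tracking the EEG state in real time. The sketch below is a generic RLS update for an adaptive autoregressive model applied to a synthetic signal; the model order and forgetting factor are illustrative, not the BCI's actual settings.

```python
import numpy as np

def rls_ar(signal, order=6, lam=0.99, delta=100.0):
    """Generic RLS estimation of adaptive AR coefficients for a 1-D signal."""
    w = np.zeros(order)                  # AR coefficient estimates
    P = delta * np.eye(order)            # inverse correlation matrix
    estimates = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]    # most recent samples first
        k = P @ x / (lam + x @ P @ x)    # gain vector
        err = signal[n] - w @ x          # a priori prediction error
        w = w + k * err                  # coefficient update
        P = (P - np.outer(k, x @ P)) / lam
        estimates.append(w.copy())
    return np.array(estimates)

# Synthetic stand-in for an EEG channel
eeg = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
coeffs = rls_ar(eeg)
print(coeffs[-1])                        # latest adaptive AR parameter estimate
```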

  15. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression analysis with SPSS.
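
    The paper works in SPSS; purely as an illustration of the same idea in code, the sketch below runs principal component regression on synthetic collinear predictors using scikit-learn (predictor standardization, PCA, then ordinary least squares on the retained components).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic collinear predictors: the first two columns are nearly identical
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
y = 2.0 * x1 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Principal component regression: standardize, keep 2 components, regress on them
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("R^2 on the training data:", round(pcr.score(X, y), 3))
```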

  16. Computational neuroanatomy: ontology-based representation of neural components and connectivity.

    Science.gov (United States)

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-02-05

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

  17. Residual life estimation of cracked aircraft structural components

    OpenAIRE

    Maksimović, Mirko S.; Vasović, Ivana V.; Maksimović, Katarina S.; Trišović, Nataša; Maksimović, Stevan M.

    2018-01-01

    This investigation is focused on developing a computation procedure for the strength analysis of damaged aircraft structural components with respect to fatigue and fracture mechanics. For that purpose, computation procedures are defined here for the residual life estimation of aircraft structural components, such as wing skin and attachment lugs, under cyclic loads of constant amplitude and load spectrum. A special aspect of this investigation is based on the use of the Strain Energy Den...

  18. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  19. Computation of geometric representation of novel spectrophotometric methods used for the analysis of minor components in pharmaceutical preparations.

    Science.gov (United States)

    Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham

    2015-01-01

    Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of (1:5:7.5), and in its binary mixture with sodium cromoglicate (SCG) in the ratio of (1:80). The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be alternative to HPLC techniques in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. SOFA 2 Component Framework and Its Ecosystem

    Czech Academy of Sciences Publication Activity Database

    Malohlava, M.; Hnětynka, P.; Bureš, Tomáš

    2013-01-01

    Roč. 295, 9 May (2013), s. 101-106 ISSN 1571-0661. [FESCA 2012. International Workshop on Formal Engineering approaches to Software Components and Architectures /9./. Tallinn, 31.03.2012] R&D Projects: GA ČR GD201/09/H057 Grant - others:GA AV ČR(CZ) GAP202/11/0312; UK(CZ) SVV-2012-265312 Keywords : CBSE * component system * component model * component * sofa * ecosystem * development tool Subject RIV: JC - Computer Hardware ; Software

  1. Impact test of components

    International Nuclear Information System (INIS)

    Borsoi, L.; Buland, P.; Labbe, P.

    1987-01-01

    Stops with gaps are currently used to support components and piping: this design is simple, low cost and efficient, and it permits free thermal expansion. In order to retain the nonlinear nature of the stops, such a design is often modeled by beam elements (for the component) and nonlinear springs (for the stops). This paper deals with the validity and the limits of these models through the comparison of computational and experimental results. The experimental results come from impact laboratory tests on a simplified mockup. (orig.)

  2. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    Science.gov (United States)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for some time. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.
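
    A minimal sketch of the kind of cost comparison such a valuation framework supports, assuming purely hypothetical tariffs, workload figures and function names (none of them come from the paper): pay-per-use cloud costs scale with actual usage, while owned infrastructure carries fixed capital plus operating costs.

```python
# Hypothetical cost comparison between a pay-per-use cloud deployment and an
# owned-infrastructure alternative over one planning period. All figures are
# illustrative placeholders, not data from the paper.

def cloud_cost(requests, cost_per_request, storage_gb, cost_per_gb_month, months):
    """Pay-per-use: cost scales with actual service usage."""
    return requests * cost_per_request + storage_gb * cost_per_gb_month * months

def on_premise_cost(capex, opex_per_month, months):
    """Owned infrastructure: fixed capital expenditure plus running costs."""
    return capex + opex_per_month * months

if __name__ == "__main__":
    months = 12
    cloud = cloud_cost(requests=50_000_000, cost_per_request=4e-6,
                       storage_gb=500, cost_per_gb_month=0.02, months=months)
    owned = on_premise_cost(capex=20_000, opex_per_month=800, months=months)
    print(f"cloud: {cloud:,.0f} per year, on-premise: {owned:,.0f} per year")
    print("cheaper option:", "cloud" if cloud < owned else "on-premise")
```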

  3. Use of computed tomography slices 3D-reconstruction as a powerful tool to improve manufacturing processes on aeroengine components

    International Nuclear Information System (INIS)

    Castellan, C.; Dastarac, D.

    2000-01-01

    TURBOMECA has been using computed tomography for several years as a powerful inner-health analysis tool for engine components. From 2D slices of the examined part, detailed information about voids or inclusions can easily be extracted. Measurements of internal features were also quickly required because no other NDT method was able to provide them, so CT logically became a powerful 2D dimensional measuring tool. Recently, with new software and the latest computers able to deal with huge files, CT has become a powerful 3D digitization tool, and TOMO ADOUR can now offer a complete solution for reverse engineering of complex parts. Several months ago, TURBOMECA introduced CT into many development, validation and industrialization processes and has demonstrated how to take corrective actions on process deviations in their aeroengine components by: extracting the non-existing CAD model of a part; generating CAD-compatible data to check dimensional conformity and, eventually, correct design misfits or manufacturing drifts; highlighting the metallurgical health of first-article parts; making the decision to repair and defining the appropriate method; generating a file (.STL) to build a rapid prototype or a file to pilot tooling for machining; calculating physical properties such as behaviour or flow analysis on a 'real' model. The image also allows a drawing to be made of a part that was originally produced by a supplier or competitor. This paper is illustrated with a large number of examples.

  4. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  5. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    bias, but increased standard errors of the estimates and notably the computing time. Conclusions In terms of estimation accuracy, both principal component approaches performed equally well and permitted the use of more parsimonious models through random regression MACE. The advantage of the bottom-up PC approach is that it does not need any prior knowledge of the rank. However, with a predetermined rank, the direct PC approach needs less computing time than the bottom-up PC.

  6. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

    This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...
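
    As background for the survey, the basic computation can be illustrated with the textbook power-iteration formulation of PageRank on a small link matrix; this generic sketch does not correspond to any particular algorithm reviewed, and the damping factor and graph are arbitrary.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank on a dense adjacency matrix.

    adj[i, j] = 1 if page i links to page j. Dangling pages (no out-links)
    are treated as if they linked to every page uniformly.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix with a dangling-node correction.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = d * P.T @ r + (1 - d) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Tiny 4-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A).round(3))
```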

  7. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  8. Phased Array Imaging of Complex-Geometry Composite Components.

    Science.gov (United States)

    Brath, Alex J; Simonetti, Francesco

    2017-10-01

    Progress in computational fluid dynamics and the availability of new composite materials are driving major advances in the design of aerospace engine components which now have highly complex geometries optimized to maximize system performance. However, shape complexity poses significant challenges to traditional nondestructive evaluation methods whose sensitivity and selectivity rapidly decrease as surface curvature increases. In addition, new aerospace materials typically exhibit an intricate microstructure that further complicates the inspection. In this context, an attractive solution is offered by combining ultrasonic phased array (PA) technology with immersion testing. Here, the water column formed between the complex surface of the component and the flat face of a linear or matrix array probe ensures ideal acoustic coupling between the array and the component as the probe is continuously scanned to form a volumetric rendering of the part. While the immersion configuration is desirable for practical testing, the interpretation of the measured ultrasonic signals for image formation is complicated by reflection and refraction effects that occur at the water-component interface. To account for refraction, the geometry of the interface must first be reconstructed from the reflected signals and subsequently used to compute suitable delay laws to focus inside the component. These calculations are based on ray theory and can be computationally intensive. Moreover, strong reflections from the interface can lead to a thick dead zone beneath the surface of the component which limits sensitivity to shallow subsurface defects. This paper presents a general approach that combines advanced computing for rapid ray tracing in anisotropic media with a 256-channel parallel array architecture. The full-volume inspection of complex-shape components is enabled through the combination of both reflected and transmitted signals through the part using a pair of arrays held in a yoke
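
    As a rough illustration of the ray-theory step, the sketch below computes a focal delay law for an immersion setup with a flat water-component interface by minimizing the two-leg travel time (Fermat's principle). It is a deliberately simplified, isotropic, two-dimensional stand-in for the method described, which handles curved surfaces and anisotropic media; the sound speeds, water-path length and array geometry are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C_WATER, C_PART = 1480.0, 5900.0        # m/s, illustrative sound speeds

def travel_time(elem_x, focus, interface_y=0.02):
    """Fastest water-to-component travel time from an array element at (elem_x, 0)
    to a focal point below a flat interface at depth interface_y (Fermat)."""
    def t(xc):                          # xc = candidate crossing point on the interface
        leg_water = np.hypot(xc - elem_x, interface_y)
        leg_part = np.hypot(focus[0] - xc, focus[1] - interface_y)
        return leg_water / C_WATER + leg_part / C_PART
    lo = min(elem_x, focus[0]) - 0.05
    hi = max(elem_x, focus[0]) + 0.05
    return minimize_scalar(t, bounds=(lo, hi), method="bounded").fun

# 32-element linear array, 0.5 mm pitch, focusing 10 mm below a 20 mm water column.
elements = (np.arange(32) - 15.5) * 0.5e-3
focus = (0.0, 0.03)
times = np.array([travel_time(x, focus) for x in elements])
delays = times.max() - times            # fire the farthest element first
print(np.round(delays * 1e9, 1), "ns")
```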

  9. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  10. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
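
    A hedged numerical sketch of the component-cost idea for a small modal model driven by white noise (the two-mode data are invented, and taking each state's cost as its diagonal contribution to the quadratic output cost is an assumption made for illustration): the steady-state covariance comes from a Lyapunov equation, and the mode with the larger cost is the one to keep in a reduced-order model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two lightly damped modes driven by unit-intensity white noise (illustrative data).
w = np.array([2.0, 10.0])              # modal frequencies (rad/s)
z = np.array([0.02, 0.02])             # damping ratios
A = np.zeros((4, 4))
for i in range(2):                     # states ordered (q1, dq1, q2, dq2)
    A[2 * i, 2 * i + 1] = 1.0
    A[2 * i + 1, 2 * i] = -w[i] ** 2
    A[2 * i + 1, 2 * i + 1] = -2 * z[i] * w[i]
B = np.array([[0.0], [1.0], [0.0], [1.0]])   # both modes excited by the same input
C = np.array([[1.0, 0.0, 1.0, 0.0]])         # output = sum of modal displacements

# Steady-state state covariance X solves A X + X A^T + B B^T = 0.
X = solve_continuous_lyapunov(A, -B @ B.T)
# State costs: diagonal of X C^T C; they sum to the total output variance.
state_costs = np.diag(X @ C.T @ C)
mode_costs = state_costs.reshape(2, 2).sum(axis=1)
print("total output variance:", (C @ X @ C.T).item())
print("mode costs:", mode_costs, "-> delete the mode with the smaller cost")
```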

  11. Computation of X-ray powder diffractograms of cement components ...

    Indian Academy of Sciences (India)

    are very important to understand and predict the performance of cement and the resulting ..... modulus given by kR/Di and k the first order surface rate constant for the reaction ... components of interest are listed in table 1. The other input.

  12. Component reliability for electronic systems

    CERN Document Server

    Bajenescu, Titu-Marius I

    2010-01-01

    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  13. COMPUTER AIDED THREE DIMENSIONAL DESIGN OF MOLD COMPONENTS

    Directory of Open Access Journals (Sweden)

    Kerim ÇETİNKAYA

    2000-02-01

    Full Text Available Sheet metal mold design with classical methods requires very long times for calculations and drafting. During mold design, the selection and drafting of most of the components takes a long time because of similar repetitive processes. In this study, a mold design program has been developed using AutoLISP, adapted to the AutoCAD package program. With this program, the design, dimensioning and assembly drafting of sheet metal molds has been realized.

  14. Computer control of fuel handling activities at FFTF

    International Nuclear Information System (INIS)

    Romrell, D.M.

    1985-03-01

    The Fast Flux Test Facility near Richland, Washington, utilizes computer control for reactor refueling and other related core component handling and processing tasks. The computer controlled tasks described in this paper include core component transfers within the reactor vessel, core component transfers into and out of the reactor vessel, remote duct measurements of irradiated core components, remote duct cutting, and finally, transferring irradiated components out of the reactor containment building for off-site shipments or to long term storage. 3 refs., 16 figs

  15. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  16. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
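
    The comparison principle can be illustrated with a toy logistic-map experiment (only a sketch of the idea, not the patented implementation): two nodes iterate the same chaotic map from the same seed, and a single tiny arithmetic fault on one node makes the trajectories diverge exponentially, which an automated comparison flags within a few iterations.

```python
import numpy as np

def logistic_trajectory(x0, n, r=3.99, fault_at=None, fault_eps=1e-12):
    """Iterate the logistic map; optionally inject one tiny arithmetic fault."""
    x, traj = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        if fault_at is not None and i == fault_at:
            x += fault_eps             # simulate a bit-level computation error
        traj[i] = x
    return traj

n = 200
healthy = logistic_trajectory(0.123456789, n)
faulty = logistic_trajectory(0.123456789, n, fault_at=50)

# Healthy replicas produce identical trajectories; a faulty one diverges rapidly
# after the error because the chaotic map amplifies tiny differences exponentially.
divergence = np.abs(healthy - faulty)
first_detect = int(np.argmax(divergence > 1e-6))
print("first index where trajectories disagree beyond tolerance:", first_detect)
```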

  17. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper reports instrumental research on a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid multicollinearity in multivariate analysis and highlights the ability of Principal Component Analysis (PCA) to reduce a large number of mutually correlated variables to a small number of uncorrelated principal components. In this respect, the paper presents, step by step, the process of applying PCA in marketing research when a large number of naturally collinear variables is used.
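
    An illustration of the workflow described, applying PCA to synthetic, deliberately collinear "survey" variables (the variable names and data are hypothetical, not the paper's): the correlated attitude items collapse onto one component and the resulting scores are uncorrelated, so they can enter later models without multicollinearity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
# Synthetic, deliberately collinear "survey" variables (hypothetical).
satisfaction = rng.normal(size=n)
loyalty = 0.9 * satisfaction + 0.1 * rng.normal(size=n)
recommend = 0.8 * satisfaction + 0.2 * rng.normal(size=n)
price_sensitivity = rng.normal(size=n)
X = np.column_stack([satisfaction, loyalty, recommend, price_sensitivity])

pca = PCA()
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
# The component scores are mutually uncorrelated, unlike the original variables.
off_diag = np.abs(np.corrcoef(scores, rowvar=False) - np.eye(4)).max()
print("largest |correlation| between components:", round(float(off_diag), 3))
```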

  18. Component mode synthesis in structural dynamics

    International Nuclear Information System (INIS)

    Reddy, G.R.; Vaze, K.K.; Kushwaha, H.S.

    1993-01-01

    In the seismic analysis of nuclear reactor structures and equipment, the eigen solution requires large computer time. Component mode synthesis is an efficient technique with which the dynamic characteristics of a large structure can be evaluated with minimum computer time. For this reason it becomes possible to perform a coupled analysis of structure and equipment that takes the interaction effects into account. Basically, in this method the large structure is divided into small substructures and the dynamic characteristics of each substructure are determined; the dynamic characteristics of the entire structure are then evaluated by synthesising the individual substructure characteristics. Component mode synthesis is applied in this paper to the analysis of a tall heavy water upgrading tower. The use of fixed-interface normal modes, constraint modes and attachment modes in component mode synthesis, based on an energy principle and on Ritz vectors, is discussed. The validity of the method is established by solving a fixed-fixed beam and comparing the results obtained with the conventional classical method. The eigenvalue problem is solved using the simultaneous iteration method. (author)
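
    A compact numerical sketch of fixed-interface synthesis (Craig-Bampton style) on a spring-mass chain split into two substructures; it only illustrates the general procedure described above with invented data, not the formulation used for the upgrading tower. Each substructure is reduced to a few fixed-interface normal modes plus a static constraint mode, the reduced substructures are assembled at the shared boundary DOF, and the lowest natural frequencies are compared with those of the full model.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction: kept fixed-interface normal modes plus static
    constraint modes. Returns reduced (K, M) ordered [modal coords, boundary DOFs]."""
    n = K.shape[0]
    interior = [i for i in range(n) if i not in boundary]
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    Psi = -np.linalg.solve(Kii, Kib)              # static constraint modes
    _, Phi = eigh(Kii, Mii)                       # fixed-interface normal modes
    Phi = Phi[:, :n_modes]                        # keep only the lowest few
    nb = len(boundary)
    T = np.block([[Phi, Psi],
                  [np.zeros((nb, n_modes)), np.eye(nb)]])
    order = interior + list(boundary)
    Ko, Mo = K[np.ix_(order, order)], M[np.ix_(order, order)]
    return T.T @ Ko @ T, T.T @ Mo @ T

# Full model: fixed-fixed chain of n unit masses and unit springs.
n, c, kept = 20, 10, 4                            # split after node c-1, keep 4 modes each
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
w_full = np.sqrt(eigh(K, M, eigvals_only=True)[:4])

# Substructure A: masses 0..c-1; the spring to the right of node c-1 belongs to B.
KA = 2.0 * np.eye(c) - np.eye(c, k=1) - np.eye(c, k=-1)
KA[c - 1, c - 1] = 1.0
MA = np.eye(c)                                    # the shared mass is assigned to A

# Substructure B: DOFs c-1..n-1; the shared boundary DOF is massless here.
nB = n - c + 1
KB = 2.0 * np.eye(nB) - np.eye(nB, k=1) - np.eye(nB, k=-1)
KB[0, 0] = 1.0
MB = np.eye(nB)
MB[0, 0] = 0.0

KrA, MrA = craig_bampton(KA, MA, boundary=[c - 1], n_modes=kept)
KrB, MrB = craig_bampton(KB, MB, boundary=[0], n_modes=kept)

# Assemble the two reduced substructures at the single shared boundary DOF.
size = 2 * kept + 1
idxA = list(range(kept)) + [2 * kept]
idxB = list(range(kept, 2 * kept)) + [2 * kept]
Kg, Mg = np.zeros((size, size)), np.zeros((size, size))
for idx, Kr, Mr in ((idxA, KrA, MrA), (idxB, KrB, MrB)):
    Kg[np.ix_(idx, idx)] += Kr
    Mg[np.ix_(idx, idx)] += Mr

w_cms = np.sqrt(eigh(Kg, Mg, eigvals_only=True)[:4])
print("lowest frequencies, full model:", w_full.round(4))
print("lowest frequencies, CMS model :", w_cms.round(4))
```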

  19. Electron Gun for Computer-controlled Welding of Small Components

    Czech Academy of Sciences Publication Activity Database

    Dupák, Jan; Vlček, Ivan; Zobač, Martin

    2001-01-01

    Roč. 62, 2-3 (2001), s. 159-164 ISSN 0042-207X R&D Projects: GA AV ČR IBS2065015 Institutional research plan: CEZ:AV0Z2065902 Keywords : Electron beam-welding machine * Electron gun * Computer-controlled beam Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.541, year: 2001

  20. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  1. Explorations in quantum computing

    CERN Document Server

    Williams, Colin P

    2011-01-01

    By the year 2020, the basic memory components of a computer will be the size of individual atoms. At such scales, the current theory of computation will become invalid. "Quantum computing" is reinventing the foundations of computer science and information theory in a way that is consistent with quantum physics - the most accurate model of reality currently known. Remarkably, this theory predicts that quantum computers can perform certain tasks breathtakingly faster than classical computers -- and, better yet, can accomplish mind-boggling feats such as teleporting information, breaking suppos

  2. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  3. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter is a reconfigurable component processing a data structure, representing the state transition diagram of a non-deterministic state machine, i.e. a Buchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns...

  4. Parsimonious classification of binary lacunarity data computed from food surface images using kernel principal component analysis and artificial neural networks.

    Science.gov (United States)

    Iqbal, Abdullah; Valous, Nektarios A; Sun, Da-Wen; Allen, Paul

    2011-02-01

    Lacunarity is about quantifying the degree of spatial heterogeneity in the visual texture of imagery through the identification of the relationships between patterns and their spatial configurations in a two-dimensional setting. The computed lacunarity data can designate a mathematical index of spatial heterogeneity, therefore the corresponding feature vectors should possess the necessary inter-class statistical properties that would enable them to be used for pattern recognition purposes. The objective of this study is to construct a supervised parsimonious classification model of binary lacunarity data, computed by Valous et al. (2009), from pork ham slice surface images, with the aid of kernel principal component analysis (KPCA) and artificial neural networks (ANNs), using a portion of informative salient features. At first, the dimension of the initial space (510 features) was reduced by 90% in order to avoid any noise effects in the subsequent classification. Then, using KPCA, the first nineteen kernel principal components (99.04% of total variance) were extracted from the reduced feature space, and were used as input in the ANN. An adaptive feedforward multilayer perceptron (MLP) classifier was employed to obtain a suitable mapping from the input dataset. The correct classification percentages for the training, test and validation sets were 86.7%, 86.7%, and 85.0%, respectively. The results confirm that the classification performance was satisfactory. The binary lacunarity spatial metric captured relevant information that provided a good level of differentiation among pork ham slice images. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
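
    A rough sklearn-based sketch of a KPCA-plus-MLP pipeline of the kind described, using synthetic stand-in features instead of the lacunarity vectors; the kernel, its parameters, the network size and the data are assumptions, and only the 51-feature and 19-component sizes echo the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the reduced feature vectors (the study retained 51 of
# 510 features and extracted 19 kernel principal components).
X, y = make_classification(n_samples=600, n_features=51, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=19, kernel="rbf", gamma=0.02),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 3))
```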

  5. Holistic and component plant phenotyping using temporal image sequence.

    Science.gov (United States)

    Das Choudhury, Sruti; Bashyam, Srinidhi; Qiu, Yumou; Samal, Ashok; Awada, Tala

    2018-01-01

    Image-based plant phenotyping facilitates the extraction of traits noninvasively by analyzing large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, total number of leaves present at any point of time and the growth of individual leaves during vegetative stage life cycle of the maize plants are significant phenotypic expressions that best contribute to assess the plant vigor. However, image-based automated solution to this novel problem is yet to be explored. A set of new holistic and component phenotypes are introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel method to reliably detect the leaves and the stem of the maize plants by analyzing 2-dimensional visible light image sequences captured from the side using a graph based approach. The total number of leaves are counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce University of Nebraska-Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotypes and environment (i.e., greenhouse) are experimentally demonstrated for the maize plants on UNL-CPPD. Statistical models are applied to analyze the greenhouse environment impact and demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1. The central contribution of the paper is a novel computer vision based algorithm for

  6. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing
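
    To see why correlation matters, the sketch below computes the failure probability of a small series system with correlated standard-normal safety margins directly from the multivariate normal CDF; this is a brute-force illustration of the underlying multi-fold integral, not the Equivalent Planes method itself, and the reliability indices and correlations are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Three components in series, each with reliability index beta = 3
# (marginal failure probability about 1.3e-3), equicorrelated safety margins.
beta = np.array([3.0, 3.0, 3.0])

def series_failure_prob(rho):
    """P(system fails) = 1 - P(all margins survive), margins ~ N(0,1), corr rho."""
    corr = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
    p_survive = multivariate_normal(mean=np.zeros(3), cov=corr).cdf(beta)
    return 1.0 - p_survive

p_single = norm.cdf(-3.0)
print("independent case, 1-(1-p)^3:", 1 - (1 - p_single) ** 3)
for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: P_f =", float(series_failure_prob(rho)))
# Strong positive correlation pulls the system failure probability down toward
# that of a single component, which is why correlation cannot be ignored.
```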

  7. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  8. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  9. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  10. Using Principal Component Analysis (PCA) to Speed up Radiative Transfer (RT) Computations

    Science.gov (United States)

    Natraj, Vijay

    2012-01-01

    Multiple scattering RT calculations are time-consuming; a speed improvement of about 1000 is needed (for OCO). The solution is to make use of redundancies in the spectra. Correlated-k methods (Lacis and Wang, Lacis and Oinas, Goody et al., Fu and Liou) do this, but their problem is the assumption that the spectral variation of atmospheric optical properties is spatially correlated at all points along the optical path. High-accuracy (HI) and 2-stream (2S) calculations are highly correlated, while single-scattering (SS) computations are highly scenario-dependent but not time-consuming. The approach is therefore to perform SS and 2S calculations at every wavelength, perform only a small number of HI computations, and compute a correction factor B at every wavelength.

  11. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  12. Computer Simulation of Material Flow in Warm-forming Bimetallic Components

    Science.gov (United States)

    Kong, T. F.; Chan, L. C.; Lee, T. C.

    2007-05-01

    Bimetallic components take advantage of two different metals or alloys so that their performance, weight and cost can be optimized. However, since each material has its own flow properties and mechanical behaviour, heterogeneous material flow occurs during the bimetal forming process, and the control of the process parameters is more complicated than when forming a single metal. Most previous studies of bimetal forming have focused mainly on cold forming, and less information is available about warm forming. Indeed, changes of temperature and heat transfer between the two materials are significant factors that strongly influence the success of the process. Therefore, this paper presents a study of the material flow in warm-forming bimetallic components using finite-element (FE) simulation in order to determine suitable process parameters for attaining complete die filling. A watch-case-like component made of stainless steel (AISI-316L) and aluminium alloy (AL-6063) was used as the example. The warm-forming processes were simulated with punch speeds V of 40, 80, and 120 mm/s and initial temperatures of the stainless steel TiSS of 625, 675, 725, 775, 825, 875, 925, 975, and 1025 °C. The results showed that the AL-6063 flowed faster than the AISI-316L, so incomplete die filling was only found in the AISI-316L region. A higher TiSS was recommended to avoid incomplete die filling. A reduction of V is also suggested because it saves forming energy and prevents damage to the tooling. With experimental verification, the simulation results were in agreement with the experiments. On the basis of this study, engineers can gain a better understanding of the material flow in warm-forming bimetallic components and determine more efficiently the punch speed and initial material temperature for the process.

  13. Diamond High Assurance Security Program: Trusted Computing Exemplar

    Science.gov (United States)

    2002-09-01

    …computing component, the Embedded MicroKernel Prototype. A third-party evaluation of the component will be initiated during development (e.g., once… target technologies and larger projects is a topic for future research). Trusted Computing Reference Component – The Embedded MicroKernel Prototype. We… Kernel… The primary security function of the Embedded MicroKernel will be to enforce process and data-domain separation, while providing primitive…

  14. Automatic procedures for computing complete configuration geometry from individual component descriptions

    Science.gov (United States)

    Barger, Raymond L.; Adams, Mary S.

    1994-01-01

    Procedures are derived for developing a complete airplane surface geometry starting from component descriptions. The procedures involve locating the intersection lines of adjacent components and omitting any regions for which part of one surface lies within the other. The geometry files utilize the wave-drag (Harris) format, and output files are written in Hess format. Two algorithms are used: one, if both intersecting surfaces have airfoil cross sections; the other, if one of the surfaces has circular cross sections. Some sample results in graphical form are included.

  15. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    Science.gov (United States)

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D (P < 0.05). The group B protocol could thus be used for calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    OpenAIRE

    P. O. Umenne; M. O. Odhiambo

    2012-01-01

    Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and when the task terminates, the Agents send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents’ ex...

  17. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
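
    In the spirit of the split-based validation methods described (though not the authors' exact procedure), the sketch below runs FastICA with a varying number of components on two random halves of a synthetic mixture, matches the extracted components by absolute correlation, and watches reproducibility degrade once the true dimension is exceeded; all data, parameters and thresholds are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n, p, true_k = 400, 30, 4
S = rng.laplace(size=(n, true_k))            # four non-Gaussian sources
A = rng.normal(size=(true_k, p))
X = S @ A + 0.05 * rng.normal(size=(n, p))   # mixed signals plus noise

def split_reproducibility(X, k, seed):
    """Run ICA with k components on two random halves of the samples and return
    the worst absolute correlation between best-matched feature-space components."""
    local_rng = np.random.default_rng(seed)
    idx = local_rng.permutation(len(X))
    halves = [X[idx[: len(X) // 2]], X[idx[len(X) // 2:]]]
    W1, W2 = [FastICA(n_components=k, random_state=seed, max_iter=1000)
              .fit(h).components_ for h in halves]
    corr = np.abs(np.corrcoef(np.vstack([W1, W2]))[:k, k:])
    return min(corr.max(axis=1).min(), corr.max(axis=0).min())

for k in range(2, 9):
    score = np.mean([split_reproducibility(X, k, s) for s in range(5)])
    print(f"k={k}: mean worst-case match = {score:.2f}")
# Reproducibility typically stays high up to the true dimension (4 here) and
# drops once extra, unstable components are extracted.
```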

  18. Generating Multimedia Components for M-Learning

    Directory of Open Access Journals (Sweden)

    Adriana REVEIU

    2009-01-01

    Full Text Available The paper proposes a solution for generating template-based multimedia components for instruction and learning, available both for computer-based applications and for mobile devices. The field of research is situated at the intersection of computer science, mobile tools and e-learning and is generically named mobile learning or M-learning. The research goal is to provide access to computer-based training resources from any location and to adapt the training content to the specific features of mobile devices, the communication environment, users' preferences and users' knowledge. To become important tools in the education field, the proposed technical solutions aim to exploit the potential of mobile devices.

  19. Computer modeling of flow induced in-reactor vibrations

    International Nuclear Information System (INIS)

    Turula, P.; Mulcahy, T.M.

    1977-01-01

    An assessment of the reliability of finite element method computer models, as applied to the computation of flow induced vibration response of components used in nuclear reactors, is presented. The prototype under consideration was the Fast Flux Test Facility reactor being constructed for US-ERDA. Data were available from an extensive test program which used a scale model simulating the hydraulic and structural characteristics of the prototype components, subjected to scaled prototypic flow conditions as well as to laboratory shaker excitations. Corresponding analytical solutions of the component vibration problems were obtained using the NASTRAN computer code. Modal analyses and response analyses were performed. The effect of the surrounding fluid was accounted for. Several possible forcing function definitions were considered. Results indicate that modal computations agree well with experimental data. Response amplitude comparisons are good only under conditions favorable to a clear definition of the structural and hydraulic properties affecting the component motion. 20 refs

  20. Spatio temporal media components for neurofeedback

    DEFF Research Database (Denmark)

    Jensen, Camilla Birgitte Falk; Petersen, Michael Kai; Larsen, Jakob Eg

    2013-01-01

    A class of Brain Computer Interfaces (BCI) involves interfaces for neurofeedback training, where a user can learn to self-regulate brain activity based on real-time feedback. These particular interfaces are constructed from audio-visual components and temporal settings, which appear to have a strong influence on the ability to control brain activity. Therefore, identifying the different interface components and exploring their individual effects might be key for constructing new interfaces that support more efficient neurofeedback training. We discuss experiments involving two different

  1. Computational Support for the Selection of Energy Saving Building Components

    NARCIS (Netherlands)

    De Wilde, P.J.C.J.

    2004-01-01

    Buildings use energy for heating, cooling and lighting, contributing to the problems of exhaustion of fossil fuel supplies and environmental pollution. In order to make buildings more energy-efficient an extensive set of 'energy saving building components' has been developed that contributes to

  2. All-optical reservoir computing.

    Science.gov (United States)

    Duport, François; Schneider, Bendix; Smerieri, Anteo; Haelterman, Marc; Massar, Serge

    2012-09-24

    Reservoir Computing is a novel computing paradigm that uses a nonlinear recurrent dynamical system to carry out information processing. Recent electronic and optoelectronic Reservoir Computers based on an architecture with a single nonlinear node and a delay loop have shown performance on standardized tasks comparable to state-of-the-art digital implementations. Here we report an all-optical implementation of a Reservoir Computer, made of off-the-shelf components for optical telecommunications. It uses the saturation of a semiconductor optical amplifier as nonlinearity. The present work shows that, within the Reservoir Computing paradigm, all-optical computing with state-of-the-art performance is possible.
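
    A discrete-time numerical stand-in for the single-nonlinear-node-plus-delay-loop architecture (a ring of virtual nodes with a tanh nonlinearity in place of the amplifier saturation; the parameters, input mask and task are illustrative assumptions, not the optical implementation), with a ridge-regression readout trained on a simple input-recall task.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 50, 3000, 100           # virtual nodes, time steps, discarded steps
mask = rng.uniform(-1, 1, N)            # input mask spreading u(t) over the virtual nodes
alpha, beta = 0.8, 0.5                  # feedback (ring) and input scaling

u = rng.uniform(0, 0.5, T)              # random input sequence
target = np.roll(u, 2)                  # toy task: recall the input from 2 steps ago

# Discrete-time stand-in for a single nonlinear node with a delay loop: the virtual
# nodes form a ring (each node is fed by its neighbour one step earlier), and a
# saturating nonlinearity (tanh) stands in for the amplifier saturation.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(alpha * np.roll(x, 1) + beta * mask * u[t])
    states[t] = x

# Linear readout trained by ridge regression on the collected virtual-node states.
S, y = states[washout:], target[washout:]
W = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
nmse = np.mean((S @ W - y) ** 2) / np.var(y)
print("normalized mean squared error on the recall task:", round(float(nmse), 4))
```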

  3. Computer-Aided Facilities Management Systems (CAFM).

    Science.gov (United States)

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  4. Embedded 100 Gbps Photonic Components:

    Energy Technology Data Exchange (ETDEWEB)

    Kuznia, Charlie

    2018-04-26

    This innovation to fiber optic component technology increases the performance, reduces the size and reduces the power consumption of optical communications within dense network systems, such as advanced distributed computing systems and data centers. VCSEL technology is enabling short-reach (< 100 m) and >100 Gbps optical interconnections over multi-mode fiber in commercial applications.

  5. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    Science.gov (United States)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar powered Rankine cycle air conditioning system (RCACS) components has been developed. The computer program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation and air conditioning components.

  6. Computational Design of Multi-component Bio-Inspired Bilayer Membranes

    Directory of Open Access Journals (Sweden)

    Evan Koufos

    2014-04-01

    Full Text Available Our investigation is motivated by the need to design bilayer membranes with tunable interfacial and mechanical properties for use in a range of applications, such as targeted drug delivery, sensing and imaging. We draw inspiration from biological cell membranes and focus on their principal constituents. In this paper, we present our results on the role of molecular architecture on the interfacial, structural and dynamical properties of bio-inspired membranes. We focus on four lipid architectures with variations in the head group shape and the hydrocarbon tail length. Each lipid species is composed of a hydrophilic head group and two hydrophobic tails. In addition, we study a model of the Cholesterol molecule to understand the interfacial properties of a bilayer membrane composed of rigid, single-tail molecular species. We demonstrate the properties of the bilayer membranes to be determined by the molecular architecture and rigidity of the constituent species. Finally, we demonstrate the formation of a stable mixed bilayer membrane composed of Cholesterol and one of the phospholipid species. Our approach can be adopted to design multi-component bilayer membranes with tunable interfacial and mechanical properties. We use a Molecular Dynamics-based mesoscopic simulation technique called Dissipative Particle Dynamics that resolves the molecular details of the components through soft-sphere coarse-grained models and reproduces the hydrodynamic behavior of the system over extended time scales.

  7. Efficient implementation of one- and two-component analytical energy gradients in exact two-component theory

    Science.gov (United States)

    Franzke, Yannick J.; Middendorf, Nils; Weigend, Florian

    2018-03-01

    We present an efficient algorithm for one- and two-component analytical energy gradients with respect to nuclear displacements in the exact two-component decoupling approach to the one-electron Dirac equation (X2C). Our approach is a generalization of the spin-free ansatz by Cheng and Gauss [J. Chem. Phys. 135, 084114 (2011)], where the perturbed one-electron Hamiltonian is calculated by solving a first-order response equation. Computational costs are drastically reduced by applying the diagonal local approximation to the unitary decoupling transformation (DLU) [D. Peng and M. Reiher, J. Chem. Phys. 136, 244108 (2012)] to the X2C Hamiltonian. The introduced error is found to be almost negligible as the mean absolute error of the optimized structures amounts to only 0.01 pm. Our implementation in TURBOMOLE is also available within the finite nucleus model based on a Gaussian charge distribution. For a X2C/DLU gradient calculation, computational effort scales cubically with the molecular size, while storage increases quadratically. The efficiency is demonstrated in calculations of large silver clusters and organometallic iridium complexes.

  8. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    At present the development process of nuclear components, and even the nuclear components themselves, is more and more supported by computer technology. This increasing utilization of computers and software has led to the faster development of nuclear technology on the one hand, and has brought new problems on the other. In particular, the combination of hardware, software and humans has increased nuclear component system complexities to an unprecedented level. To solve this problem, Life Cycle Management technology is adopted for nuclear component systems. Hence, an intensive discussion of the development process of a nuclear component is proposed. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of the components, the long design period, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, the development life-cycle model of a nuclear component is presented. The development life-cycle model is classified into three levels, namely the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on systems engineering and a development environment for nuclear components are discussed as future research work. (authors)

  9. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than the conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
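
    A toy illustration of the trace-back idea over component behaviour tables (the component names, deviations and failure modes are hypothetical, and the data structures are simplified relative to the function and state-transition tables of the paper): starting from a top event, each output deviation is expanded through an OR gate into the component's basic failure modes and into deviations propagated from upstream components.

```python
# Each table maps an output deviation of a component to its possible causes:
# either a basic failure event of that component, or a deviation propagated
# from an upstream component. All entries are invented for illustration.
system = {
    "pump": {"no_flow": [("basic", "pump stuck"),
                         ("upstream", "valve", "closed")]},
    "valve": {"closed": [("basic", "valve fails closed"),
                         ("upstream", "controller", "no_signal")]},
    "controller": {"no_signal": [("basic", "controller power loss")]},
}

def trace_back(component, deviation, depth=0):
    """Recursively expand the causes of a deviation into an OR-structured fault tree."""
    indent = "  " * depth
    print(f"{indent}OR gate: {component} -> {deviation}")
    for cause in system[component][deviation]:
        if cause[0] == "basic":
            print(f"{indent}  basic event: {cause[1]}")
        else:                          # deviation propagated from an upstream component
            trace_back(cause[1], cause[2], depth + 1)

# Top event: no flow delivered by the pump.
trace_back("pump", "no_flow")
```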

  10. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  11. Conductivity of two-component systems

    Energy Technology Data Exchange (ETDEWEB)

    Kuijper, A. de; Hofman, J.P.; Waal, J.A. de [Shell Research BV, Rijswijk (Netherlands). Koninklijke/Shell Exploratie en Productie Lab.; Sandor, R.K.J. [Shell International Petroleum Maatschappij, The Hague (Netherlands)

    1996-01-01

    The authors present measurements and computer simulation results on the electrical conductivity of nonconducting grains embedded in a conductive brine host. The shapes of the grains ranged from prolate-ellipsoidal (with an axis ratio of 5:1) through spherical to oblate-ellipsoidal (with an axis ratio of 1:5). The conductivity was studied as a function of porosity and packing, and Archie's cementation exponent was found to depend on porosity. They used spatially regular and random configurations with aligned and nonaligned packings. The experimental results agree well with the computer simulation data. This data set will enable extensive tests of models for calculating the anisotropic conductivity of two-component systems.

  12. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  13. Technology Acceptance and User Experience: A Review of the Experiential Component in HCI

    DEFF Research Database (Denmark)

    Hornbæk, Kasper; Hertzum, Morten

    2017-01-01

    Understanding the mechanisms that shape the adoption and use of information technology is central to human-computer interaction. Two accounts are particularly vocal about these mechanisms, namely the technology acceptance model (TAM) and work on user experience (UX) models. In this study we review 37 papers in the overlap between TAM and UX models to explore the experiential component of human-computer interactions. The models provide rich insights about what constructs influence the experiential component of human-computer interactions and about how these constructs are related. For example...

  14. Berends-Giele recursions and the BCJ duality in superspace and components

    Energy Technology Data Exchange (ETDEWEB)

    Mafra, Carlos R. [Institute for Advanced Study, School of Natural Sciences,Einstein Drive, Princeton, NJ 08540 (United States); DAMTP, University of Cambridge,Wilberforce Road, Cambridge, CB3 0WA (United Kingdom); Schlotterer, Oliver [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut,Am Muehlenberg, 14476 Potsdam (Germany)

    2016-03-15

    The recursive method of Berends and Giele to compute tree-level gluon amplitudes is revisited using the framework of ten-dimensional super Yang-Mills. First, we prove that the pure spinor formula to compute SYM tree amplitudes derived in 2010 reduces to the standard Berends-Giele formula from the 80s when restricted to gluon amplitudes and additionally determine the fermionic completion. Second, using BRST cohomology manipulations in superspace, alternative representations of the component amplitudes are explored and the Bern-Carrasco-Johansson relations among partial tree amplitudes are derived in a novel way. Finally, it is shown how the supersymmetric components of manifestly local BCJ-satisfying tree-level numerators can be computed in a recursive fashion.

  15. Berends-Giele recursions and the BCJ duality in superspace and components

    International Nuclear Information System (INIS)

    Mafra, Carlos R.; Schlotterer, Oliver

    2016-01-01

    The recursive method of Berends and Giele to compute tree-level gluon amplitudes is revisited using the framework of ten-dimensional super Yang-Mills. First, we prove that the pure spinor formula to compute SYM tree amplitudes derived in 2010 reduces to the standard Berends-Giele formula from the 80s when restricted to gluon amplitudes and additionally determine the fermionic completion. Second, using BRST cohomology manipulations in superspace, alternative representations of the component amplitudes are explored and the Bern-Carrasco-Johansson relations among partial tree amplitudes are derived in a novel way. Finally, it is shown how the supersymmetric components of manifestly local BCJ-satisfying tree-level numerators can be computed in a recursive fashion.

  16. A refinement driven component-based design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2007-01-01

    the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may be integrated in computer-aided software engineering (CASE) tools for adding formally supported...

  17. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
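
    The stability-ranking step at the core of this protocol can be illustrated in a few lines of Python: run ICA several times with different random seeds and score each component of a reference run by its best match in every other run. This is only a rough sketch of the idea described above, not the authors' implementation; the random data matrix, the number of runs and the number of components are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))      # stand-in for an expression matrix (observations x features)

        def ica_runs(X, n_components, n_runs=10):
            # Collect the source matrices of several randomly seeded ICA runs.
            sources = []
            for seed in range(n_runs):
                ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
                sources.append(ica.fit_transform(X))   # shape: (n_observations, n_components)
            return sources

        def stability_scores(sources):
            # Score each component of the first run by its best absolute correlation
            # with any component of every other run, averaged over runs.
            ref = sources[0]
            k = ref.shape[1]
            scores = np.zeros(k)
            for other in sources[1:]:
                corr = np.abs(np.corrcoef(ref.T, other.T)[:k, k:])
                scores += corr.max(axis=1)
            return scores / (len(sources) - 1)

        scores = stability_scores(ica_runs(X, n_components=10))
        ranking = np.argsort(scores)[::-1]              # most stable components first
        print(ranking, np.round(scores[ranking], 2))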

  18. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  19. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  20. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from his or her facial features and resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
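
    The pipeline described above, wavelet decomposition for feature extraction followed by principal component analysis for representation, can be sketched as follows. This is a hedged illustration, not the authors' MATLAB code: the random stand-in images, the choice of Haar wavelet, the decomposition level and the number of retained components are all assumptions.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        faces = rng.random((40, 64, 64))          # stand-in for 40 grayscale face images

        def wavelet_features(img, wavelet="haar", level=2):
            # Keep only the low-frequency approximation sub-band of a 2-D wavelet decomposition.
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            return coeffs[0].ravel()

        X = np.array([wavelet_features(f) for f in faces])   # reduced feature vectors

        pca = PCA(n_components=20)
        features = pca.fit_transform(X)           # compact "eigenface"-style representation
        print(features.shape)                     # (40, 20)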

  1. Comparison of Component Frameworks for Real-Time Embedded Systems

    Czech Academy of Sciences Publication Activity Database

    Pop, T.; Hnětynka, P.; Hošek, P.; Malohlava, M.; Bureš, Tomáš

    2014-01-01

    Roč. 40, č. 1 (2014), s. 127-170 ISSN 0219-1377 Grant - others:GA AV ČR(CZ) GAP202/11/0312; GA UK(CZ) Project 378111; UK(CZ) SVV-2013- 267312 Keywords : component-based development * component frameworks * real-time and embedded systems Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.782, year: 2014

  2. Behavior Protocols for Software Components

    Czech Academy of Sciences Publication Activity Database

    Plášil, František; Višňovský, Stanislav

    2002-01-01

    Roč. 28, č. 11 (2002), s. 1056-1076 ISSN 0098-5589 R&D Projects: GA AV ČR IAA2030902; GA ČR GA201/99/0244 Grant - others:Eureka(XE) Pepita project no.2033 Institutional research plan: AV0Z1030915 Keywords : behavior protocols * component-based programming * software architecture Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.170, year: 2002

  3. 78 FR 1247 - Certain Electronic Devices, Including Wireless Communication Devices, Tablet Computers, Media...

    Science.gov (United States)

    2013-01-08

    ... Wireless Communication Devices, Tablet Computers, Media Players, and Televisions, and Components Thereof... devices, including wireless communication devices, tablet computers, media players, and televisions, and... wireless communication devices, tablet computers, media players, and televisions, and components thereof...

  4. Computational Toxicology as Implemented by the US EPA ...

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  5. Modelling and computer simulation for the manufacture by powder HIPing of Blanket Shield components for ITER

    International Nuclear Information System (INIS)

    Gillia, O.; Bucci, Ph.; Vidotto, F.; Leibold, J.-M.; Boireau, B.; Boudot, C.; Cottin, A.; Lorenzetto, P.; Jacquinot, F.

    2006-01-01

    In components of blanket modules for ITER, intricate cooling networks are needed in order to evacuate all heat coming from the plasma. Hot Isostatic Pressing (HIPing) technology is a very convenient method to produce near net shape components with complex cooling networks through massive stainless steel parts by bonding together tubes inserted in grooves machined in bulk stainless steel. Powder is often included in the process to ease difficulties arising from gap closure between tube and solid part or between several solid parts. At the same time, it relaxes the machining precision needed on the parts to be assembled before HIP. However, inserting powder in the assembly means densification, i.e. volume change of the powder during the HIP cycle. This leads to global and local shape changes of HIPed parts. In order to control the deformations, modelling and computer simulation are used. This modelling and computer simulation work has been done in support of the fabrication of a shield prototype for the ITER blanket. Problems such as global bending of the whole part and deformation of tubes in their powder bed are addressed. It is important that the part does not bend too much. It is also important to retain a circular tube shape after HIP, firstly to avoid tube rupture during HIP but also because non-destructive ultrasonic examination is needed to check the quality of the densification and bonding between tube and powder or solid parts; the insertion of a probe into the tubes requires the tubes to remain sufficiently circular. For simulation purposes, the behaviour of the different materials has to be modelled. Although the modelling of the massive stainless steel behaviour is not neglected, the most critical modelling is that of the powder. For this study, a thorough investigation of the powder behaviour has been performed with some in-situ HIP dilatometry experiments and some interrupted HIP cycles on trial parts. These experiments have allowed the identification of a

  6. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.

  7. Design and implementation of component reliability database management system for NPP

    International Nuclear Information System (INIS)

    Kim, S. H.; Jung, J. K.; Choi, S. Y.; Lee, Y. H.; Han, S. H.

    1999-01-01

    KAERI is constructing a component reliability database for Korean nuclear power plants. This paper describes the development of a data management tool for the component reliability database. The tool runs in an intranet environment and is used to analyze failure modes and failure severity in order to compute component failure rates. Additional modules are now being developed to manage operation and test histories, together with algorithms for calculating component failure history and reliability
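
    A minimal sketch of the kind of failure-rate calculation such a database management tool supports is given below; the component records and the simple maximum-likelihood point estimate are illustrative assumptions, not KAERI's data or algorithms.

        failure_records = [
            {"component": "pump-A", "failures": 2, "operating_hours": 35000.0},
            {"component": "valve-B", "failures": 0, "operating_hours": 52000.0},
        ]

        for rec in failure_records:
            # Simple maximum-likelihood point estimate of the failure rate; zero-failure
            # cases are usually treated with a Bayesian or chi-square bound instead (not shown).
            rate = rec["failures"] / rec["operating_hours"]
            print(rec["component"], f"{rate:.2e} failures per hour")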

  8. Verification of Software Components: Addressing Unbounded Paralelism

    Czech Academy of Sciences Publication Activity Database

    Adámek, Jiří

    2007-01-01

    Roč. 8, č. 2 (2007), s. 300-309 ISSN 1525-9293 R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords : software components * formal verification * unbounded parallelism Subject RIV: JC - Computer Hardware ; Software

  9. The common component architecture for particle accelerator simulations

    International Nuclear Information System (INIS)

    Dechow, D.R.; Norris, B.; Amundson, J.

    2007-01-01

    Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package comprised of two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and an example component-based, beam dynamics simulation.

  10. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made on pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  11. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR

  12. Galaxy Makers Exhibition: Re-engagement, Evaluation and Content Legacy through an Online Component

    Science.gov (United States)

    Borrow, J.; Harrison, C.

    2017-09-01

    For the Royal Society Summer Science Exhibition 2016, Durham University's Institute of Computational Cosmology created the Galaxy Makers exhibit to communicate our computational cosmology and astronomy research. In addition to the physical exhibit we created an online component to foster re-engagement, create a permanent home for our content and allow us to collect important information about participation and impact. Here we summarise the details of the exhibit and the degree of success attached to the online component. We also share suggestions for further uses and improvements that could be implemented for the online components of other science exhibitions.

  13. Cloud computing: An innovative tool for library services

    OpenAIRE

    Sahu, R.

    2015-01-01

    Cloud computing is an emerging information and communication technology offering potential benefits such as reduced cost, anywhere/anytime accessibility, elasticity and flexibility. This paper defines cloud computing, outlines its essential characteristics, models and components, discusses its advantages and drawbacks, and describes cloud computing in libraries.

  14. Imprecise system reliability and component importance based on survival signature

    International Nuclear Information System (INIS)

    Feng, Geng; Patelli, Edoardo; Beer, Michael; Coolen, Frank P.A.

    2016-01-01

    The concept of the survival signature has recently attracted increasing attention for performing reliability analysis on systems with multiple types of components. It opens a new pathway for a structured approach with high computational efficiency based on a complete probabilistic description of the system. In practical applications, however, some of the parameters of the system might not be defined completely due to limited data, which implies the need to take imprecision of component specifications into account. This paper presents a methodology to include the imprecision explicitly, which leads to upper and lower bounds on the survival function of the system. In addition, the approach introduces novel and efficient component importance measures. By implementing a relative importance index for each component, with or without imprecision, the most critical component in the system can be identified depending on the service time of the system. A simulation method based on the survival signature is introduced to deal with imprecision within components; it is precise and efficient. A numerical example is presented to show the applicability of the approach. - Highlights: • The survival signature is a novel approach to system reliability and component importance. • High computational efficiency based on a complete description of the system. • Imprecision is included explicitly, which leads to bounds on the survival function. • A novel relative importance index is proposed as an importance measure. • Critical components can be identified depending on the service time of the system.
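
    For a single component type, the survival signature gives the system survival function as P(T_sys > t) = Σ_l Φ(l) C(m, l) R(t)^l (1 - R(t))^(m - l). The sketch below illustrates this for an assumed 2-out-of-3 system with exponential components; it is a toy example, not the imprecise formulation or the simulation method of the paper.

        import numpy as np
        from math import comb

        # Survival signature Phi(l): probability the system works given that exactly l of the
        # m exchangeable components work (here a 2-out-of-3 system, so m = 3).
        m = 3
        phi = {0: 0.0, 1: 0.0, 2: 1.0, 3: 1.0}

        def component_survival(t, rate=1e-3):
            # Assumed exponential component survival function.
            return np.exp(-rate * t)

        def system_survival(t):
            r = component_survival(t)
            return sum(phi[l] * comb(m, l) * r**l * (1 - r)**(m - l) for l in range(m + 1))

        for t in (100.0, 500.0, 1000.0):
            print(t, round(float(system_survival(t)), 4))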

  15. Spatial Variation of Magnetotelluric Field Components in simple 2D ...

    African Journals Online (AJOL)

    The E-polarization mode electromagnetic field components were computed for different aspect ratios of the inhomogeneity and for different frequencies of the incident waves. The results show that as aspect ratio of the inhomogeneity is reduced the spatial variation of the electric field component Ex is reduced and that of the ...

  16. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  17. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  18. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  19. Evaluation of runaway-electron effects on plasma-facing components for NET

    Science.gov (United States)

    Bolt, H.; Calén, H.

    1991-03-01

    Runaway electrons which are generated during disruptions can cause serious damage to plasma-facing components in a next generation device like NET. A study was performed to quantify the response of NET plasma-facing components to runaway-electron impact. Monte Carlo computations were performed to determine the energy deposition in the component materials. Since the subsurface metal structures can be strongly heated under runaway-electron impact, damage threshold values for the thermal excursions were derived from the computed results. These damage thresholds are strongly dependent on the materials selection and the component design. For a carbon-molybdenum divertor with 10 and 20 mm carbon armour thickness and 1 degree electron incidence the damage thresholds are 100 MJ/m² and 220 MJ/m², respectively. The thresholds for a carbon-copper divertor under the same conditions are about 50% lower. On the first wall, damage is anticipated for energy depositions above 180 MJ/m².

  20. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of the human society in the 21st century. Economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of the human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  1. Computers for Manned Space Applications Based on Commercial Off-the-Shelf Components

    Science.gov (United States)

    Vogel, T.; Gronowski, M.

    2009-05-01

    As in the consumer markets, there has been an ever increasing demand in processing power, signal processing capabilities and memory space also for computers used for science data processing in space. An important driver of this development has been the payload developers for the International Space Station, requesting high-speed data acquisition and fast control loops in increasingly complex systems. Current experiments now even perform video processing and compression with their payload controllers. Nowadays the requirements for a space qualified computer are often far beyond the capabilities of, for example, the classic SPARC architecture that is found in ERC32 or LEON CPUs. An increase in performance usually demands costly and power consuming application specific solutions. Continuous developments over the last few years have now led to an alternative approach that is based on complete electronics modules manufactured for commercial and industrial customers. Computer modules used in industrial environments with a high demand for reliability under harsh environmental conditions like chemical reactors, electrical power plants or on manufacturing lines are entered into a selection procedure. Promising candidates then undergo a detailed characterisation process developed by Astrium Space Transportation. After thorough analysis and some modifications, these modules can replace fully qualified custom built electronics in specific, although not safety critical applications in manned space. This paper focuses on the benefits of COTS-based electronics modules and the necessary analyses and modifications for their utilisation in manned space applications on the ISS. Some considerations regarding overall systems architecture will also be included. Furthermore this paper will also pinpoint issues that render such modules unsuitable for specific tasks, and justify the reasons. Finally, the conclusion of this paper will advocate the implementation of COTS-based

  2. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  3. Flavour components of some processed. fish and fishery products of Japan

    OpenAIRE

    Mansur, M.A.; Hossain, M.I.; Takamuro, H.; Matoba, T.

    2002-01-01

    A study was conducted to examine the flavour components of some processed fish and fishery products of Japan by gas chromatography-mass spectrometry (GC-MS). In brief, the headspace volatiles were absorbed at 70°C onto the fused silica fibre of a solid-phase microextraction (SPME) needle. The absorbed components were injected into the GC-MS. The components were identified by computer matching with a library database as well as by authentic standard components. In gen...

  4. 77 FR 51571 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Science.gov (United States)

    2012-08-24

    ... Music and Data Processing Devices, Computers, and Components Thereof; Notice of Receipt of Complaint... complaint entitled Wireless Communication Devices, Portable Music and Data Processing Devices, Computers..., portable music and data processing devices, computers, and components thereof. The complaint names as...

  5. Computer-oriented approach to fault-tree construction

    International Nuclear Information System (INIS)

    Salem, S.L.; Apostolakis, G.E.; Okrent, D.

    1976-11-01

    A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the Computer Automated Tree (CAT) program, to several systems. A means of representing component behavior by decision tables is presented. The method developed allows the modeling of components with various combinations of electrical, fluid and mechanical inputs and outputs. Each component can have multiple internal failure mechanisms which combine with the states of the inputs to produce the appropriate output states. The generality of this approach allows not only the modeling of hardware, but human actions and interactions as well. A procedure for constructing and editing fault trees, either manually or by computer, is described. The techniques employed result in a complete fault tree, in standard form, suitable for analysis by current computer codes. Methods of describing the system, defining boundary conditions and specifying complex TOP events are developed in order to set up the initial configuration for which the fault tree is to be constructed. The approach used allows rapid modifications of the decision tables and systems to facilitate the analysis and comparison of various refinements and changes in the system configuration and component modeling
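
    A minimal sketch of what a decision-table component model can look like is given below; the valve, its states and its failure modes are illustrative assumptions rather than the CAT program's actual tables. Enumerating the rows that produce a given output state is the step that automated fault tree construction builds on.

        # Decision table for a simple valve: (input flow state, internal mode) -> output flow state.
        decision_table = {
            ("flow",    "ok"):           "flow",
            ("flow",    "stuck_closed"): "no_flow",
            ("flow",    "ruptured"):     "no_flow",
            ("no_flow", "ok"):           "no_flow",
            ("no_flow", "stuck_closed"): "no_flow",
            ("no_flow", "ruptured"):     "no_flow",
        }

        def causes_of(output_state):
            # Enumerate the (input state, failure mode) combinations that yield an output state;
            # working backwards from an undesired output is how a fault tree gate is populated.
            return [key for key, out in decision_table.items() if out == output_state]

        print(causes_of("no_flow"))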

  6. On several computer-oriented studies

    International Nuclear Information System (INIS)

    Takahashi, Ryoichi

    1982-01-01

    To fully utilize digital techniques for solving various difficult problems, nuclear engineers have recourse to computer-oriented approaches. Current trends in such fields as optimization theory, control system theory and computational fluid dynamics reflect the ability to use computers to obtain numerical solutions to complex problems. Special-purpose computers will be used as an integral part of the solving system to process large amounts of data, to implement a control law and even to support decision-making. Many problem-solving systems designed in the future will incorporate special-purpose computers as system components. The optimum use of computer systems is discussed: why an energy model, an energy database and a big computer are used; why the economic process computer will be allocated to nuclear plants in the future; why the supercomputer should be demonstrated at once. (Mori, K.)

  7. Computed Radiography: An Innovative Inspection Technique

    International Nuclear Information System (INIS)

    Klein, William A.; Councill, Donald L.

    2002-01-01

    Florida Power and Light Company's (FPL) Nuclear Division combined two diverse technologies to create an innovative inspection technique, Computed Radiography, that improves personnel safety and unit reliability while reducing inspection costs. This technique was pioneered in the medical field and applied in the Nuclear Division initially to detect piping degradation due to flow-accelerated corrosion. Component degradation can be detected by this additional technique. This approach permits FPL to reduce inspection costs, perform on line examinations (no generation curtailment), and to maintain or improve both personnel safety and unit reliability. Computed Radiography is a very versatile tool capable of other uses: - improving the external corrosion program by permitting inspections underneath insulation, and - diagnosing system and component problems such as valve positions, without the need to shutdown or disassemble the component. (authors)

  8. Outline of a novel architecture for cortical computation

    OpenAIRE

    Majumdar, Kaushik

    2007-01-01

    In this paper a novel architecture for cortical computation has been proposed. This architecture is composed of computing paths consisting of neurons and synapses only. These paths have been decomposed into lateral, longitudinal and vertical components. Cortical computation has then been decomposed into lateral computation (LaC), longitudinal computation (LoC) and vertical computation (VeC). It has been shown that various loop structures in the cortical circuit play important roles in cortica...

  9. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
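
    The idea of pulling the response sensitivity backwards through a chain of components can be sketched with plain matrix algebra: one adjoint (vector-Jacobian) product per component, independent of the number of inputs. The Jacobians below are random stand-ins and the two-component chain is an assumption; this illustrates the general adjoint chaining idea, not the paper's specific method.

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(4, 6))   # Jacobian dy/dx of component 1 (6 inputs -> 4 outputs)
        B = rng.normal(size=(1, 4))   # Jacobian dr/dy of component 2 (4 inputs -> 1 response)

        # Adjoint pass: start from the response and pull the sensitivity back through each
        # component with one vector-Jacobian product, independent of the number of inputs.
        dr_dy = B                     # shape (1, 4)
        dr_dx = dr_dy @ A             # shape (1, 6): sensitivity of the response to every input

        print(np.round(dr_dx, 3))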

  10. ROLE OF VIRTUALIZATION IN CLOUD COMPUTING

    OpenAIRE

    Avneet kaur; Dr. Gaurav Gupta; Dr. Gurjit Singh Bhathal

    2017-01-01

    Cloud computing is a fundamental change happening in the field of information technology. Virtualization is the key component of cloud computing. With the use of virtualization, cloud computing brings about not only convenience and efficiency benefits, but also great challenges in the field of data security and privacy protection. In this paper, we discuss virtualization, the architecture of virtualization technology and the Virtual Machine Monitor (VMM). Further discussing ab...

  11. Using TinyOS Components for the Design of an Adaptive Ubiquitous System

    NARCIS (Netherlands)

    Kaya, O.S.; Durmaz, O.; Dulman, S.O.; Gemesi, R.; Jansen, P.G.; Havinga, Paul J.M.

    2005-01-01

    This work is an initiative attempt toward component-based software engineering in ubiquitous computing systems. Software components cooperate in a distributed manner to meet a demand, and adapt their software bindings during run-time depending on the context information. There are two main research

  12. Using TinyOS Components for the Design of an Adaptive Ubiquitous System

    NARCIS (Netherlands)

    Kaya, O.S.; Durmaz, O.; Dulman, S.O.; Gemesi, R.; Jansen, P.G.; Havinga, Paul J.M.

    This work is an initiative attempt toward component-based software engineering in ubiquitous computing systems. Software components cooperate in a distributed manner to meet a demand, and adapt their software bindings during run-time depending on the context information. There are two main research

  13. Addressing Cloud Computing in Enterprise Architecture: Issues and Challenges

    OpenAIRE

    Khan, Khaled; Gangavarapu, Narendra

    2009-01-01

    This article discusses how the characteristics of cloud computing affect the enterprise architecture in four domains: business, data, application and technology. The ownership and control of architectural components are shifted from organisational perimeters to cloud providers. It argues that although cloud computing promises numerous benefits to enterprises, the shifting control from enterprises to cloud providers on architectural components introduces several architectural challenges. The d...

  14. Green IT engineering components, networks and systems implementation

    CERN Document Server

    Kondratenko, Yuriy; Kacprzyk, Janusz

    2017-01-01

    This book presents modern approaches to improving the energy efficiency, safety and environmental performance of industrial processes and products, based on the application of advanced trends in Green Information Technologies (IT) Engineering to components, networks and complex systems (software, programmable and hardware components, communications, Cloud and IoT-based systems, as well as IT infrastructures). The book’s 16 chapters, prepared by authors from Greece, Malaysia, Russia, Slovakia, Ukraine and the United Kingdom, are grouped into four sections: (1) The Green Internet of Things, Cloud Computing and Data Mining, (2) Green Mobile and Embedded Control Systems, (3) Green Logic and FPGA Design, and (4) Green IT for Industry and Smart Grids. The book will motivate researchers and engineers from different IT domains to develop, implement and propagate green values in complex systems. Further, it will benefit all scientists and graduate students pursuing research in computer science with a focus on green ...

  15. Verification of the component accuracy prediction obtained by physical modelling and the elastic simulation of the die/component interaction

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Andersen, Claus Bo; Wanheim, Tarras

    2001-01-01

    There are three demands on a component that must undergo a die-cavity elasticity analysis. The demands on the product are specified as: (i) to be able to measure the loading profile which results in elastic die-cavity deflections; (ii) to be able to compute the elastic deflections using FE; (iii...

  16. Reliability-based sensitivity of mechanical components with arbitrary distribution parameters

    International Nuclear Information System (INIS)

    Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling

    2010-01-01

    This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, the reliability-based design theory, and the sensitivity analysis approach were employed directly to calculate the reliability-based sensitivity of mechanical components on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of the mechanical components can be accurately and quickly obtained using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretic basis for the reliability-based design of mechanical components
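
    As a hedged illustration of reliability-based sensitivity, and not the authors' perturbation/Edgeworth-series method, the sketch below uses a first-order second-moment model of a strength-minus-load limit state with normal variables; all parameter values are assumptions.

        import numpy as np
        from scipy.stats import norm

        mu_R, sig_R = 320.0, 25.0     # strength mean / standard deviation (assumed)
        mu_S, sig_S = 250.0, 30.0     # load mean / standard deviation (assumed)

        sig = np.hypot(sig_R, sig_S)
        beta = (mu_R - mu_S) / sig    # reliability index of the limit state g = R - S
        reliability = norm.cdf(beta)

        # Analytical sensitivities of the reliability index to two design parameters,
        # and (via the chain rule) of the reliability itself.
        dbeta_dmu_R = 1.0 / sig
        dbeta_dsig_R = -(mu_R - mu_S) * sig_R / sig**3
        dR_dmu_R = norm.pdf(beta) * dbeta_dmu_R

        print(round(float(beta), 3), round(float(reliability), 4), round(float(dR_dmu_R), 5))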

  17. The DC field components of horizontal and vertical electric dipole sources immersed in three-layered stratified media

    Directory of Open Access Journals (Sweden)

    D. Llanwyn Jones

    1997-04-01

    Full Text Available Formulas for computing the Cartesian components of the static (DC) fields of horizontal electric dipoles (HEDs) and vertical electric dipoles (VEDs) located in the central zone of a three-layer horizontally stratified medium are derived and presented in a summary form suitable for immediate computation. Formulas are given for the electric and magnetic field components in the upper and central regions. In the general case the computation involves the summation of a convergent infinite series. For the particular case of an infinitely thick central region (corresponding to the two-layer problem), the analysis produces relatively simple closed-form equations for the field components which are suitable for a 'hand calculation'. Specimen calculations for dipoles in seawater are included and the derived results are compared with computations made using an AC model.

  18. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components. (orig.)

  19. Transactions in Software Components: Container-Interposed Transactions

    Czech Academy of Sciences Publication Activity Database

    Procházka, M.; Plášil, František

    2002-01-01

    Roč. 3, č. 2 (2002), s. - ISSN 1525-9293 R&D Projects: GA ČR GA201/99/0244; GA AV ČR IAA2030902 Institutional research plan: AV0Z1030915 Keywords : transactions * component-based software architectures * transaction propagation policy * transaction attributes * container-interposed transactions Subject RIV: JC - Computer Hardware ; Software

  20. Implementation of computer security at nuclear facilities in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Lochthofen, Andre; Sommer, Dagmar [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2013-07-01

    In recent years, electrical and I and C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems also the threat of malevolent interferences and cyber-attacks on NPPs has increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples for the implementation of computer security projects based on a GRS-best-practice-approach are shown. (orig.)

  1. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I and C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems also the threat of malevolent interferences and cyber-attacks on NPPs has increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples for the implementation of computer security projects based on a GRS-best-practice-approach are shown. (orig.)

  2. Lifetime analysis of fusion-reactor components

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1983-01-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO

  3. How to Build a Quantum Computer

    Science.gov (United States)

    Sanders, Barry C.

    2017-11-01

    Quantum computer technology is progressing rapidly with dozens of qubits and hundreds of quantum logic gates now possible. Although current quantum computer technology is distant from being able to solve computational problems beyond the reach of non-quantum computers, experiments have progressed well beyond simply demonstrating the requisite components. We can now operate small quantum logic processors with connected networks of qubits and quantum logic gates, which is a great stride towards functioning quantum computers. This book aims to be accessible to a broad audience with basic knowledge of computers, electronics and physics. The goal is to convey key notions relevant to building quantum computers and to present state-of-the-art quantum-computer research in various media such as trapped ions, superconducting circuits, photonics and beyond.

  4. Optimal Component Lumping: problem formulation and solution techniques

    DEFF Research Database (Denmark)

    Lin, Bao; Leibovici, Claude F.; Jørgensen, Sten Bay

    2008-01-01

    This paper presents a systematic method for optimal lumping of a large number of components in order to minimize the loss of information. In principle, a rigorous composition-based model is preferable to describe a system accurately. However, computational intensity and numerical issues restrict ...

  5. Introducing Cloud Computing Topics in Curricula

    Science.gov (United States)

    Chen, Ling; Liu, Yang; Gallagher, Marcus; Pailthorpe, Bernard; Sadiq, Shazia; Shen, Heng Tao; Li, Xue

    2012-01-01

    The demand for graduates with exposure in Cloud Computing is on the rise. For many educational institutions, the challenge is to decide on how to incorporate appropriate cloud-based technologies into their curricula. In this paper, we describe our design and experiences of integrating Cloud Computing components into seven third/fourth-year…

  6. Intelligent Component Monitoring for Nuclear Power Plants

    International Nuclear Information System (INIS)

    Tsoukalas, Lefteri

    2010-01-01

    Reliability and economy are two major concerns for a nuclear power generation system. Next generation nuclear power reactors are being developed to be more reliable and economic. An effective and efficient surveillance system can contribute substantially toward this goal. Recent progress in computer systems and computational tools has made it necessary and possible to upgrade the current surveillance/monitoring strategy for better performance. For example, intelligent computing techniques can be applied to develop algorithms that help people better understand the information collected from sensors and thus reduce human error to a new low level. Incidents caused by human error in the nuclear industry are not rare and have proven costly. The goal of this project is to develop and test an intelligent prognostics methodology for predicting aging effects impacting long-term performance of nuclear components and systems. The approach is particularly suitable for predicting the performance of nuclear reactor systems which have low failure probabilities (e.g., less than 10⁻⁶ year⁻¹). Such components and systems are often perceived as peripheral to the reactor and are left somewhat unattended. That is, even when inspected, if they are not perceived to be causing some immediate problem, they may not be paid due attention. Attention to such systems normally involves long-term monitoring and possibly reasoning with multiple features and evidence, requirements that are not best suited for humans.

  7. Multi-component optical solitary waves

    DEFF Research Database (Denmark)

    Kivshar, Y. S.; Sukhorukov, A. A.; Ostrovskaya, E. A.

    2000-01-01

    We discuss several novel types of multi-component (temporal and spatial) envelope solitary waves that appear in fiber and waveguide nonlinear optics. In particular, we describe multi-channel solitary waves in bit-parallel-wavelength fiber transmission systems for high-performance computer networks, multi-color parametric spatial solitary waves due to cascaded nonlinearities of quadratic materials, and quasiperiodic envelope solitons due to quasi-phase-matching in Fibonacci optical superlattices. (C) 2000 Elsevier Science B.V. All rights reserved.

  8. A methodology for on-line fatigue life monitoring of Indian nuclear power plant components

    International Nuclear Information System (INIS)

    Mukhopadhyay, N.K.; Dutta, B.K.; Kushawaha, H.S.

    1992-01-01

    Fatigue is one of the most important aging effects in nuclear power plant components. Information about the accumulation of fatigue helps in assessing the structural degradation of the components. This assists in-service inspection and maintenance and may also support a future life extension programme of a plant. In the present report a methodology is proposed for on-line fatigue life monitoring of nuclear power plant components using available plant instrumentation. The major factors affecting the fatigue life of nuclear power plant components are the fluctuations of temperature, pressure and flow rate. The Green's function technique is used in on-line fatigue monitoring because its computation time is much less than that of the finite element method. A code has been developed which computes temperature and stress Green's functions for 2-D and axisymmetric structures by the finite element method due to a unit change in the various fluid parameters. A post-processor has also been developed which computes the temperature and stress responses using the corresponding Green's functions and the actual fluctuations in the fluid parameters. In this post-processor, the multiple-site problem is solved by superimposing single-site Green's function solutions. It is also shown that the Green's function technique is well suited for on-line fatigue life monitoring of nuclear power plant components. (author). 6 refs., 43 figs
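
    The superposition step can be sketched as a discrete Duhamel convolution of a precomputed unit-step Green's function with the measured fluid-parameter increments. The response curve, sampling interval and temperature trace below are illustrative assumptions, not the plant data or the code described above.

        import numpy as np

        dt = 1.0                                  # s, sampling interval
        t = np.arange(0, 600, dt)

        # Assumed unit-step Green's function: stress response to a 1 K step in fluid temperature.
        G = 2.0 * (1.0 - np.exp(-t / 60.0))       # MPa per K

        # Assumed measured fluid temperature trace and its increments between samples.
        T_fluid = 280.0 + 15.0 * np.sin(2 * np.pi * t / 300.0)
        dT = np.diff(T_fluid, prepend=T_fluid[0])

        # Duhamel-type superposition: convolve the increments with the unit-step response.
        stress = np.convolve(dT, G)[: t.size]     # MPa, reconstructed thermal stress history

        print(round(float(stress.max()), 1), round(float(stress.min()), 1))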

  9. Risk-ranking IST components into two categories

    International Nuclear Information System (INIS)

    Rowley, C.W.

    1996-01-01

    The ASME has utilized several schemes for identifying the appropriate scope of components for inservice testing (IST). The initial scope was ASME Code Class 1/2/3, with all components treated equally. Later the ASME Operations and Maintenance (O&M) Committee decided to use safe shutdown and accident mitigation as the scoping criteria, but continued to treat all components equally inside that scope. Recently the ASME O&M Committee decided to recognize the service condition of the component, hence the comprehensive pump test. Although probabilistic risk assessments (PRAs) are highly complex plant models that are computer hardware and software intensive, they are a tool that can be utilized by many plant engineering organizations to analyze plant system and component applications. In 1992 the ASME O&M Committee became interested in using the PRA as a tool to categorize its pumps and valves. In 1994 the ASME O&M Committee commissioned the ASME Center for Research and Technology Development (CRTD) to develop a process that adapted the PRA technology to IST. In late 1995 that process was presented to the ASME O&M Committee. The process had three distinct portions: (1) risk-rank the IST components; (2) develop a more effective testing strategy for More Safety Significant Components; and (3) develop a more economic testing strategy for Less Safety Significant Components

  10. Risk-ranking IST components into two categories

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    The ASME has utilized several schemes for identifying the appropriate scope of components for inservice testing (IST). The initial scope was ASME Code Class 1/2/3, with all components treated equally. Later the ASME Operations and Maintenance (O&M) Committee decided to use safe shutdown and accident mitigation as the scoping criteria, but continued to treat all components equally inside that scope. Recently the ASME O&M Committee decided to recognize the service condition of the component, hence the comprehensive pump test. Although probabilistic risk assessments (PRAs) are highly complex plant models that are computer hardware and software intensive, they are a tool that can be utilized by many plant engineering organizations to analyze plant system and component applications. In 1992 the ASME O&M Committee became interested in using the PRA as a tool to categorize its pumps and valves. In 1994 the ASME O&M Committee commissioned the ASME Center for Research and Technology Development (CRTD) to develop a process that adapted the PRA technology to IST. In late 1995 that process was presented to the ASME O&M Committee. The process had three distinct portions: (1) risk-rank the IST components; (2) develop a more effective testing strategy for More Safety Significant Components; and (3) develop a more economic testing strategy for Less Safety Significant Components.

  11. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  12. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  13. The DC field components of horizontal and vertical electric dipole sources immersed in three-layered stratified media

    Directory of Open Access Journals (Sweden)

    D. Llanwyn Jones

    Full Text Available Formulas for computing the Cartesian components of the static (DC) fields of horizontal electric dipoles (HEDs) and vertical electric dipoles (VEDs) located in the central zone of a three-layer horizontally stratified medium are derived and presented in a summary form suitable for immediate computation. Formulas are given for the electric and magnetic field components in the upper and central regions. In the general case the computation involves the summation of a convergent infinite series. For the particular case of an infinitely thick central region (corresponding to the two-layer problem), the analysis produces relatively simple closed-form equations for the field components which are suitable for a 'hand calculation'. Specimen calculations for dipoles in seawater are included and the derived results are compared with computations made using an ac model.

  14. Heritability of the somatotype components in Biscay families.

    Science.gov (United States)

    Rebato, E; Jelenkovic, A; Salces, I

    2007-01-01

    The anthropometric somatotype is a quantitative description of body shape and composition. Familial studies indicate the existence of a familial resemblance for this phenotype and suggest a substantial action of genetic factors on this aggregation. The aim of this study is to examine the degree of familial resemblance of the somatotype components and of a factor of shape in a sample of Biscay nuclear families (Basque Country, Spain). One thousand three hundred and thirty nuclear families were analysed. The anthropometric somatotype components [Carter, J.E.L., Heath, B.H., 1990. Somatotyping. Development and applications. Cambridge University Press, Cambridge, p. 503] were computed. Each component was adjusted for the other two through a stepwise multiple regression, and also fitted through the LMS method [Cole, T., 1988. Fitting smoothed centile curves to reference data. J. Roy. Stat. Soc. 151, 385-418] in order to eliminate age, sex and generation effects. The three raw components were introduced into a PCA from which a shape factor (PC1) was extracted for each generation. The correlation analysis was performed with the SEGPATH package [Province, M.A., Rao, D.C., 1995. General purpose model and computer programme for combined segregation and path analysis (SEGPATH): automatically creating computer programs from symbolic language model specifications. Genet. Epidemiol. 12, 203-219]. A general model of transmission and nine reduced models were tested. Maximal heritability was estimated with the formula of [Rice, T., Warwick, D.E., Gagnon, J., Bouchard, C., Leon, A.S., Skinner, J.S., Wilmore, J.H., Rao, D.C., 1997. Familial resemblance for body composition measures: the HERITAGE family study. Obes. Res. 5, 557-562]. The correlations were higher among offspring than between parents and offspring, and a significant resemblance between mating partners existed. Maximum heritabilities were 55%, 52% and 46% for endomorphy, mesomorphy and ectomorphy, respectively, and 52% for PC1

  15. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
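    A minimal sketch of the simple-thresholding baseline mentioned in the abstract: compute ordinary PCA loadings and zero out sufficiently small entries. The data shapes, the number of components and the threshold are illustrative assumptions, not values from the paper:

```python
# Simple-thresholding baseline for sparse loadings (illustration only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # 100 shapes, 20 landmark coordinates
Xc = X - X.mean(axis=0)                   # center the data

# Ordinary PCA via SVD: columns of Vt.T are the principal-component loadings
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T[:, :3]                    # keep the first three components

threshold = 0.15
sparse_loadings = np.where(np.abs(loadings) < threshold, 0.0, loadings)
# Re-normalize each sparsified component to unit length
sparse_loadings /= np.linalg.norm(sparse_loadings, axis=0, keepdims=True)
print("non-zero loadings per component:", (sparse_loadings != 0).sum(axis=0))
```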

  16. Conducting Computer Security Assessments at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    Computer security is increasingly recognized as a key component in nuclear security. As technology advances, it is anticipated that computer and computing systems will be used to an even greater degree in all aspects of plant operations including safety and security systems. A rigorous and comprehensive assessment process can assist in strengthening the effectiveness of the computer security programme. This publication outlines a methodology for conducting computer security assessments at nuclear facilities. The methodology can likewise be easily adapted to provide assessments at facilities with other radioactive materials

  17. Multi-component bi-Hamiltonian Dirac integrable equations

    Energy Technology Data Exchange (ETDEWEB)

    Ma Wenxiu [Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620-5700 (United States)], E-mail: mawx@math.usf.edu

    2009-01-15

    A specific matrix iso-spectral problem of arbitrary order is introduced and an associated hierarchy of multi-component Dirac integrable equations is constructed within the framework of zero curvature equations. The bi-Hamiltonian structure of the obtained Dirac hierarchy is presented by means of the variational trace identity. Two examples in the cases of lower order are computed.

  18. Lightweight on-demand computing with Elasticluster and Nordugrid ARC

    CERN Document Server

    Pedersen, Maiken; The ATLAS collaboration; Filipcic, Andrej

    2018-01-01

    The cloud computing paradigm allows scientists to elastically grow or shrink computing resources as requirements demand, so that resources only need to be paid for when necessary. The challenge of integrating cloud computing into distributed computing frameworks used by HEP experiments has led to many different solutions in the past years; however, none of these solutions offers a complete, fully integrated cloud resource out of the box. This paper describes how to offer such a resource using stripped-down minimal versions of existing distributed computing software components combined with off-the-shelf cloud tools. The basis of the cloud resource is Elasticluster, and the glue that joins it to the HEP computing infrastructure is provided by the NorduGrid ARC middleware and the ARC Control Tower. These latter two components are stripped down to bare minimum edge services, removing the need for administering complex grid middleware, yet still provide the complete job and data management required to fully exploit the c...

  19. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

    As computer technology has developed rapidly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now the main approach, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, an example of the DUV-FEL database system is presented, and the realization of a multi-tiered database based on CORBA components is discussed. (authors)

  20. Computation of reactor control rod drop time under accident conditions

    International Nuclear Information System (INIS)

    Dou Yikang; Yao Weida; Yang Renan; Jiang Nanyan

    1998-01-01

    The computational method for reactor control rod drop time under accident conditions lies mainly in establishing forced vibration equations for the components of the control rod drive line under the action of outside forces, and an equation of motion for the control rod moving in the vertical direction. The two kinds of equations are coupled by considering the impact effects between the control rod and its surrounding components. The finite difference method is adopted for discretization of the vibration equations, and the Wilson-θ method is applied to deal with the time history problem. The non-linearity caused by impact is treated iteratively with a modified Newton method. Some experimental results are used to validate the validity and reliability of the computational method. Theoretical and experimental test problems show that the computer program based on the computational method is applicable and reliable. The program can act as an effective tool for design by analysis and for safety analysis of the relevant components

  1. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support have all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included

  2. Computer systems and networks status and perspectives

    CERN Document Server

    Zacharov, V

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated.

  3. Computer systems and networks: Status and perspectives

    International Nuclear Information System (INIS)

    Zacharov, Z.

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated. (orig.)

  4. Computability and Representations of the Zero Set

    NARCIS (Netherlands)

    P.J. Collins (Pieter)

    2008-01-01

    In this note we give a new representation for closed sets under which the robust zero set of a function is computable. We call this representation the component cover representation. The computation of the zero set is based on topological index theory, the most powerful tool for finding

  5. The numerical simulation of accelerator components

    International Nuclear Information System (INIS)

    Herrmannsfeldt, W.B.; Hanerfeld, H.

    1987-05-01

    The techniques of the numerical simulation of plasmas can be readily applied to problems in accelerator physics. Because the problems usually involve a single-component "plasma," and times that are, at most, a few plasma oscillation periods, it is frequently possible to make very good simulations with relatively modest computation resources. We will discuss the methods and illustrate them with several examples. One of the more powerful techniques for understanding the motion of charged particles is to view computer-generated motion pictures. We will show several short movie strips to illustrate the discussions. The examples will be drawn from the application areas of Heavy Ion Fusion, electron-positron linear colliders and injectors for free-electron lasers. 13 refs., 10 figs., 2 tabs

  6. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  7. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Full Text Available Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  8. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
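    The plane-detection step can be illustrated on a synthetic point cloud. The sketch below uses a small RANSAC plane fit as a stand-in for the Hough-transform step of the paper, purely to show how a planar "solar panel" patch can be separated from clutter; all sizes, thresholds and iteration counts are assumptions:

```python
# Toy stand-in for the plane-detection step (the paper uses a Hough transform).
import numpy as np

rng = np.random.default_rng(1)
panel = np.c_[rng.uniform(-2, 2, 500), rng.uniform(-1, 1, 500), rng.normal(0, 0.01, 500)]
clutter = rng.uniform(-3, 3, (300, 3))
cloud = np.vstack([panel, clutter])

best_inliers = np.zeros(len(cloud), dtype=bool)
for _ in range(200):                              # RANSAC iterations
    p1, p2, p3 = cloud[rng.choice(len(cloud), 3, replace=False)]
    normal = np.cross(p2 - p1, p3 - p1)
    if np.linalg.norm(normal) < 1e-9:
        continue                                  # degenerate (collinear) sample
    normal /= np.linalg.norm(normal)
    dist = np.abs((cloud - p1) @ normal)          # point-to-plane distances
    inliers = dist < 0.05
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

print(f"largest planar patch: {best_inliers.sum()} of {len(cloud)} points")
```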

  9. Computer-controlled ultrasonic equipment for automatic inspection of nuclear reactor components after manufacturing

    International Nuclear Information System (INIS)

    Moeller, P.; Roehrich, H.

    1983-01-01

    After the foundation of the working team "Automated US-Manufacture Testing" in 1976, the realization of an ultrasonic test facility for nuclear reactor components after manufacturing was started. During a period of about 5 years, an automated prototype facility has been developed, fabricated and successfully tested. The function of this facility is to replace the manual ultrasonic tests, which are carried out autonomously at different stages of the manufacturing process, and to fulfil the test specification under improved economic conditions. This prototype facility has been designed so that it can be transported to the components to be tested at low expenditure. Hereby the reproducibility of a test is entirely guaranteed. (orig.) [de

  10. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing; it has a low memory footprint and is robust to translational and rotational jitter.
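    For context, the sketch below solves the batch Principal Component Pursuit problem that incremental variants approximate: split a data matrix M into a low-rank background L and a sparse foreground S by minimizing ||L||_* + lam*||S||_1 subject to L + S = M. It is a generic inexact augmented-Lagrangian iteration with assumed parameters, not the incremental algorithm described above:

```python
# Batch PCP via a simple inexact augmented-Lagrangian loop (illustration only).
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=100):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    shrink = lambda A, tau: np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        S = shrink(M - L + Y / mu, lam / mu)       # soft-threshold the sparse part
        Y = Y + mu * (M - L - S)                   # dual update
    return L, S

# Stack of 50 tiny synthetic "frames" (20x20 pixels), flattened one per column
rng = np.random.default_rng(2)
background = np.outer(rng.normal(size=400), np.ones(50))
frames = background + (rng.random((400, 50)) < 0.05) * 5.0
L, S = pcp(frames)
print("recovered background rank:", np.linalg.matrix_rank(L, tol=1e-3))
```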

  11. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution consists in establishing relationships among hard disk dumps (images), RAM and network data. This method is intended for describing the properties of an information security incident during the internal post-incident audit of computer equipment. The receiving and formation of hard disk dumps takes place at the first step. It is followed by separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis for the formation of the graph. The separated data are recorded into a non-relational database management system (NoSQL) that is adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method enables a human expert in information security or computer forensics to perform a more precise and informative internal audit of computer equipment. The proposed method allows reducing the time spent on the internal audit of computer equipment, while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.

  12. A study on gingival component of smile

    Directory of Open Access Journals (Sweden)

    Goutam Chakroborty

    2015-01-01

    Full Text Available Background: Esthetic enhancement of a smile requires prior quantification of the gingival component of the smile. Hence, a study was designed in which posed frontal smiling photographs of randomly selected volunteers were taken and analyzed with the computer-aided ImageJ software. Aim: To determine the role of the gingival component in designing a smile. Settings and Design: The present observational study includes one frontal photograph from each of 212 subjects attending the Department of Periodontics (examined during the study period), divided into three age groups (18-30, 31-40, and 41-50 years). Materials and Methods: Standardized frontal photographs with a posed smile from 212 volunteers, irrespective of age and sex, were taken and the images were analyzed on a computer using the ImageJ software. Statistical analysis used: Mean and standard deviation of intercommissural width (ICW), interlabial gap (ILG), and smile index (SI) during posed smiling were calculated for each sex. Comparison between the male and female groups was done by the Mann-Whitney U test, and P-values were calculated for ICW, ILG, and SI. Spearman's rank correlation coefficients (rho) were calculated for SI and different components of the central zone of the smile. Results: The male group, as compared to the female group, exhibited greater ICW and ILG, and there was a fair to good correlation between lip dynamics and different factors of the smile. Conclusion: The present study indicates that different factors of the central zone of the smile have fair to good correlation with lip dynamics as assessed by the SI.

  13. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    Mandic, D.; Kocnar, R.; Sucic, B.

    2002-01-01

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and the process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements of Process Computers application in NPP: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals or database points that exist in the life cycle of different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled, master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points and attributes related to the signal life cycle references and history data, such as: Input/Output signals; manually input database points; program constants; setpoints; calculated (by application program or SCADA calculation tools) database points; control flags (for example, enable/disable a certain program feature); signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); usage of a particular database point in particular application software packages and in the man-machine interface features (display mimics, printout reports, ...); Signals history (EEAR Engineering

  14. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable-core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
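    A hedged sketch of the kind of relation described above: Bernoulli's equation applied between the dryer inlet and outlet (steady, incompressible, no elevation change), with continuity giving the velocities. All numbers are made up for illustration and are not UCTS data:

```python
# Bernoulli / continuity estimate of the pressure change across a filter-dryer
# (illustrative values only).
RHO = 1000.0                       # coolant density [kg/m^3]
Q = 2.0e-3                         # volumetric flow rate [m^3/s]
A_IN, A_OUT = 5.0e-4, 3.0e-4       # inlet / outlet flow areas [m^2]

v_in, v_out = Q / A_IN, Q / A_OUT  # continuity: v = Q / A
# Bernoulli: p_in + 0.5*rho*v_in^2 = p_out + 0.5*rho*v_out^2
dp = 0.5 * RHO * (v_out**2 - v_in**2)   # pressure drop across the dryer [Pa]
print(f"velocity change: {v_in:.2f} -> {v_out:.2f} m/s, pressure drop: {dp:.0f} Pa")
```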

  15. Study of heat fluxes on plasma facing components in a tokamak from measurements of temperature by infrared thermography

    International Nuclear Information System (INIS)

    Daviot, R.

    2010-05-01

    The goal of this thesis is the development of a method for computing these heat loads from measurements of temperature by infrared thermography. The research addressed three issues arising in current tokamaks and also in future ones like ITER: the measurement of temperature on reflecting walls, the determination of thermal properties of deposits observed on the surface of tokamak components, and the development of a three-dimensional, non-linear computation of heat loads. A comparison of several means of pyrometry (monochromatic, bi-chromatic and photothermal) is performed on a temperature-measurement experiment. We show that this measurement is sensitive to temperature gradients on the observed area. Layers resulting from carbon deposition by the plasma on the surface of components are modeled through a field of equivalent thermal resistance, without thermal inertia. The field of this resistance is determined, for each measurement point, by comparing the surface temperature from infrared thermographs with the result of a simulation based on a one-dimensional linear model of the components. The spatial distribution of the deposit on the component surface is obtained. Finally, a three-dimensional and non-linear computation of the fields of heat fluxes, based on a finite element method, is developed. Exact geometries of the components are used. The sensitivity of the computed heat fluxes to the accuracy of the temperature measurements is discussed. This computation is applied to two-dimensional temperature measurements from the JET tokamak. Several components of this tokamak are modeled, such as tiles of the divertor, the upper limiter and the inner and outer poloidal limiters. The distribution of heat fluxes on the surface of these components is computed and studied along the two main tokamak directions, poloidal and toroidal. Toroidal symmetry of the heat loads from one tile to another is shown. The influence of measurements spatial resolution

  16. CAT: a computer code for the automated construction of fault trees

    International Nuclear Information System (INIS)

    Apostolakis, G.E.; Salem, S.L.; Wu, J.S.

    1978-03-01

    A computer code, CAT (Computer Automated Tree), is presented which applies decision table methods to model the behavior of components for the systematic construction of fault trees. Decision tables for some commonly encountered mechanical and electrical components are developed; two nuclear subsystems, a Containment Spray Recirculation System and a Consequence Limiting Control System, are analyzed to demonstrate the applications of the CAT code

  17. Metasurface optics for full-color computational imaging.

    Science.gov (United States)

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
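    As a rough illustration of single-filter computational reconstruction, the sketch below applies a generic FFT-based Wiener deconvolution with one known, spectrally invariant point spread function. The authors' actual filter is derived from their measured metalens PSF, so everything here (scene, PSF, noise level) is an assumed stand-in:

```python
# Generic Wiener deconvolution with a single digital filter (illustration only).
import numpy as np

rng = np.random.default_rng(6)
img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0                                    # simple test scene

x = np.arange(128) - 64
psf = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 4.0**2)
psf /= psf.sum()                                           # normalized Gaussian PSF

H = np.fft.fft2(np.fft.ifftshift(psf))                     # optical transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H)) + 0.01 * rng.normal(size=img.shape)

nsr = 1e-2                                                 # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)               # the single digital filter
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))
print(f"blurred MSE: {np.mean((blurred - img)**2):.4f}, "
      f"restored MSE: {np.mean((restored - img)**2):.4f}")
```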

  18. Configurating computer-controlled bar system

    OpenAIRE

    Šuštaršič, Nejc

    2010-01-01

    The principal goal of my diploma thesis is to create an application for configuring computer-controlled beverage dispensing systems. In the preamble of my thesis I present the theoretical background for point-of-sale systems and beverage dispensing systems, which is required for an understanding of the subject. As with many other fields, computer technologies entered the field of managing bars and restaurants quite some time ago. Basic components of every bar or restaurant a...

  19. Trusted Computing Technologies, Intel Trusted Execution Technology.

    Energy Technology Data Exchange (ETDEWEB)

    Guise, Max Joseph; Wendt, Jeremy Daniel

    2011-01-01

    We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release, and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.

  20. IAEA's experience in compiling a generic component reliability data base

    International Nuclear Information System (INIS)

    Tomic, B.; Lederman, L.

    1991-01-01

    Reliability data are essential in probabilistic safety assessment, with component reliability parameters being particularly important. Component failure data which is plant specific would be most appropriate but this is rather limited. However, similar components are used in different designs. Generic data, that is all data that is not plant specific to the plant being analyzed but which relates to components more generally, is important. The International Atomic Energy Agency has compiled the Generic Component Reliability Data Base from data available in the open literature. It is part of the IAEA computer code package for fault/event tree analysis. The Data Base contains 1010 different records including most of the components used in probabilistic safety analyses of nuclear power plants. The data base input was quality controlled and data sources noted. The data compilation procedure and problems associated with using generic data are explained. (UK)

  1. Detection of incorrect manufacturer labelling of hip components

    Energy Technology Data Exchange (ETDEWEB)

    Durand-Hill, Matthieu; Henckel, Johann; Skinner, John; Hart, Alister [University College London, Institute of Orthopaedics, London (United Kingdom); Burwell, Matthew [Royal United Hospital, Bath (United Kingdom)

    2017-01-15

    We describe the case of a 53-year-old man who underwent a left metal-on-metal hip resurfacing in 2015. Component size mismatch (CSM) was suspected because of the patient's immediate post-operative mechanical symptoms and high metal ion levels. Surgical notes indicated the appropriate combinations of implants were used. However, we detected a mismatch using computed tomography. Revision was performed and subsequent measurements of explanted components confirmed the mismatch. To our knowledge, this case is the first report of a CT method being used in a patient to pre-operatively identify CSM. (orig.)

  2. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done.This book analyzes an in-depth set of case studies about two different open client/server development environments-Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include the open systems and client/server computing, next-generation client/server architectures, principles of middleware, and overview of ProtoGen+. The ViewPaint environment

  3. Computer code for qualitative analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Yule, H.P.

    1979-01-01

    Computer code QLN1 provides complete analysis of gamma-ray spectra observed with Ge(Li) detectors and is used at both the National Bureau of Standards and the Environmental Protection Agency. It locates peaks, resolves multiplets, identifies component radioisotopes, and computes quantitative results. The qualitative-analysis (or component identification) algorithms feature thorough, self-correcting steps which provide accurate isotope identification in spite of errors in peak centroids, energy calibration, and other typical problems. The qualitative-analysis algorithm is described in this paper
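    The peak-location step alone can be illustrated with a generic peak finder on a synthetic spectrum; QLN1's own self-correcting algorithms are more elaborate, so the sketch below is only a rough analogue with assumed peak positions, areas and detection thresholds:

```python
# Locate peaks in a synthetic Ge(Li) spectrum (illustration of the peak-search step).
import numpy as np
from scipy.signal import find_peaks

channels = np.arange(4096)
rng = np.random.default_rng(3)
spectrum = 200.0 * np.exp(-channels / 1500.0)                 # smooth continuum
for centroid, area in [(662, 5000), (1173, 3000), (1332, 2500)]:
    # Gaussian photopeaks, sigma ~ 10 channels (height ~ area / 25)
    spectrum += area / 25.0 * np.exp(-0.5 * ((channels - centroid) / 10.0) ** 2)
spectrum = rng.poisson(spectrum).astype(float)                # counting statistics

peaks, props = find_peaks(spectrum, prominence=50, width=3)
print("peak centroids (channels):", peaks)
```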

  4. Extensive use of computational fluid dynamics in the upgrading of hydraulic turbines

    Energy Technology Data Exchange (ETDEWEB)

    Sabourin, M.; Eremeef, R.; De Henau, V.

    1995-12-31

    Computational fluid dynamics codes, based on turbulent Navier-Stokes equations, allow evaluation of the hydraulic losses of each turbine component with precision. Using those codes with the new generation of computers enables a wide variety of component geometries to be modelled and compared to the original designs under flow conditions obtained from testing, at a reasonable cost and in a relatively short time. This paper reviews the actual method used in the design of a solution to a turbine rehabilitation project involving runner replacement, redesign of upstream components (stay vanes and wicket gates), and downstream components (draft tubes and runner outlets). The paper shows how computational fluid dynamics can help hydraulic engineers to obtain valuable information not only on performance enhancement but also on the phenomena that produce the enhancement, and to reduce the variety of modifications to be tested.

  5. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data, a nonlinear method is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the fault component. Compared to the previous Principal Component Analysis (PCA) based methods, the novel version can cope with the characteristic of strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance in recognizing the fault component and evaluating its propagation across the system based on the proposed method.
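    A sketch of the monitoring statistic described above: kernel PCA with a polynomial kernel on simulated PMU snapshots and a Hotelling-type T2 computed from the retained components. The per-bus contribution analysis via the kernel derivative is not reproduced here, and the kernel parameters, data and control limit are illustrative assumptions:

```python
# Polynomial-kernel KPCA with a T^2 monitoring statistic (illustration only).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))                    # normal-operation PMU snapshots
x_fault = rng.normal(size=10)
x_fault[3] += 4.0                                 # disturb one "bus" measurement

def poly_kernel(A, B, c=1.0, d=2):
    return (A @ B.T + c) ** d

K = poly_kernel(X, X)
n = K.shape[0]
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one        # centered kernel matrix

eigval, eigvec = np.linalg.eigh(Kc)
idx = np.argsort(eigval)[::-1][:5]                # keep 5 leading components
lam, V = eigval[idx], eigvec[:, idx]
alpha = V / np.sqrt(lam)                          # normalized expansion coefficients

def t2(x):
    k = poly_kernel(x[None, :], X).ravel()
    kc = k - k.mean() - K.mean(axis=1) + K.mean() # center the test kernel vector
    t = kc @ alpha                                # kernel principal-component scores
    return np.sum(t**2 / (lam / n))               # Hotelling-type T^2

T2_train = np.array([t2(x) for x in X])
limit = np.quantile(T2_train, 0.99)               # empirical control limit
print(f"T2 limit: {limit:.1f}, faulted snapshot T2: {t2(x_fault):.1f}")
```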

  6. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource constraint nature of SMDs require lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC. PMID:25127245

  7. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource constraint nature of SMDs require lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC.

  8. Computer networks in future accelerator control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1977-03-01

    Some findings of a study concerning a computer based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for development and implementation of such a system are discussed. An architecture is proposed where the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  9. Component-Based Development of Runtime Observers in the COMDES Framework

    DEFF Research Database (Denmark)

    Guan, Wei; Li, Gang; Angelov, Christo K.

    2013-01-01

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive but still feasible approach to monitor system behavior against formally specified properties. This paper presents a component-based design method for runtime observers in the context of the COMDES framework, a component-based framework for distributed embedded systems and its supporting tools. Therefore, runtime verification is facilitated by model...

  10. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application on an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs the attention to the need for further development of both the trend model and the data base. Figs
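    One simple way to picture such a trend model is a non-homogeneous Poisson process with a log-linear intensity fitted to a sequence of failure times; the sketch below is an assumption of that kind, not the I-book or T-book model itself, and the failure times are synthetic:

```python
# Maximum-likelihood fit of a non-homogeneous Poisson process with intensity
# lambda(t) = exp(a + b*t); b > 0 suggests deterioration (ageing), b < 0 improvement.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([0.8, 1.9, 2.6, 3.1, 3.5, 3.8, 4.0, 4.15])  # years (synthetic)
T = 4.5                                                              # observation period

def neg_log_lik(params):
    a, b = params
    # log L = sum_i log(lambda(t_i)) - integral_0^T lambda(u) du
    term1 = np.sum(a + b * failure_times)
    term2 = (np.exp(a + b * T) - np.exp(a)) / b if abs(b) > 1e-9 else T * np.exp(a)
    return -(term1 - term2)

fit = minimize(neg_log_lik, x0=[0.0, 0.1], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"trend parameter b = {b_hat:.2f} (positive => increasing failure intensity)")
```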

  11. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/ Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  12. Enabling new capabilities and insights from quantum chemistry by using component architectures

    International Nuclear Information System (INIS)

    Janssen, C L; Kenny, J P; Nielsen, I M B; Krishnan, M; Gurumoorthi, V; Valeev, E F; Windus, T L

    2006-01-01

    Steady performance gains in computing power, as well as improvements in Scientific computing algorithms, are making possible the study of coupled physical phenomena of great extent and complexity. The software required for such studies is also very complex and requires contributions from experts in multiple disciplines. We have investigated the use of the Common Component Architecture (CCA) as a mechanism to tackle some of the resulting software engineering challenges in quantum chemistry, focusing on three specific application areas. In our first application, we have developed interfaces permitting solvers and quantum chemistry packages to be readily exchanged. This enables our quantum chemistry packages to be used with alternative solvers developed by specialists, remedying deficiencies we discovered in the native solvers provided in each of the quantum chemistry packages. The second application involves development of a set of components designed to improve utilization of parallel machines by allowing multiple components to execute concurrently on subsets of the available processors. This was found to give substantial improvements in parallel scalability. Our final application is a set of components permitting different quantum chemistry packages to interchange intermediate data. These components enabled the investigation of promising new methods for obtaining accurate thermochemical data for reactions involving heavy elements

  13. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  14. Providing Learning Computing Labs using Hosting and Virtualization Technologies

    Directory of Open Access Journals (Sweden)

    Armide González

    2011-05-01

    Full Text Available This paper presents a computing hosting system to provide virtual computing laboratories for learning activities. This system is based on hosting and virtualization technologies. All the components used in its development are free software tools. The computing lab model provided by the system is a more sustainable and scalable alternative than the traditional academic computing lab, and it requires lower costs of installation and operation.

  15. Principal Components as a Data Reduction and Noise Reduction Technique

    Science.gov (United States)

    Imhoff, M. L.; Campbell, W. J.

    1982-01-01

    The potential of principal components as a pipeline data reduction technique for thematic mapper data was assessed and principal components analysis and its transformation as a noise reduction technique was examined. Two primary factors were considered: (1) how might data reduction and noise reduction using the principal components transformation affect the extraction of accurate spectral classifications; and (2) what are the real savings in terms of computer processing and storage costs of using reduced data over the full 7-band TM complement. An area in central Pennsylvania was chosen for a study area. The image data for the project were collected using the Earth Resources Laboratory's thematic mapper simulator (TMS) instrument.
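    The two uses discussed above can be sketched with synthetic data standing in for TM imagery: project 7-band pixel vectors onto the leading principal components (data reduction) and map back to band space (noise reduction by discarding low-variance components). The seven-band count follows the text; the number of retained components and all data values are illustrative assumptions:

```python
# PCA for band reduction and back-projection (illustration only).
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_bands, n_keep = 10_000, 7, 3
mixing = rng.normal(size=(3, n_bands))                  # 3 underlying "scene" signals
pixels = rng.normal(size=(n_pixels, 3)) @ mixing + 0.2 * rng.normal(size=(n_pixels, n_bands))

mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
components = eigvec[:, ::-1][:, :n_keep]                # leading eigenvectors

reduced = (pixels - mean) @ components                  # 7 bands -> 3 PC "bands"
restored = reduced @ components.T + mean                # back-projection (denoised)
err = np.mean((restored - pixels) ** 2) / np.mean((pixels - mean) ** 2)
print(f"bands kept: {n_keep}/{n_bands}, relative reconstruction error: {err:.3f}")
```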

  16. Statistical techniques for the identification of reactor component structural vibrations

    International Nuclear Information System (INIS)

    Kemeny, L.G.

    1975-01-01

    The identification, on-line and in near real-time, of the vibration frequencies, modes and amplitudes of selected key reactor structural components and the visual monitoring of these phenomena by nuclear power plant operating staff will serve to further the safety and control philosophy of nuclear systems and lead to design optimisation. The School of Nuclear Engineering has developed a data acquisition system for vibration detection and identification. The system is interfaced with the HIFAR research reactor of the Australian Atomic Energy Commission. The reactor serves to simulate noise and vibrational phenomena which might be pertinent in power reactor situations. The data acquisition system consists of a small computer interfaced with a digital correlator and a Fourier transform unit. An incremental tape recorder is utilised as a backing store and as a means of communication with other computers. A small analogue computer and an analogue statistical analyzer can be used in the pre- and post-computational analysis of signals which are received from neutron and gamma detectors, thermocouples, accelerometers, hydrophones and strain gauges. Investigations carried out to date include a study of the role of local and global pressure fields due to turbulence in coolant flow and pump impeller induced perturbations on (a) control absorbers, (b) fuel elements and (c) coolant external circuit and core tank structure component vibrations. (Auth.)

  17. Modernization of process computer on the ONACAWA-1 NPP

    International Nuclear Information System (INIS)

    Matsuda, Ya.

    1997-01-01

    Modernization of a process computer, made necessary by the need to increase the storage capacity due to the introduction of a new type of fuel and by the replacement of outworn computer components, was performed. A comparison of the PC parameters before and after modernization is given

  18. Time dependent unavailability analysis of nuclear safety systems considering periodically tested components

    International Nuclear Information System (INIS)

    Goes, Alexandre Gromann de Araujo

    1988-01-01

    It is of utmost importance to have a computer code in order to analyze how different parameters (like test duration time) affect the unavailability of safety systems of nuclear power plants. In this context, a study was performed in order to evaluate the model employed by the FRANTIC computer code, which performs detailed calculations of the contribution to the system unavailability originated by hardware failures, component tests and repairs, aiming at considering the influence of different test schemes on the system unavailability. It was shown, by means of the results attained, that the numerical model used by the FRANTIC code and the analytical model proposed by APOSTOLAKIS and CHU (4) give very similar unavailability values when the component tests are supposed to be perfect. When a test is supposed to be imperfect (that is, when it may induce a failure on the component being tested), the analytical model presents more conservative results. (author)
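    A toy illustration (not the FRANTIC model itself) of the point-wise unavailability of a single periodically tested standby component, including an outage during the test and a chance that the test itself leaves the component failed; all rates and durations below are assumed:

```python
# Time-dependent unavailability of one periodically tested standby component.
import numpy as np

LAM = 1.0e-4        # standby failure rate [1/h]
T_TEST = 720.0      # test interval [h]
TAU = 2.0           # test duration [h]; component unavailable while being tested
P_TEST_FAIL = 0.01  # probability that the test itself leaves the component failed

def unavailability(t):
    k = int(t // T_TEST)           # completed test cycles before time t
    t_in_cycle = t - k * T_TEST    # time since the last test started
    if k > 0 and t_in_cycle < TAU:
        return 1.0                 # outage while the test is running
    survival = np.exp(-LAM * t_in_cycle)
    if k > 0:
        survival *= 1.0 - P_TEST_FAIL   # the last test may have left it failed
    return 1.0 - survival

hours = np.linspace(0.0, 3 * T_TEST, 2000)
q = np.array([unavailability(t) for t in hours])
print(f"mean unavailability over three test intervals: {q.mean():.2e}")
```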

  19. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  20. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  1. Electromagnetic computations for fusion devices

    International Nuclear Information System (INIS)

    Turner, L.R.

    1989-09-01

    Among the difficulties in making nuclear fusion a useful energy source, two important ones are producing the magnetic fields needed to drive and confine the plasma, and controlling the eddy currents induced in electrically conducting components by changing fields. All over the world, researchers are developing electromagnetic codes and employing them to compute electromagnetic effects. Ferromagnetic components of a fusion reactor introduce field distortions. Eddy currents are induced in the vacuum vessel, blanket and other torus components of a tokamak when the plasma current disrupts. These eddy currents lead to large forces, and 3-D codes are being developed to study the currents and forces. 35 refs., 6 figs

  2. Electronic digital computers their use in science and engineering

    CERN Document Server

    Alt, Franz L

    1958-01-01

    Electronic Digital Computers: Their Use in Science and Engineering describes the principles underlying computer design and operation. This book describes the various applications of computers, the stages involved in using them, and their limitations. The machine is composed of the hardware, which is run by a program. This text describes the use of the magnetic drum for storage of data and some computing. The functions and components of the computer include automatic control, memory, input of instructions by using punched cards, and output of the resulting information. Computers operate by using numbers.

  3. A new methodology for predicting flow induced vibration in industrial components

    International Nuclear Information System (INIS)

    Gay, N.

    1997-12-01

    Flow induced vibration damage is a major concern for designers and operators of industrial components. For example, nuclear power plant operators currently have to deal with such flow induced vibration problems in steam generator tube bundles, control rods or nuclear fuel assemblies. Some methodologies have thus been recently proposed to obtain an accurate description of the flow induced vibration phenomena. These methodologies are based on unsteady semi-analytical models for fluid-dynamic forces, associated with non-dimensional fluid force coefficients generally obtained from experiments. The aim is to determine the forces induced by the flow on the structure, and then to take account of these forces to derive the dynamic behaviour of the component under flow excitation. The approach is based on a general model for fluid-dynamic forces, using several non-dimensional parameters that cannot be obtained through computation. These parameters are then determined experimentally on simplified test sections, representative of the component, of the flow and of the fluid-elastic coupling phenomena. Predictive computations for the industrial component can then be performed for various operating configurations, by applying laws of similarity. The major physical mechanisms involved in complex fluid-structure interaction phenomena have been understood and modelled. (author)

  4. A coordinate descent MM algorithm for fast computation of sparse logistic PCA

    KAUST Repository

    Lee, Seokho

    2013-06-01

    Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding to be useful when the data dimension is high. We develop a computationally fast algorithm using a combination of coordinate descent and majorization-minimization (MM) auxiliary optimization. Our new algorithm decouples the joint estimation of multiple components into separate estimations and consists of closed-form elementwise updating formulas for each sparse principal component. The performance of the proposed algorithm is tested using simulation and high-dimensional real-world datasets. © 2013 Elsevier B.V. All rights reserved.
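
    The core of the MM step can be illustrated in a few lines. The sketch below is a single-component NumPy illustration of the idea, not the authors' implementation: the Bernoulli log-likelihood is majorized by a quadratic with curvature bound 1/4, giving a working matrix Z, and the loading vector is then updated elementwise by soft thresholding; the function names and the penalty scale are illustrative assumptions.

        # Illustrative single-component sketch of the majorization idea (not the
        # authors' implementation): the Bernoulli negative log-likelihood is majorized
        # by a quadratic with curvature 1/4, giving a "working" matrix Z that is then
        # fitted by a rank-one term with an L1 (soft-threshold) penalty on the loadings.
        import numpy as np

        def sigmoid(t):
            return 1.0 / (1.0 + np.exp(-t))

        def soft_threshold(v, lam):
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def sparse_logistic_pca_one_component(X, lam=0.1, n_iter=100):
            """X: binary (0/1) matrix; returns scores a and sparse loadings b."""
            n, d = X.shape
            a = np.random.randn(n)
            b = np.random.randn(d)
            for _ in range(n_iter):
                theta = np.outer(a, b)
                # MM step: working data from the quadratic majorizer at theta.
                Z = theta + 4.0 * (X - sigmoid(theta))
                # Update scores by least squares (no penalty on a).
                a = Z @ b / (b @ b + 1e-12)
                a /= np.linalg.norm(a) + 1e-12      # fix the scale indeterminacy on a
                # Elementwise closed-form update of b: minimizes
                # (1/8)||Z - a b^T||^2 + lam * ||b||_1 for fixed a.
                b = soft_threshold(Z.T @ a, 4.0 * lam) / (a @ a + 1e-12)
            return a, b

        rng = np.random.default_rng(0)
        X = (rng.random((50, 20)) < 0.3).astype(float)
        a, b = sparse_logistic_pca_one_component(X)
        print("nonzero loadings:", np.count_nonzero(b))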

  5. Guns, guards, gates and geeks: Romania strengthens computer security at nuclear installations

    International Nuclear Information System (INIS)

    Gil, Laura

    2016-01-01

    A cyberattack could swipe all the information stored on your computer or even prevent it from working. That’s bad enough. But a cyberattack on a nuclear power plant could lead to sabotage or theft of nuclear material. Computer security, concerned with the protection of digital data and the defence of systems and networks against malicious acts, is a critical component of nuclear security. “The advance of computers and their use in all aspects of nuclear operations has changed the security paradigm,” said Donald Dudenhoeffer, Information Technology Security Officer at the IAEA. “Information and computer security must be considered as components in the overall nuclear security plan.”

  6. Software components for medical image visualization and surgical planning

    Science.gov (United States)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments make it desirable to have a hardware and operating system as an independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been

  7. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Science.gov (United States)

    2012-09-21

    ... Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof; Institution of... communication devices, portable music and data processing devices, computers, and components thereof by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  8. Project PEACH at UCLH: Student Projects in Healthcare Computing.

    Science.gov (United States)

    Ramachandran, Navin; Mohamedally, Dean; Taylor, Paul

    2017-01-01

    A collaboration between clinicians at UCLH and the Dept of Computer Science at UCL is giving students of computer science the opportunity to undertake real healthcare computing projects as part of their education. This is enabling the creation of a significant research computing platform within the Trust, based on open source components and hosted in the cloud, while providing a large group of students with experience of the specific challenges of health IT.

  9. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. The Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents' execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (the NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  10. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  11. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, a software environment PPExe has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the GUI of TME. The specified tasks are processed by other components of PPExe, such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System), according to the execution order of these tasks determined by TME. In this report, we describe the usage of TME. (author)

  12. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  13. Localization experience and future plan of NSLS primary components

    International Nuclear Information System (INIS)

    Kim, Haesoo

    1992-01-01

    Korea Heavy Industries and Construction Company is planning to reach the target of 87% technical self-reliance in the design, manufacturing and installation of the NSLS primary components by 1995, as indicated in the technical self-reliance plan issued by the Korea Electric Power Company in 1988. In order to achieve this target, the company has been involved in the design, manufacturing and project management of the NSLS components from the early stage of the Young 3 and 4 project. In parallel, it has increased its self-reliance in the various fields by taking full advantage of the technical documents, computer codes, training and consultation supplied under the technology transfer agreement. This paper presents a re-evaluation of the current status of technical self-reliance as well as the make-up plan to be implemented during the UCH 3 and 4 project for the areas identified as weaknesses.

  14. The benefit of enterprise ontology in identifying business components

    OpenAIRE

    Albani, Antonia

    2006-01-01

    The benefit of enterprise ontology in identifying business components / A. Albani, J. Dietz. - In: Artificial intelligence in theory and practice : IFIP 19th World Computer Congress ; TC 12: IFIP AI 2006 Stream, August 21-24, 2006, Santiago, Chile / ed. by Max Bramer. - New York : Springer, 2006. - S. 1-12. - (IFIP ; 217)

  15. Review of P-scan computer-based ultrasonic inservice inspection system. Supplement 1

    International Nuclear Information System (INIS)

    Harris, R.V. Jr.; Angel, L.J.

    1995-12-01

    This Supplement reviews the P-scan system, a computer-based ultrasonic system used for inservice inspection of piping and other components in nuclear power plants. The Supplement was prepared using the methodology described in detail in Appendix A of NUREG/CR-5985, and is based on one month of using the system in a laboratory. This Supplement describes and characterizes: computer system, ultrasonic components, and mechanical components; scanning, detection, digitizing, imaging, data interpretation, operator interaction, data handling, and record-keeping. It includes a general description, a review checklist, and detailed results of all tests performed

  16. Coherent systems with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1980-01-01

    The basic rules of the Boolean algebra with restrictions on variables are briefly recalled. This special type of Boolean algebra allows one to handle fault trees of systems made of multistate (two or more than two states) components. Coherent systems are defined in the case of multistate components. This definition is consistent with that originally suggested by Barlow in the case of binary (two states) components. The basic properties of coherence are described and discussed. Coherent Boolean functions are also defined. It is shown that these functions are irredundant, that is they have only one base which is at the same time complete and irredundant. However, irredundant functions are not necessarily coherent. Finally a simplified algorithm for the calculation of the base of a coherent function is described. In the case that the function is not coherent, the algorithm can be used to reduce the size of the normal disjunctive form of the function. This in turn eases the application of the Nelson algorithm to calculate the complete base of the function. The simplified algorithm has been built in the computer program MUSTAFA-1. In a sample case the use of this algorithm caused a reduction of the CPU time by a factor of about 20. (orig.)

  17. Chemical evolution of two-component galaxies. II

    International Nuclear Information System (INIS)

    Caimmi, R.

    1978-01-01

    In order to confirm and refine the results obtained in a previous paper, the chemical evolution of two-component (spheroid + disk) galaxies is derived, rejecting the instantaneous recycling approximation, by means of numerical computations, accounting for (i) the collapse phase of the gas, assumed to be uniform in density and composition, and (ii) a stellar birth-rate function. Computations are performed relative to the solar neighbourhood and to model galaxies which closely resemble the real morphological sequence: in both cases, numerical results are compared with analytical ones. The numerical models of this paper constitute a first-order approximation, while higher order approximations could be made by rejecting the hypothesis of uniform density and composition, and making use of detailed dynamical models. (Auth.)

  18. 77 FR 52759 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...

    Science.gov (United States)

    2012-08-30

    ... Devices, Portable Music and Data Processing Devices, Computers and Components Thereof; Notice of... communication devices, portable music and data processing devices, computers and components thereof by reason of... complaint further alleges the existence of a domestic industry. The Commission's notice of investigation...

  19. Computational Fatigue Life Analysis of Carbon Fiber Laminate

    Science.gov (United States)

    Shastry, Shrimukhi G.; Chandrashekara, C. V., Dr.

    2018-02-01

    In the present scenario, many traditional materials are being replaced by composite materials for their light weight and high strength. Industries such as the automotive and aerospace industries use composite materials for many of their components. Replacing components subjected to static or impact loads is less challenging than replacing components subjected to dynamic loading. Replacing components with composite materials demands many stages of parametric study. One such parametric study is the fatigue analysis of the composite material. This paper focuses on the fatigue life analysis of the composite material by using computational techniques. A composite plate with a hole at the center is considered for the study. The analysis is carried out on the (0°/90°/90°/90°/90°)s laminate sequence and the (45°/-45°)2s laminate sequence by using a computer script. The life cycles for both lay-up sequences are compared with each other. It is observed that, for the same material and geometry of the component, cross-ply laminates show better fatigue life than angle-ply laminates.

  20. Algebraic computing program for studying the gauge theory

    International Nuclear Information System (INIS)

    Zet, G.

    2005-01-01

    An algebraic computing program running on the Maple V platform is presented. The program is devoted to the study of gauge theory with an internal Lie group as local symmetry. The physical quantities (gauge potentials, strength tensors, dual tensors, etc.) are introduced either as equations in terms of previously defined quantities (tensors), or by manual entry of the component values. The components of the strength tensor and of its dual are obtained with respect to a given metric of the space-time used for describing the gauge theory. We choose a Minkowski space-time endowed with spherical symmetry and give some examples of algebraic computing that are adequate for studying electroweak or gravitational interactions. The field equations are also obtained and their solutions are determined using the DEtools facilities of the Maple V computing program. (author)
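
    As a rough illustration of the kind of computation the program automates, the sketch below builds the abelian field-strength components F_mu_nu = d_mu A_nu - d_nu A_mu from a given potential; it uses Python/SymPy rather than Maple V, and the Coulomb-like potential is only an assumed example.

        # Tiny symbolic sketch (SymPy used here in place of Maple) of building the
        # abelian field-strength tensor F_mu_nu = d_mu A_nu - d_nu A_mu from a given
        # potential; the potential chosen below is only an illustration.
        import sympy as sp

        t, r, th, ph = sp.symbols('t r theta phi')
        coords = (t, r, th, ph)

        # Example gauge potential A_mu (Coulomb-like, spherically symmetric).
        q = sp.symbols('q')
        A = [q / r, 0, 0, 0]

        F = sp.zeros(4, 4)
        for mu in range(4):
            for nu in range(4):
                F[mu, nu] = sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])

        sp.pprint(F)   # only F[t,r] = -F[r,t] = q/r**2 survives, i.e. a radial field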

  1. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  2. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
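
    The claimed scheme can be mimicked with a toy synchronization example. In the sketch below, threads stand in for compute nodes, a barrier stands in for the blocking operation, and set_power() is a placeholder: each node lowers power as it begins the blocking operation and restores it once every node has begun.

        # Toy sketch of the described scheme using threads as stand-in "compute nodes":
        # each node lowers power when it begins the blocking (barrier) operation and
        # restores it once every node has begun; set_power() is only a placeholder.
        import threading
        import time
        import random

        N_NODES = 4
        barrier = threading.Barrier(N_NODES)

        def set_power(node, level):
            print(f"node {node}: power -> {level}")

        def node_main(node):
            # Nodes reach the blocking operation asynchronously.
            time.sleep(random.random())
            set_power(node, "reduced")     # reduce power on entering the blocking op
            barrier.wait()                 # blocking operation (all-to-all sync)
            set_power(node, "full")        # restore power once all nodes have begun

        threads = [threading.Thread(target=node_main, args=(i,)) for i in range(N_NODES)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()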

  3. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  4. Personal Computer Transport Analysis Program

    Science.gov (United States)

    DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter

    2012-01-01

    The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
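
    The solution-vector idea reads naturally as a small ordering-and-sweep loop. The sketch below is schematic only: the component class, its placeholder physics, and the three-component example are invented to illustrate ordering components by inlet dependency and updating them in that order at every time step.

        # Schematic sketch (component names and update rules are invented) of the
        # described flow: order components by inlet dependency, then sweep the ordered
        # list every time step, passing each component's outlet state to the next.
        class Component:
            def __init__(self, name, upstream=None):
                self.name = name
                self.upstream = upstream      # component feeding this one, if any
                self.outlet_temp = 20.0       # degrees C, arbitrary initial state

            def update(self, dt):
                # Placeholder physics: relax toward the upstream outlet temperature.
                if self.upstream is not None:
                    self.outlet_temp += 0.1 * dt * (self.upstream.outlet_temp - self.outlet_temp)

        def build_solution_vector(components):
            """Order components so every component appears after the one feeding it."""
            ordered, placed = [], set()
            while len(ordered) < len(components):
                for c in components:
                    if c.name not in placed and (c.upstream is None or c.upstream.name in placed):
                        ordered.append(c)
                        placed.add(c.name)
            return ordered

        tank = Component("tank")
        tank.outlet_temp = 60.0
        tube = Component("tube", upstream=tank)
        cold_plate = Component("cold_plate", upstream=tube)

        solution_vector = build_solution_vector([cold_plate, tube, tank])
        for step in range(100):                 # march the transient in time
            for comp in solution_vector:
                comp.update(dt=1.0)
        print([(c.name, round(c.outlet_temp, 1)) for c in solution_vector])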

  5. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
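
    Both storage ideas are easy to illustrate outside MATLAB. The NumPy sketch below (not the Tensor Toolbox itself) stores a sparse tensor as subscript/value pairs in coordinate format, evaluates entries of a Kruskal (sum of rank-1 terms) tensor directly from its factor matrices, and computes an inner product touching only the stored nonzeros; the shapes and values are arbitrary placeholders.

        # Small NumPy illustration (not the MATLAB Tensor Toolbox itself) of the two
        # ideas in the abstract: coordinate storage of a sparse tensor, and evaluating
        # entries of a Kruskal tensor directly from its factor matrices.
        import numpy as np

        # --- sparse tensor in coordinate format: nonzero subscripts + values --------
        shape = (4, 5, 3)
        subs = np.array([[0, 1, 2],
                         [3, 0, 1],
                         [2, 4, 0]])           # one (i, j, k) subscript per nonzero
        vals = np.array([1.5, -2.0, 0.7])

        def sparse_norm(vals):
            """Frobenius norm needs only the stored nonzeros."""
            return np.sqrt(np.sum(vals ** 2))

        # --- Kruskal tensor: sum of R rank-1 terms given by factor matrices ---------
        R = 2
        A = np.random.rand(shape[0], R)
        B = np.random.rand(shape[1], R)
        C = np.random.rand(shape[2], R)

        def kruskal_entry(i, j, k):
            """Entry (i, j, k) of sum_r A[:, r] o B[:, r] o C[:, r], no dense tensor."""
            return np.sum(A[i, :] * B[j, :] * C[k, :])

        # Inner product <sparse, Kruskal> touches only the stored nonzeros.
        inner = sum(v * kruskal_entry(*s) for s, v in zip(subs, vals))

        print("||X||_F of the sparse tensor:", sparse_norm(vals))
        print("Kruskal entry (1, 2, 0):", kruskal_entry(1, 2, 0))
        print("inner product:", inner)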

  6. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
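
    A minimal version of the separation step can be run with scikit-learn's FastICA. In the sketch below, two synthetic tones stand in for the piano trill recordings, and the 2x2 mixing matrix is an assumed example; the recovered sources come back only up to scale and ordering.

        # Minimal sketch of blind source separation with FastICA (scikit-learn); two
        # synthetic tones stand in for the trill recordings used in the study.
        import numpy as np
        from sklearn.decomposition import FastICA

        fs = 8000                                   # sample rate, Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        s1 = np.sin(2 * np.pi * 440 * t)            # "note" 1
        s2 = np.sign(np.sin(2 * np.pi * 466 * t))   # "note" 2 (non-Gaussian waveform)
        S = np.c_[s1, s2]

        A = np.array([[1.0, 0.6],                   # mixing matrix (two "microphones")
                      [0.4, 1.0]])
        X = S @ A.T                                 # observed mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)                # recovered sources (up to scale/order)
        print("estimated mixing matrix:\n", ica.mixing_)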

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and its components are now also deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  8. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    Shasharina, Svetlana

    2010-01-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in those applications, and modifying the tools to make them more usable.

  9. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of a number of components and different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
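
    The trace-back idea can be made concrete with a toy model. In the sketch below, each component exposes a table mapping an output deviation to a gate over input deviations and basic failure events, and a recursive trace-back from the top event assembles a nested fault tree; the two-component pump/nozzle system and its event names are invented for illustration.

        # Toy sketch of the trace-back idea: each component carries a table that maps
        # a deviation at its output to combinations of input deviations and internal
        # failures; tracing back from the top event yields a (nested) fault tree.
        # The two-component "pump -> nozzle" system below is invented for illustration.
        import json

        components = {
            "nozzle": {
                # "no flow at outlet" is caused by (no flow at inlet) OR (nozzle blocked)
                "no_flow_out": {"gate": "OR",
                                "inputs": ["pump.no_flow_out"],
                                "basic_events": ["nozzle_blocked"]},
            },
            "pump": {
                "no_flow_out": {"gate": "OR",
                                "inputs": [],
                                "basic_events": ["pump_fails_to_run", "power_lost"]},
            },
        }

        def trace_back(event):
            """Expand 'component.output_deviation' into a nested gate structure."""
            comp, deviation = event.split(".")
            entry = components[comp][deviation]
            children = list(entry["basic_events"]) + [trace_back(e) for e in entry["inputs"]]
            return {"event": event, "gate": entry["gate"], "children": children}

        print(json.dumps(trace_back("nozzle.no_flow_out"), indent=2))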

  10. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  11. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    International Nuclear Information System (INIS)

    Han, Ee; Hantke, Maren; Müller, Siegfried

    2017-01-01

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from a lack of efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional simulations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.

  12. Insights from random vibration analyses using multiple earthquake components

    International Nuclear Information System (INIS)

    DebChaudhury, A.; Gasparini, D.A.

    1981-01-01

    The behavior of multi-degree-of-freedom systems subjected to multiple earthquake components is studied by the use of random vibration dynamic analyses. A linear system which has been decoupled into modes and has both translational and rotational degrees of freedom is analyzed. The seismic excitation is modelled as a correlated or uncorrelated, vector-valued, non-stationary random process having a Kanai-Tajimi type of frequency content. Non-stationarity is achieved by using a piecewise linear strength function. Therefore, almost any type of evolution and decay of an earthquake may be modelled. Also, in general, the components of the excitation have different frequency contents and strength functions, i.e. intensities and durations; and the correlations between components can vary with time. A state-space, modal, random vibration approach is used. Exact analytical expressions for both the state transition matrix and the evolutionary modal covariance matrix are utilized to compute time histories of modal RMS responses. Desired responses are then computed by modal superposition. Specifically, relative displacement, relative velocity and absolute acceleration responses are studied. An important advantage of such analyses is that RMS responses vary smoothly in time; therefore, large time intervals may be used to generate response time histories. The modal superposition is exact; that is, all cross correlation terms between modal responses are included. (orig./RW)
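
    The covariance recursion behind such analyses can be shown for a single mode. The sketch below propagates a 2x2 modal state covariance with a state transition matrix and a piecewise-linear excitation strength and reads RMS displacement from its diagonal; the mode parameters, time step, and the crude discretization of the input covariance are illustrative assumptions, not the authors' exact formulation.

        # Compact sketch (single mode, illustrative parameters; not the authors' exact
        # formulation) of propagating the state covariance P_k with a state transition
        # matrix and a piecewise-linear excitation strength, then reading off RMS
        # responses from the diagonal of P_k.
        import numpy as np
        from scipy.linalg import expm

        wn, zeta, dt = 2 * np.pi * 2.0, 0.05, 0.05       # 2 Hz mode, 5% damping
        Ac = np.array([[0.0, 1.0],
                       [-wn**2, -2.0 * zeta * wn]])       # continuous-time state matrix
        Phi = expm(Ac * dt)                               # state transition matrix

        def strength(t):
            """Piecewise-linear evolutionary intensity of the white-noise excitation."""
            return np.interp(t, [0.0, 2.0, 6.0, 10.0], [0.0, 1.0, 1.0, 0.0])

        P = np.zeros((2, 2))                              # modal state covariance
        rms_disp = []
        for k in range(200):
            t = k * dt
            Qk = np.array([[0.0, 0.0],
                           [0.0, strength(t) * dt]])      # crude discretized input covariance
            P = Phi @ P @ Phi.T + Qk                      # covariance recursion
            rms_disp.append(np.sqrt(P[0, 0]))
        print("peak RMS modal displacement:", max(rms_disp))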

  13. Application of kernel principal component analysis and computational machine learning to exploration of metabolites strongly associated with diet.

    Science.gov (United States)

    Shiokawa, Yuka; Date, Yasuhiro; Kikuchi, Jun

    2018-02-21

    Computer-based technological innovation provides advancements in sophisticated and diverse analytical instruments, enabling massive amounts of data collection with relative ease. This is accompanied by a fast-growing demand for technological progress in data mining methods for analysis of big data derived from chemical and biological systems. From this perspective, use of a general "linear" multivariate analysis alone limits interpretations due to "non-linear" variations in metabolic data from living organisms. Here we describe a kernel principal component analysis (KPCA)-incorporated analytical approach for extracting useful information from metabolic profiling data. To overcome the limitation of important variable (metabolite) determinations, we incorporated a random forest conditional variable importance measure into our KPCA-based analytical approach to demonstrate the relative importance of metabolites. Using a market basket analysis, hippurate, the most important variable detected in the importance measure, was associated with high levels of some vitamins and minerals present in foods eaten the previous day, suggesting a relationship between increased hippurate and intake of a wide variety of vegetables and fruits. Therefore, the KPCA-incorporated analytical approach described herein enabled us to capture input-output responses, and should be useful not only for metabolic profiling but also for profiling in other areas of biological and environmental systems.
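
    The analysis pattern can be sketched with scikit-learn. In the snippet below, kernel PCA captures non-linear structure and a random forest then ranks the original variables against the first component score; scikit-learn's impurity-based feature importance is only a stand-in for the conditional variable importance used in the paper, and the random matrix is a placeholder for real metabolic profiles.

        # Compact sketch of the analysis pattern: kernel PCA to capture non-linear
        # structure, then a random-forest importance measure to rank the original
        # variables against a KPCA score. The impurity-based importance used here is
        # only a stand-in for the conditional importance in the paper, and the data
        # below are random placeholders for the metabolic profiles.
        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 30))            # 120 samples x 30 "metabolites"

        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.05)
        scores = kpca.fit_transform(X)            # non-linear component scores

        # Rank metabolites by how well they predict the first KPCA score.
        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(X, scores[:, 0])
        ranking = np.argsort(rf.feature_importances_)[::-1]
        print("top variables for component 1:", ranking[:5])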

  14. Solving the Stokes problem on a massively parallel computer

    DEFF Research Database (Denmark)

    Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya

    2001-01-01

    boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi‐level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work...... is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method....

  15. From Computer-interpretable Guidelines to Computer-interpretable Quality Indicators: A Case for an Ontology.

    Science.gov (United States)

    White, Pam; Roudsari, Abdul

    2014-01-01

    In the United Kingdom's National Health Service, quality indicators are generally measured electronically by using queries and data extraction, resulting in overlap and duplication of query components. Electronic measurement of health care quality indicators could be improved through an ontology intended to reduce duplication of effort during healthcare quality monitoring. While much research has been published on ontologies for computer-interpretable guidelines, quality indicators have lagged behind. We aimed to determine progress on the use of ontologies to facilitate computer-interpretable healthcare quality indicators. We assessed potential for improvements to computer-interpretable healthcare quality indicators in England. We concluded that an ontology for a large, diverse set of healthcare quality indicators could benefit the NHS and reduce workload, with potential lessons for other countries.

  16. Evaluation of flow-induced vibration prediction techniques for in-reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Turula, P.

    1975-05-01

    Selected in-reactor components of a hydraulic and structural dynamic scale model of the U. S. Energy Research and Development Administration experimental Fast Test Reactor have been studied in an effort to develop and evaluate techniques for predicting vibration behavior of elastic structures exposed to a moving fluid. Existing analysis methods are used to compute the natural frequencies and modal shapes of submerged beam and shell type components. Component response is calculated, assuming as fluid forcing mechanisms both vortex shedding and random excitations characterized by the available hydraulic data. The free and force vibration response predictions are compared with extensive model flow and shaker test data. (U.S.)

  17. On combining principal components with Fisher's linear discriminants for supervised learning

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.

    2006-01-01

    "The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic increase of computational complexity and classification error in high dimensions. In this paper, principal component analysis (PCA), parametric feature extraction (FE) based on Fisher’s linear

  18. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
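
    The capacity relation stated above is simple enough to spell out. The snippet below multiplies sides, tracks per side, sectors per track, and bytes per sector; the geometry numbers are made up (they happen to match a 1.44 MB floppy) and are not taken from the article.

        # Trivial worked example of the capacity relation given in the text; the drive
        # geometry numbers below are made up purely for illustration.
        sides = 2
        tracks_per_side = 80
        sectors_per_track = 18
        bytes_per_sector = 512

        capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
        print(capacity_bytes, "bytes =", capacity_bytes / 1024, "KiB")   # 1474560 bytes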

  19. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  20. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on that model for bit-level operations, are useful for compositionally developing asynchronous and concurrent programs that frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized for and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it well suited to compositional bit-level computation, since the uniform computation elements are sufficient to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  1. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing the automated resource exclusion and recovery tools, that help re-focus operational manpower to areas which have yet to be automated, and improve utilization of available computing resources. We present recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in testing machinery, machine learning algorithms for anomaly detection, categorized resources as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  2. Users guide for WoodCite, a product cost quotation tool for wood component manufacturers [computer program

    Science.gov (United States)

    Jeff Palmer; Adrienn Andersch; Jan Wiedenbeck; Urs. Buehlmann

    2014-01-01

    WoodCite is a Microsoft® Access-based application that allows wood component manufacturers to develop product price quotations for their current and potential customers. The application was developed by the U.S. Forest Service and Virginia Polytechnic Institute and State University, in cooperation with the Wood Components Manufacturers Association.

  3. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host to the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  4. Computing the stresses and deformations of the human eye components due to a high explosive detonation using fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Navidbakhsh, Mahdi; Sera, Toshihiro; Kudo, Susumu

    2016-05-01

    Despite the fact that the eye comprises only a very small fraction of the human body surface area, eye wounds due to detonations have recently increased dramatically. Although many efforts have been devoted to measuring injury of the globe, there is still a lack of knowledge on the injury mechanism due to the Primary Blast Wave (PBW). The goal of this study was to determine the stresses and deformations of the human eye components, including the cornea, aqueous, iris, ciliary body, lens, vitreous, retina, sclera, optic nerve, and muscles, attributed to a PBW induced by a trinitrotoluene (TNT) explosion, via a Lagrangian-Eulerian computational coupling model. Magnetic Resonance Imaging (MRI) was employed to establish a Finite Element (FE) model of the human eye according to a normal human eye. The solid components of the eye were modelled as a Lagrangian mesh, while the explosive TNT, the air domain, and the aqueous were modelled using an Arbitrary Lagrangian-Eulerian (ALE) mesh. Nonlinear dynamic FE simulations were accomplished using the explicit FE code LS-DYNA. In order to simulate the blast wave generation, propagation, and interaction with the eye, the ALE formulation with the Jones-Wilkins-Lee (JWL) equation defining the explosive material was employed. The results revealed a peak stress of 135.70 kPa brought about by the detonation upsurge on the cornea at a distance of 25 cm. The highest von Mises stress was observed on the sclera (267.3 kPa), whereas the lowest was seen on the vitreous body (0.002 kPa). The results also showed a relatively high resultant displacement for the macula as well as a high variation in the radius of curvature of the cornea and lens, which can result in macular holes, optic nerve damage and, consequently, vision loss. These results may have implications not only for understanding the stresses and strains in the human eye components but also for giving an outlook on the process by which a PBW triggers damage to the eye. Copyright © 2016 Elsevier Ltd

  5. Bacterial computing with engineered populations.

    Science.gov (United States)

    Amos, Martyn; Axmann, Ilka Maria; Blüthgen, Nils; de la Cruz, Fernando; Jaramillo, Alfonso; Rodriguez-Paton, Alfonso; Simmel, Friedrich

    2015-07-28

    We describe strategies for the construction of bacterial computing platforms, summarizing a number of results from the recently completed bacterial computing with engineered populations project. In general, the implementation of such systems requires a framework containing various components such as intracellular circuits, single cell input/output and cell-cell interfacing, as well as extensive analysis. In this overview paper, we describe our approach to each of these, and suggest possible areas for future research. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Structured automated code checking through structural components and systems engineering

    NARCIS (Netherlands)

    Coenders, J.L.; Rolvink, A.

    2014-01-01

    This paper presents a proposal to employ the design computing methodology StructuralComponents (Rolvink et al. [6] and van de Weerd et al. [7]) to perform a digital verification process that fulfils the requirements related to structural design and engineering as part of a

  7. Latest developments for a computer aided thermohydraulic network

    International Nuclear Information System (INIS)

    Alemberti, A.; Graziosi, G.; Mini, G.; Susco, M.

    1999-01-01

    Thermohydraulic networks are 1-D systems characterized by a small number of basic components (pumps, valves, heat exchangers, etc.) connected by pipes and limited spatially by a defined number of boundary conditions (tanks, atmosphere, etc.). The network system is simulated by the well known computer program RELAP5/mod3. Information concerning the network geometry, component behaviour, and initial and boundary conditions is usually supplied to the RELAP5 code in an ASCII input file by means of 'input cards'. CATNET (Computer Aided Thermalhydraulic NETwork) is a graphical user interface that, under specific user guidelines which completely define its range of applicability, permits a very high level of standardization and simplification of the RELAP5/mod3 input deck development process as well as of the output processing. The characteristics of the components (pipes, valves, pumps, etc.) defining the network system can be entered through CATNET. The CATNET interface provides special functions to compute form losses in the most typical bending and branching configurations. When the input of all system components is ready, CATNET is able to generate the RELAP5/mod3 input file. Finally, by means of CATNET, the RELAP5/mod3 code can be run and its output results can be transformed into an intuitive display form. The paper presents an example of application of the CATNET interface as well as the latest developments, which greatly simplified the work of the users and allowed the possibility of input errors to be reduced. (authors)

  8. Service-oriented computing : State of the art and research challenges

    NARCIS (Netherlands)

    Papazoglou, Michael P.; Traverso, Paolo; Dustdar, Schahram; Leymann, Frank

    2007-01-01

    Service-oriented computing promotes the idea of assembling application components into a network of services that can be loosely coupled to create flexible, dynamic business processes and agile applications that span organizations and computing platforms. An SOC research road map provides a context

  9. Interoperability and Security Support for Heterogeneous COTS/GOTS/Legacy Component-Based Architecture

    National Research Council Canada - National Science Library

    Tran, Tam

    2000-01-01

    There is a need for Commercial-off-the-shelf (COTS), Government-off-the-shelf (GOTS) and legacy components to interoperate in a secure distributed computing environment in order to facilitate the development of evolving applications...

  10. System for Cooling of Electronic Components

    Science.gov (United States)

    Vasil'ev, L. L.; Grakovich, L. P.; Dragun, L. A.; Zhuravlev, A. S.; Olekhnovich, V. A.; Rabetskii, M. I.

    2017-01-01

    Results of computational and experimental investigations of heat pipes having a predetermined thermal resistance, and of a system based on these pipes for air cooling of electronic components and diode assemblies of lasers, are presented. An efficient compact cooling system has been devised, comprising heat pipes with an evaporator having a capillary coating of caked copper powder and a condenser having developed outer finning. This system makes it possible to remove a heat flow of more than 300 W to the ambient air at a temperature of 40-50°C.

  11. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
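
    For orientation, the forward model underlying this decomposition can be written out explicitly. The notation below is generic and assumes the standard Alvarez-Macovski form extended by one contrast-material basis function; it is a reading aid, not the exact parameterization used by the authors.

    $$\mu(E,\mathbf{x}) \approx a_{1}(\mathbf{x})\,f_{\mathrm{ph}}(E) + a_{2}(\mathbf{x})\,f_{\mathrm{KN}}(E) + a_{3}(\mathbf{x})\,f_{\mathrm{Gd}}(E),$$

    $$\bar{N}_b = \int S_b(E)\,\exp\!\Big(-\textstyle\sum_{i} A_i\, f_i(E)\Big)\,\mathrm{d}E, \qquad A_i = \int_L a_i(\mathbf{x})\,\mathrm{d}\ell,$$

    where f_ph and f_KN are the photoelectric and Compton (Klein-Nishina) basis functions, f_Gd the attenuation of the contrast material, S_b(E) the effective spectrum of energy bin b, and N̄_b the expected counts along ray L. The decomposition inverts the per-ray map from the line integrals (A_1, A_2, A_3) to the bin counts, either by maximizing the Poisson likelihood or, as proposed here, by an empirical parameterization of the inverse built from prior knowledge of S_b(E).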

  12. Framework for computer-aided systems design

    International Nuclear Information System (INIS)

    Esselman, W.H.

    1992-01-01

    Advanced computer technology, analytical methods, graphics capabilities, and expert systems contribute to significant changes in the design process. Continued progress is expected. Achieving the ultimate benefits of these computer-based design tools depends on successful research and development on a number of key issues. A fundamental understanding of the design process is a prerequisite to developing these computer-based tools. In this paper a hierarchical systems design approach is described, and methods by which computers can assist the designer are examined. A framework is presented for developing computer-based design tools for power plant design. These tools include expert experience bases, tutorials, aids in decision making, and tools to develop the requirements, constraints, and interactions among subsystems and components. Early consideration of the functional tasks is encouraged. Methods of acquiring an expert's experience base are a fundamental research problem. Computer-based guidance should be provided in a manner that supports the creativity, heuristic approaches, decision making, and meticulousness of a good designer.

  13. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  14. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  15. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  16. Cloud Computing Security in Openstack Architecture: General Overview

    Directory of Open Access Journals (Sweden)

    Gleb Igorevich Shakulo

    2015-10-01

    The subject of this article is cloud computing security. The article begins with an analysis of the advantages and disadvantages of cloud computing and of its growth factors, both positive and negative. Among the latter, security is deemed one of the most prominent. Furthermore, the author takes the architecture of the OpenStack project as an example for study, describing its essential components and their interconnection. In conclusion, the author raises a series of questions as possible areas of further research to resolve security concerns, thus making cloud computing a more secure technology.

  17. Schottky signal analysis: tune and chromaticity computation

    CERN Document Server

    Chanon, Ondine

    2016-01-01

    Schottky monitors are used to determine important beam parameters in a non-destructive way. The Schottky signal is due to the internal statistical fluctuations of the particles inside the beam. In this report, after explaining the different components of a Schottky signal, an algorithm to compute the betatron tune is presented, followed by some ideas to compute machine chromaticity. The tests have been performed with offline and/or online LHC data.
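
    For context, the relation such an algorithm typically exploits can be stated explicitly (generic accelerator-physics notation, not taken from the report): the transverse Schottky spectrum exhibits betatron sidebands around each revolution harmonic, and the fractional tune follows from their positions,

    $$ f_{n,\pm} = (n \pm q)\, f_{\mathrm{rev}} \quad\Longrightarrow\quad q = \frac{f_{n,+} - f_{n,-}}{2\, f_{\mathrm{rev}}}, $$

    where f_rev is the revolution frequency, n the harmonic number and q the fractional betatron tune; chromaticity can then be estimated from the relative widths of the lower and upper sidebands.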

  18. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 3: Literature surveys and technical reports

    Science.gov (United States)

    1992-01-01

    The technical effort and computer code developed during the first year are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis.

  19. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.

  20. Outline of a novel architecture for cortical computation.

    Science.gov (United States)

    Majumdar, Kaushik

    2008-03-01

    In this paper a novel architecture for cortical computation has been proposed. This architecture is composed of computing paths consisting of neurons and synapses. These paths have been decomposed into lateral, longitudinal and vertical components. Cortical computation has then been decomposed into lateral computation (LaC), longitudinal computation (LoC) and vertical computation (VeC). It has been shown that various loop structures in the cortical circuit play important roles in cortical computation as well as in memory storage and retrieval, keeping in conformity with the molecular basis of short and long term memory. A new learning scheme for the brain has also been proposed and how it is implemented within the proposed architecture has been explained. A few mathematical results about the architecture have been proposed, some of which are without proof.

  1. Nuclear component design ontology building based on ASME codes

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    The adoption of ontology analysis in the study of concept knowledge acquisition and representation for the nuclear component design process based on computer-supported cooperative work (CSCW) makes it possible to share and reuse numerous concept knowledge of multi-disciplinary domains. A practical ontology building method is accordingly proposed, based on the Protege knowledge model in combination with both top-down and bottom-up approaches together with Formal Concept Analysis (FCA). FCA exhibits its advantages in the way it helps establish and improve the taxonomic hierarchy of concepts and resolve concept conflicts occurring in modeling multi-disciplinary domains. With Protege-3.0 as the ontology building tool, a nuclear component design ontology based on ASME codes is developed by utilizing the ontology building method. The ontology serves as the basis to realize concept knowledge sharing and reusing for nuclear component design. (authors)

  2. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-o... ...be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities.

  3. Research initiatives for plug-and-play scientific computing

    International Nuclear Information System (INIS)

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  4. Quantum computing with four-particle decoherence-free states in ion trap

    OpenAIRE

    Feng, Mang; Wang, Xiaoguang

    2001-01-01

    Quantum computing gates are proposed to be applied to trapped ions in decoherence-free states. As the phase changes due to the time evolution of components with different eigenenergies of the quantum superposition are completely frozen, quantum computing based on this model would be perfect. A possible application of our scheme in a future ion-trap quantum computer is discussed.

  5. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    Science.gov (United States)

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and sensitivity to failure. This is due to the fact that, without safety considerations, small integrated hardware will endanger patients' lives. Therefore, some principles are required for constructing wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its potentially high fault tolerance, owing to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.

  6. CernVM Co-Pilot: an Extensible Framework for Building Scalable Cloud Computing Infrastructures

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of distributed computing resources. Such resources include commercial computing clouds (e.g. Amazon EC2), scientific computing clouds (e.g. CERN lxcloud), as well as the machines of users participating in volunteer computing projects (e.g. BOINC). The framework consists of components that communicate using the Extensible Messaging and Presence protocol (XMPP), allowing for new components to be developed in virtually any programming language and interfaced to existing Grid and batch computing infrastructures exploited by the High Energy Physics community. Co-Pilot has been used to execute jobs for both the ALICE and ATLAS experiments at CERN. CernVM Co-Pilot is also one of the enabling technologies behind the LHC@home 2.0 volunteer computing project, which is the first such project that exploits virtual machine technology. The use of virtual machines eliminates the necessity of modifying existing applications and adapt...

  7. Selection of hardfacing material for components of the Indian Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Bhaduri, A.K.; Indira, R.; Albert, S.K.; Rao, B.P.S.; Jain, S.C.; Asokkumar, S.

    2004-01-01

    Nickel-base hardfacing alloys have been chosen to replace cobalt-base alloys as hardfacing material for components of the Indian Prototype Fast Breeder Reactor, for minimising the dose rate to personnel during maintenance and decommissioning, and to reduce the shielding thickness required for component handling. Induced activity, dose rate and shielding computations showed that replacing cobalt-base alloys with nickel-base alloys for hardfacing of components would result in a marked reduction in both the dose rate from the components and the thickness of lead handling flasks. Long-term ageing studies on the nickel-base hardface deposits on austenitic stainless steel showed that the hardface deposit would retain adequate hardness at the end of the components' design service-life of 40 years of exposure at 823 K

  8. Distributed computing for macromolecular crystallography.

    Science.gov (United States)

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  9. Five-Axis Ultrasonic Additive Manufacturing for Nuclear Component Manufacture

    Science.gov (United States)

    Hehr, Adam; Wenning, Justin; Terrani, Kurt; Babu, Sudarsanam Suresh; Norfolk, Mark

    2017-03-01

    Ultrasonic additive manufacturing (UAM) is a three-dimensional metal printing technology which uses high-frequency vibrations to scrub and weld together both similar and dissimilar metal foils. There is no melting in the process and no special atmosphere requirements are needed. Consequently, dissimilar metals can be joined with little to no intermetallic compound formation, and large components can be manufactured. These attributes have the potential to transform manufacturing of nuclear reactor core components such as control elements for the High Flux Isotope Reactor at Oak Ridge National Laboratory. These components are hybrid structures consisting of an outer cladding layer in contact with the coolant with neutron-absorbing materials inside, such as neutron poisons for reactor control purposes. UAM systems are built into a computer numerical control (CNC) framework to utilize intermittent subtractive processes. These subtractive processes are used to introduce internal features as the component is being built and for net shaping. The CNC framework is also used for controlling the motion of the welding operation. It is demonstrated here that curved components with embedded features can be produced using a five-axis code for the welder for the first time.

  10. Web Program for Development of GUIs for Cluster Computers

    Science.gov (United States)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  11. The computerised accountancy system (MYDAS) for irradiated components in RNL's Mayfair Laboratory at Culcheth

    International Nuclear Information System (INIS)

    Stansfield, R.G.; Baker, A.R.

    1985-09-01

    The computerised Mayfair Accountancy System (MYDAS) has been developed to account for irradiated components in the Mayfair Laboratory at Culcheth and supersedes a card-index system. The computerised system greatly improves the availability of the data held and it ensures, by means of extensive data validation programs, that the data accurately represent the current inventory of irradiated components in the Laboratory. The system has been implemented on the Risley ICL 2966 main-frame computer and uses an IDMS database to store the data. The computer is accessed through the facilities of the Transaction Processing Management System (TPMS) providing rapid and secure access to the database from several visual display units and printers simultaneously. (author)

  12. Specification and Generation of Environment for Model Checking of Software Components

    Czech Academy of Sciences Publication Activity Database

    Pařízek, P.; Plášil, František

    2007-01-01

    Vol. 176 (2007), pp. 143-154. ISSN 1571-0661. R&D Projects: GA AV ČR 1ET400300504. Institutional research plan: CEZ:AV0Z10300504. Keywords: software components; behavior protocols; model checking; automated generation of environment. Subject RIV: JC - Computer Hardware; Software

  13. Bayesian component separation: The Planck experience

    Science.gov (United States)

    Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2018-05-01

    Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.

  14. Translator for Optimizing Fluid-Handling Components

    Science.gov (United States)

    Landon, Mark; Perry, Ernest

    2007-01-01

    A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and- data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation- and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.

  15. Non-fuel assembly components: 10 CFR 61.55 classification for waste disposal

    International Nuclear Information System (INIS)

    Migliore, R.J.; Reid, B.D.; Fadeff, S.K.; Pauley, K.A.; Jenquin, U.P.

    1994-09-01

    This document reports the results of laboratory radionuclide measurements on a representative group of non-fuel assembly (NFA) components for the purposes of waste classification. This document also provides a methodology to estimate the radionuclide inventory of NFA components, including those located outside the fueled region of a nuclear reactor. These radionuclide estimates can then be used to determine the waste classification of NFA components for which there are no physical measurements. Previously, few radionuclide inventory measurements had been performed on NFA components. For this project, recommended scaling factors were selected for the ORIGEN2 computer code that result in conservative estimates of radionuclide concentrations in NFA components. These scaling factors were based upon experimental data obtained from the following NFA components: (1) a pressurized water reactor (PWR) burnable poison rod assembly, (2) a PWR rod cluster control assembly, and (3) a boiling water reactor cruciform control rod blade. As a whole, these components were found to be within Class C limits. Laboratory radionuclide measurements for these components are provided in detail.

  16. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01


  17. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data without being sensitive to noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic data are used to demonstrate the performance of the presented method.
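
    A minimal sketch of PCA-based stacking in general, assuming the gather is arranged as a matrix with one trace per column and that the first principal component carries the coherent signal; this illustrates the idea rather than the paper's specific fast algorithm.

```python
import numpy as np

def pca_stack(gather: np.ndarray) -> np.ndarray:
    """PCA-based stack of a gather with shape (n_samples, n_traces).

    The rank-one (first principal component) approximation captures the
    signal shared by all traces; averaging its per-trace weights gives a
    stacked trace comparable in scale to a conventional mean stack."""
    g = gather - gather.mean(axis=0, keepdims=True)    # remove per-trace DC offset
    u, s, vt = np.linalg.svd(g, full_matrices=False)   # economy-size SVD
    return u[:, 0] * s[0] * vt[0].mean()               # sign-consistent rank-1 stack

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 500)
    wavelet = np.exp(-200.0 * (t - 0.4) ** 2)                     # synthetic reflection
    gather = wavelet[:, None] + 2.0 * rng.normal(size=(500, 30))  # 30 noisy traces
    print(np.corrcoef(pca_stack(gather), wavelet)[0, 1])          # close to 1
```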

  18. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  19. Packaging strategies for printed circuit board components. Volume I, materials & thermal stresses.

    Energy Technology Data Exchange (ETDEWEB)

    Neilsen, Michael K. (Kansas City Plant, Kansas City, MO); Austin, Kevin N.; Adolf, Douglas Brian; Spangler, Scott W.; Neidigk, Matthew Aaron; Chambers, Robert S.

    2011-09-01

    Decisions on material selections for electronics packaging can be quite complicated by the need to balance the criteria to withstand severe impacts yet survive deep thermal cycles intact. Many times, material choices are based on historical precedence perhaps ignorant of whether those initial choices were carefully investigated or whether the requirements on the new component match those of previous units. The goal of this program focuses on developing both increased intuition for generic packaging guidelines and computational methodologies for optimizing packaging in specific components. Initial efforts centered on characterization of classes of materials common to packaging strategies and computational analyses of stresses generated during thermal cycling to identify strengths and weaknesses of various material choices. Future studies will analyze the same example problems incorporating the effects of curing stresses as needed and analyzing dynamic loadings to compare trends with the quasi-static conclusions.

  20. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
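
    The structural assumption stated in the abstract can be written compactly; the notation below is generic and only restates the low-rank-plus-diagonal model and the greedy rank-one fitting of the Gaussian log-likelihood,

    $$ \hat{\Omega} = D + \sum_{k=1}^{K} v_k v_k^{\top}, \qquad v_k = \arg\max_{v}\Big[\log\det\big(\Omega_{k-1} + v v^{\top}\big) - \operatorname{tr}\big(S\,(\Omega_{k-1} + v v^{\top})\big)\Big], $$

    where S is the sample covariance, D a diagonal matrix, Ω_{k-1} the estimate after k-1 components, and each v_k may itself be sparse; the bracketed objective is the Gaussian log-likelihood up to constants.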

  1. An Analysis of Creative Process Learning in Computer Game Activities Through Player Experiences

    OpenAIRE

    Wilawan Inchamnan

    2016-01-01

    This research investigates the extent to which creative processes can be fostered through computer gaming. It focuses on creative components in games that have been specifically designed for educational purposes: Digital Game Based Learning (DGBL). A behavior analysis for measuring the creative potential of computer game activities and learning outcomes is described. Creative components were measured by examining task motivation and domain-relevant and creativity-relevant skill factors. The r...

  2. An automatic chip structure optical inspection system for electronic components

    Science.gov (United States)

    Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe

    2018-01-01

    An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system and a high performance computer. An auto-focusing technique is presented to solve the problem that the chip surface does not lie on a single focal plane under the high magnification of the microscope. A panoramic high-resolution image stitching algorithm is adopted to deal with the contradiction between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall appropriate parameters, ensuring the consistency of chip images of electronic components of the same model. We use image change detection technology to realize the detection of chip images of electronic components. The system can achieve high-resolution imaging for chips of electronic components of various sizes, clear imaging of chip surfaces at different heights, standardized imaging for components of the same model, and recognition of chip defects.

  3. Measuring the Common Component of Stock Market Fluctuations in the Asia-Pacific Region

    OpenAIRE

    Mapa, Dennis S.; Briones, Kristine Joy S.

    2006-01-01

    This paper fits Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models to the daily closing stock market indices of Australia, China, Hong Kong, Indonesia, Japan, Korea, Malaysia, Philippines, Singapore, and Taiwan to compute time-varying weights associated with the volatilities of the individual indices. These weights and the returns of the various indices were then used to determine the common component of stock market returns. Our results suggest that a common component ...
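
    As a rough illustration of the kind of computation described (not the authors' exact specification), the sketch below fits a GARCH(1,1) model to each market's daily returns with the Python arch package and forms a volatility-weighted average; the inverse-volatility weighting is an assumption made for the example.

```python
import pandas as pd
from arch import arch_model

def common_component(returns: pd.DataFrame) -> pd.Series:
    """Volatility-weighted average of several markets' daily (percent) returns.

    Each column of `returns` is one market index. Weights at each date are
    proportional to the inverse of the market's GARCH(1,1) conditional
    volatility -- an illustrative choice, not the paper's weighting scheme."""
    vols = {}
    for col in returns.columns:
        fit = arch_model(returns[col].dropna(), vol="GARCH", p=1, q=1).fit(disp="off")
        vols[col] = fit.conditional_volatility
    vol = pd.DataFrame(vols).reindex(returns.index)
    inv = 1.0 / vol
    weights = inv.div(inv.sum(axis=1), axis=0)     # rows sum to one
    return (weights * returns).sum(axis=1)         # common component of returns
```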

  4. A coordinate descent MM algorithm for fast computation of sparse logistic PCA

    KAUST Repository

    Lee, Seokho; Huang, Jianhua Z.

    2013-01-01

    Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding

  5. So, you are buying your first computer.

    Science.gov (United States)

    Ferrara-Love, R

    1999-06-01

    Buying your first computer need not be that complicated. The first thing needed is an understanding of what you want and need the computer for. By making a list of the various essentials, you will be on your way to purchasing that computer. Once that is completed, you will need an understanding of what each of the components of the computer is, how it works, and what options you have. This way, you will be better able to discuss your needs with the salesperson. The focus of this article is limited to personal computers or PCs (i.e., IBMs [Armonk, NY], IBM clones, Compaq [Houston, TX], Gateway [North Sioux City, SD], and so on). I am not including Macintosh or Apple [Cupertino, CA] computers in this discussion; most software is made exclusively for personal computers, or is at least on the market for personal computers before becoming available in a Macintosh version.

  6. The influence of computer games on positive socialization of delinquent teenagers

    OpenAIRE

    Mosteikaitė, Ernesta

    2016-01-01

    Computers and the Internet are rapidly spreading through everyday society. These two components have had a strong influence on teenagers' daily lives. Taking a deeper look, a computer can be found in almost every family with minors and in educational institutions, which is the main reason why children use computers almost every day. Because of this, computers gain increasing significance in teenagers' daily lives, and naturally a question arises about the impact of the computer on children's behaviour and s...

  7. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to...

  8. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
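
    A rough sketch of the descriptor pipeline, assuming a generic ICA estimated from the image's own pixels rather than the per-category transformation learned in the paper; cv2.SIFT_create requires an OpenCV build that includes SIFT.

```python
import cv2
import numpy as np
from sklearn.decomposition import FastICA

def cic_sift(image_bgr: np.ndarray) -> np.ndarray:
    """SIFT descriptors computed on three ICA-derived color components.

    The unmixing matrix is estimated here from the input image itself;
    in the paper it is learned per object category from a database."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float64)
    ica = FastICA(n_components=3, random_state=0)
    comps = ica.fit_transform(pixels).reshape(*image_bgr.shape[:2], 3)

    sift = cv2.SIFT_create()
    descriptors = []
    for c in range(3):
        chan = cv2.normalize(comps[..., c], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, desc = sift.detectAndCompute(chan, None)     # descriptors for this component
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else np.empty((0, 128))
```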

  9. Supporting collaborative computing and interaction

    International Nuclear Information System (INIS)

    Agarwal, Deborah; McParland, Charles; Perry, Marcia

    2002-01-01

    To enable collaboration on the daily tasks involved in scientific research, collaborative frameworks should provide lightweight and ubiquitous components that support a wide variety of interaction modes. We envision a collaborative environment as one that provides a persistent space within which participants can locate each other, exchange synchronous and asynchronous messages, share documents and applications, share workflow, and hold videoconferences. We are developing the Pervasive Collaborative Computing Environment (PCCE) as such an environment. The PCCE will provide integrated tools to support shared computing and task control and monitoring. This paper describes the PCCE and the rationale for its design

  10. A Tuning Process in a Tunable Architecture Computer System

    OpenAIRE

    深沢, 良彰; 岸野, 覚; 門倉, 敏夫

    1986-01-01

    A tuning process in a tunable architecture computer is described. We have designed a computer system with a tunable architecture. The main components of this computer are four AM2903 bit-slice chips. The control schema of the micro instructions is horizontal, and the length of each instruction is 104 bits. Our tuning algorithm utilizes an execution history of machine-level instructions, because the execution history can be regarded as a property of the user program. In execution histories of simila...

  11. Numerical simulation of residual stress in piping components at Framatome-ANP

    International Nuclear Information System (INIS)

    Gilies, P.; Franco, C.; Cipiere, M.-F.; Ould, P.

    2005-01-01

    Numerous manufacturing processes induce residual stresses and distortions in piping components and associated welds: quenching of cast piping, machining and welding. In Pressurized Water Reactors, most of the components have a large thickness for sustaining pressure, and distortions are a minor source of concern. This is not the case for residual stresses, which may have a strong influence on several types of damage such as fatigue, corrosion and brittle fracture. In low toughness components, residual stress fields may contribute to ductile tearing initiation. These potential damages are mitigated after welding by stress relief heat treatment, which is applied in a systematic manner to ferritic components of the primary system in nuclear reactors. This treatment is not applied to austenitic piping, for which the heat treatment temperature is limited due to the risk of sensitization and residual stresses are difficult to eliminate completely. Since on-site measurements are costly and difficult to perform, numerical simulation appears to be an attractive tool for estimating residual stress distributions. Framatome-ANP is working on modelling manufacturing processes with that purpose in mind. This paper presents three kinds of applications illustrating efforts on welding, quenching and machining simulation. First a comparison is shown between computations and measurements of residual stress induced by welding of a dissimilar weld metal junction. Then numerical simulations of quenching of a cast stainless steel nozzle are presented. Finally quenching followed by machining and grinding of this cast component are considered in a full simulation of the manufacturing process. Computed distortions and residual stresses are compared with experimental measurements at different stages of the manufacturing process. (authors)

  12. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

    AFRL-AFOSR-VA-TR-2017-0102: Integrated Optoelectronic Networks for Application-Driven Multicore Computing. Sudeep Pasricha, Colorado State University. Grant FA9550-13-1-0110. ...and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs.

  13. Multi-dimensional two-fluid flow computation. An overview

    International Nuclear Information System (INIS)

    Carver, M.B.

    1992-01-01

    This paper discusses a repertoire of three-dimensional computer programs developed to perform critical analysis of single-phase, two-phase and multi-fluid flow in reactor components. The basic numerical approach to solving the governing equations common to all the codes is presented and the additional constitutive relationships required for closure are discussed. Particular applications are presented for a number of computer codes. (author). 12 refs

  14. PWR hybrid computer model for assessing the safety implications of control systems

    International Nuclear Information System (INIS)

    Smith, O.L.; Booth, R.S.; Clapp, N.E.; DiFilippo, F.C.; Renier, J.P.; Sozer, A.

    1985-01-01

    The ORNL study of safety-related aspects of control systems consists of two interrelated tasks, (1) a failure mode and effects analysis that, in part, identifies single and multiple component failures that may lead to significant plant upsets, and (2) a hybrid computer model that uses these failures as initial conditions and traces the dynamic impact on the control system and remainder of the plant. The second task is reported here. The initial step in model development was to define a suitable interface between the FMEA and computer simulation tasks. This involved identifying primary plant components that must be simulated in dynamic detail and secondary components that can be treated adequately by the FMEA alone. The FMEA in general explores broader spectra of initiating events that may collapse into a reduced number of computer runs. A portion of the FMEA includes consideration of power supply failures. Consequences of the transients may feedback on the initiating causes, and there may be an interactive relationship between the FMEA and the computer simulation. Since the thrust of this program is to investigate control system behavior, the controls are modeled in detail to accurately reproduce characteristic response under normal and off-normal transients. The balance of the model, including neutronics, thermohydraulics and component submodels, is developed in sufficient detail to provide a suitable support for the control system

  15. ANL statement of site strategy for computing workstations

    Energy Technology Data Exchange (ETDEWEB)

    Fenske, K.R. (ed.); Boxberger, L.M.; Amiot, L.W.; Bretscher, M.E.; Engert, D.E.; Moszur, F.M.; Mueller, C.J.; O' Brien, D.E.; Schlesselman, C.G.; Troyer, L.J.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  16. CO2 Removal from Multi-component Gas Mixtures Utilizing Spiral-Wound Asymmetric Membranes

    International Nuclear Information System (INIS)

    Said, W.B.; Fahmy, M.F.M.; Gad, F.K.; EI-Aleem, G.A.

    2004-01-01

    A systematic procedure and a computer program have been developed for simulating the performance of a spiral-wound gas permeator for CO2 removal from natural gas and other hydrocarbon streams. The simulation program is based on the approximate multi-component model derived by Qi and Henson (1), in addition to the membrane parameters obtained from the binary simulation program (2) (permeability and selectivity). Applying the multi-component program to the same data used by Qi and Henson to evaluate the deviation of the approximate model from the basic transport model gives results more accurate than those of the approximate model, very close to those of the basic transport model, while requiring significantly less than 1% of the computation time. The program was successfully applied to the data of the Salam gas plant membrane unit at Khalda Petroleum Company, Egypt, for the separation of CO2 from hydrocarbons in an eight-component mixture, to estimate the stage cut and the residue and permeate compositions, and gave results that matched the actual gas chromatography analysis measured by the lab.
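
    For reference, the basic relations behind such a permeator model can be written in generic textbook form (these are standard definitions, not the specific equations of the Qi and Henson model),

    $$ J_i = \frac{P_i}{\delta}\,\big(p_h\,x_i - p_l\,y_i\big), \qquad \alpha_{ij} = \frac{P_i}{P_j}, \qquad \theta = \frac{\text{permeate flow}}{\text{feed flow}}, $$

    where J_i is the local flux of component i, P_i its permeability, δ the effective membrane thickness, p_h and p_l the feed- and permeate-side pressures, x_i and y_i the corresponding mole fractions, α_ij the selectivity and θ the stage cut.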

  17. Detection of circuit-board components with an adaptive multiclass correlation filter

    Science.gov (United States)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.

  18. Calculating the shrapnel generation and subsequent damage to first wall and optics components for the National Ignition Facility

    International Nuclear Information System (INIS)

    Tokheim, R.E.; Seaman, L.; Cooper, T.; Lew, B.; Curran, D.R.; Sanchez, J.; Anderson, A.; Tobin, M.

    1996-01-01

    This study computationally assesses the threat from shrapnel generation on the National Ignition Facility (NIF) first wall, final optics, and ultimately other target chamber components. Motion of the shrapnel is determined both by particle velocities resulting from the neutron deposition and by x-ray and ionic debris loading arising from explosion of the hohlraum. Material responses of different target area components are computed from one-dimensional and two-dimensional stress wave propagation codes. Well developed rate-dependent spall computational models are used for stainless steel spall and splitting. Severe cell distortion is accounted for in shine-shield and hohlraum-loading computations. Resulting distributions of shrapnel particles are traced to the first wall and optics and damage is estimated for candidate materials. First wall and optical material damage from shrapnel includes crater formation and associated extended cracking. 5 refs., 10 figs

  19. Virtual reality/ augmented reality technology : the next chapter of human-computer interaction

    OpenAIRE

    Huang, Xing

    2015-01-01

    No matter how many different sizes and shapes computers come in, their basic components are still the same. If we look at the development of computer history from the user's perspective, we find, perhaps surprisingly, that it is the input and output devices that lead the development of the industry; in a word, human-computer interaction drives the development of computer history. Human-computer interaction has gone through three stages; the first stage relies on the inpu...

  20. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new concepts in process computer techniques has advanced in great steps, in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, such as colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, a step forward has also been made on the question of incorporating the process computer into the structure of the overall power plant control system. (orig./LH)

  1. Optical computer switching network

    Science.gov (United States)

    Clymer, B.; Collins, S. A., Jr.

    1985-01-01

    The design for an optical switching system for minicomputers that uses an optical spatial light modulator such as a Hughes liquid crystal light valve is presented. The switching system is designed to connect 80 minicomputers coupled to the switching system by optical fibers. The system has two major parts: the connection system that connects the data lines by which the computers communicate via a two-dimensional optical matrix array and the control system that controls which computers are connected. The basic system, the matrix-based connecting system, and some of the optical components to be used are described. Finally, the details of the control system are given and illustrated with a discussion of timing.

  2. Development of a computer control system for the RCNP ring cyclotron

    International Nuclear Information System (INIS)

    Ogata, H.; Yamazaki, T.; Ando, A.; Hosono, K.; Itahashi, T.; Katayama, I.; Kibayashi, M.; Kinjo, S.; Kondo, M.; Miura, I.; Nagayama, K.; Noro, T.; Saito, T.; Shimizu, A.; Uraki, M.; Maruyama, M.; Aoki, K.; Yamada, S.; Kodaira, K.

    1990-01-01

    A hierarchically distributed computer control system for the RCNP ring cyclotron is being developed. The control system consists of a central computer and four subcomputers which are linked together by an Ethernet, universal device controllers which control component devices, man-machine interfaces including an operator console and interlock systems. The universal device controller is a standard single-board computer with an 8344 microcontroller and parallel interfaces, and is usually integrated into a component device and connected to a subcomputer by means of an optical-fiber cable to achieve high-speed data transfer. Control sequences for subsystems are easily produced and improved by using an interpreter language named OPELA (OPEration Language for Accelerators). The control system will be installed in March 1990. (orig.)

  3. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    Science.gov (United States)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm by means of frequency decomposition and weighting synthesis strategy for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of filtered EEG signals for each trial were extracted as a measure of instantaneous power at each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling of envelopes of the feature signals, and subsequently applying principal component analysis. The linear discriminate analysis algorithm was then used to classify the features, due to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The present classification algorithm was applied to a dataset of nine human subjects, and achieved a success rate of classification of 90% in training and 77% in testing. The present promising results suggest that the present classification algorithm can be used in initiating a general-purpose mental state recognition based on motor imagery tasks.
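
    A minimal Python sketch of the processing chain described above (band-pass filtering into 1-Hz bins between 5 and 25 Hz, envelope extraction, down-sampling, PCA to three dimensions, then LDA) is given below. The sampling rate, filter order and the synthetic trials are assumptions made for illustration only; they are not taken from the paper.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      FS = 100                                     # assumed sampling rate (Hz); 2 s -> 200 samples
      BANDS = [(f, f + 1) for f in range(5, 25)]   # twenty 1-Hz band bins covering 5-25 Hz

      def band_envelope(trial, lo, hi):
          sos = butter(2, [lo, hi], btype="band", fs=FS, output="sos")
          return np.abs(hilbert(sosfiltfilt(sos, trial)))      # envelope as instantaneous power proxy

      def features(trials, step=20):
          # Down-sample each band envelope, then stack the 20 bands per trial (10 x 20 = 200 features).
          return np.asarray([np.concatenate([band_envelope(t, lo, hi)[::step]
                                             for lo, hi in BANDS]) for t in trials])

      # Synthetic stand-in data: 40 trials of 200 samples, labels 0 = left, 1 = right imagery.
      rng = np.random.default_rng(1)
      trials = [rng.standard_normal(200) for _ in range(40)]
      labels = rng.integers(0, 2, size=40)
      X = PCA(n_components=3).fit_transform(features(trials))  # 200-D -> 3-D
      clf = LinearDiscriminantAnalysis().fit(X, labels)
      print(clf.score(X, labels))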

  4. Navier-Stokes Computations With One-Equation Turbulence Model for Flows Along Concave Wall Surfaces

    Science.gov (United States)

    Wang, Chi R.

    2005-01-01

    This report presents the use of a time-marching three-dimensional compressible Navier-Stokes equation numerical solver with a one-equation turbulence model to simulate the flow fields developed along concave wall surfaces without and with a downstream extension flat wall surface. The 3-D Navier-Stokes numerical solver came from the NASA Glenn-HT code. The one-equation turbulence model was derived from the Spalart and Allmaras model. The computational approach was first calibrated with the computations of the velocity and Reynolds shear stress profiles of a steady flat plate boundary layer flow. The computational approach was then used to simulate developing boundary layer flows along concave wall surfaces without and with a downstream extension wall. The author investigated the computational results of surface friction factors, near surface velocity components, near wall temperatures, and a turbulent shear stress component in terms of turbulence modeling, computational mesh configurations, inlet turbulence level, and time iteration step. The computational results were compared with existing measurements of skin friction factors, velocity components, and shear stresses of the developing boundary layer flows. With a fine computational mesh and a one-equation model, the computational approach could predict accurately the skin friction factors, near surface velocity and temperature, and shear stress within the flows. The computed velocity components and shear stresses also showed the vortices effect on the velocity variations over a concave wall. The computed eddy viscosities at the near wall locations were also compared with the results from a two-equation turbulence modeling technique. The inlet turbulence length scale was found to have little effect on the eddy viscosities at locations near the concave wall surface. The eddy viscosities, from the one-equation and two-equation modeling, were comparable at most stream-wise stations. The present one

  5. MULTIMETAL - Structural performance of multimetal component

    International Nuclear Information System (INIS)

    Keim, Elisabeth; Blasset, Sebastien; Tiete, Ralf; Gilles, Philippe; Karjalainen-Roikonen, Paeivi

    2012-01-01

    The main objectives of the project are: - Develop a standard for fracture resistance testing in multi-metal specimens; - Develop harmonized procedures for dissimilar metal welds brittle and ductile integrity assessment. The underlying aim of the project is to provide recommendations for a good practice approach for the integrity assessment (including testing) of dissimilar metal welds as part of overall integrity analyses and leak-before-break (LBB) procedures. In a nuclear power plant (NPP) a single metallic component may be fabricated from different materials. For example, reactor pressure vessel (RPV) components are mainly made of ferritic steel, whereas some of the connecting pipelines are fabricated from austenitic stainless steel. As a consequence, components made of different kind of steels need to be connected. Their connecting welds are called dissimilar metal welds (DMW). Despite extensive research in the past within the EURATOM Framework, e.g. the projects BIMET and ADIMEW, further work is needed to quantify the structural performance of DMWs. The first step of the project is to gather relevant information from field experience. Typical locations of DMWs in Western as well as Eastern type light water reactors (LWRs) will be identified together with their physical and metallurgical characteristics, as well as applicable structural integrity assessment methods. The collection of relevant field information including findings position (flaw) will be followed by computational structural integrity assessment analyses of DMWs for dedicated test configurations and real cases. These analyses will involve simple engineering methods and numerical analyses. The latter also involves the use of innovative micro-mechanical modelling approaches for ductile failure processes in order to augment existing numerical methods for structural integrity assessment of DMWs. Ageing related phenomena and realistic stress distributions in the weld area will be considered. The

  6. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers, is series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption based on which all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption based on which all components of a subsystem must be homogeneous. • The presented model provides a possibility for the subsystems’ components to be non- homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
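
    The record names a genetic algorithm for the redundancy allocation problem with mixed component types per subsystem. The toy sketch below illustrates that idea with a mutation-only GA over a three-subsystem series-parallel design; the component reliabilities, costs, budget and GA settings are all invented for illustration and do not reproduce the authors' model.

      import math, random

      TYPES = {0: (0.90, 2.0), 1: (0.95, 3.5), 2: (0.99, 6.0)}   # type -> (reliability, cost)
      N_SUB, MAX_RED, BUDGET = 3, 4, 40.0

      def reliability(design):
          # Series system of parallel subsystems; components within a subsystem may differ.
          return math.prod(1.0 - math.prod(1.0 - TYPES[t][0] for t in sub) for sub in design)

      def cost(design):
          return sum(TYPES[t][1] for sub in design for t in sub)

      def fitness(design):
          return reliability(design) if cost(design) <= BUDGET else 0.0   # penalise infeasible designs

      def random_design():
          return [[random.choice(list(TYPES)) for _ in range(random.randint(1, MAX_RED))]
                  for _ in range(N_SUB)]

      def mutate(design):
          child = [sub[:] for sub in design]
          sub = random.choice(child)
          sub[random.randrange(len(sub))] = random.choice(list(TYPES))
          return child

      random.seed(0)
      pop = [random_design() for _ in range(50)]
      for _ in range(100):                         # simple (mu + lambda) selection loop
          pop += [mutate(random.choice(pop)) for _ in range(50)]
          pop = sorted(pop, key=fitness, reverse=True)[:50]
      best = pop[0]
      print(best, round(reliability(best), 4), cost(best))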

  7. Interoperable mesh components for large-scale, distributed-memory simulations

    International Nuclear Information System (INIS)

    Devine, K; Leung, V; Diachin, L; Miller, M

    2009-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.

  8. Lifetime evaluation of Bohunice NPP components

    International Nuclear Information System (INIS)

    Kupca, L.

    2001-01-01

    The paper discusses some aspects of the lifetime evaluation program for the main primary components at the Bohunice NPP, which is performed by the Nuclear Power Plant Research Institute (NPPRI) Trnava in cooperation with Bohunice and the other organizations involved. The facts presented here are based on the NPPRI research report that is issued regularly after each reactor fuel campaign, under the terms of the project resulting from the contract between NPPRI and the Bohunice NPP. For the calculations, computer codes adapted (or developed) by NPPRI have been used; only brief, concluding results are presented here, in tables (figures). (authors)

  9. Distributed Component Forests : Hierarchical Image Representations Suitable for Tera-Scale Images

    NARCIS (Netherlands)

    Wilkinson, M.H.F.; Gazagnes, Simon; Suen, Ching Y.

    2018-01-01

    The standard representations known as component trees, used in morphological connected attribute filtering and multi-scale analysis, are unsuitable for cases in which either the image itself or the tree does not fit in the memory of a single compute node. Recently, a new structure has been developed

  10. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  11. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  12. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so important that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  13. Computing possibilities in the mid 1990s

    International Nuclear Information System (INIS)

    Nash, T.

    1988-09-01

    This paper describes the kind of computing resources it may be possible to make available for experiments in high energy physics in the mid and late 1990s. We outline some of the work going on today, particularly at Fermilab's Advanced Computer Program, that projects to the future. We attempt to define areas in which coordinated R and D efforts should prove fruitful to provide for on and off-line computing in the SSC era. Because of extraordinary components anticipated from industry, we can be optimistic even to the level of predicting million VAX equivalent on-line multiprocessor/data acquisition systems for SSC detectors. Managing this scale of computing will require a new approach to large hardware and software systems. 15 refs., 6 figs

  14. Data management on the fusion computational pipeline

    International Nuclear Information System (INIS)

    Klasky, S; Beck, M; Bhat, V; Feibush, E; Ludaescher, B; Parashar, M; Shoshani, A; Silver, D; Vouk, M

    2005-01-01

    Fusion energy science, like other science areas in DOE, is becoming increasingly data intensive and network distributed. We discuss data management techniques that are essential for scientists making discoveries from their simulations and experiments, with special focus on the techniques and support that Fusion Simulation Project (FSP) scientists may need. However, the discussion applies to a broader audience since most of the fusion SciDAC's, and FSP proposals include a strong data management component. Simulations on ultra scale computing platforms imply an ability to efficiently integrate and network heterogeneous components (computational, storage, networks, codes, etc), and to move large amounts of data over large distances. We discuss the workflow categories needed to support such research as well as the automation and other aspects that can allow an FSP scientist to focus on the science and spend less time tending information technology

  15. Self-guaranteed measurement-based quantum computation

    Science.gov (United States)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system when we have scaling effects, unexpected noise, or unaccounted for correlations between several subsystems. If we do not trust the measurement basis or the prepared entangled state, we do need to be worried about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  16. Contour integral computations for multi-component material systems subjected to creep

    International Nuclear Information System (INIS)

    Chen, J.-J.; Tu, S.-T.; Xuan, F.-Z.; Wang, Z.-D.

    2006-01-01

    In the present paper the crack behavior of multi-component material systems is investigated under extensive creep condition. The validation of the creep fracture parameters C* and C(t) is firstly examined at the microscale level. It is found that the C* value is no longer path-independent when mismatch inclusions are embedded into the matrix. To characterize the crack fields in inhomogeneous material, the integral value defined at the crack tip, C*_tip, is introduced to reflect the influence of the inclusion. The interaction effects between microcrack and inclusion are systematically calculated with respect to different mismatch factors, various inclusion locations and inclusion numbers. The analysis results show that the C*_tip value is not only influenced by the inclusion properties but also depends on the microstructure near the crack tip

  17. Component failure data handbook

    International Nuclear Information System (INIS)

    Gentillon, C.D.

    1991-04-01

    This report presents generic component failure rates that are used in reliability and risk studies of commercial nuclear power plants. The rates are computed using plant-specific data from published probabilistic risk assessments supplemented by selected other sources. Each data source is described. For rates with four or more separate estimates among the sources, plots show the data that are combined. The method for combining data from different sources is presented. The resulting aggregated rates are listed with upper bounds that reflect the variability observed in each rate across the nuclear power plant industry. Thus, the rates are generic. Both per hour and per demand rates are included. They may be used for screening in risk assessments or for forming distributions to be updated with plant-specific data
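
    The abstract does not spell out the combination method, so the sketch below shows one common convention for pooling per-hour rates from several sources and attaching a classical chi-square upper bound; the counts and exposure times are invented, and this is not necessarily the handbook's aggregation procedure.

      from scipy.stats import chi2

      # (failures, exposure hours) reported by individual sources -- illustrative numbers only
      sources = [(2, 1.0e5), (0, 4.0e4), (5, 3.2e5)]

      failures = sum(f for f, _ in sources)
      hours = sum(h for _, h in sources)
      point = failures / hours                                    # pooled point estimate
      upper95 = chi2.ppf(0.95, 2 * (failures + 1)) / (2 * hours)  # classical Poisson upper bound
      print(f"rate = {point:.2e}/h, 95% upper bound = {upper95:.2e}/h")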

  18. Assume-Guarantee Verification of Software Components in SOFA 2 Framework

    Czech Academy of Sciences Publication Activity Database

    Parízek, P.; Plášil, František

    2010-01-01

    Roč. 4, č. 3 (2010), s. 210-221 ISSN 1751-8806 R&D Projects: GA AV ČR 1ET400300504 Grant - others:GA MŠk(CZ) 7E08004 Institutional research plan: CEZ:AV0Z10300504 Keywords : components * software verification * model checking Subject RIV: JC - Computer Hardware ; Software Impact factor: 0.671, year: 2010

  19. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    Directory of Open Access Journals (Sweden)

    Nageswara S. V. Rao

    2018-05-01

    Full Text Available We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.

  20. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    Science.gov (United States)

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.

  1. Problems of the Starting and Operating of Hydraulic Components and Systems in Low Ambient Temperature (Part IV

    Directory of Open Access Journals (Sweden)

    Jasiński Ryszard

    2017-09-01

    Full Text Available Designers of hydraulically driven machines and devices are obliged to ensure, during the design process, a long service life, taking into account the operational conditions. Some of the machines may be started in low ambient temperature and even in thermal shock conditions (due to hot working medium being delivered to cold components). In order to put such devices into operation, appropriate investigations, including experimental ones - usually very expensive and time-consuming - are carried out. For this reason numerical calculations can be used to determine serviceability of a hydraulic component or system operating in thermal shock conditions. Application of numerical calculation methods is much less expensive in comparison to experimental ones. This paper presents a numerical calculation method which makes it possible to solve issues of heat exchange in elements of investigated hydraulic components by using the finite element method. For performing the simulations the following data are necessary: ambient temperature, oil temperature, heat transfer coefficient between oil and surfaces of elements, as well as areas of surfaces being in contact with oil. By means of the computer simulation method, the values of clearance between cooperating elements, as well as the ranges of parameters for correct and incorrect operation of hydraulic components, have been determined. In this paper results of computer simulation of some experimentally tested hydraulic components, such as an axial piston pump and a proportional spool valve, are presented. The computer simulation results were compared with the experimental ones and high conformity was obtained.
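
    As a rough illustration of the kind of heat-exchange calculation described (hot oil suddenly contacting a cold element, with ambient temperature, oil temperature and heat transfer coefficients as inputs), the sketch below solves a 1-D transient conduction problem with convective boundaries by an explicit finite-difference scheme. It is not the authors' finite-element model, and all material data and coefficients are assumptions.

      import numpy as np

      k, rho, c_p = 50.0, 7800.0, 460.0        # steel: W/(m*K), kg/m^3, J/(kg*K) -- assumed
      alpha = k / (rho * c_p)
      L, n = 0.01, 51                          # 10 mm wall, 51 nodes
      dx = L / (n - 1)
      dt = 0.4 * dx**2 / alpha                 # stable explicit time step
      h_oil, h_air = 2000.0, 15.0              # heat transfer coefficients, W/(m^2*K) -- assumed
      T_oil, T_air, T0 = 60.0, -30.0, -30.0    # thermal-shock start-up condition (degC)

      T = np.full(n, T0)
      for _ in range(int(30.0 / dt)):          # simulate 30 s of warm-up
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          # convective boundaries: energy balance on the half-cells at each face
          Tn[0]  = T[0]  + 2 * dt / (rho * c_p * dx) * (h_oil * (T_oil - T[0]) + k * (T[1] - T[0]) / dx)
          Tn[-1] = T[-1] + 2 * dt / (rho * c_p * dx) * (h_air * (T_air - T[-1]) + k * (T[-2] - T[-1]) / dx)
          T = Tn
      print(f"oil-side surface after 30 s: {T[0]:.1f} degC, air side: {T[-1]:.1f} degC")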

  2. Statistics of Shared Components in Complex Component Systems

    Science.gov (United States)

    Mazzolini, Andrea; Gherardi, Marco; Caselle, Michele; Cosentino Lagomarsino, Marco; Osella, Matteo

    2018-04-01

    Many complex systems are modular. Such systems can be represented as "component systems," i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf's law. Such "laws" affect in a general and nontrivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from large-scale data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the "core" genome in bacteria.
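
    A small Python sketch of the null model described above is given below: a component universe with Zipf-like abundances, sets built by random draws without replacement, and the occurrence of each component across sets tallied. The universe size and set sizes are illustrative only and do not correspond to any dataset analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      U, n_sets, set_size = 1000, 200, 50
      weights = 1.0 / np.arange(1, U + 1)            # Zipf-like component abundances
      weights /= weights.sum()

      occurrence = np.zeros(U, dtype=int)
      for _ in range(n_sets):
          members = rng.choice(U, size=set_size, replace=False, p=weights)
          occurrence[members] += 1                   # component appears in this realization

      # Distribution of the number of realizations sharing each component
      shared, counts = np.unique(occurrence, return_counts=True)
      for s, c in zip(shared[:10], counts[:10]):
          print(f"{c} components occur in {s} of {n_sets} sets")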

  3. Estimation of component failure probability from masked binomial system testing data

    International Nuclear Information System (INIS)

    Tan Zhibin

    2005-01-01

    The component failure probability estimates from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field which is similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods in considering this type of uncertainty are usually computationally intensive and not practical to solve the problem for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user defined structures. A case study illustrates an application in test case prioritization

  4. Hardware replacements and software tools for digital control computers

    International Nuclear Information System (INIS)

    Walker, R.A.P.; Wang, B-C.; Fung, J.

    1996-01-01

    Technological obsolescence is an on-going challenge for all computer use. By design, and to some extent good fortune, AECL has had a good track record with respect to the march of obsolescence in CANDU digital control computer technology. Recognizing obsolescence as a fact of life, AECL has undertaken a program of supporting the digital control technology of existing CANDU plants. Other AECL groups are developing complete replacement systems for the digital control computers, and more advanced systems for the digital control computers of the future CANDU reactors. This paper presents the results of the efforts of AECL's DCC service support group to replace obsolete digital control computer and related components and to provide friendlier software technology related to the maintenance and use of digital control computers in CANDU. These efforts are expected to extend the current lifespan of existing digital control computers through their mandated life. This group applied two simple rules; the product, whether new or replacement should have a generic basis, and the products should be applicable to both existing CANDU plants and to 'repeat' plant designs built using current design guidelines. While some exceptions do apply, the rules have been met. The generic requirement dictates that the product should not be dependent on any brand technology, and should back-fit to and interface with any such technology which remains in the control design. The application requirement dictates that the product should have universal use and be user friendly to the greatest extent possible. Furthermore, both requirements were designed to anticipate user involvement, modifications and alternate user defined applications. The replacements for hardware components such as paper tape reader/punch, moving arm disk, contact scanner and Ramtek are discussed. The development of these hardware replacements coincide with the development of a gateway system for selected CANDU digital control

  5. Incorporation of passive components aging into PRAs

    International Nuclear Information System (INIS)

    Phillips, J.H.; Roesener, W.S.; Magleby, H.L.; Geidl, V.

    1993-01-01

    The probabilistic risk assessments being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components. The possible failure of passive components is given little consideration. We are developing a method for selecting risk-significant passive components and including them in probabilistic risk assessments. We demonstrated the method by selecting a weld in the auxiliary feedwater system. The selection of this component was based on expert judgement of the likelihood of failure and on an estimate of the consequence of component failure to plant safety. We then used the PRAISE computer code to perform a probabilistic structural analysis to calculate the probability that crack growth due to aging would cause the weld to fail. The calculation included the effects of mechanical loads and thermal transients considered in the design and the effects of thermal cycling caused by a leaking check valve. We modified an existing probabilistic risk assessment (NUREG-1150 plant) to include the possible failure of the auxiliary feedwater weld, and then we used the weld failure probability as input to the modified probabilistic risk assessment to calculate the change in plant risk with time. The results showed that if the failure probability of the selected weld is high, the effect on plant risk is significant. However, this particular calculation showed a very low weld failure probability and no change in plant risk for the 48 years of service analyzed. The success of this demonstration shows that this method could be applied to nuclear power plants. (orig.)

  6. CUSTOM TRIFLANGE ACETABULAR COMPONENTS IN REVISION HIP REPLACEMENT (EXPERIENCE REVIEW)

    Directory of Open Access Journals (Sweden)

    A. A. Korytkin

    2017-01-01

    Full Text Available Extensive defects of the acetabulum, especially those accompanied by pelvic discontinuity at the level of the acetabulum, pose a serious challenge in revision hip replacement and create additional complexity in fixing the acetabular component. One promising option to address this issue is the use of custom triflange acetabular components (CTAC) designed on the basis of preoperative computed tomography and tailored to the specific bone defects of the patient. Purpose of the study: to evaluate the outcomes of CTAC use in revision hip replacement. Materials and methods. The authors analyzed treatment outcomes of 12 patients after revision hip replacement using additive techniques of computer simulation and 3D-printed CTAC. The follow-up period after surgery averaged 7±3 months (from one to ten months). 7 of the 12 patients had acetabular defects of Paprosky type 3B, 4 patients had defects of Paprosky type 3A, and one patient of Paprosky type 2C. Results. Two of the twelve patients had prosthesis dislocations that required revision hip surgery; one of these patients underwent open reduction of the dislocation with wound debridement, and the other underwent replacement of the articulating couple of the acetabular component. Total scores on the Harris Hip Score and the pain VAS before treatment were 28±7 and 7±1 points respectively; postoperative scores were 76±9 and 3±1 respectively. Conclusion. The application of additive techniques for revision hip replacement in patients with extensive acetabular and pelvic defects allows precise preoperative planning, restoration of the joint rotation center, reconstruction of bone defects, and secure fixation of the triflange acetabular component, which together significantly improve treatment outcomes and patient satisfaction with the surgery.

  7. 2nd Generation QUATARA Flight Computer Project

    Science.gov (United States)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting/receiving from/to a ground station. In addition, this flight computer will be designed to be fault tolerant by creating both a robust physical hardware connection and by using a software voting scheme to determine the processor's performance. This voting scheme will leverage on the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in a low-Earth orbit.
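
    The voting scheme itself is said to leverage SLS flight software, which is not reproduced here; the sketch below only illustrates the generic majority-vote idea for a redundant four-core node, with invented outputs.

      from collections import Counter

      def vote(outputs):
          """Return (value, healthy): value wins a strict majority of the redundant cores."""
          value, hits = Counter(outputs).most_common(1)[0]
          return value, hits > len(outputs) // 2

      # Example: core 3 returns a corrupted result; the other three out-vote it.
      core_outputs = [42, 42, 7, 42]
      result, healthy = vote(core_outputs)
      print(result, healthy)        # -> 42 True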

  8. Statistics of Shared Components in Complex Component Systems

    Directory of Open Access Journals (Sweden)

    Andrea Mazzolini

    2018-04-01

    Full Text Available Many complex systems are modular. Such systems can be represented as “component systems,” i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf’s law. Such “laws” affect in a general and nontrivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from large-scale data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the “core” genome in bacteria.

  9. Automatic capability to store and retrieve component data and to calculate structural integrity of these components

    International Nuclear Information System (INIS)

    McKinnis, C.J.; Toor, P.M.

    1985-01-01

    In structural analysis, assimilation of material, geometry, and service history input parameters is very cumbersome. Quite often with changing service history and revised material properties and geometry, an analysis has to be repeated. To overcome the above mentioned difficulties, a computer program was developed to provide the capability to establish a computerized library of all material, geometry, and service history parameters for components. The program also has the capability to calculate the structural integrity based on the Arrhenius type equations, including the probability calculations. This unique combination of computerized input information storage and automated analysis procedure assures consistency, efficiency, and accuracy when the hardware integrity has to be reassessed
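
    As an illustration of the "Arrhenius type" integrity calculation over a stored service history, the sketch below accumulates a consumed life fraction from an Arrhenius rate. The pre-exponential factor, activation energy, service history and failure criterion are invented for illustration, not the program's material data, and no probability treatment is shown.

      import math

      R = 8.314                       # gas constant, J/(mol*K)
      A, EA = 1.0e7, 1.2e5            # assumed pre-exponential factor (1/h) and activation energy (J/mol)

      def damage_rate(temp_K):
          return A * math.exp(-EA / (R * temp_K))     # Arrhenius-type degradation rate (1/h)

      # stored service history: (temperature in K, hours at that temperature) -- illustrative
      history = [(550.0, 8000.0), (575.0, 2000.0), (600.0, 500.0)]
      life_fraction = sum(damage_rate(T) * hours for T, hours in history)
      print(f"consumed life fraction = {life_fraction:.3f}")   # >= 1 would indicate exhaustion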

  10. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
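
    For orientation, the sketch below performs ordinary FPCA on curves sampled on a common grid, i.e. PCA of the discretized, centered curves via an SVD. The paper's interpretable FPCs additionally impose a sparsity penalty solved by projection deflation, which is not reproduced here; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0, 1, 100)                       # common observation grid
      n = 60
      scores = rng.standard_normal((n, 2)) * [2.0, 0.7]
      basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
      curves = scores @ basis + 0.1 * rng.standard_normal((n, t.size))

      centered = curves - curves.mean(axis=0)
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      explained = s**2 / np.sum(s**2)
      fpcs = Vt[:2]                                    # first two (non-sparse) functional PCs
      print("variance explained by FPC1, FPC2:", np.round(explained[:2], 3))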

  11. Risk-based assessment of the allowable outage times for the unit 1 leningrad nuclear power plant ECCS components

    International Nuclear Information System (INIS)

    Koukhar, Sergey; Vinnikov, Bronislav

    2009-01-01

    The present paper describes a method for risk-informed assessment of Allowable Outage Times (AOTs). The AOT is the time for which components of a safety system are allowed to be out of service during power operation or during shutdown operation of a plant. If the components are not restored within that time, a plant at power must be shut down, or a plant in a given shutdown mode has to go to a safer shutdown mode. An application of the method to the Unit 1 Leningrad NPP ECCS components is also provided. For solution of the problem it is necessary to carry out two series of computations using a Living PSA model, level 1. In the first series of computations the core damage frequency (CDFb) for the base configuration of the plant is determined (no equipment out of service); the symbol 'b' denotes the base configuration. In the second series the core damage frequency (CDFi) is calculated for the configuration of the plant with component 'i' out of service, i.e. with the failure probability of that component set to 1.0 (component 'i' is unavailable). The Risk Increase Factor (RIF) is then determined from the ratio RIFi = CDFi / CDFb. Finally the AOT is calculated as AOTi = Tppr / RIFi, where Tppr is the period of time between two Planned Preventive Repairs (PPRs). 1. Using the risk-based approach, AOTs were calculated for a set of the Unit 1 Leningrad NPP ECCS components. 2. The main conclusion from the analysis is that the current deterministic AOTs for the ECCS components are conservative and should be extended. 3. The risk-based extension of the AOTs for the ECCS components can prevent the Unit 1 Leningrad NPP from entering operating modes with increased risk. (author)
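
    A worked sketch of the two ratios quoted above, with invented CDF values and an assumed one-year interval between planned preventive repairs:

      # Worked example of RIF = CDFi / CDFb and AOT = Tppr / RIF; all numbers are made up.
      CDF_base = 2.0e-5          # core damage frequency, base plant configuration (1/yr)
      CDF_i    = 8.0e-5          # CDF recomputed with component i assumed failed (1/yr)
      T_ppr    = 8760.0          # hours between planned preventive repairs (assumed 1 year)

      RIF_i = CDF_i / CDF_base           # risk increase factor for component i
      AOT_i = T_ppr / RIF_i              # allowable outage time
      print(f"RIF = {RIF_i:.1f}, AOT = {AOT_i:.0f} h")   # -> RIF = 4.0, AOT = 2190 h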

  12. Recognizing genes and other components of genomic structure

    Energy Technology Data Exchange (ETDEWEB)

    Burks, C. (Los Alamos National Lab., NM (USA)); Myers, E. (Arizona Univ., Tucson, AZ (USA). Dept. of Computer Science); Stormo, G.D. (Colorado Univ., Boulder, CO (USA). Dept. of Molecular, Cellular and Developmental Biology)

    1991-01-01

    The Aspen Center for Physics (ACP) sponsored a three-week workshop, with 26 scientists participating, from 28 May to 15 June, 1990. The workshop, entitled Recognizing Genes and Other Components of Genomic Structure, focussed on discussion of current needs and future strategies for developing the ability to identify and predict the presence of complex functional units on sequenced, but otherwise uncharacterized, genomic DNA. We addressed the need for computationally-based, automatic tools for synthesizing available data about individual consensus sequences and local compositional patterns into the composite objects (e.g., genes) that are -- as composite entities -- the true object of interest when scanning DNA sequences. The workshop was structured to promote sustained informal contact and exchange of expertise between molecular biologists, computer scientists, and mathematicians. Computers, software, and databases were available for use as "electronic blackboards" and as the basis for collaborative exploration of ideas being discussed and developed at the workshop. 23 refs., 2 tabs.

  13. Tse computers. [ultrahigh speed optical processing for two dimensional binary image

    Science.gov (United States)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  14. Evolution of the Darlington NGS fuel handling computer systems

    International Nuclear Information System (INIS)

    Leung, V.; Crouse, B.

    1996-01-01

    The ability to improve the capabilities and reliability of digital control systems in nuclear power stations to meet changing plant and personnel requirements is a formidable challenge. Many of these systems have high quality assurance standards that must be met to ensure adequate nuclear safety. Also many of these systems contain obsolete hardware along with software that is not easily transported to newer technology computer equipment. Combining modern technology upgrades into a system of obsolete hardware components is not an easy task. Lastly, as users become more accustomed to using modern technology computer systems in other areas of the station (e.g. information systems), their expectations of the capabilities of the plant systems increase. This paper will present three areas of the Darlington NGS fuel handling computer system that have been or are in the process of being upgraded to current technology components within the framework of an existing fuel handling control system. (author). 3 figs

  15. Evolution of the Darlington NGS fuel handling computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Leung, V; Crouse, B [Ontario Hydro, Bowmanville (Canada). Darlington Nuclear Generating Station

    1997-12-31

    The ability to improve the capabilities and reliability of digital control systems in nuclear power stations to meet changing plant and personnel requirements is a formidable challenge. Many of these systems have high quality assurance standards that must be met to ensure adequate nuclear safety. Also many of these systems contain obsolete hardware along with software that is not easily transported to newer technology computer equipment. Combining modern technology upgrades into a system of obsolete hardware components is not an easy task. Lastly, as users become more accustomed to using modern technology computer systems in other areas of the station (e.g. information systems), their expectations of the capabilities of the plant systems increase. This paper will present three areas of the Darlington NGS fuel handling computer system that have been or are in the process of being upgraded to current technology components within the framework of an existing fuel handling control system. (author). 3 figs.

  16. Computational analysis of a multistage axial compressor

    Science.gov (United States)

    Mamidoju, Chaithanya

    Turbomachines are used extensively in Aerospace, Power Generation, and Oil & Gas Industries. Efficiency of these machines is often an important factor and has led to the continuous effort to improve the design to achieve better efficiency. The axial flow compressor is a major component in a gas turbine with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, Quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. Methodology is described for steady state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, tip clearance and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations of the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.

  17. Evolution and Revolution in the Design of Computers Based on Nanoelectronics

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Today's computers are roughly a factor of one billion less efficient at doing their job than the laws of fundamental physics state that they could be. How much of this efficiency gain will we actually be able to harvest? What are the biggest obstacles to achieving many orders of magnitude improvement in our computing hardware, rather than the roughly factor of two we are used to seeing with each new generation of chip? Shrinking components to the nanoscale offers both potential advantages and severe challenges. The transition from classical mechanics to quantum mechanics is a major issue. Others are the problems of defect and fault tolerance: defects are manufacturing mistakes or components that irreversibly break over time and faults are transient interruptions that occur during operation. Both of these issues become bigger problems as component sizes shrink and the number of components scales up massively. In 1955, John von Neumann showed that a completely general approach to building a ...

  18. Computation material science of structural-phase transformation in casting aluminium alloys

    Science.gov (United States)

    Golod, V. M.; Dobosh, L. Yu

    2017-04-01

    Successive stages of computer simulation of casting microstructure formation under non-equilibrium crystallization conditions of multicomponent aluminum alloys are presented. On the basis of computational thermodynamics and heat transfer during solidification of macroscale shaped castings, the boundary conditions of local heat exchange are specified for mesoscale modeling of non-equilibrium solid-phase formation and of component redistribution between phases during coalescence of secondary dendrite branches. Computer analysis of structural-phase transitions is based on the principle of the additive physico-chemical effect of the alloy components in the process of diffusional-capillary morphological evolution of the dendrite structure and of local dendrite heterogeneity, whose stochastic nature and extent are revealed by metallographic study and by modeling with the Monte Carlo method. The integrated computational materials science tools for the study of alloys are focused on, and implemented for, analysis of the multiple-factor system of casting processes and prediction of casting microstructure.

  19. MFAULT: a computer program for analyzing fault trees

    International Nuclear Information System (INIS)

    Pelto, P.J.; Purcell, W.L.

    1977-11-01

    A description and user instructions are presented for MFAULT, a FORTRAN computer program for fault tree analysis. MFAULT identifies the cut sets of a fault tree, calculates their probabilities, and screens the cut sets on the basis of specified cut-offs on probability and/or cut set length. MFAULT is based on an efficient upward-working algorithm for cut set identification. The probability calculations are based on the assumption of small probabilities and constant hazard rates (i.e., exponential failure distributions). Cut sets consisting of repairable components (basic events) only, non-repairable components only, or mixtures of both types can be evaluated. Components can be on-line or standby. Unavailability contributions from pre-existing failures, failures on demand, and testing and maintenance down-time can be handled. MFAULT can analyze fault trees with AND gates, OR gates, inhibit gates, on switches (houses) and off switches. The code is presently capable of finding cut sets of up to ten events from a fault tree with up to 512 basic events and 400 gates. It is operational on the CONTROL DATA CYBER 74 computer. 11 figures
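
    A minimal Python sketch of the bookkeeping described above (cut-set probabilities as products of basic-event probabilities under the small-probability approximation, screened by a probability cut-off) is given below. The event data and cut sets are invented, and the layout bears no relation to MFAULT's actual input format.

      import math

      def unavailability(failure_rate, mission_hours):
          """Non-repairable component with a constant hazard rate."""
          return 1.0 - math.exp(-failure_rate * mission_hours)

      basic = {                      # basic event -> probability over the mission time
          "PUMP_A": unavailability(1e-4, 720.0),
          "PUMP_B": unavailability(1e-4, 720.0),
          "VALVE_C": 3e-3,           # failure on demand
      }
      cut_sets = [("PUMP_A", "PUMP_B"), ("VALVE_C",)]

      CUTOFF = 1e-8
      kept = []
      for cs in cut_sets:
          p = math.prod(basic[e] for e in cs)    # cut-set probability
          if p >= CUTOFF:
              kept.append((cs, p))

      top = sum(p for _, p in kept)              # small-probability (rare-event) approximation
      for cs, p in sorted(kept, key=lambda x: -x[1]):
          print(cs, f"{p:.2e}")
      print(f"top event ~ {top:.2e}")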

  20. Incidence of traumatic carotid and vertebral artery dissections: results of cervical vessel computed tomography angiogram as a mandatory scan component in severely injured patients

    Directory of Open Access Journals (Sweden)

    Schicho A

    2018-01-01

    Full Text Available Andreas Schicho,1 Lukas Luerken,1 Ramona Meier,1 Antonio Ernstberger,2 Christian Stroszczynski,1 Andreas Schreyer,1 Lena-Marie Dendl,1 Stephan Schleder1 1Department of Radiology, 2Department of Trauma Surgery, University Medical Center, Regensburg, Germany Purpose: The aim of this study was to evaluate the true incidence of cervical artery dissections (CeADs) in trauma patients with an Injury Severity Score (ISS) of ≥16, since head-and-neck computed tomography angiogram (CTA) is not a compulsory component of whole-body trauma computed tomography (CT) protocols. Patients and methods: A total of 230 consecutive trauma patients with an ISS of ≥16 admitted to our Level I trauma center during a 24-month period were prospectively included. Standardized whole-body CT in a 256-detector row scanner included a head-and-neck CTA. Incidence, mortality, patient and trauma characteristics, and concomitant injuries were recorded and analyzed retrospectively in patients with carotid artery dissection (CAD) and vertebral artery dissection (VAD). Results: Of the 230 patients included, 6.5% had a CeAD, 5.2% had a CAD, and 1.7% had a VAD. One patient had both CAD and VAD. For both CAD and VAD, mortality was 25%. One death was caused by fatal cerebral ischemia due to high-grade CAD. A total of 41.6% of the patients with traumatic CAD and 25% of the patients with VAD had neurological sequelae. Conclusion: Mandatory head-and-neck CTA yields higher CeAD incidence than reported before. We highly recommend the compulsory inclusion of a head-and-neck CTA to whole-body CT routines for severely injured patients. Keywords: polytrauma, carotid artery, vertebral artery, dissection, blunt trauma, computed tomography angiogram

  1. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, the numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which the time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical-model formulations as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperatures over long periods of time. Special considerations are crucial if the analysis is to be extended to the large-strain regime, where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  2. Distributed user interfaces for clinical ubiquitous computing applications.

    Science.gov (United States)

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential for developing clinical multi-device environments.

  3. GPU-accelerated computation of electron transfer.

    Science.gov (United States)

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
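
    The abstract does not name the GPU library here; as a purely illustrative stand-in, the sketch below contrasts a CPU (NumPy) and GPU (CuPy) evaluation of the kind of dense linear-algebra kernel that dominates such calculations. CuPy and a CUDA-capable GPU are assumptions of this sketch, not something stated in the record.

      # Illustrative only: offloading a dense matrix product (a typical hot spot)
      # to the GPU via a NumPy-compatible library.
      import time
      import numpy as np
      import cupy as cp   # assumption: CuPy installed with a CUDA-capable GPU

      n = 4000
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)

      t0 = time.perf_counter()
      c_cpu = a @ b                          # CPU reference
      t_cpu = time.perf_counter() - t0

      a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)   # host -> device transfer
      t0 = time.perf_counter()
      c_gpu = a_gpu @ b_gpu                  # executed on the GPU
      cp.cuda.Stream.null.synchronize()      # wait for the kernel to finish
      t_gpu = time.perf_counter() - t0

      # Numerical accuracy is preserved up to floating-point round-off.
      print("max |diff| :", float(cp.abs(cp.asarray(c_cpu) - c_gpu).max()))
      print(f"CPU {t_cpu:.3f} s, GPU {t_gpu:.3f} s")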

  4. Computational Methods for Biomolecular Electrostatics

    Science.gov (United States)

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  5. Computational statistics handbook with Matlab

    CERN Document Server

    Martinez, Wendy L

    2007-01-01

    Prefaces Introduction What Is Computational Statistics? An Overview of the Book Probability Concepts Introduction Probability Conditional Probability and Independence Expectation Common Distributions Sampling Concepts Introduction Sampling Terminology and Concepts Sampling Distributions Parameter Estimation Empirical Distribution Function Generating Random Variables Introduction General Techniques for Generating Random Variables Generating Continuous Random Variables Generating Discrete Random Variables Exploratory Data Analysis Introduction Exploring Univariate Data Exploring Bivariate and Trivariate Data Exploring Multidimensional Data Finding Structure Introduction Projecting Data Principal Component Analysis Projection Pursuit EDA Independent Component Analysis Grand Tour Nonlinear Dimensionality Reduction Monte Carlo Methods for Inferential Statistics Introduction Classical Inferential Statistics Monte Carlo Methods for Inferential Statist...

  6. Component-based development process and component lifecycle

    NARCIS (Netherlands)

    Crnkovic, I.; Chaudron, M.R.V.; Larsson, S.

    2006-01-01

    The process of component- and component-based system development differs in many significant ways from the "classical" development process of software systems. The main difference is in the separation of the development process of components from the development process of systems. This fact has a

  7. Tool Integration: Experiences and Issues in Using XMI and Component Technology

    DEFF Research Database (Denmark)

    Damm, Christian Heide; Hansen, Klaus Marius; Thomsen, Michael

    2000-01-01

    of conflicting data models, and provide architecture for doing so, based on component technology and XML Metadata Interchange. As an example, we discuss the implementation of an electronic whiteboard tool, Knight, which adds support for creative and collaborative object-oriented modeling to existing Computer-Aided...... Software Engineering through integration using our proposed architecture....

  8. An Analog Computer for Electronic Engineering Education

    Science.gov (United States)

    Fitch, A. L.; Iu, H. H. C.; Lu, D. D. C.

    2011-01-01

    This paper describes a compact analog computer and proposes its use in electronic engineering teaching laboratories to develop student understanding of applications in analog electronics, electronic components, engineering mathematics, control engineering, safe laboratory and workshop practices, circuit construction, testing, and maintenance. The…

  9. Education through the prism of computation

    Science.gov (United States)

    Kaurov, Vitaliy

    2014-03-01

    With the rapid development of technology, computation claims its irrevocable place among the research components of modern science. Thus, to foster a successful future scientist, engineer or educator, we need to add computation to the foundations of scientific education. We will discuss what type of paradigm shifts it brings to these foundations using the example of the Wolfram Science Summer School. It is one of the most advanced computational outreach programs run by the Wolfram Foundation, welcoming participants of almost all ages and backgrounds. Centered on complexity science and physics, it also covers numerous adjacent and interdisciplinary fields such as finance, biology, medicine and even music. We will talk about educational and research experiences in this program during the 12 years of its existence. We will review statistics and outputs the program has produced. Among these are interactive electronic publications at the Wolfram Demonstrations Project and contributions to the computational knowledge engine Wolfram|Alpha.

  10. Towards a Complete Model for Software Component Deployment on Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Švogor Ivan

    2014-12-01

    This report briefly describes ongoing research related to optimizing the allocation of software components to a heterogeneous computing platform (which includes CPU, GPU and FPGA). The research goal is also presented, along with current hot topics of the research area, related research teams, and finally the results and contribution of my research. It involves mathematical modelling which results in a goal function, an optimization method which finds a suboptimal solution to the goal function, and a software modeling tool which enables graphical representation of the problem at hand and helps developers determine component placement in the system design phase.
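
    As a hedged illustration of the kind of goal function involved, the sketch below exhaustively searches the allocation of a few software components to CPU/GPU/FPGA units, scoring each allocation by an execution-cost term plus a communication penalty for components placed on different units. The component names, cost numbers and weighting are invented; the report's actual model and optimization method are more elaborate.

      # Toy goal function and exhaustive search for component-to-unit allocation
      # (illustrative; the report's model and solver are more elaborate).
      from itertools import product

      units = ["CPU", "GPU", "FPGA"]
      components = ["sensor_fusion", "planner", "video_codec"]

      # Hypothetical execution cost of each component on each unit.
      exec_cost = {
          "sensor_fusion": {"CPU": 5.0, "GPU": 2.0, "FPGA": 1.5},
          "planner":       {"CPU": 1.0, "GPU": 4.0, "FPGA": 6.0},
          "video_codec":   {"CPU": 8.0, "GPU": 1.0, "FPGA": 2.0},
      }
      # Hypothetical communication volume between component pairs.
      comm = {("sensor_fusion", "planner"): 3.0,
              ("planner", "video_codec"): 1.0}
      comm_penalty = 0.5   # cost per unit of traffic crossing unit boundaries

      def goal(alloc):
          cost = sum(exec_cost[c][alloc[c]] for c in components)
          for (a, b), volume in comm.items():
              if alloc[a] != alloc[b]:
                  cost += comm_penalty * volume
          return cost

      best = min((dict(zip(components, assignment))
                  for assignment in product(units, repeat=len(components))),
                 key=goal)
      print("best allocation:", best, "cost:", goal(best))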

  11. A Computer String-Grammar of English.

    Science.gov (United States)

    Sager, Naomi

    This volume is the fourth in a series of detailed reports on a working computer program for the syntactic analysis of English sentences into their component strings. The report (1) records the considerations involved in various decisions among alternative grammatical formulations and presents the word-subclasses, the linguistic strings, etc., for…

  12. Survey of artificial intelligence methods for detection and identification of component faults in nuclear power plants

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    A comprehensive survey of computer-based systems that apply artificial intelligence methods to detect and identify component faults in nuclear power plants is presented. Classification criteria are established that categorize artificial intelligence diagnostic systems according to the types of computing approaches used (e.g., computing tools, computer languages, and shell and simulation programs), the types of methodologies employed (e.g., types of knowledge, reasoning and inference mechanisms, and diagnostic approach), and the scope of the system. The major issues of process diagnostics and computer-based diagnostic systems are identified and cross-correlated with the various categories used for classification. Ninety-five publications are reviewed

  13. Cloud Computing Security in Openstack Architecture: General Overview

    OpenAIRE

    Gleb Igorevich Shakulo

    2015-01-01

    The subject of this article is cloud computing security. The article begins with an analysis of cloud computing advantages and disadvantages and factors of growth, both positive and negative. Among the latter, security is deemed one of the most prominent. Furthermore, the author takes the architecture of the OpenStack project as an example for study, describing its essential components and their interconnection. In conclusion, the author raises a series of questions as possible areas of further research to resolve security c...

  14. IMPLEMENTATION OF CLOUD COMPUTING AS A COMPONENT OF THE UNIVERSITY IT INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    Vasyl P. Oleksyuk

    2014-05-01

    The article investigates the concept of the IT infrastructure of a higher educational institution and describes models for deploying cloud technologies in such an infrastructure. The hybrid model is the most relevant for a higher educational institution. Unified authentication is an important component of the IT infrastructure. The author suggests public (Google Apps, Office 365) and private (Cloudstack, Eucalyptus, OpenStack) cloud platforms for deployment in the IT infrastructure of a higher educational institution. Open-source platforms for organizing enterprise clouds were analyzed by the author. The article describes the experience of deploying an enterprise cloud in the IT infrastructure of the Department of Physics and Mathematics of Ternopil V. Hnatyuk National Pedagogical University.

  15. Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate

    Science.gov (United States)

    Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng

    2005-01-01

    In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE) which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies of a multi-component BEC. Numerical experience shows that the GSI converges much faster than JI and converges globally within 10-20 steps.
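
    As a rough, self-contained illustration of a Gauss-Seidel-type sweep (not the authors' implementation), the sketch below computes a ground state of a two-component 1D Gross-Pitaevskii model: each sweep rebuilds the discretized Hamiltonian of one component using the most recently updated density of the other, takes its lowest eigenpair, and renormalizes. Grid size, trap potential, interaction strengths and the mild mixing are illustrative choices.

      # Gauss-Seidel-type iteration for a two-component 1D GPE ground state (sketch).
      import numpy as np
      from scipy.linalg import eigh

      n, L = 201, 16.0
      x = np.linspace(-L / 2, L / 2, n)
      dx = x[1] - x[0]
      V = 0.5 * x**2                                   # harmonic trap
      beta = np.array([[1.0, 0.5], [0.5, 0.8]])        # intra-/inter-component interaction

      # Kinetic operator -(1/2) d^2/dx^2 by central finite differences.
      K = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
           - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2

      def normalize(psi):
          return psi / np.sqrt(np.sum(psi**2) * dx)

      psi = [normalize(np.exp(-x**2)), normalize(np.exp(-x**2))]
      mu = [0.0, 0.0]

      for sweep in range(50):
          for j in range(2):                           # Gauss-Seidel: use the freshest densities
              dens = sum(beta[j, k] * psi[k]**2 for k in range(2))
              H = K + np.diag(V + dens)
              vals, vecs = eigh(H)
              psi_new = normalize(np.abs(vecs[:, 0]))  # lowest state, fixed sign
              psi[j] = normalize(0.5 * psi[j] + 0.5 * psi_new)   # mild mixing for robustness
              mu[j] = vals[0]

      print("chemical potentials:", [round(m, 4) for m in mu])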

  16. Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model

    International Nuclear Information System (INIS)

    Baker, P.L.; Burton, W.B.

    1975-01-01

    The high-resolution 21-cm hydrogen line observations at low galactic latitude of Burton and Verschuur have been analyzed to determine the large-scale distribution of galactic hydrogen. The distribution parameters are found by model fitting. Optical depth effects have been computed using a two-component gas model. Analysis shows that a multiphase description of the medium is essential to the interpretation of low-latitude emission observations. Where possible, the number of free parameters in the gas model has been reduced. Calculations were performed for a one-component, uniform spin temperature, gas model in order to show the systematic departures between this model and the data caused by the incorrect treatment of the optical depth effect. In the two-component gas, radiative transfer is treated by a Monte Carlo calculation since the opacity of the gas arises in a randomly distributed, cold, optically thick, low velocity-dispersion, cloud medium. The emission arises in both the cloud medium and a smoothly distributed, optically thin, high velocity-dispersion, intercloud medium. The synthetic profiles computed from the two-component model reproduce both the large-scale trends of the observed emission profiles and the magnitude of the small-scale emission irregularities. The analysis permits the determination of values for the thickness of the galactic disk between half-density points, the total observed neutral hydrogen mass of the Galaxy, and the central number density of the intercloud atoms. In addition, the analysis is sensitive to the size of clouds contributing to the observations. Computations also show that synthetic emission profiles based on the two-component model display both the zero-velocity and high-velocity ridges, indicative of optical thinness on a large scale, in spite of the presence of optically thick gas.

  17. Computer models and output, Spartan REM: Appendix B

    Science.gov (United States)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.

  18. Agile Development of Various Computational Power Adaptive Web-Based Mobile-Learning Software Using Mobile Cloud Computing

    Science.gov (United States)

    Zadahmad, Manouchehr; Yousefzadehfard, Parisa

    2016-01-01

    Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…

  19. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
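
    The gradual charge injection described above can be illustrated with a few lines of explicit time stepping. The sketch below is a generic conductance-based integration with a single synaptic time constant and invented parameter values; it is not the hardware architecture or the exact SRM equations of the paper.

      # Illustrative time-driven update of an input-driven synaptic conductance and
      # membrane potential with a synaptic time constant (not the paper's hardware design).
      import numpy as np

      dt, t_max = 0.1e-3, 0.2            # 0.1 ms step, 200 ms of simulation
      tau_syn, tau_m = 5e-3, 20e-3       # synaptic and membrane time constants
      E_L, E_syn = -70e-3, 0.0           # leak and synaptic reversal potentials
      v_th, v_reset = -54e-3, -70e-3
      g_peak = 3.0                       # conductance increment per input spike (relative to leak)

      rng = np.random.default_rng(0)
      steps = int(t_max / dt)
      input_spikes = rng.random(steps) < 200.0 * dt   # ~200 Hz Poisson input

      v, g, out_spikes = E_L, 0.0, []
      for i in range(steps):
          if input_spikes[i]:
              g += g_peak                              # instantaneous rise ...
          g -= dt * g / tau_syn                        # ... followed by exponential decay
          # Leaky membrane driven by the synaptic conductance (charge injected gradually).
          dv = (-(v - E_L) - g * (v - E_syn)) / tau_m
          v += dt * dv
          if v >= v_th:
              out_spikes.append(i * dt)
              v = v_reset

      print(f"{len(out_spikes)} output spikes in {t_max*1e3:.0f} ms")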

  20. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  1. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is carried out with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show superiority over existing methods
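
    A hedged numerical illustration of the idea (not the paper's procedure or data): estimate the first-order component function E[Y | X1] of a computational model by binning samples of X1, do the same for "observations", and compare the two sets of conditional expectations with an area metric between their empirical CDFs. The model, the synthetic observation discrepancy and all parameters below are invented.

      # Illustrative: first-order component functions (conditional expectations) and
      # an area metric between their empirical CDFs (invented model and "observations").
      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

      y_model = 2.0 * x1 + np.sin(2 * np.pi * x2)                             # computational model
      y_obs = 2.3 * x1 + np.sin(2 * np.pi * x2) + 0.05 * rng.normal(size=n)   # "physical" data

      def conditional_expectation(x, y, n_bins=20):
          """Binned estimate of E[Y | X] on equal-width bins of x in [0, 1]."""
          edges = np.linspace(0, 1, n_bins + 1)
          idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
          return np.array([y[idx == b].mean() for b in range(n_bins)])

      def area_metric(a, b):
          """Area between the empirical CDFs of two samples."""
          grid = np.sort(np.concatenate([a, b]))
          Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
          Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
          return np.sum(np.abs(Fa - Fb)[:-1] * np.diff(grid))

      ce_model = conditional_expectation(x1, y_model)
      ce_obs = conditional_expectation(x1, y_obs)
      print("area metric on E[Y|X1]:", round(float(area_metric(ce_model, ce_obs)), 4))

      # A crude "calibration": rescale the model coefficient to shrink the discrepancy.
      y_cal = 2.3 * x1 + np.sin(2 * np.pi * x2)
      ce_cal = conditional_expectation(x1, y_cal)
      print("after calibration:", round(float(area_metric(ce_cal, ce_obs)), 4))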

  2. Biomorphic Multi-Agent Architecture for Persistent Computing

    Science.gov (United States)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  3. Methods of measuring residual stresses in components

    International Nuclear Information System (INIS)

    Rossini, N.S.; Dassisti, M.; Benyounis, K.Y.; Olabi, A.G.

    2012-01-01

    Highlights: ► Defining the different methods of measuring residual stresses in manufactured components. ► Comprehensive study on the hole drilling, neutron diffraction and other techniques. ► Evaluating the advantages and disadvantages of each method. ► Advising the reader on the appropriate method to use. -- Abstract: Residual stresses occur in many manufactured structures and components. A large number of investigations have been carried out to study this phenomenon and its effect on the mechanical characteristics of these components. Over the years, different methods have been developed to measure residual stress for different types of components in order to obtain a reliable assessment. The various specific methods have evolved over several decades and their practical applications have greatly benefited from the development of complementary technologies, notably in material cutting, full-field deformation measurement techniques, numerical methods and computing power. These complementary technologies have stimulated advances not only in measurement accuracy and reliability, but also in range of application; much greater detail in residual stress measurement is now available. This paper aims to classify the different residual stress measurement methods and to provide an overview of some of the recent advances in this area, to help researchers select among destructive, semi-destructive and non-destructive techniques, depending on their application and on the availability of those techniques. For each method, the scope, physical limitations, advantages and disadvantages are summarized. Finally, this paper indicates some promising directions for future developments.

  4. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with the remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of an operational experience of the new system and its evolution is presented. (paper)

  5. Video Analysis of Projectile Motion Using Tablet Computers as Experimental Tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and "g" in order to explore the underlying laws of motion. This experiment…
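
    The component-by-component analysis lends itself to a very short calculation: fit the vertical position data with a quadratic in time and read off g from the leading coefficient. The sketch below uses synthetic data standing in for the tablet measurements described in the record; all numbers are illustrative.

      # Estimate g from vertical position-time data by a quadratic fit
      # (synthetic data standing in for the video-analysis measurements).
      import numpy as np

      g_true, v0, y0 = 9.81, 3.2, 1.0
      t = np.linspace(0, 0.6, 30)                      # frame times, s
      rng = np.random.default_rng(0)
      y = y0 + v0 * t - 0.5 * g_true * t**2 + rng.normal(0, 0.005, t.size)  # ~5 mm noise

      a2, a1, a0 = np.polyfit(t, y, 2)                 # y ~ a2*t^2 + a1*t + a0
      print(f"g ~ {-2 * a2:.2f} m/s^2   (initial speed ~ {a1:.2f} m/s)")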

  6. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Average mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9. Technical score for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  7. Computational tasks in robotics and factory automation

    NARCIS (Netherlands)

    Biemans, Frank P.; Vissers, C.A.

    1988-01-01

    The design of Manufacturing Planning and Control Systems (MPCSs) — systems that negotiate with Customers and Suppliers to exchange products in return for money in order to generate profit, is discussed. The computational task of MPCS components are systematically specified as a starting point for

  8. On Perturbation Components Correspondence between Diffusion and Transport

    Energy Technology Data Exchange (ETDEWEB)

    G. Palmiotti

    2012-11-01

    We have established a correspondence between perturbation components in diffusion and transport theory. In particular, we have established the correspondence between the leakage perturbation component of diffusion theory and the group self-scattering component of transport theory. This has been confirmed by practical applications to sodium void reactivity calculations of fast reactors. Why is this important for current investigations? Recently, there has been a renewed interest in designing fast reactors in which the sodium void reactivity coefficient is minimized. In particular, the ASTRID reactor concept has been optimized with this goal in mind. The correspondence on the leakage term that has been established here has a twofold implication for the design of this kind of reactor. First, this type of reactor has a radial reflector; therefore, as shown before, the sodium void reactivity coefficient calculation requires the use of transport theory. The minimization of the sodium void reactivity coefficient is normally achieved by increasing the leakage component, which has a negative sign. The correspondence established in this paper allows one to look directly at this component in transport theory. The second implication is related to the uncertainty evaluation on sodium void reactivity. As shown before, the total sodium void reactivity effect is the result of a large compensation (opposite sign) between the scattering (often called spectral) component and the leakage one. Consequently, one has to evaluate the uncertainty on each component separately and then combine them statistically. If one wants to compute the cross-section sensitivity coefficients of the two different components, the formulation established in this paper allows this goal to be achieved by working on the contribution to the sodium void reactivity coming from the group self-scattering of the sodium cross section.

  9. Measuring the migration of the components and polyethylene wear after total hip arthroplasty: beads and specialised radiographs are not necessary.

    Science.gov (United States)

    Devane, P A; Horne, J G; Foley, G; Stanley, J

    2017-10-01

    This paper describes the methodology, validation and reliability of a new computer-assisted method which uses models of the patient's bones and the components to measure their migration and polyethylene wear from radiographs after total hip arthroplasty (THA). Models of the patient's acetabular and femoral component obtained from the manufacturer and models of the patient's pelvis and femur built from a single computed tomography (CT) scan, are used by a computer program to measure the migration of the components and the penetration of the femoral head from anteroposterior and lateral radiographs taken at follow-up visits. The program simulates the radiographic setup and matches the position and orientation of the models to outlines of the pelvis, the acetabular and femoral component, and femur on radiographs. Changes in position and orientation reflect the migration of the components and the penetration of the femoral head. Validation was performed using radiographs of phantoms simulating known migration and penetration, and the clinical feasibility of measuring migration was assessed in two patients. Migration of the acetabular and femoral components can be measured with limits of agreement (LOA) of 0.37 mm and 0.33 mm, respectively. Penetration of the femoral head can be measured with LOA of 0.161 mm. The migration of components and polyethylene wear can be measured without needing specialised radiographs. Accurate measurement may allow earlier prediction of failure after THA. Cite this article: Bone Joint J 2017;99-B:1290-7. ©2017 The British Editorial Society of Bone & Joint Surgery.

  10. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Furthermore, the work done with computer codes is described, as well as the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give an overview of the content of the project. (author)

  11. Personal computers in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.

    1988-01-01

    The advent of the personal computer has created a popular movement which has also made a strong impact on science and engineering. Flexible software environments combined with good computational performance and large storage capacities are becoming available at steadily decreasing costs. Of equal importance, however, is the quality of the user interface offered on many of these products. Graphics and screen interaction are available in ways that were only possible on specialized systems before. Accelerator engineers were quick to pick up the new technology. The first applications were probably for controllers and data gatherers for beam measurement equipment. Others followed, and today it is conceivable to make the personal computer a standard component of an accelerator control system. This paper reviews the experience gained at CERN so far and describes the approach taken in the design of the common control center for the SPS and the future LEP accelerators. The design goal has been to be able to integrate personal computers into the accelerator control system and to build the operator's workplace around them. (orig.)

  12. Computational science: Emerging opportunities and challenges

    International Nuclear Information System (INIS)

    Hendrickson, Bruce

    2009-01-01

    In the past two decades, computational methods have emerged as an essential component of the scientific and engineering enterprise. A diverse assortment of scientific applications has been simulated and explored via advanced computational techniques. Computer vendors have built enormous parallel machines to support these activities, and the research community has developed new algorithms and codes, and agreed on standards to facilitate ever more ambitious computations. However, this track record of success will be increasingly hard to sustain in coming years. Power limitations constrain processor clock speeds, so further performance improvements will need to come from ever more parallelism. This higher degree of parallelism will require new thinking about algorithms, programming models, and architectural resilience. Simultaneously, cutting edge science increasingly requires more complex simulations with unstructured and adaptive grids, and multi-scale and multi-physics phenomena. These new codes will push existing parallelization strategies to their limits and beyond. Emerging data-rich scientific applications are also in need of high performance computing, but their complex spatial and temporal data access patterns do not perform well on existing machines. These interacting forces will reshape high performance computing in the coming years.

  13. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
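
    A minimal sketch of the kind of architecture-based calculation alluded to above, using a generic absorbing Markov-chain reliability model rather than the paper's COSMIC-FFP-based procedure: each component has a per-visit reliability, control transfers between components with known probabilities, and the tool's reliability is the probability of reaching correct termination rather than failure. Component names, reliabilities and transfer probabilities are invented.

      # Generic architecture-based reliability from a discrete-time Markov chain
      # (illustrative component names, reliabilities and transfer probabilities).
      import numpy as np

      components = ["acquire", "parse", "index", "report"]
      R = np.array([0.999, 0.995, 0.990, 0.998])       # per-visit component reliabilities

      # P[i, j]: probability that control moves from component i to component j;
      # the probability mass missing from a row terminates the run correctly.
      P = np.array([
          [0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 0.9, 0.1],
          [0.0, 0.1, 0.0, 0.9],
          [0.0, 0.0, 0.0, 0.0],
      ])
      to_exit = 1.0 - P.sum(axis=1)                    # probability of correct termination per visit

      # Transient part of the absorbing chain: a transfer happens only if the
      # component executes correctly (probability R[i]); otherwise the run fails.
      Q = R[:, None] * P
      r = R * to_exit                                  # one-step absorption into "success"

      # Probability of eventual success starting from each component.
      success = np.linalg.solve(np.eye(len(components)) - Q, r)
      print("tool reliability starting at 'acquire':", round(float(success[0]), 5))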

  14. Learning by statistical cooperation of self-interested neuron-like computing elements.

    Science.gov (United States)

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.
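
    As a loose, hedged illustration of a reward-driven, neuron-like element (a simple stochastic reinforcement rule, not the specific algorithm analyzed in the article), the sketch below shows a unit that emits a random binary action from a logistic probability and nudges its weights toward actions that brought reward. Patterns, rewards and the learning rate are invented.

      # Illustrative stochastic reward-learning element (not the article's exact rule):
      # the unit "prefers" rewarded outcomes and adjusts weights to obtain them more often.
      import numpy as np

      rng = np.random.default_rng(0)
      patterns = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])   # two input contexts (with bias)
      correct = np.array([1, 0])                                 # action that yields reward
      w = np.zeros(3)
      alpha = 0.5

      def act(x):
          p = 1.0 / (1.0 + np.exp(-w @ x))      # probability of emitting action 1
          return (rng.random() < p), p

      for step in range(2000):
          i = step % 2
          x = patterns[i]
          y, p = act(x)
          r = 1.0 if int(y) == correct[i] else 0.0
          # Reward-modulated update: strengthen whatever was done when it paid off,
          # weaken it slightly when it did not.
          w += alpha * (r - 0.5) * (int(y) - p) * x

      for i, x in enumerate(patterns):
          print("context", i, "P(action=1) =", round(float(act(x)[1]), 3))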

  15. An ODP computational model of a cooperative binding object

    Science.gov (United States)

    Logé, Christophe; Najm, Elie; Chen, Ken

    1997-12-01

    A next generation of systems that should appear will have to manage simultaneously several geographically distributed users. These systems belong to the class of computer-supported cooperative work systems (CSCW). The development of such complex systems requires rigorous development methods and flexible open architectures. Open distributed processing (ODP) is a standardization effort that aims at providing such architectures. ODP features appropriate abstraction levels and a clear articulation between requirements, programming and infrastructure support. ODP advocates the use of formal methods for the specification of systems and components. The computational model, an object-based model, one of the abstraction levels identified within ODP, plays a central role in the global architecture. In this model, basic objects can be composed with communication and distribution abstractions (called binding objects) to form a computational specification of distributed systems, or applications. Computational specifications can then be mapped (in a mechanism akin to compilation) onto an engineering solution. We use an ODP-inspired method to computationally specify a cooperative system. We start from a general purpose component that we progressively refine into a collection of basic and binding objects. We focus on two issues of a co-authoring application, namely, dynamic reconfiguration and multiview synchronization. We discuss solutions for these issues and formalize them using the MT-LOTOS specification language that is currently studied in the ISO standardization formal description techniques group.

  16. Evaluation and development of measurement components for a catalytic converter test system

    Energy Technology Data Exchange (ETDEWEB)

    Chan, A.K.

    1997-08-01

    The purpose of this research and development project was to evaluate and configure the components of a test system designed for the analysis of full-scale vehicle-aged automobile catalytic converters. The components tested included an exhaust gas analyzer for measuring hydrocarbon, CO, CO{sub 2} and O{sub 2} contents and a chemiluminescence detector (CLD) for measuring NO{sub x}, as well as thermocouples and pressure meters. A software package (TestPoint vers.2.Ob, Capital Equipment) was used to develop a computer-based data sampling and acquisition interface with the components and sensors connected to the test system. Tests of the Sun MGA-1200, a non-dispersive infrared (NDIR) analyzer, with CO, CO{sub 2} and various hydrocarbons (propane and octane) revealed sensitivity to large pressure changes (greater than 500 mbar). There was no significant cross-sensitivity between the gases except for the slight response of the hydrocarbon (HC) register to water vapor and CO. However, the low range of HC(5-30 vppm) expected in car exhaust after the converter means any cross-sensitivity could have an effect on the measurement, depending on measurement conditions. Tests of the EcoPhysics CLD 700 NO{sub x} meter also indicated sensitivity to pressure. An executable TestPoint run-time application was created to allow near real-time monitoring of the test system using a desktop computer. Nine channels of analog data are fed to a desktop computer via an A/D board and seven exhaust gas parameters through an RS-232C interface with the MGA-1200 23 refs, 32 figs, 10 tabs. Examination paper

  17. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
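
    As a hedged illustration of the reliability and reliability-sensitivity quantities involved (plain Monte Carlo with a score-function sensitivity, not the perturbation/response-surface/Edgeworth machinery of the paper), the sketch below estimates a failure probability for an invented limit-state function and its sensitivity to a distribution mean.

      # Illustrative Monte Carlo estimate of a failure probability and its sensitivity
      # to a distribution parameter (invented limit state; not the paper's method).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200000

      mu_s, sig_s = 300.0, 30.0      # "strength" distribution parameters (illustrative)
      mu_l, sig_l = 200.0, 40.0      # "load" distribution parameters

      s = rng.normal(mu_s, sig_s, n)
      l = rng.normal(mu_l, sig_l, n)

      g = s - l                      # limit-state function: failure when g < 0
      fail = g < 0.0
      pf = fail.mean()

      # Score-function estimate of dPf/dmu_s for a normally distributed variable:
      # dPf/dmu_s = E[ 1{g<0} * (s - mu_s) / sig_s^2 ].
      dpf_dmu_s = np.mean(fail * (s - mu_s) / sig_s**2)

      print(f"Pf        ~ {pf:.4e}")
      print(f"dPf/dmu_s ~ {dpf_dmu_s:.3e}  (negative: raising mean strength lowers Pf)")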

  18. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, ''Classified Computer Security,'' requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  19. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system

  20. Computer Assistance for Writing Interactive Programs: TICS.

    Science.gov (United States)

    Kaplow, Roy; And Others

    1973-01-01

    Investigators developed an on-line, interactive programing system--the Teacher-Interactive Computer System (TICS)--to provide assistance to those who were not programers, but nevertheless wished to write interactive instructional programs. TICS had two components: an author system and a delivery system. Underlying assumptions were that…

  1. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves which were computed using realistic multilayer models. The median convergence properties of conventional GA, GA using principal component analysis and the novel GA were compared. GA using ICA was found to outperform the other methods with problems having 41 parameters or more to be fitted without additional XRR curve calculations. The computational complexity of the conventional methods was linear but the novel method had a quadratic computational complexity due to the applied ICA method which sets a practical limit for the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend the fitting analysis based on Parratt's formalism to multiperiodic layer structures

  2. Multi-level programming paradigm for extreme computing

    International Nuclear Information System (INIS)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2013-01-01

    In order to propose a framework and programming paradigms for post peta-scale computing, on the road to exa-scale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its good adaptation to several large computational science applications, such as the linear algebra solvers used for reactor core physics. We describe the YML language and framework, which allow the description of graphs of parallel components that may be developed using a PGAS-like language such as XMP, and scheduled and computed on supercomputers. Then, we propose experimentations on supercomputers (such as the 'K' and 'Hooper' ones) of the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and of the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm. (authors)

  3. Computational advances in transition phase analysis

    International Nuclear Information System (INIS)

    Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.

    1994-01-01

    In this paper, historical perspective and recent advances are reviewed on computational technologies to evaluate a transition phase of core disruptive accidents in liquid-metal fast reactors. An analysis of the transition phase requires treatment of multi-phase multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort was initiated when the program of SIMMER-series computer code development was initiated in the late 1970s in the USA. Successful application of the latest SIMMER-II in USA, western Europe and Japan have proved its effectiveness, but, at the same time, several areas that require further research have been identified. Based on the experience and lessons learned during the SIMMER-II application through 1980s, a new project of SIMMER-III development is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described with emphasis on recent advances in multi-phase multi-component fluid dynamics technologies and their expected implication on a future reliable transition phase analysis. (author)

  4. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  5. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continues increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  6. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI, University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  7. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    Science.gov (United States)

    Harutyunyan, A.; Blomer, J.; Buncic, P.; Charalampidis, I.; Grey, F.; Karneyeu, A.; Larsen, D.; Lombraña González, D.; Lisec, J.; Segal, B.; Skands, P.

    2012-12-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.

  8. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Blomer, J; Buncic, P; Charalampidis, I; Grey, F; Karneyeu, A; Larsen, D; Lombraña González, D; Lisec, J; Segal, B; Skands, P

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.

  9. Towards a Tool for Computer Supported Structuring of Products

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp

    1997-01-01

    . However, a product possesses not only a component structure but also various organ structures which are superimposed on the component structure. The organ structures carry behaviour and make the product suited for its life phases. Our long-term research goal is to develop a computer-based system...... that is capable of supporting synthesis activities in engineering design, and thereby also support handling of various organ structures. Such a system must contain a product model, in which it is possible to describe and manipulate both various organ structures and the component structure. In this paper we focus...... on the relationships between organ structures and the component structure. By an analysis of an existing product it is shown that a component may contribute to more than one organ. A set of organ structures is identified and their influence on the component structure is illustrated....

  10. Remote media vision-based computer input device

    Science.gov (United States)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  11. Computed tomography in malignant primary bone tumours

    International Nuclear Information System (INIS)

    Kersjes, W.; Harder, T.; Haeffner, P.

    1990-01-01

    The importance of computed tomography is examined in malignant primary bone tumours using a strictly defined examination group of 13 patients (six Ewing's sarcomas, five osteosarcomas, one chondrosarcoma and one spindle-shaped cell sarcoma). Computed tomography is judged superior to plain radiographs in the recognition of bone marrow infiltration and the presentation of parosteal tumour parts, as well as in the analysis of the tissue components of tumours. CT is especially suitable for therapy planning and evaluating response to therapy. CT does not provide sufficient diagnostic information to determine the dignity and exact diagnosis of bone tumours. (orig.) [de

  12. Clustering composite SaaS components in Cloud computing using a Grouping Genetic Algorithm

    OpenAIRE

    Yusoh, Zeratul Izzah Mohd; Tang, Maolin

    2012-01-01

    Recently, Software as a Service (SaaS) in Cloud computing has become more and more significant among software users and providers. To offer a SaaS with flexible functions at a low cost, SaaS providers have focused on the decomposition of the SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of ...

  13. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  14. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

  15. Reconditioning of Computers

    DEFF Research Database (Denmark)

    Bundgaard, Anja Marie

    2017-01-01

    Fast technological development and short innovation cycles have resulted in shorter life spans for certain consumer electronics. Reconditioning is proposed as one of the strategies to close the loops in the circular economy and increase the lifespan of products and components. The paper therefore...... qualitative research interviews. The case study of the contextual barriers indicated that trust from the buyer and the seller of the used computers was important for a viable business. If trust was not in place, it could be a potential barrier. Furthermore, economy obsolescence and a lack of influence...

  16. Teaching Computer-Aided Design of Fluid Flow and Heat Transfer Engineering Equipment.

    Science.gov (United States)

    Gosman, A. D.; And Others

    1979-01-01

    Describes a teaching program for fluid mechanics and heat transfer which contains both computer aided learning (CAL) and computer aided design (CAD) components and argues that the understanding of the physical and numerical modeling taught in the CAL course is essential to the proper implementation of CAD. (Author/CMV)

  17. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  18. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  19. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components contribute to a novel hybrid brain-computer system, i.e., ratbots, which exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  20. Maze learning by a hybrid brain-computer system

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components contribute to a novel hybrid brain-computer system, i.e., ratbots, which exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  1. Challenges in design of zirconium alloy reactor components

    International Nuclear Information System (INIS)

    Kakodkar, Anil; Sinha, R.K.

    1992-01-01

    Zirconium alloy components used in core-internal assemblies of heavy water reactors have to be designed under constraints imposed by the need for minimum mass, the limitations of fabrication, welding and joining techniques for this material, and unique mechanisms for degradation of the operating performance of these components. These constraints manifest as challenges for design and development when the size, shape and dimensions of the components and assemblies are unconventional or untried, or when one is aiming to maximize the service life of these components under severe operating conditions. A number of such challenges were successfully met during the development of the core-internal components and assemblies of the Dhruva reactor. Some of the then untried ideas which were developed and successfully implemented include the use of electron beam welding, cold forming of hemispherical ends of reentrant cans, and a large variety of rolled joints of innovative designs. This experience provided the foundation for taking up and successfully completing several tasks relating to coolant channels, liquid poison channels and sparger channels for PHWRs and test sections for the in-pile loops of the Dhruva reactor. For life prediction and safety assessment of coolant channels of PHWRs, some analytical tools, notably a computer code for prediction of the creep-limited life of coolant channels, have been developed. Some of the future challenges include the development of easily replaceable coolant channels as well as large diameter coolant channels for the Advanced Heavy Water Reactor, and the development of solutions to overcome the deterioration of coolant channel service life due to hydriding. (author). 5 refs., 13 figs., 1 tab

  2. FRANTIC: a computer code for time dependent unavailability analysis

    International Nuclear Information System (INIS)

    Vesely, W.E.; Goldberg, F.F.

    1977-03-01

    The FRANTIC computer code evaluates the time dependent and average unavailability for any general system model. The code is written in FORTRAN IV for the IBM 370 computer. Non-repairable components, monitored components, and periodically tested components are handled. One unique feature of FRANTIC is the detailed, time dependent modeling of periodic testing which includes the effects of test downtimes, test overrides, detection inefficiencies, and test-caused failures. The exponential distribution is used for the component failure times and periodic equations are developed for the testing and repair contributions. Human errors and common mode failures can be included by assigning an appropriate constant probability for the contributors. The output from FRANTIC consists of tables and plots of the system unavailability along with a breakdown of the unavailability contributions. Sensitivity studies can be simply performed and a wide range of tables and plots can be obtained for reporting purposes. The FRANTIC code represents a first step in the development of an approach that can be of direct value in future system evaluations. Modifications resulting from use of the code, along with the development of reliability data based on operating reactor experience, can be expected to provide increased confidence in its use and potential application to the licensing process
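
    As a rough illustration of the quantities FRANTIC reports, the sketch below evaluates the textbook unavailability of a single periodically tested component with exponentially distributed failure times; it is not a reimplementation of FRANTIC, and the failure rate, test interval and test downtime are invented values.

```python
import math

def instantaneous_unavailability(t, failure_rate, test_interval):
    """q(t) = 1 - exp(-lambda * time elapsed since the last periodic test)."""
    time_since_test = t % test_interval
    return 1.0 - math.exp(-failure_rate * time_since_test)

def average_unavailability(failure_rate, test_interval, test_duration=0.0):
    """Average of q(t) over one test interval, plus the test-downtime fraction."""
    standby = 1.0 - (1.0 - math.exp(-failure_rate * test_interval)) / (failure_rate * test_interval)
    return standby + test_duration / test_interval

if __name__ == "__main__":
    lam, T = 1.0e-5, 720.0          # failures per hour, hours between tests (invented)
    print(instantaneous_unavailability(1000.0, lam, T))   # 280 h into the second test interval
    print(average_unavailability(lam, T, test_duration=2.0))
```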

  3. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  4. Eucalyptus: an open-source cloud computing infrastructure

    International Nuclear Information System (INIS)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii

    2009-01-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.

  5. A Distributed Agent Architecture for a Computer Virus Immune System

    National Research Council Canada - National Science Library

    Harmer, Paul

    2000-01-01

    .... Information protection and information assurance are vital components required for achieving superiority in the Infosphere, but these goals are threatened by the exponential birth rate of new computer viruses...

  6. Suicide Risk by Military Occupation in the DoD Active Component Population

    Science.gov (United States)

    Trofimovich, Lily; Reger, Mark A.; Luxton, David D.; Oetjen-Gerdes, Lynne A.

    2013-01-01

    Suicide risk based on occupational cohorts within the U.S. military was investigated. Rates of suicide based on military occupational categories were computed for the Department of Defense (DoD) active component population between 2001 and 2010. The combined infantry, gun crews, and seamanship specialist group was at increased risk of suicide…

  7. Intrinsic light variation in very close eclipsing binary systems with distorted components

    International Nuclear Information System (INIS)

    Binnendijk, L.

    1974-01-01

    Using the theory of distorted components in very close binaries, the semiaxes are computed for several possible models. The coefficients of limb darkening and the gravity effect are found as weighted mean values of the results of several theoretical investigations. A simple linear relation is found between the coefficient of limb darkening and the logarithm of the effective temperature in the spectral interval A0 to K0. The Fourier series for the intrinsic light variation of such distorted components is given for any polytropic index. The influence of the tidal bulge is described in detail. (IAA)

  8. Probabilistic methods in nuclear power plant component ageing analysis

    International Nuclear Information System (INIS)

    Simola, K.

    1992-03-01

    Nuclear power plant ageing research is aimed at ensuring that plant safety and reliability are maintained at a desired level through the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in the evaluation of the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves, the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience
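
    A common way to represent the time-dependent decrease in reliability mentioned above is an increasing hazard rate, for example a Weibull model with a shape parameter greater than one. The sketch below is generic and illustrative; the shape and scale values are placeholders rather than parameters from the report.

```python
import math

def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def weibull_reliability(t, shape, scale):
    """Survival probability R(t) = exp(-(t/scale)**shape)."""
    return math.exp(-((t / scale) ** shape))

if __name__ == "__main__":
    shape, scale = 2.5, 40.0        # shape > 1 means an ageing (increasing) failure rate
    for years in (10, 20, 30, 40):
        print(years, round(weibull_hazard(years, shape, scale), 4),
              round(weibull_reliability(years, shape, scale), 4))
```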

  9. A model for determining condition-based maintenance policies for deteriorating multi-component systems

    NARCIS (Netherlands)

    Hontelez, J.A.M.; Wijnmalen, D.J.D.

    1993-01-01

    We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so

  10. Functional safeguards for computers for protection systems for Savannah River reactors

    International Nuclear Information System (INIS)

    Kritz, W.R.

    1977-06-01

    Reactors at the Savannah River Plant have recently been equipped with a ''safety computer'' system. This system utilizes dual digital computers in a primary protection system that monitors individual fuel assembly coolant flow and temperature. The design basis for the (SRP safety) computer systems allowed for eventual failure of any input sensor or any computer component. These systems are routinely used by reactor operators with a minimum of training in computer technology. The hardware configuration and software design therefore contain safeguards so that both hardware and human failures do not cause significant loss of reactor protection. The performance of the system to date is described

  11. An integrated computational tool for precipitation simulation

    Science.gov (United States)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.

  12. Computational simulation of concurrent engineering for aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  13. Computational simulation for concurrent engineering of aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  14. Thermochemical modelling of multi-component systems

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2015-01-01

    Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for developing materials and improving processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy of each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties like heat capacity, chemical potentials, solubilities etc. can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)
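
    As a minimal illustration of the kind of Gibbs-energy model such databases parameterize, the sketch below evaluates a regular-solution description of a binary A-B phase; the reference energies and interaction parameter are invented for demonstration only.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs_energy(x_b, temperature, g_a, g_b, interaction):
    """Molar Gibbs energy of an A-B solution phase (regular-solution model)."""
    x_a = 1.0 - x_b
    ideal_mixing = R * temperature * (x_a * math.log(x_a) + x_b * math.log(x_b))
    excess = interaction * x_a * x_b
    return x_a * g_a + x_b * g_b + ideal_mixing + excess

if __name__ == "__main__":
    # Invented reference energies (J/mol) and interaction parameter.
    T, g_a, g_b, L = 1000.0, 0.0, 2000.0, -12000.0
    compositions = [i / 100.0 for i in range(1, 100)]
    x_min = min(compositions, key=lambda x: gibbs_energy(x, T, g_a, g_b, L))
    print(f"G(x) is minimal near x_B = {x_min:.2f}")
```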

  15. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  16. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  17. CompGC: Efficient Offline/Online Semi-Honest Two-Party Computation

    Science.gov (United States)

    2017-02-03

    We observe that real-world functions are generally constructed in a modular way, comprising many standard components for common tasks like arithmetic... Just as a programmer writes code for a complex function by using existing simpler functions, the circuits for these functions use... overall garbled circuit computation; see Section 6 for details. More technically, a component-based garbling scheme is a triple of algorithms (GARBLE

  18. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    Science.gov (United States)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  19. Practical clinical applications of the computer in nuclear medicine

    International Nuclear Information System (INIS)

    Price, R.R.; Erickson, J.J.; Patton, J.A.; Jones, J.P.; Lagan, J.E.; Rollo, F.D.

    1978-01-01

    The impact of the computer on the practice of nuclear medicine has been felt primarily in the area of rapid dynamic studies. At this time it is difficult to find a clinic which routinely performs computer processing of static images. The general purpose digital computer is a sophisticated and flexible instrument. The number of applications for which one can use the computer to augment data acquisition, analysis, or display is essentially unlimited. In this light, the purpose of this exhibit is not to describe all possible applications of the computer in nuclear medicine but rather to illustrate those applications which have generally been accepted as practical in the routine clinical environment. Specifically, we have chosen examples of computer augmented cardiac, and renal function studies as well as examples of relative organ blood flow studies. In addition, a short description of basic computer components and terminology along with a few examples of non-imaging applications are presented

  20. Component Fragility Research Program: Phase 1 component prioritization

    International Nuclear Information System (INIS)

    Holman, G.S.; Chou, C.K.

    1987-06-01

    Current probabilistic risk assessment (PRA) methods for nuclear power plants utilize seismic ''fragilities'' - probabilities of failure conditioned on the severity of seismic input motion - that are based largely on limited test data and on engineering judgment. Under the NRC Component Fragility Research Program (CFRP), the Lawrence Livermore National Laboratory (LLNL) has developed and demonstrated procedures for using test data to derive probabilistic fragility descriptions for mechanical and electrical components. As part of its CFRP activities, LLNL systematically identified and categorized components influencing plant safety in order to identify ''candidate'' components for future NRC testing. Plant systems relevant to safety were first identified; within each system components were then ranked according to their importance to overall system function and their anticipated seismic capacity. Highest priority for future testing was assigned to those ''very important'' components having ''low'' seismic capacity. This report describes the LLNL prioritization effort, which also included application of ''high-level'' qualification data as an alternate means of developing probabilistic fragility descriptions for PRA applications

  1. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  2. Quadratic integrand double-hybrid made spin-component-scaled

    Energy Technology Data Exchange (ETDEWEB)

    Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika [CompuNet, Istituto Italiano di Tecnologia, via Morego 30, I-16163 Genoa (Italy); Sancho-García, Juan C.; Pérez-Jiménez, Ángel J. [Departamento de Química Física, Universidad de Alicante, E-03080 Alicante (Spain); Adamo, Carlo [CompuNet, Istituto Italiano di Tecnologia, via Morego 30, I-16163 Genoa (Italy); Chimie ParisTech, PSL Research University, CNRS, Institut de Recherche de Chimie Paris IRCP, F-75005 Paris (France); Institut Universitaire de France, 103 Boulevard Saint Michel, F-75005 Paris (France)

    2016-03-28

    We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density-functionals. Their performances are extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included into the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant is revealed as a less computationally demanding alternative to reach the accuracy of the original QIDH model without losing any theoretical background.
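
    The essence of the spin-component-scaled idea is that the opposite-spin and same-spin parts of the perturbative correlation energy receive separate prefactors, while the SOS variant drops the same-spin part entirely. The sketch below only illustrates that bookkeeping; the coefficients are placeholders, not the values fitted in the paper.

```python
def scs_pt2_correlation(e_opposite_spin, e_same_spin, c_os=1.2, c_ss=0.33):
    """Spin-component-scaled PT2 correlation energy: c_OS*E_OS + c_SS*E_SS."""
    return c_os * e_opposite_spin + c_ss * e_same_spin

def sos_pt2_correlation(e_opposite_spin, c_os=1.3):
    """Spin-opposite-scaled variant: the same-spin term is dropped entirely."""
    return c_os * e_opposite_spin

if __name__ == "__main__":
    e_os, e_ss = -0.250, -0.080     # illustrative PT2 components, in hartree
    print(scs_pt2_correlation(e_os, e_ss))
    print(sos_pt2_correlation(e_os))
```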

  3. Computer-generated diagram of an LHC dipole

    CERN Multimedia

    AC Team

    1998-01-01

    This computer-generated image of an LHC dipole magnet shows some of the parts vital for the operation of these components. The magnets must be cooled to 1.9 K (less than –270.3°C) so that the superconducting coils can produce the required 8 T magnetic field strength.

  4. Description of the TREBIL, CRESSEX and STREUSL computer programs, that belongs to RALLY computer code pack for the analysis of reliability systems

    International Nuclear Information System (INIS)

    Fernandes Filho, T.L.

    1982-11-01

    The RALLY computer code pack (RALLY pack) is a set of computer codes intended for the reliability analysis of complex systems, aimed at risk analysis. Three of the six codes are described, presenting their purpose, input description, calculation methods and the results obtained with each of them. The computer codes are: TREBIL, to obtain the logical equivalent of the fault tree; CRESSEX, to obtain the minimal cut sets and the point values of the unreliability and unavailability of the system; and STREUSL, for the calculation of the dispersion of those values around the mean. Although CRESSEX, in the version available at CNEN, uses a rather lengthy method to obtain the minimal cut sets on an HB-CNEN system, the three computer programs show good results, mainly STREUSL, which permits the simulation of various components. (E.G.) [pt
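
    Once minimal cut sets and component unavailabilities are available, a first-order (rare-event) estimate of system unavailability is the sum over cut sets of the product of the component unavailabilities in each set. The sketch below illustrates that approximation with invented component data; it is not the algorithm used by CRESSEX.

```python
from math import prod

def system_unavailability(cut_sets, component_q):
    """Rare-event estimate: sum over cut sets of the product of component unavailabilities."""
    return sum(prod(component_q[c] for c in cut_set) for cut_set in cut_sets)

if __name__ == "__main__":
    component_q = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 5e-4}   # invented components
    cut_sets = [{"pump_A", "pump_B"}, {"valve"}]                    # invented minimal cut sets
    print(f"Q_sys ~ {system_unavailability(cut_sets, component_q):.2e}")
```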

  5. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  6. Linear kinetic stability of a field-reversed configuration with two ion components

    International Nuclear Information System (INIS)

    Staudenmeier, J.L.; Barnes, D.C.; Lewis, H.R.

    1990-01-01

    It has been suggested that a small fraction of non-axis-encircling high energy ions may be sufficient to stabilize the tilt mode in a large-s FRC. Experimental alteration of the ion distribution function in this manner might be achieved by rf heating of the tail of the distribution function or by neutral beam injection. A linear Vlasov-fluid eigenfunction-eigenfrequency approach was used to investigate possible stabilization of the tilt mode by a high energy component. The ion distribution function is modeled as the sum of two Maxwellians with separate temperatures and no ion flow velocity. The cold component has a thermal s = 7, where s is the approximate number of ion gyroradii contained between the field null and the separatrix. The temperature ratio of the hot component to the cold component (T_H/T_T) was varied from 2 to 100. Global hot particle fractions (n_H) up to ∼0.5 were used in the computations

  7. Software Engineering Environment for Component-based Design of Embedded Software

    DEFF Research Database (Denmark)

    Guo, Yu

    2010-01-01

    as well as application models in a computer-aided software engineering environment. Furthermore, component models have been realized following carefully developed design patterns, which provide for an efficient and reusable implementation. The components have been ultimately implemented as prefabricated...... executable objects that can be linked together into an executable application. The development of embedded software using the COMDES framework is supported by the associated integrated engineering environment consisting of a number of tools, which support basic functionalities, such as system modelling......, validation, and executable code generation for specific hardware platforms. Developing such an environment and the associated tools is a highly complex engineering task. Therefore, this thesis has investigated key design issues and analysed existing platforms supporting model-driven software development...

  8. Nonlinear analysis of LWR components: areas of investigation/benefits/recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, S. J. [ed.

    1980-04-01

    The purpose of this study is to identify specific topics of investigation into design procedures, design concepts, methods of analysis, testing practices, and standards which are characterized by nonlinear behavior (both geometric and material) and which are considered to offer some economic and/or technical benefits to the LWR industry (excluding piping). In this study these topics were collected, compiled, and subjectively evaluated as to their potential benefit. The topics considered to have the greatest benefit/impact potential are discussed. The topics of investigation were found to fall basically into three areas: component, code interpretation, and load/failure mechanism. The topics are arbitrarily reorganized into six areas of investigation: Fracture, Fatigue, Vibration/Dynamic/Seismic, Plasticity, Component/Computational Considerations, and Code Interpretation.

  9. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  10. A component architecture for the two-phase flows simulation system Neptune

    Energy Technology Data Exchange (ETDEWEB)

    Bechaud, C; Boucker, M; Douce, A [Electricite de France (EDF-RD/MFTT), 78 - Chatou (France); Grandotto, M [CEA Cadarache (DEN/DTP/STH), 13 - Saint-Paul-lez-Durance (France); Tajchman, M [CEA Saclay (DEN/DM2S/SFME), 91 - Gif-sur-Yvette (France)

    2003-07-01

    Electricite de France (EdF) and the French atomic energy commission (Cea) have planned a large project to build a new set of software for nuclear reactor analysis. One of the main ideas is to allow coupled calculations in which several scientific domains are involved. This paper presents the software architecture of the two-phase flows simulation Neptune project. Neptune should allow computations of two-phase flows in 3 dimensions under normal operating conditions as well as safety conditions. Three scales are identified: the local scale where there is only homogenization between the two phases, an intermediate scale where solid internal structures are homogenized with the fluid, and the system scale where some parts of the geometry under study are considered point-wise or subject to one-dimensional simplifications. The main properties of this architecture are as follows: -) coupling with scientific domains, and between different scales, -) re-use of nearly all or parts of existing validated codes, -) components usable by the different scales, -) easy introduction of new physical models as well as new numerical methods, -) local, distributed and parallel computing. The Neptune architecture is based on the component concept with stable and well-suited interfaces. In the case of a distributed application the components are managed through a Corba bus. The building of the components is organized in shells: a programming shell (Fortran or C++ routines), a managing shell (C++ language), an interpreted shell (Python language), a Corba shell and a global driving shell (C++ or Python). Neptune will use the facilities offered by the Salome project: pre- and post-processors and controls. A data model has been built to provide common access to the information exchanged between the components (meshes, fields, physical and technical information). This architecture has first been set up and tested on some simple but significant cases and is now in use to build the Neptune

  11. A component architecture for the two-phase flows simulation system Neptune

    International Nuclear Information System (INIS)

    Bechaud, C.; Boucker, M.; Douce, A.; Grandotto, M.; Tajchman, M.

    2003-01-01

    Electricite de France (EdF) and the French atomic energy commission (Cea) have planned a large project to build a new set of software for nuclear reactor analysis. One of the main ideas is to allow coupled calculations in which several scientific domains are involved. This paper presents the software architecture of the two-phase flows simulation Neptune project. Neptune should allow computations of two-phase flows in 3 dimensions under normal operating conditions as well as safety conditions. Three scales are identified: the local scale where there is only homogenization between the two phases, an intermediate scale where solid internal structures are homogenized with the fluid, and the system scale where some parts of the geometry under study are considered point-wise or subject to one-dimensional simplifications. The main properties of this architecture are as follows: -) coupling with scientific domains, and between different scales, -) re-use of nearly all or parts of existing validated codes, -) components usable by the different scales, -) easy introduction of new physical models as well as new numerical methods, -) local, distributed and parallel computing. The Neptune architecture is based on the component concept with stable and well-suited interfaces. In the case of a distributed application the components are managed through a Corba bus. The building of the components is organized in shells: a programming shell (Fortran or C++ routines), a managing shell (C++ language), an interpreted shell (Python language), a Corba shell and a global driving shell (C++ or Python). Neptune will use the facilities offered by the Salome project: pre- and post-processors and controls. A data model has been built to provide common access to the information exchanged between the components (meshes, fields, physical and technical information). This architecture has first been set up and tested on some simple but significant cases and is now in use to build the Neptune

  12. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  13. Study of displacement cascades in metals by means of component analysis

    International Nuclear Information System (INIS)

    Hou, M.

    1981-01-01

    Component analysis is used to study the spatial distributions of point defects resulting from collision cascades in solids. The components are the three (orthogonal) eigenvectors of the covariance matrix of the spatial distribution. Those corresponding to the extreme eigenvalues determine the directions maximizing and minimizing the variance of the spatial distribution. The intermediate one is the direction maximizing the variance of the distribution projected on a plane perpendicular to the principal component. The standard deviations of the distribution projected on the three components give a measure of its size. This measure is only dependent on the cascade structure. Vacancy and interstitial distributions generated in metals by the computer code MARLOWE, based on the binary collision approximation, are analysed and compared in this picture. The simulation of hundreds of cascades generated by projectiles in the keV energy range incident on polycrystalline gold makes it possible to collect information on their average spatial anisotropy, energy density and cascade development. The dependence of these characteristics on the energy and the masses involved is discussed. (orig.)
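
    The covariance-eigenvector analysis described above is straightforward to reproduce on any set of defect coordinates. The sketch below uses a random anisotropic point cloud as a stand-in for MARLOWE output; the function and variable names are illustrative.

```python
import numpy as np

def cascade_components(defect_xyz):
    """Eigenvectors (columns) and standard deviations of an (N, 3) defect point set."""
    covariance = np.cov(defect_xyz, rowvar=False)       # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(covariance)       # eigenvalues in ascending order
    return eigvecs, np.sqrt(eigvals)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Anisotropic random cloud standing in for MARLOWE vacancy/interstitial positions.
    defects = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 1.0])
    axes, sigmas = cascade_components(defects)
    print("std devs along the minor/intermediate/principal axes:", sigmas)
```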

  14. Proceedings of the 2011 International Conference on Informatics, Cybernetics, and Computer Engineering

    CERN Document Server

    2012-01-01

    This volume includes a set of selected papers extended and revised from the International Conference on Informatics, Cybernetics, and Computer Engineering. A computer network is a collection of computers and devices interconnected by communications channels that facilitate communications and allow sharing of resources and information among interconnected devices. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, the communications protocol used, scale, topology, organizational scope, etc. Electronics engineering is an engineering discipline in which non-linear and active electrical components such as electron tubes and semiconductor devices, especially transistors, diodes and integrated circuits, are utilized to design electronic circuits, devices and systems, typically also including passive electrical components and based on printed circuit boards. ICCE 2011 Volume 3 aims to provide a forum for researchers, educators, engineers, and government offi...

  15. Enhancing Student Writing and Computer Programming with LATEX and MATLAB in Multivariable Calculus

    Science.gov (United States)

    Sullivan, Eric; Melvin, Timothy

    2016-01-01

    Written communication and computer programming are foundational components of an undergraduate degree in the mathematical sciences. All lower-division mathematics courses at our institution are paired with computer-based writing, coding, and problem-solving activities. In multivariable calculus we utilize MATLAB and LaTeX to have students explore…

  16. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  17. Computer applications in thermochemistry

    International Nuclear Information System (INIS)

    Vana Varamban, S.

    1996-01-01

    Knowledge of equilibrium is needed in many practical situations. Simple stoichiometric calculations can be performed with hand calculators, but multi-component, multi-phase gas-solid chemical equilibrium calculations are far beyond conventional devices and methods, and iterative techniques have to be resorted to. Such problems are most elegantly handled by the use of modern computers. This report demonstrates the possible use of computers for chemical equilibrium calculations in the field of thermochemistry and chemical metallurgy. Four modules are explained: fitting the experimental Cp data and generating the thermal functions, performing equilibrium calculations under the defined conditions, preparing the elaborate input to the equilibrium calculation, and analysing the calculated results graphically. The principles of thermochemical calculations are briefly described. An extensive input guide is given. Several illustrations are included to help the understanding and usage. (author)
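
    The first module mentioned above, fitting experimental Cp data and generating thermal functions, can be illustrated with a least-squares fit to the common form Cp(T) = a + bT + c/T^2 followed by integration for an enthalpy increment. The data points and the exact functional form below are assumptions for demonstration, not taken from the report.

```python
import numpy as np

def fit_cp(temperatures, cp_values):
    """Least-squares fit of Cp(T) = a + b*T + c/T**2; returns (a, b, c)."""
    design = np.column_stack([np.ones_like(temperatures), temperatures, 1.0 / temperatures**2])
    coeffs, *_ = np.linalg.lstsq(design, cp_values, rcond=None)
    return coeffs

def enthalpy_increment(coeffs, t0, t1):
    """Integral of the fitted Cp from t0 to t1, i.e. H(t1) - H(t0)."""
    a, b, c = coeffs
    return a * (t1 - t0) + 0.5 * b * (t1**2 - t0**2) - c * (1.0 / t1 - 1.0 / t0)

if __name__ == "__main__":
    T = np.array([300.0, 500.0, 700.0, 900.0, 1100.0])
    cp = np.array([25.1, 26.8, 28.0, 29.1, 30.0])       # J/(mol K), invented data
    coeffs = fit_cp(T, cp)
    print("a, b, c =", coeffs)
    print("H(1000 K) - H(298 K) =", enthalpy_increment(coeffs, 298.15, 1000.0), "J/mol")
```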

  18. Creep buckling problems in fast reactor components

    International Nuclear Information System (INIS)

    Ramesh, R.; Damodaran, S.P.; Chellapandi, P.; Chetal, S.C.; Bhoje, S.B.

    1995-01-01

    Creep buckling analyses for two important components of the 500 MWe Prototype Fast Breeder Reactor (PFBR), viz. the Intermediate Heat Exchanger (IHX) and the Inner Vessel (IV), are reported. The INCA code of the CASTEM system is used for the large displacement elasto-plastic-creep analysis of the IHX shell. As a first step, INCA is validated for a typical benchmark problem dealing with the creep buckling of a tube under external pressure. The prediction of INCA is also compared with the results obtained using Hoff's theory. For the IV, considering the prohibitively high computational cost of the actual analysis, a simplified analysis which involves only large displacement elastoplastic buckling analysis is performed using the isochronous stress-strain curve approach. From both of these analyses, it has been inferred that the creep buckling failure mode is not of great concern in the design of PFBR components. It has also been concluded from the analyses that the Creep Cross Over Curve given in RCC-MR is applicable to the creep buckling failure mode as well. (author). 8 refs., 9 figs., 1 tab

  19. Introduction to the LaRC central scientific computing complex

    Science.gov (United States)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  20. Relating two proposed methods for speedup of algorithms for fitting two- and three-way principal component and related multilinear models

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Harshman, Richard A.

    Multilinear analysis methods such as component (and three-way component) analysis of very large data sets can become very computationally demanding and even infeasible unless some method is used to compress the data and/or speed up the algorithms. We discuss two previously proposed speedup methods.
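
    One standard compression-style speedup, shown below for orientation only (it is not necessarily either of the two methods the authors compare), is to obtain principal-component loadings of a tall n-by-m data matrix from the small m-by-m cross-product matrix rather than decomposing the full array.

```python
import numpy as np

def pca_via_crossproduct(data, n_components):
    """Loadings and scores from the eigen-decomposition of the m x m matrix X'X."""
    centred = data - data.mean(axis=0)                   # column-centre the data
    eigvals, eigvecs = np.linalg.eigh(centred.T @ centred)
    order = np.argsort(eigvals)[::-1][:n_components]     # largest eigenvalues first
    loadings = eigvecs[:, order]
    scores = centred @ loadings
    return loadings, scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100_000, 20))                   # tall data matrix: n >> m
    loadings, scores = pca_via_crossproduct(X, n_components=3)
    print(loadings.shape, scores.shape)                  # (20, 3) (100000, 3)
```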