WorldWideScience

Sample records for VVER components basing

  1. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    Energy Technology Data Exchange (ETDEWEB)

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T. [CRISM Prometey, St. Petersburg (Russian Federation)]

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms (when comparing them with foreign requirements for the consideration of calculated situations): (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA Norms for pressure vessel material acceptance and also data from pressure vessel hydrotests; (3) a comparison of Norms on the presence of defects (RF and USA) in NPP vessels, development of defect schematization rules, and foundation of a calculated defect (semi-axis ratio a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity to correct the estimation methods for ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculated situations of (a) LBB and (b) short cracks; (7) the necessity to correct the estimation methods for ultimate states with consideration of static and cyclic loading (warm prestressing effect) of the pressure vessel, and estimation of the stability of this effect; (8) proposals on corrections to the PNAE G-7-002-86 Norms.

  2. Thread Inspection Manipulator for Primary Loop Components of VVER 1000/1200 Nuclear Power Plants

    OpenAIRE

    Rušev, Marko

    2014-01-01

    HRID developed a special manipulator for the inspection of threads of different sizes (M36, M48, M52, M60, M64, M100) on nuclear power plant (VVER 1000/1200) components using eddy current and ultrasonic methods. The manipulator is extremely easy to use, significantly reducing personnel time in the radiation zone. 95% of all assembly and disassembly activities can be performed manually without the use of any tool. It allows quick inspection of threads with both methods in fully automatic mode.

  3. The choice of the fuel assembly for VVER-1000 in a closed fuel cycle based on REMIX-technology

    Directory of Open Access Journals (Sweden)

    Bobrov Evgenii

    2016-01-01

    This paper shows the basic features of different fuel assembly (FA) applications for the VVER-1000 in a closed fuel cycle based on REMIX technology. The investigation shows how a change in the water–fuel ratio in the VVER FA affects the characteristics of fuel produced by REMIX technology during multiple recycling.

  4. Development of data base with mechanical properties of un- and pre-irradiated VVER cladding

    Energy Technology Data Exchange (ETDEWEB)

    Asmolov, V.; Yegorova, L.; Kaplar, E.; Lioutov, K. [Nuclear Safety Inst. of Russian Research Centre, Moscow (Russian Federation). Kurchatov Inst.; Smirnov, V.; Prokhorov, V.; Goryachev, A. [State Research Centre, Dimitrovgrad (Russian Federation). Research Inst. of Atomic Reactors

    1998-03-01

    Analysis of recent RIA tests with PWR and VVER high-burnup fuel performed at the CABRI, NSRR, and IGR reactors has shown that a database of mechanical properties of preirradiated cladding is necessary to interpret the obtained results. During 1997 the corresponding cycle of investigations for VVER cladding material was performed by specialists of NSI RRC KI and RIAR in cooperation with the NRC (USA) and IPSN (France) in two directions: measurements of the mechanical properties of Zr-1%Nb preirradiated cladding versus temperature and strain rate, and measurements of failure parameters for gas-pressurized cladding tubes. Preliminary results of these investigations are presented in this paper.

  5. PCA-based ANN approach to leak classification in the main pipes of VVER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Hadad, Kamal; Jabbari, Masoud; Tabadar, Z. [Shiraz Univ. (Iran, Islamic Republic of). School of Mechanical Engineering; Hashemi-Tilehnoee, Mehdi [Islamic Azad Univ., Aliabad Katoul (Iran, Islamic Republic of). Dept. of Engineering

    2012-11-15

    This paper presents a neural-network-based fault diagnosis approach that allows dynamic identification of crack and leak faults. The method uses the Principal Component Analysis (PCA) technique to reduce the problem dimension. Such a dimension reduction leads to faster diagnosis and allows a better graphic presentation of the results. To show the effectiveness of the proposed approach, two methodologies are used to train the neural network (NN). First, a training matrix composed of 14 variables is used to train a Multilayer Perceptron (MLP) neural network with the Resilient Backpropagation (RBP) algorithm. Employing the proposed method, a more accurate and simpler network is designed, in which the input size is reduced from 14 to 6 variables for training the NN. In short, the application of PCA greatly reduces the network topology and allows more efficient training algorithms to be employed. The accuracy, generalization ability, and reliability of the designed networks are verified using data from 10 simulated events of a VVER-1000 simulation with the DINAMIKA-97 code. Noise is added to the data to evaluate the robustness of the method, and the method again proves effective and powerful. (orig.)
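
    To make the described workflow concrete, the sketch below pairs PCA dimension reduction (14 inputs collapsed to 6 components) with an MLP classifier. It is only an illustration of the pipeline named in the abstract: the 14 process variables, the event labels, and the training data are synthetic placeholders (the DINAMIKA-97 simulation data are not available here), and scikit-learn's standard solver stands in for the Resilient Backpropagation algorithm used in the paper.

```python
# Hedged sketch of a PCA + MLP fault-classification pipeline.
# All data below are synthetic placeholders, NOT the paper's DINAMIKA-97 data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_vars, n_events = 500, 14, 10        # 14 input variables, 10 simulated events
X = rng.normal(size=(n_samples, n_vars))         # placeholder process signals
y = rng.integers(0, n_events, size=n_samples)    # placeholder event labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale, project the 14 variables onto 6 principal components, then train the MLP.
# (scikit-learn has no Resilient Backpropagation solver; 'adam' is used instead.)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),
    MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, solver="adam", random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```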

  6. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    Science.gov (United States)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system, comprising a body of verified initial data (sets of input data for a model intended for describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of operation) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the "Virtual Digital VVER-Based NPP" computerized system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the "Virtual Digital VVER-Based NPP" computerized system put into operation, the engineering, design, construction, and operating organizations concerned will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. The domestically developed commercial-grade software product, set up to operate as an independent application within the project, will bring additional competitive advantages in the modern market of nuclear power technologies.

  7. Burnup analysis of the VVER-1000 reactor using thorium-based fuel

    Energy Technology Data Exchange (ETDEWEB)

    Korkmaz, Mehmet E.; Agar, Osman; Bueyueker, Eylem [Karamanoglu Mehmetbey Univ., Karaman (Turkey). Faculty of Kamil Ozdag Science

    2014-12-15

    This paper aims to investigate {sup 232}Th/{sup 233}U fuel cycles in a VVER-1000 reactor through computer calculations. The 3D core geometry of the VVER-1000 system was modeled using the Serpent Monte Carlo 1.1.19 code. The Serpent code, using the Message Passing Interface (MPI) for parallel execution, was run on a 12-core workstation with 48 GB RAM. A {sup 232}Th/{sup 235}U/{sup 238}U oxide mixture was considered as the core fuel; as the mass fraction of {sup 232}Th was increased through 0.05, 0.1, 0.2, 0.3, and 0.4, the mass fraction of {sup 238}U was decreased by the same amount. The calculations were made for a thermal power of 3 000 MW. For the burnup analyses, the core is assumed to deplete from the initial fresh core up to a burnup of 16 MWd/kgU without refuelling. In the burnup calculations, a burnup interval of 360 effective full power days (EFPDs) was defined. As functions of burnup, the mass changes of {sup 232}Th, {sup 233}U, {sup 238}U, {sup 237}Np, {sup 239}Pu, {sup 241}Am and {sup 244}Cm were evaluated, and the flux and criticality of the system were also calculated.

  8. The concept of extending the service life of the VVER-440-based power units at the Novovoronezh nuclear power plant

    Science.gov (United States)

    Asmolov, V. G.; Povarov, V. P.; Vitkovskii, S. L.; Berkovich, V. Ya.; Chetverikov, A. E.; Mozul', I. A.; Semchenkov, Yu. M.; Suslov, A. I.

    2014-02-01

    Basic statements of the Concept of Extending the Service Life of the VVER-440-Based Power Units at the Novovoronezh NPP beyond 45 years are considered. This topic is raised in connection with the fact that in December 2016 and in December 2017 the extended service lives of Units 3 and 4 at this NPP will expire. The adopted concept of repeatedly extending the service life of Novovoronezh NPP Unit 4 implies fitting the power unit with additional reactor core cooling systems with a view to extending the (ultimate) design-basis accident (hitherto adopted to be a loss-of-coolant accident involving a leak of reactor coolant through a break with a nominal diameter of 100 mm) to a reactor coolant leak equivalent to rupture of the main reactor coolant pipeline. The modified Unit 4 will also use the safety systems of Unit 3, which is going to be decommissioned. Preliminary calculated assessments of the new design-basis accident scenario involving rupture of the reactor coolant pipeline in Unit 4, fitted with the new configuration of safety systems, confirmed the correctness of the adopted concept of repeatedly extending the service life of Unit 4.

  9. The corrosion and corrosion mechanical properties evaluation for the LBB concept in VVERs

    Energy Technology Data Exchange (ETDEWEB)

    Ruscak, M.; Chvatal, P.; Karnik, D.

    1997-04-01

    One of the conditions required for Leak Before Break application is verification that the influence of the corrosion environment on the material of the component can be neglected: neither general corrosion nor the initiation and growth of corrosion-mechanical cracks may cause degradation. The primary piping in VVER nuclear power plants is made from austenitic steels (VVER 440) or from low-alloy steels protected with austenitic cladding (VVER 1000). Inspection of the base metal and heterogeneous weldments from the VVER 440 showed that the crack growth rates are below 10 m/s if a low oxygen level is kept in the primary environment. No intergranular cracking was observed in low- and high-oxygen water after any type of testing, with constant or periodic loading. In the framework of the LBB assessment of the VVER 1000, the corrosion and corrosion-mechanical properties were also evaluated. The corrosion and corrosion-mechanical testing was oriented predominantly to three types of tests: stress corrosion cracking tests, corrosion fatigue tests, and evaluation of the resistance against corrosion damage. In this paper, the methods used for these tests are described, and the materials are compared with respect to the response of the low-alloy steel 10GN2WA and of weld metal to static and periodic mechanical stress in the primary circuit environment. Slow strain rate tests and static loading of both C-rings and CT specimens were performed in order to assess the stress corrosion cracking characteristics. Cyclic loading of CT specimens was done to evaluate the kinetics of crack growth under periodic loading. Results are shown to illustrate the approaches used. The data obtained were also evaluated to compare the influence of different structures on the appearance of stress corrosion cracking. The results obtained for the base metal and weld metal of the piping are presented here.

  10. Comparison of attenuation coefficients for VVER-440 and VVER-1000 pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Marek, M.; Rataj, J.; Vandlik, S. [Reactor Physics Dept., Research Centre Rez, Husinec 130, 25068 (Czech Republic)

    2011-07-01

    The paper summarizes the attenuation coefficient of the neutron fluence with E > 0.5 MeV through the reactor pressure vessel for vodo-vodyanoi energetichesky reactor (VVER) reactor types, measured and/or calculated for mock-up experiments as well as for operating nuclear power plant (NPP) units. The attenuation coefficient can be evaluated directly only by using retro-dosimetry, based on a combination of the activities measured on a weld sample and a concurrent ex-vessel measurement. The available neutron fluence attenuation coefficients (E > 0.5 MeV), calculated and measured in a mock-up experiment simulating VVER-440 unit conditions, vary from 3.5 to 6.15. The situation is similar for the calculations and mock-up measurements for the VVER-1000 RPV, where the attenuation coefficient of the neutron fluence varies from 5.99 to 8.85. Because of the differences between calculations for the real units and for the mock-up experiments, designing and performing calculation benchmarks for both VVER-440 and VVER-1000 would be meaningful, provided the calculation model adequately represents the given unit. (authors)
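
    The record does not define the attenuation coefficient explicitly; it is commonly understood as the ratio of the fast-neutron fluence (E > 0.5 MeV) at the inner surface of the RPV to that at the outer surface, which is the reading assumed here:

\[ K_{\mathrm{att}} = \frac{\Phi_{\mathrm{inner}}(E > 0.5\,\mathrm{MeV})}{\Phi_{\mathrm{outer}}(E > 0.5\,\mathrm{MeV})} \]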

  11. Modernization of existing VVER-1000 surveillance programs

    Energy Technology Data Exchange (ETDEWEB)

    Kochkin, V.; Erak, D.; Makhotin, D. [NRC ' Kurchatov Inst.' , 1 Kurchatov Square, Moscow 123182 (Russian Federation)

    2011-07-01

    According to generally accepted world practice, evaluation of the behavior of reactor pressure vessel (RPV) material during operation is carried out using tests of surveillance specimens. The main objective of the surveillance program is to ensure safe RPV operation during the design lifetime and the lifetime-extension period. At present, approaches to validating the residual life of pressure vessels based on the test results of their surveillance specimens have been developed and introduced in Russia and are under consideration in other countries where vodo-vodyanoi energetichesky reactor (VVER) 1000 units are in operation. In this case, it is necessary to ensure leading irradiation of the surveillance specimens (as compared to the pressure vessel wall) and to provide uniformly irradiated specimen groups for mechanical testing. The standard VVER-1000 surveillance program has several significant shortcomings and does not meet these requirements. Taking into account the lifetime-extension program for VVER-1000 units operating in Russia, the VVER-1000 surveillance program needs to be upgraded. This paper studies the irradiation conditions of surveillance specimens and the upgrading of existing specimen sets to provide monitoring and prognosis of RPV material properties for extension of the reactor lifetime to 60 years or more. (authors)
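
    The "leading irradiation" requirement is usually expressed through a lead factor; the relation below is the standard textbook form, assumed here since the record does not spell it out:

\[ \mathrm{LF} = \frac{\Phi_{\mathrm{specimen}}}{\Phi_{\mathrm{RPV\,wall}}} > 1 \]

    where \( \Phi \) denotes the fast-neutron flux (or accumulated fluence) at the surveillance specimen location and at the limiting point of the pressure vessel wall, respectively.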

  12. Safety of VVER-440 reactors

    CERN Document Server

    Slugen, Vladimir

    2011-01-01

    Safety of VVER-440 Reactors endeavours to promote an increase in the safety of VVER-440 nuclear reactors via the improvement of fission product limitation systems and the implementation of special non-destructive spectroscopic methods for materials testing. All theoretical and experimental studies performed by the author over the last 25 years have been undertaken with the aim of improving VVER-440 defence in depth, which is one of the most important principles for ensuring safety in nuclear power plants. Safety of VVER-440 Reactors is focused on the barrier system through which the safety pri

  13. Development and application of an information-analytic system on the problem of flow accelerated corrosion of pipeline elements in the secondary coolant circuit of VVER-440-based power units at the Novovoronezh nuclear power plant

    Science.gov (United States)

    Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Kiselev, A. N.; Shepelev, S. V.; Galanin, A. V.

    2015-02-01

    Specific features relating to development of the information-analytical system on the problem of flow-accelerated corrosion of pipeline elements in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh nuclear power plant are considered. The results of a statistical analysis of data on the quantity, location, and operating conditions of the elements and preinserted segments of pipelines used in the condensate-feedwater and wet-steam paths are presented. The principles of preparing and using the information-analytical system for determining the time before inadmissible wall thinning is reached in elements of pipelines used in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered.

  14. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  15. ASTEC and ICARE/CATHARE modelling improvement for VVERs

    Energy Technology Data Exchange (ETDEWEB)

    Zvonarev, Yu [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Volchek, A., E-mail: voltchek@nsi.kiae.r [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Kobzar, V. [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Chatelard, P.; Van Dorsselaere, J.P. [Institut de Radioprotection et de Surete Nucleaire (IRSN), Cadarache (France)]

    2011-04-15

    The ASTEC and ICARE/CATHARE computer codes, developed by IRSN (France) (the former together with GRS, Germany), are used at RRC KI (Russia) for the analysis of accident transients in VVER-type NPPs. The latest versions of the codes have been continuously improved and validated to provide a better understanding of the main processes during hypothetical severe accidents in VVERs. This paper describes modelling improvements for VVERs carried out recently in the ICARE part common to the above codes. These actions concern the important models of fuel rod cladding mechanical behaviour and oxidation in steam at high and very high temperatures. The existing models were improved based on experience in the field and the latest literature data for the Zr + 1%Nb material used to manufacture VVER fuel rod claddings. Best-fit correlations for Zr alloy oxidation over a broad temperature range were established, along with recommendations on model application in clad geometry and starvation conditions. A creep-velocity model was chosen for the clad mechanical model, and cladding burst criteria were established as a function of temperature. After verification of the modelling improvements on Separate Effect Tests, validation was carried out on integral bundle tests such as QUENCH, CODEX-CT, and PARAMETER-SF (the application to the CORA-VVER experiments is not described in the present paper) and on the Paks-2 cleaning tank incident. The comparison of updated code results with experimental data demonstrated very good numerical predictions, which increases the level of applicability of the codes to VVER-type materials.

  16. Accident Management in VVER-1000

    Directory of Open Access Journals (Sweden)

    F. D'Auria

    2008-01-01

    The present paper deals with an investigation of accident management in the VVER-1000 reactor type conducted in the framework of a European Commission funded project. The study involved both experimental and computational work. The purpose of this paper is to summarize the main findings from the execution of a wide-range analysis focused on AM in VVER-1000, with main regard to the qualification of computational tools and the proposal of an optimal AM strategy for this kind of NPP.

  17. Natural circulation in a VVER reactor geometry: Experiments with the PACTEL facility and Cathare simulations

    Energy Technology Data Exchange (ETDEWEB)

    Raussi, P.; Kainulainen, S. [Lappeenranta Univ. of Technology, Lappeenranta (Finland); Kouhia, J. [VTT Energy, Lappeenranta (Finland)

    1995-09-01

    There are some 40 reactors based on the VVER design in use. The database available for computer code assessment for VVER reactors is rather limited. Experiments were conducted to study natural circulation behaviour in the PACTEL facility, a medium-scale integral test loop patterned after VVER pressurized water reactors. Flow behaviour over a range of coolant inventories was studied with a small-break experiment. In the small-break experiments, flow stagnation and system repressurization were observed when the water level in the upper plenum fell below the entrances to the hot legs. The cause was attributed to the hot leg loop seals, which are a unique feature of the VVER geometry. At low primary inventories, core cooling was achieved through the boiler-condenser mode. The experiment was simulated using the French thermal-hydraulic system code CATHARE.

  18. Optimized planning of in-service inspections of local flow-accelerated corrosion of pipeline elements used in the secondary coolant circuit of the VVER-440-based units at the Novovoronezh NPP

    Science.gov (United States)

    Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Budanov, V. A.; Golubeva, T. N.

    2015-03-01

    Matters concerned with making efficient use of the information-analytical system on the flow-accelerated corrosion problem in setting up in-service examination of the metal of pipeline elements operating in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered. The principles used to select samples of pipeline elements in planning ultrasonic thickness measurements for timely revealing metal thinning due to flow-accelerated corrosion along with reducing the total amount of measurements in the condensate-feedwater path are discussed.

  19. Results of Post Irradiation Examinations of VVER Leaky Rods

    Energy Technology Data Exchange (ETDEWEB)

    Markov, D.; Perepelkin, S.; Polenok, V.; Zhitelev, V.; Mayorshina, G. [Head of Fuel Research Department, JSC ' SSC RIAR' , 433510, Dimitrovgrad-10, Ulyanovsk region (Russian Federation)

    2009-06-15

    The most important requirement imposed on fuel elements is to maintain the integrity of fuel rod claddings during operation, storage and transportation, since it is directly related to operational safety. However, failed rod claddings are sometimes observed during reactor operation. Identification and unloading of fuel assemblies with leaky rods from a VVER is possible only during planned preventive maintenance. An unscheduled reactor shutdown due to exceeding the coolant activity limit, as well as pre-term unloading of a fuel assembly, causes economic damage to the nuclear plant. Therefore, models and calculation codes were developed to forecast coolant contamination and failed fuel rod behavior. Criteria based on calculations were set to determine the admissible number of failed rods in the core and the possibility of continuing reactor operation or of pre-term unloading of the fuel assembly with the failed rods. Nevertheless, to prevent fuel rod failure it is necessary to reveal disadvantages of the design, fabrication method and fuel operation conditions, and to eliminate defects. The most complete and significant information about spent fuel assemblies can be obtained from post-irradiation material examinations. In order to reveal failure origins and the mechanism of changes in VVER fuel and failed rod cladding condition depending on operation, examinations of 12 VVER-1000 fuel assemblies and 3 VVER-440 fuel assemblies, operated under normal conditions up to fuel burnups of 13-47 MWd/kgU, were carried out. To evaluate the rod cladding condition, reveal defects and determine their parameters, ultrasonic testing of cladding integrity, visual surface inspection, eddy current defectoscopy, and measurement of geometrical parameters were applied. In separate cases, metallography was used, the hydrogen content was measured, and mechanical tests of o-ring samples were carried out. The pellet condition was evaluated in

  20. Conducting water chemistry of the secondary coolant circuit of VVER-based nuclear power plant units constructed without using copper containing alloys

    Science.gov (United States)

    Tyapkov, V. F.

    2014-07-01

    Secondary coolant circuit water chemistry with amine dosing began to be put into use in Russia in 2005, and over the past seven years all nuclear power plant units equipped with VVER-1000 reactors have been shifted to operation with this water chemistry. Owing to the use of water chemistry with amine dosing, the amount of corrosion products of structural materials entering the steam generators has been reduced, and the flow-accelerated corrosion rate of pipelines and equipment has been slowed down. The article presents data on conducting water chemistry in nuclear power plant units with VVER-1000 reactors whose secondary coolant system equipment is made without copper-containing alloys. Statistical data are presented on conducting ammonia-morpholine and ammonia-ethanolamine water chemistries at an increased pH level in new-generation operating power units with VVER-1000 reactors. Values of cooling water leaks in turbine condensers whose tube systems are made of stainless steel or titanium alloy are given.

  1. Preparation macroconstants to simulate the core of VVER-1000 reactor

    Science.gov (United States)

    Seleznev, V. Y.

    2017-01-01

    A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. The DYNCO code is used for simulating the neutron-physical characteristics; it allows calculations of stationary, transient and emergency processes in real time for different reactor lattice geometries [1]. To perform calculations with this code, macroconstants must be prepared for each FA. One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group constants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the subsequent calculation of macroconstants.
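
    The record does not show how the few-group macroconstants are condensed from the 69-group WIMS library; the standard flux-weighted collapsing relation, assumed here as a sketch, is

\[ \Sigma_{x,G} = \frac{\sum_{g \in G} \Sigma_{x,g}\, \phi_g}{\sum_{g \in G} \phi_g} \]

    where \( \Sigma_{x,g} \) and \( \phi_g \) are the fine-group cross section and flux, and \( G \) is the broad (collapsed) energy group selected for the macroconstant set.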

  2. VVER-440 and VVER-1000 reactor dosimetry benchmark - BUGLE-96 versus ALPAN VII.0

    Energy Technology Data Exchange (ETDEWEB)

    Duo, J. I. [Radiation Engineering and Analysis, Westinghouse Electric Company LLC, 1000 Westinghouse Dr., Cranberry Township, PA 16066 (United States)

    2011-07-01

    Analytical results of the vodo-vodyanoi energetichesky reactor (VVER) 440 and VVER-1000 reactor dosimetry benchmarks, developed from engineering mockups at the Nuclear Research Institute Rez LR-0 reactor, are discussed. These benchmarks provide accurate determination of radiation field parameters in the vicinity of and over the thickness of the reactor pressure vessel. Measurements are compared to calculated results obtained with two sets of tools: the TORT discrete ordinates code with the BUGLE-96 cross-section library versus the newly Westinghouse-developed RAPTOR-M3G code with the ALPAN VII.0 library. The parallel code RAPTOR-M3G yields detailed neutron distributions in energy and space in reduced computational time. The ALPAN VII.0 cross-section library is based on ENDF/B-VII.0 and is designed for reactor dosimetry applications. It uses a unique broad-group structure to enhance resolution in the thermal-neutron-energy range compared to other analogous libraries. The comparison of fast neutron (E > 0.5 MeV) results shows good agreement (within 10%) between the BUGLE-96 and ALPAN VII.0 libraries. Furthermore, the results compare well with analogous results of participants of the REDOS program (2005). Finally, the analytical results for fast neutrons agree within 15% with the measurements for most locations in all three mockups. In general, however, the analytical results underestimate the attenuation through the reactor pressure vessel thickness compared to the measurements. (authors)

  3. Investigation of station blackout scenario in VVER440/v230 with RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Gencheva, Rositsa Veselinova, E-mail: roseh@mail.bg; Stefanova, Antoaneta Emilova, E-mail: antoanet@inrne.bas.bg; Groudev, Pavlin Petkov, E-mail: pavlinpg@inrne.bas.bg

    2015-12-15

    Highlights: • We have modeled SBO in VVER440. • The RELAP5/MOD3 computer code has been used. • A base case calculation has been done. • A fail case calculation has been done. • Operator and alternative operator actions have been investigated. - Abstract: During the development of symptom-based emergency operating procedures (SB-EOPs) for the VVER440/v230 units at Kozloduy Nuclear Power Plant (NPP), a number of analyses have been performed using the RELAP5/MOD3 code (Carlson et al., 1990). Some of them investigate the response of the VVER440/v230 during station blackout (SBO). The main purpose of the analyses presented in this paper is to identify the behavior of important VVER440 parameters in case of total station blackout. RELAP5/MOD3 has been used to simulate the SBO with a VVER440 NPP model (Fletcher and Schultz, 1995). This model was developed at the Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences (INRNE-BAS), Sofia, for analyses of operational occurrences, abnormal events and design-basis scenarios. The model provides a significant analytical capability for specialists working in the field of NPP safety.

  4. Application of a Russian nuclear reactor simulator VVER-1000; Aplicacion de un simulador de reactor nuclear ruso VVER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Peniche S, A. [UNAM, Facultad de Ingenieria, Circuito Interior, Ciudad Universitaria, 04360 Mexico D. F. (Mexico); Salazar S, E., E-mail: alpsordo@hotmail.com [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, 62250 Jiutepec, Morelos (Mexico)

    2012-10-15

    The objective of the present work is to present the most important characteristics of the Russian pressurized light-water nuclear reactor VVER-1000, with emphasis on the differences in design and safety systems from its Western equivalent, the PWR. A description is therefore given of the computerized simulation of the VVER-1000 reactor developed by the company Eniko TSO, which the International Atomic Energy Agency distributes to member states for academic purposes. The simulator includes mathematical models representing the essential systems of the real nuclear power plant, which makes it possible to reproduce common faults and transients characteristic of the nuclear industry with behavior sufficiently close to reality. In this work, the response of the system to a turbine trip is analyzed. After the accident at the Three Mile Island nuclear power plant (US), improvements have been made in the design of the PWR and its safety systems. To determine the scope and limitations of the program, the events that gave rise to this accident are reproduced in the VVER-1000 simulator. Based on the results of the simulation, conclusions are drawn about how reliable the response of this reactor's safety systems is. (Author)

  5. Component Based Testing with ioco

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    Component based testing concerns the integration of components which have already been tested separately. We show that, with certain restrictions, the ioco-test theory for conformance testing is suitable for component based testing, in the sense that the integration of fully conformant components is

  6. Formalization in Component Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Knudsen, John; Makowski, Piotr;

    2006-01-01

    We present a unifying conceptual framework for components, component interfaces, contracts and composition of components by focusing on the collection of properties or qualities that they must share. A specific property, such as signature, functionality behaviour or timing is an aspect. Each aspe...... by small examples, using UML as concrete syntax for various aspects, and is illustrated by one larger case study based on an industrial prototype of a complex component based system....

  7. Modernization of Cross Section Library for VVER-1000 Type Reactors Internals and Pressure Vessel Dosimetry

    Directory of Open Access Journals (Sweden)

    Voloschenko Andrey

    2016-01-01

    The broad-group library BGL1000_B7 for neutron and gamma transport calculations in VVER-1000 internals, RPV and shielding was developed on the basis of the fine-group library v7-200n47g from the SCALE-6 system. The comparison of the library BGL1000_B7 with the library v7-200n47g and with the library BGL1000 (the latter is currently used for VVER-1000 calculations) is demonstrated on several calculational and experimental tests.

  8. Modernization of Cross Section Library for VVER-1000 Type Reactors Internals and Pressure Vessel Dosimetry

    Science.gov (United States)

    Voloschenko, Andrey; Zaritskiy, Sergey; Egorov, Aleksander; Boyarinov, Viktor

    2016-02-01

    The broad-group library BGL1000_B7 for neutron and gamma transport calculations in VVER-1000 internals, RPV and shielding was developed on the basis of the fine-group library v7-200n47g from the SCALE-6 system. The comparison of the library BGL1000_B7 with the library v7-200n47g and with the library BGL1000 (the latter is currently used for VVER-1000 calculations) is demonstrated on several calculational and experimental tests.

  9. Comparison of microstructural features of radiation embrittlement of VVER-440 and VVER-1000 reactor pressure vessel steels

    Science.gov (United States)

    Kuleshova, E. A.; Gurovich, B. A.; Shtrombakh, Ya. I.; Erak, D. Yu.; Lavrenchuk, O. V.

    2002-02-01

    Comparative microstructural studies of both surveillance specimens and reactor pressure vessel (RPV) materials of VVER-440 and VVER-1000 light water reactor systems have been carried out, following irradiation to different fast neutron fluences and heat treatment for extended periods at the operating temperatures. It is shown that the radiation embrittlement of VVER-1000 steels exhibits several microstructural features, compared to VVER-440 RPV steels, that can change the contributions of the different radiation embrittlement mechanisms for VVER-1000 steel.

  10. Fine structure behaviour of VVER-1000 RPV materials under irradiation

    Science.gov (United States)

    Gurovich, B. A.; Kuleshova, E. A.; Shtrombakh, Ya. I.; Erak, D. Yu.; Chernobaeva, A. A.; Zabusov, O. O.

    2009-06-01

    Changes in the fine structure and mechanical properties of the base metal (BM) and weld metal (WM) of VVER-1000 pressure vessels during accumulation of neutron dose in the fluence range ~(3.2-15) × 10²³ m⁻² (E > 0.5 MeV) at 290 °C are studied using transmission electron microscopy, fractographic analysis, and Auger electron spectroscopy. A correlation was found between the changes of mechanical properties and the micro- and nano-structures of the studied steels. Accumulation of neutron dose considerably raises the strength characteristics and transition temperature of VVER-1000 pressure vessel steels. The rate of change of the mechanical properties of the weld metal is significantly higher than that of the base metal. The slower growth of strength characteristics and transition temperature shift of the base metal under irradiation, as compared with the weld metal, is due to the slower growth of the density of radiation defects and radiation-induced precipitates. The level of intergranular embrittlement under irradiation in the weld metal is not higher than in the base metal, in spite of the higher content of nickel.

  11. Formal Component-Based Semantics

    CERN Document Server

    Madlener, Ken; van Eekelen, Marko; 10.4204/EPTCS.62.2

    2011-01-01

    One of the proposed solutions for improving the scalability of semantics of programming languages is Component-Based Semantics, introduced by Peter D. Mosses. It is expected that this framework can also be used effectively for modular meta theoretic reasoning. This paper presents a formalization of Component-Based Semantics in the theorem prover Coq. It is based on Modular SOS, a variant of SOS, and makes essential use of dependent types, while profiting from type classes. This formalization constitutes a contribution towards modular meta theoretic formalizations in theorem provers. As a small example, a modular proof of determinism of a mini-language is developed.

  12. Simulation of a nuclear accident by an academic simulator of a VVER-1000 reactor; Simulacion de un accidente nuclear, mediante un simulador academico de un reactor VVER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez G, L. [UNAM, Facultad de Ingenieria, Ciudad Universitaria, 04510 Mexico D. F. (Mexico); Salazar S, E., E-mail: laurahg42@gmail.com [UNAM, Facultad de Ingenieria, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, 62250 Jiutepec, Morelos (Mexico)

    2014-10-15

    This work simulates a scenario in which the same conditions that caused the accident at the Fukushima Daiichi nuclear power plant are present, using a simulator of a nuclear power plant with a VVER-1000 reactor, a technology different from that of the NPP where the accident occurred, which used BWR reactors. The software in which the simulation takes place was created and distributed by the IAEA for academic purposes and contains the essential systems that characterize this type of NPP. The simulator provides tools for analyzing the characteristic phenomena of a VVER-1000 reactor across its interacting systems, together with planned training tasks. This makes it possible to identify the function of each component and how it connects to other systems, thus facilitating the visualization of possible failures and their consequences for the overall behavior of the reactor. To program the conditions into the simulator, it is necessary to know and synthesize the series of events that occurred at Fukushima in 2011 and the maneuvers carried out to reduce the effects of the system failures. Since the technologies differ, an interpretation of the changes that the VVER systems would undergo in the scenario in question is developed. The Fukushima accident was characterized by the loss of both normal and emergency power supply to the cooling systems, which resulted in an increase in reactor temperature and subsequent meltdown of the cores. It is of interest to reproduce this type of failure in a VVER and to extrapolate the loss of power supply to its constituent systems; besides the cooling pump systems, the VVER has a pressure-regulating system, which introduces more variables into the balance of the system. (Author)

  13. Uncertainty-accounted calculational-experimental approach for improved conservative evaluations of VVER RPV radiation loading parameters

    Energy Technology Data Exchange (ETDEWEB)

    Borodkin, P.G.; Borodkin, G.I.; Khrennikov, N.N. [Scientific and Engineering Centre for Nuclear and Radiation Safety SEC NRS, Building 5, Malaya Krasnoselskaya Street, 2/8, 107140 Moscow (Russian Federation)

    2011-07-01

    An approach for improved, uncertainty-accounted conservative evaluation of vodo-vodyanoi energetichesky reactor (VVER) reactor pressure vessel (RPV) radiation loading parameters is proposed. The approach is based on a calculational-experimental procedure which takes into account the C/E ratio, depending on over- or underestimation, and the uncertainties of the measured and calculated results. An application of the elaborated approach to full-scale ex-vessel neutron dosimetry experiments on Russian VVERs, combined with neutron-transport calculations, is demonstrated in the paper. (authors)

  14. Evolution of microstructure and mechanical properties of VVER-1000 RPV steels under re-irradiation

    Science.gov (United States)

    Gurovich, B.; Kuleshova, E.; Shtrombakh, Ya.; Fedotova, S.; Erak, D.; Zhurko, D.

    2015-01-01

    This is a comprehensive study of the evolution of microstructure and mechanical properties during re-irradiation after recovery annealing of VVER-1000 RPV weld and base metals, as well as of the effect of annealing on the microstructure and properties of the base metal in the temperature-gradient zone that arises during annealing with a special heating device. It is shown that the level of radiation-induced microstructural changes under accelerated re-irradiation of weld and base metal is not higher than for the primary irradiation. Thus, it can be predicted that re-embrittlement of VVER-1000 RPV materials, taking the flux effect into account, will not exceed the typical embrittlement rate for the primary irradiation.

  15. Rolls-Royce successful modernization of safety-critical Instrumentation and Control (I and C) equipment at the Dukovany VVER 440/213 Nuclear Power Plant, based on SPINLINE 3 platform

    Energy Technology Data Exchange (ETDEWEB)

    Rebreyend, P.; Burel, J.P. [Rolls-Royce Civil Nuclear SAS (France); Spoc, J. [Skoda JS (Czech republic); Karasek, A. [CEZ a.s.(Czech republic)

    2010-07-01

    Rolls-Royce has provided on-time delivery of a substantial safety-critical I and C overhaul for four nuclear reactors operated by the Czech Republic utility CEZ a.s. This nine-year project is considered to be one of the largest I and C modernization projects in the world. The Dukovany VVER 440 I and C modernization project and its key success factors are profiled in this paper. The project is in its final stages, with the last unit to be completed in 2009; beginning in September 2000, the project has remained in compliance with the initial schedule. Rolls-Royce has been designing and manufacturing I and C solutions dedicated to the implementation of safety and safety-related functions in nuclear power plants (NPPs) for more than 30 years. Though the early solutions were non-software-based, since 1984 software-based solutions for safety I and C functions have been deployed in operating NPPs across France and 15 other countries. The Rolls-Royce platform is suitable for implementation of safety I and C functions in new NPPs, as well as in the modernization of safety equipment in existing plants. CEZ a.s. is a major electricity supplier for the national grid. At Dukovany, CEZ a.s. operates four units of VVER-440/213-type reactors producing one quarter of CEZ a.s. electricity production. The first of these units was connected to the grid in 1985. Since the year 2000, the nine-year modernization program has been underway at Dukovany, at a cost of more than 200 million Euros. The equipment replacement was implemented during regular, planned outages of the original equipment and systems. After an international bidding phase, CEZ a.s. awarded a contract to Skoda JS for general engineering and project management. Individual subcontracts were then signed between Skoda JS and a consortium of Rolls-Royce and Areva for modernization of the safety systems, including the Reactor Protection System (RPS), the Reactor Control System (RCS), and the Post-Accident Monitoring System (PAMS). Two

  16. Embrittlement of low copper VVER 440 surveillance samples neutron-irradiated to high fluences

    Science.gov (United States)

    Miller, M. K.; Russell, K. F.; Kocik, J.; Keilova, E.

    2000-11-01

    An atom probe tomography microstructural characterization of low copper (0.06 at.% Cu) surveillance samples from a VVER 440 reactor has revealed manganese and silicon segregation to dislocations and other ultrafine features in neutron-irradiated base and weld materials (fluences of 1×10²⁵ m⁻² and 5×10²⁴ m⁻², E > 0.5 MeV, respectively). The results indicate that there is an additional mechanism of embrittlement during neutron irradiation that manifests itself at high fluences.

  17. Isothermal and thermal–mechanical fatigue of VVER-440 reactor pressure vessel steels

    Energy Technology Data Exchange (ETDEWEB)

    Fekete, Balazs, E-mail: fekete.mm.bme@gmail.com [College of Dunaujvaros, Tancsics 1A, Dunaujvaros H-2400 (Hungary); Department of Applied Mechanics, Budapest University of Technology and Economics, Muegyetem 5, Budapest H-1111 (Hungary); Trampus, Peter [College of Dunaujvaros, Tancsics 1A, Dunaujvaros H-2400 (Hungary)

    2015-09-15

    Highlights: • We aimed to determine the thermomechanical behaviour of VVER reactor steels. • Material tests were developed and performed on the GLEEBLE 3800 physical simulator. • Coffin–Manson curves and parameters were derived. • High accuracy of the strain-energy-based evaluation was found. • The observed dislocation evolution correlates with the mechanical behaviour. - Abstract: The fatigue life of the structural materials 15Ch2MFA (CrMoV-alloyed ferritic steel) and 08Ch18N10T (CrNi-alloyed austenitic steel) of the VVER-440 reactor pressure vessel was investigated in completely reversed, total-strain-controlled low cycle fatigue tests. An advanced test facility was developed for the GLEEBLE-3800 physical simulator, which was able to perform thermomechanical fatigue experiments under the in-service conditions of VVER nuclear reactors. The low cycle fatigue results were evaluated with the plastic-strain-based Coffin–Manson law and with a plastic-strain-energy-based model as well. It was shown that both methods are able to predict the fatigue life of reactor pressure vessel steels accurately. Interrupted fatigue tests were also carried out to investigate the kinetics of the fatigue evolution of the materials. On these samples, microstructural evaluation by TEM was performed. The investigated low cycle fatigue behavior can provide a reference for remaining life assessment and lifetime extension analysis.
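
    For reference, the plastic-strain-based Coffin–Manson law mentioned in the abstract has the standard form

\[ \frac{\Delta\varepsilon_p}{2} = \varepsilon_f' \,(2N_f)^c \]

    where \( \Delta\varepsilon_p \) is the plastic strain range, \( N_f \) the number of cycles to failure, and \( \varepsilon_f' \) and \( c \) the fatigue ductility coefficient and exponent; the parameter values fitted for 15Ch2MFA and 08Ch18N10T are given in the paper, not here.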

  18. Formalization in Component Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Knudsen, John; Makowski, Piotr

    2006-01-01

    We present a unifying conceptual framework for components, component interfaces, contracts and composition of components by focusing on the collection of properties or qualities that they must share. A specific property, such as signature, functionality behaviour or timing is an aspect. Each aspect...

  19. Multiple recycle of REMIX fuel at VVER-1000 operation in closed fuel cycle

    Science.gov (United States)

    Alekseev, P. N.; Bobrov, E. A.; Chibinyaev, A. V.; Teplov, P. S.; Dudnikov, A. A.

    2015-12-01

    The basic features of loading the VVER-1000 core with a new variant of REMIX fuel (REgenerated MIXture of U-Pu oxides) are considered for multiple recycling in a closed nuclear fuel cycle. The fuel composition is produced on the basis of the uranium-plutonium regenerate extracted in processing the spent nuclear fuel (SNF) from a VVER-1000, depleted uranium, and a fissionable material: 235U as part of highly enriched uranium (HEU) from warheads surplus to defense purposes, or 233U accumulated in thorium blankets of fusion (electronuclear) neutron sources or fast reactors. Production of such a fuel assumes no additional use of natural uranium. When converting part of the VVER-1000 reactors to the closed fuel cycle based on the REMIX technology, the consumption of natural uranium decreases considerably, and there is no substantial degradation of the plutonium isotopic composition or change in the reactor-safety characteristics in passing from recycle to recycle.

  20. Multiple recycle of REMIX fuel at VVER-1000 operation in closed fuel cycle

    Energy Technology Data Exchange (ETDEWEB)

    Alekseev, P. N.; Bobrov, E. A., E-mail: evgeniybobrov89@rambler.ru; Chibinyaev, A. V.; Teplov, P. S.; Dudnikov, A. A. [National Research Center Kurchatov Institute (Russian Federation)

    2015-12-15

    The basic features of loading the VVER-1000 core with a new variant of REMIX fuel (REgenerated MIXture of U–Pu oxides) are considered for multiple recycling in a closed nuclear fuel cycle. The fuel composition is produced on the basis of the uranium–plutonium regenerate extracted in processing the spent nuclear fuel (SNF) from a VVER-1000, depleted uranium, and a fissionable material: {sup 235}U as part of highly enriched uranium (HEU) from warheads surplus to defense purposes, or {sup 233}U accumulated in thorium blankets of fusion (electronuclear) neutron sources or fast reactors. Production of such a fuel assumes no additional use of natural uranium. When converting part of the VVER-1000 reactors to the closed fuel cycle based on the REMIX technology, the consumption of natural uranium decreases considerably, and there is no substantial degradation of the plutonium isotopic composition or change in the reactor-safety characteristics in passing from recycle to recycle.

  1. Validation of Advanced Computer Codes for VVER Technology: LB-LOCA Transient in PSB-VVER Facility

    Directory of Open Access Journals (Sweden)

    A. Del Nevo

    2012-01-01

    The OECD/NEA PSB-VVER project provided unique and useful experimental data for code validation from the PSB-VVER test facility. This facility represents a scaled-down layout of the Russian-designed pressurized water reactor, namely the VVER-1000. Five experiments were executed, dealing with loss-of-coolant scenarios (small, intermediate, and large break loss of coolant accidents), a primary-to-secondary leak, and a parametric study (natural circulation test) aimed at characterizing the VVER system at reduced mass inventory conditions. The comparative analysis presented in the paper concerns the large break loss of coolant accident experiment. Four participants from three different institutions were involved in the benchmark and applied their own models and setups for four different thermal-hydraulic system codes. The benchmark demonstrated the performance of these codes in predicting phenomena relevant to safety on the basis of fixed criteria.

  2. Development and qualification of a thermal-hydraulic nodalization for modeling station blackout accident in PSB-VVER test facility

    Energy Technology Data Exchange (ETDEWEB)

    Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)

    2016-07-15

    Highlights: • A thermal-hydraulic nodalization for the PSB-VVER test facility has been developed. • A station blackout accident is modeled with the developed nodalization in the MELCOR code. • The developed nodalization is qualified at both steady-state and transient levels. • MELCOR predictions are qualitatively and quantitatively in the acceptable range. • The Fast Fourier Transform Based Method is used to quantify the accuracy of code predictions. - Abstract: This paper deals with the development of a qualified thermal-hydraulic nodalization for modeling a Station Black-Out (SBO) accident in the PSB-VVER Integral Test Facility (ITF). This study has been performed in the framework of a research project aiming to develop an appropriate accident management support tool for the Bushehr nuclear power plant. In this regard, a nodalization has been developed for thermal-hydraulic modeling of the PSB-VVER ITF with the MELCOR integrated code. The nodalization is qualified qualitatively and quantitatively at both steady-state and transient levels. The accuracy of the MELCOR predictions at the transient level is quantified using the Fast Fourier Transform Based Method (FFTBM). FFTBM provides an integral representation for quantification of the code accuracy in the frequency domain. It was observed that MELCOR predictions are qualitatively and quantitatively in the acceptable range. In addition, the influence of different nodalizations on MELCOR predictions was evaluated and quantified using FFTBM by developing 8 sensitivity cases with different numbers of control volumes and heat structures in the core region and steam generator U-tubes. The most appropriate case, which provided results with minimum deviations from the experimental data, was then considered as the qualified nodalization for analysis of the SBO accident in the PSB-VVER ITF. This qualified nodalization can be used for modeling VVER-1000 nuclear power plants when performing SBO accident analysis with the MELCOR code.
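
    The record does not reproduce the FFTBM figures of merit; the usual average-amplitude measure, assumed here as the quantity being computed, compares the spectrum of the code-experiment error with the spectrum of the experimental signal:

\[ AA = \frac{\sum_{n} \left| \tilde{F}_{\Delta}(f_n) \right|}{\sum_{n} \left| \tilde{F}_{\mathrm{exp}}(f_n) \right|}, \qquad \Delta(t) = F_{\mathrm{calc}}(t) - F_{\mathrm{exp}}(t) \]

    where \( \tilde{F}_{\Delta} \) and \( \tilde{F}_{\mathrm{exp}} \) are the Fourier transforms of the error and of the experimental signal; a smaller AA indicates better code accuracy.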

  3. Validation of coupled neutronic / thermal-hydraulic codes for VVER reactors. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mittag, S.; Grundmann, U.; Kliem, S.; Kozmenkov, Y.; Rindelhardt, U.; Rohde, U.; Weiss, F.-P.; Langenbuch, S.; Krzykacz-Hausmann, B.; Schmidt, K.-D.; Vanttola, T.; Haemaelaeinen, A.; Kaloinen, E.; Kereszturi, A.; Hegyi, G.; Panka, I.; Hadek, J.; Strmensky, C.; Darilek, P.; Petkov, P.; Stefanova, S.; Kuchin, A.; Khalimonchuk, V.; Hlbocky, P.; Sico, D.; Danilin, S.; Ionov, V.; Nikonov, S.; Powney, D.

    2004-08-01

    In recent years, the simulation methods for the safety analysis of nuclear power plants have been continuously improved to perform realistic calculations. Therefore in VALCO work package 2 (WP 2), the usual application of coupled neutron-kinetic / thermal-hydraulic codes to VVER has been supplemented by systematic uncertainty and sensitivity analyses. A comprehensive uncertainty analysis has been carried out. The GRS uncertainty and sensitivity method based on the statistical code package SUSA was applied to the two transients studied earlier in SRR-1/95: A load drop of one turbo-generator in Loviisa-1 (VVER-440), and a switch-off of one feed water pump in Balakovo-4 (VVER-1000). The main steps of these analyses and the results obtained by applying different coupled code systems (SMABRE - HEXTRAN, ATHLET - DYN3D, ATHLET - KIKO3D, ATHLET - BIPR-8) are described in this report. The application of this method is only based on variations of input parameter values. No internal code adjustments are needed. An essential result of the analysis using the GRS SUSA methodology is the identification of the input parameters, such as the secondary-circuit pressure, the control-assembly position (as a function of time), and the control-assembly efficiency, that most sensitively affect safety-relevant output parameters, like reactor power, coolant heat-up, and primary pressure. Uncertainty bands for these output parameters have been derived. The variation of potentially uncertain input parameter values as a consequence of uncertain knowledge can activate system actions causing quite different transient evolutions. This gives indications about possible plant conditions that might be reached from the initiating event assuming only small disturbances. In this way, the uncertainty and sensitivity analysis reveals the spectrum of possible transient evolutions. Deviations of SRR-1/95 coupled code calculations from measurements also led to the objective to separate neutron kinetics from

  4. Component Based Electronic Voting Systems

    Science.gov (United States)

    Lundin, David

    An electronic voting system may be said to be composed of a number of components, each of which has a number of properties. One of the most attractive effects of this way of thinking is that each component may have an attached in-depth threat analysis and verification strategy. Furthermore, the need to include the full system when making changes to a component is minimised and a model at this level can be turned into a lower-level implementation model where changes can cascade to as few parts of the implementation as possible.

  5. Sequence of dismantling of the main equipment in a VVER 440 V-230 type nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Andres, E.; Garcia Ruiz, R.

    2014-10-01

    IBERDROLA Ingenieria y Construccion S.A.U., leader of a consortium with Empresarios Agrupados and INDRA, has developed the basic engineering for the decommissioning of the contaminated systems and buildings of a VVER 440 V-230 nuclear power plant, establishing the sequence and methodology for fragmentation of the main equipment. For this purpose, dry and wet cutting zones have been designed to be set up in the area where the steam generators, main coolant pumps and pressurizer are located; these components will be dismantled beforehand. (Author)

  6. PMK-2, the First Integral Thermal-Hydraulics Tests for the Safety Evaluation of VVER-440/213 Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Gy. Ézsöl

    2012-01-01

    Full Text Available The PMK-2 facility is a full-pressure thermal-hydraulic model of the primary and, partly, the secondary circuit of the VVER-type units of Paks NPP. The facility was the first integral-type facility for VVERs; it was followed later by the PACTEL (for VVER-440) and by the ISB and PSB facilities (for VVER-1000). Since the startup of the facility in 1985, 55 experiments have been performed, primarily in international frameworks, with the participation of experts from 29 European and overseas countries, forming a scientific school to better understand VVER system behaviour and to reach a high level of modelling of accident sequences. The ATHLET, CATHARE, and RELAP5 codes have been validated, including both qualitative and quantitative assessments. The former was almost exclusively applied in the early phase of validation by integral experiments, while the quantitative assessments have been performed with the Fast Fourier Transform Based Method. The paper gives comprehensive information on the design features of the PMK-2 facility, with special respect to the representativeness of phenomena, the experiments performed, and the results of the validation of the ATHLET, CATHARE, and RELAP5 codes. The safety significance of the PMK-2 projects is also discussed.

  7. Microstructure alterations in the base material, heat affected zone and weld metal of a VVER-440 reactor pressure vessel caused by high-fluence neutron irradiation during long-term operation (material: 15Ch2MFA, approx. 0.15 C - 2.5 Cr - 0.7 Mo - 0.3 V)

    Energy Technology Data Exchange (ETDEWEB)

    Maussner, G.; Scharf, L.; Langer, R. [Siemens AG Energieerzeugung KWU, Erlangen (Germany); Gurovich, B. [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation)

    1998-11-01

    Within the scope of the Tacis '91/1.1 project of the European Community, ``Reactor Vessel Embrittlement``, specimens were taken from the heavily irradiated circumferential welds of a VVER pressure vessel. The cumulated fast neutron fluence in the specimens amounts to up to 6.5 x 10{sup 19} cm{sup -2} (E > 0.5 MeV). For the multi-laboratory, coordinated study, the specimens were cut for mechanical testing as well as for analytical, microstructural and microanalytical examinations of the base metal, HAZ and weld metal, with respect to the effects of reactor operation, post-irradiation annealing and thermal treatment (475 C, 560 C). Analytical transmission electron microscopy (200 kV) revealed the alterations found in the mechanical properties to be due to the formation of dislocation loops (black dots) as well as irradiation-induced precipitates and enrichments of copper in the carbides. These effects, caused by operation (neutron irradiation, temperature), are much more significant in the HAZ than in the base metal. (orig./CB)

  8. The artifacts of component-based development

    CERN Document Server

    Qureshi, M Rizwan Jameel

    2012-01-01

    The idea of component-based development was floated at a conference named "Mass Produced Software Components" in 1968 [1]. Since then, engineering and scientific libraries have been developed to reuse previously developed functions. This concept is now widely used in software development as component-based development (CBD). Component-based software engineering (CBSE) is used to develop/assemble software from existing components [2]. Software developed using components is called componentware [3]. This paper presents different architectures for CBD such as ActiveX, the common object request broker architecture (CORBA), remote method invocation (RMI) and the simple object access protocol (SOAP). The overall objective of this paper is to support the practice of CBD by comparing its advantages and disadvantages. The paper also evaluates the object-oriented process model to adapt it for CBD.

  9. Microstructure and embrittlement of VVER 440 reactor pressure vessel steels; Microstructure et fragilisation des aciers de cuve des reacteurs nucleaires VVER 440

    Energy Technology Data Exchange (ETDEWEB)

    Hennion, A

    1999-03-15

    27 VVER 440 pressurised water reactors operate in the former Soviet Union and in Eastern Europe. The pressure vessel is made of Cr-Mo-V steel. It contains a circumferential arc weld in front of the nuclear core. This weld undergoes a high neutron flux and contains large amounts of copper and phosphorus, elements well known for their embrittlement potency under irradiation. The embrittlement kinetics of the steel are accelerated, reducing the lifetime of the reactor. In order to get information on the microstructure and mechanical properties of these steels, base metals, HAZ, and weld metals have been characterized. The high amount of phosphorus in weld metals promotes the reverse temper embrittlement that occurs during post-weld heat treatment. The radiation damage structure has been identified by small angle neutron scattering, atom probe, and transmission electron microscopy. Nanometer-sized clusters of solute atoms, rich in copper, with almost the same characteristics as in western pressure vessel steels, and an evolution of the size distribution of vanadium carbides, which are present on the dislocation structure, are observed. These defects disappear during post-irradiation tempering. As in western steels, the embrittlement is due to both hardening and reduction of interphase cohesion. The radiation damage specificity of VVER steels arises from their high amount of phosphorus and from their significant density of fine vanadium carbides. (author)

  10. BASES COMPONENTS OF PARETO EFFICIENCY

    Directory of Open Access Journals (Sweden)

    Daniela POPESCU

    2011-01-01

    Full Text Available This study discusses the problem of substantiating decisions, which is particularly complex and topical, being based on a large volume of information and requiring a significant amount of work. Our investigation concludes that some of these inconveniences can be avoided by also using other concepts that apply to this kind of information. To this end, the study examines the manner in which decisions are substantiated, paying special attention to the analysis of the concept of Pareto efficiency, which ultimately rests on two fundamental elements: final benefit and opportunity cost, both of which are also used in the decision-making process. An ample analysis of the concept of Pareto efficiency is given, with the main emphasis on the quantitative evaluation of the elements that characterise it, and the decision-making process is examined in greater depth. The close link between different economic concepts and their great usefulness in practice are thus underlined.

  11. Graphene-based spintronic components

    OpenAIRE

    Zeng, Minggang; Shen, Lei; Su, Haibin; Zhou, Miao; Zhang, Chun; Feng, Yuanping

    2010-01-01

    A major challenge of spintronics is in generating, controlling and detecting spin-polarized current. Manipulation of spin-polarized current, in particular, is difficult. We demonstrate here, based on calculated transport properties of graphene nanoribbons, that nearly +-100% spin-polarized current can be generated in zigzag graphene nanoribbons (ZGNRs) and tuned by a source-drain voltage in the bipolar spin diode, in addition to magnetic configurations of the electrodes. This unusual transpor...

  13. Component Based Dynamic Reconfigurable Test System

    Institute of Scientific and Technical Information of China (English)

    LAI Hong; HE Lingsong; ZHANG Dengpan

    2006-01-01

    In this paper, a novel component-based framework for test systems is presented to meet the new requirements of dynamic changes of test functions and reconfiguration of test resources. The complexity of dynamic reconfiguration arises from the scale, redirection, extensibility and interconnection of components in the test system. The paper begins by discussing the component-assembly-based framework, which provides an open platform for the deployment of components; a script interpreter model is then introduced to dynamically create the components and build the test system by analyzing XML-based descriptions of the test system. A pipeline model is presented to provide the data channels and behavior reflection among the components. Finally, a dynamic reconfigurable test system is implemented on the basis of COM and applied in the remote test and control system of a CNC machine.

  14. Evolution of structure and properties of VVER-1000 RPV steels under accelerated irradiation up to beyond design fluences

    Science.gov (United States)

    Gurovich, B.; Kuleshova, E.; Shtrombakh, Ya.; Fedotova, S.; Maltsev, D.; Frolov, A.; Zabusov, O.; Erak, D.; Zhurko, D.

    2015-01-01

    In this paper comprehensive studies of structure and properties of VVER-1000 RPV steels after the accelerated irradiation to fluences corresponding to extended lifetime up to 60 years or more as well as comparative studies of materials irradiated with different fluxes were carried out. The significant flux effect is confirmed for the weld metal (nickel concentration ⩾1.35%) which is mainly due to development of reversible temper brittleness. The rate of radiation embrittlement of VVER-1000 RPV steels under operation up to 60 years and more (based on the results of accelerated irradiation considering flux effect for weld metal) is expected not to differ significantly from the observed rate under irradiation within surveillance specimens.

  15. KARATE - a code for VVER-440 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Gado, J.; Hegedus, Cs.J.; Hegyi, Gy.; Kereszturi, A.; Makai, M.; Maraczi, Cs.; Telbisz, M.

    1994-12-31

    A modular calculation system has been elaborated at the KFKI Atomic Energy Research Institute for VVER-440 cores. The purpose of KARATE is the calculation of neutron physical and thermal-hydraulic processes in the core at normal, startup, and slow transient conditions. KARATE is under validation and verification (V&V) against mathematical, experimental, and operational data.

  16. Positron Annihilation Studies of VVER Type Reactor Steels

    OpenAIRE

    Brauer, G.

    1995-01-01

    A summary of recent positron annihilation work on Russian VVER type reactor steels is presented. Thereby, special attention is paid to the outline of basic processes that might help to understand the positron behaviour in this class of industrial material. The idea of positron trapping by irradiation-induced precipitates, which are probably carbides, is discussed in detail.

  17. Laser based refurbishment of steel mill components

    CSIR Research Space (South Africa)

    Kazadi, P

    2006-03-01

    Full Text Available Laser refurbishment capabilities were demonstrated and promising results were obtained for repair of distance sleeves, foot rolls, descaler cassette, idler rolls. Based on the cost projections and the results of the in-situ testing, components which...

  18. Isothermal and thermal-mechanical fatigue of VVER-440 reactor pressure vessel steels

    Science.gov (United States)

    Fekete, Balazs; Trampus, Peter

    2015-09-01

    The fatigue life of the structural materials 15Ch2MFA (CrMoV-alloyed ferritic steel) and 08Ch18N10T (CrNi-alloyed austenitic steel) of the VVER-440 reactor pressure vessel was investigated under completely reversed, total-strain-controlled low cycle fatigue tests. An advanced test facility was developed for the GLEEBLE-3800 physical simulator, which was able to perform thermomechanical fatigue experiments under the in-service conditions of VVER nuclear reactors. The low cycle fatigue results were evaluated with the plastic-strain-based Coffin-Manson law as well as with a plastic strain energy based model. It was shown that both methods are able to predict the fatigue life of reactor pressure vessel steels accurately. Interrupted fatigue tests were also carried out to investigate the kinetics of the fatigue evolution of the materials. On these samples, microstructural evaluation by TEM was performed. The investigated low cycle fatigue behavior can provide a reference for remaining life assessment and lifetime extension analysis.
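
    The plastic-strain-based law referred to above is, in its usual textbook form, the Coffin-Manson relation; the fatigue ductility coefficient and exponent (and, in the total-strain form, the fatigue strength coefficient and exponent) are generic fit parameters, not values reported in this record.

```latex
% Coffin-Manson law: plastic strain amplitude vs. number of reversals to failure
\frac{\Delta\varepsilon_{p}}{2} = \varepsilon'_{f}\,(2N_{f})^{c}
% Total-strain form (Basquin + Coffin-Manson) often fitted to low cycle fatigue data
\frac{\Delta\varepsilon_{t}}{2} = \frac{\sigma'_{f}}{E}\,(2N_{f})^{b} + \varepsilon'_{f}\,(2N_{f})^{c}
```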

  19. Economical Feedback of Increasing Fuel Enrichment on Electricity Cost for VVER-1000

    Directory of Open Access Journals (Sweden)

    Mohammed Saad Dwiddar

    2015-08-01

    Full Text Available A methodology for evaluating the economics of the front-end nuclear fuel cycle, with a price-change sensitivity analysis, is presented for a VVER-1000 reactor core as a case study. The effect of increasing the fuel enrichment and the corresponding reactor cycle length on the energy cost is investigated. The enrichment component was found to be the most expensive dynamic component affecting the economics of the front-end fuel cycle. Nevertheless, the increase of the fuel enrichment increases the reactor cycle length, which has a positive feedback on the electricity generation cost (cent/kWh). A long reactor operation time with a cheaper energy cost sets nuclear energy as a competitive alternative when compared with other energy sources.
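
    As a rough illustration of the trade-off described above, the sketch below assembles a front-end fuel cost per kWh(e) from feed, conversion, enrichment (SWU) and fabrication components; every unit price, the tails assay, the thermal efficiency and the enrichment-to-burnup pairings are placeholder assumptions, not figures from the paper.

```python
from math import log

def swu_value(x):
    """Separation potential V(x) used in SWU accounting."""
    return (2.0 * x - 1.0) * log(x / (1.0 - x))

def front_end_cost_cents_per_kwh(enrich, burnup_mwd_kg, eff=0.33,
                                 tails=0.0025, feed_assay=0.00711,
                                 price_u=100.0,     # $/kgU natural feed (assumed)
                                 price_conv=10.0,   # $/kgU conversion (assumed)
                                 price_swu=100.0,   # $/SWU (assumed)
                                 price_fab=300.0):  # $/kgU fabrication (assumed)
    # Feed and separative work needed per kg of enriched product
    feed_per_prod = (enrich - tails) / (feed_assay - tails)
    swu_per_prod = (swu_value(enrich)
                    + (feed_per_prod - 1.0) * swu_value(tails)
                    - feed_per_prod * swu_value(feed_assay))
    cost_per_kg = (feed_per_prod * (price_u + price_conv)
                   + swu_per_prod * price_swu + price_fab)   # $/kg enriched U
    kwh_e_per_kg = burnup_mwd_kg * 24000.0 * eff             # kWh(e) per kgU
    return 100.0 * cost_per_kg / kwh_e_per_kg                # cents/kWh

# Higher enrichment costs more per kg but allows a longer cycle / higher burnup
print(front_end_cost_cents_per_kwh(0.040, 45.0))   # ~4.0 w/o with 45 MWd/kgU (assumed pairing)
print(front_end_cost_cents_per_kwh(0.045, 52.0))   # ~4.5 w/o with 52 MWd/kgU (assumed pairing)
```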

  20. Design analysis of the molten core confinement within the reactor vessel in the case of severe accidents at nuclear power plants equipped with a reactor of the VVER type

    Science.gov (United States)

    Zvonaryov, Yu. A.; Budaev, M. A.; Volchek, A. M.; Gorbaev, V. A.; Zagryazkin, V. N.; Kiselyov, N. P.; Kobzar', V. L.; Konobeev, A. V.; Tsurikov, D. F.

    2013-12-01

    The present paper reports the results of the preliminary design estimate of the behavior of the core melt in vessels of reactors of the VVER-600 and VVER-1300 types (a standard optimized and informative nuclear power unit based on VVER technology—VVER TOI) in the case of beyond-design-basis severe accidents. The basic processes determining the state of the core melt in the reactor vessel are analyzed. The concept of molten core confinement within the vessel based on the idea of outside cooling is discussed. Basic assumptions and models, as well as the results of calculation of the interaction between molten materials of the core and the wall of the reactor vessel performed by means of the SOCRAT severe accident code, are presented and discussed. On the basis of the data obtained, the requirements on the operation of the safety systems are determined, upon the fulfillment of which there will appear potential prerequisites for implementing the concept of the confinement of the core melt within the reactor in cases of severe accidents at nuclear power plants equipped with VVER reactors.

  1. Radiochemical Assays of Irradiated VVER-440 Fuel for Use in Spent Fuel Burnup Credit Activities

    Energy Technology Data Exchange (ETDEWEB)

    Jardine, L J

    2005-04-25

    The objective of this spent fuel burnup credit work was to study and describe a VVER-440 reactor spent fuel assembly (FA) initial state before irradiation, its operational irradiation history and the resulting radionuclide distribution in the fuel assembly after irradiation. This work includes the following stages: (1) to pick out and select a specific spent (irradiated) FA for examination; (2) to describe the FA initial state before irradiation; (3) to describe the irradiation history, including thermal calculations; (4) to examine the burnup distribution of select radionuclides along the FA height and cross-section; (5) to examine the radionuclide distributions; (6) to determine the Kr-85 release into the plenum; (7) to select and prepare FA rod specimens for destructive examinations; (8) to determine the radionuclide compositions, isotope masses and burnup in the rod specimens; and (9) to analyze, document and process the results. The specific workscope included the destructive assay (DA) of spent fuel assembly rod segments with an {approx}38.5 MWd/kgU burnup from a single VVER-440 fuel assembly from the Novovoronezh reactor in Russia. Based on irradiation history criteria, four rods from the fuel assembly were selected and removed from the assembly for examination. Next, 8 sections were cut from the four rods and sent for destructive analysis of radionuclides by radiochemical analyses. The results were documented in a series of seven reports over a period of {approx}1 1/2 years.

  2. Analysis of measured and calculated counterpart test data in PWR and VVER 1000 simulators

    Directory of Open Access Journals (Sweden)

    d’Auria Francesco

    2005-01-01

    Full Text Available This paper presents an overview of the "scaling strategy", in particular the role played by the counterpart test methodology. Recent studies dealing with scaling analysis in light water reactors, with special regard to the Russian VVER-1000 reactor type, are presented to demonstrate the phenomena important for scaling. The adopted scaling approach is based on the selection of a few characteristic parameters chosen by taking into account their relevance to the behavior of the transient. The computer code used is RELAP5/Mod3.3, and its accuracy has been demonstrated by qualitative and quantitative evaluation. Comparing experimental data, it was found that the investigated facilities showed similar behavior concerning the time trends, and that the same thermal-hydraulic phenomena could be predicted on a qualitative level. The main results are: the main parameters of PSB and LOBI have similar trends. This fact confirms the validity of the adopted scaling approach and shows that the behavior of the PWR and VVER reactor types is very similar. No new phenomena occurred during the counterpart test, despite the fact that the two facilities had a different layout, and the already known phenomena were predicted correctly by the code. The code capability and accuracy are scale-independent. Both characteristics are necessary to permit full-scale calculations with the aim of predicting nuclear power plant behavior.

  3. CFD Analysis of a Slug Mixing Experiment Conducted on a VVER-1000 Model

    Directory of Open Access Journals (Sweden)

    F. Moretti

    2009-01-01

    Full Text Available A commercial CFD code was applied, for validation purposes, to the simulation of a slug mixing experiment carried out at the OKB “Gidropress” scaled facility in the framework of EC TACIS project R2.02/02: “Development of safety analysis capabilities for VVER-1000 transients involving spatial variations of coolant properties (temperature or boron concentration) at core inlet.” This experimental model reproduces a VVER-1000 nuclear reactor and is aimed at investigating the in-vessel mixing phenomena. The addressed experiment involves the start-up of one of the four reactor coolant pumps (the other three remaining idle) and the presence of a tracer slug in the starting loop, which is thus transported to the reactor pressure vessel where it mixes with the clear water. Such conditions may occur in a boron dilution scenario, hence the relevance of the addressed phenomena for nuclear reactor safety. Both pretest and posttest CFD simulations of the mentioned experiment were performed, which differ in the definition of the boundary conditions (based either on nominal quantities or on measured quantities, respectively). The numerical results are qualitatively and quantitatively analyzed and compared against the measured data in terms of the space and time tracer distribution at the core inlet. The improvement of the results due to the optimization of the boundary conditions is evidenced, and a quantification of the simulation accuracy is proposed.

  4. Test case for VVER-1000 complex modeling using MCU and ATHLET

    Science.gov (United States)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of processes occurring in the core of the reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example structural analysis, neutronics (NT) or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. It is therefore recommended to perform coupled calculations. The paper provides a test case for the coupled neutronics-thermal-hydraulics calculation of a VVER-1000 using the precise neutronics code MCU and the system code ATHLET. The model is based on a fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperature, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations with other programs, for example for code cross-verification. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.
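
    The iterative calculation scheme itself is not reproduced in this record; the following is only a generic fixed-point (Picard) sketch of how an external neutronics/thermal-hydraulics coupling driver typically iterates until the power and feedback fields stop changing. The function names, feedback coefficients and under-relaxation factor are invented for illustration and do not represent the MCU or ATHLET interfaces or the authors' Perl script.

```python
def neutronics_power(fuel_temp, cool_dens,
                     alpha_fuel=-2.0e-5, alpha_dens=0.15):
    """Toy stand-in for the neutronics solve: relative power responding to
    Doppler (fuel temperature) and moderator density feedback."""
    return 1.0 + alpha_fuel * (fuel_temp - 900.0) + alpha_dens * (cool_dens - 0.72)

def thermal_hydraulics(power):
    """Toy stand-in for the thermal-hydraulics solve: fields responding to power."""
    fuel_temp = 600.0 + 350.0 * power      # K
    cool_dens = 0.75 - 0.04 * power        # g/cm^3
    return fuel_temp, cool_dens

# External (Picard) coupling driver with under-relaxation
power = 1.0
for it in range(50):
    fuel_temp, cool_dens = thermal_hydraulics(power)    # TH step with current power
    new_power = neutronics_power(fuel_temp, cool_dens)  # NT step with updated fields
    if abs(new_power - power) < 1.0e-6:
        print(f"converged after {it + 1} iterations, relative power = {new_power:.5f}")
        break
    power = 0.5 * power + 0.5 * new_power               # under-relaxed update
```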

  6. Outlier Mining Based on Principal Component Estimation

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Ting Yang

    2005-01-01

    Outlier mining is an important aspect of data mining, and outlier mining based on the Cook distance is the most commonly used approach. However, when the data exhibit multicollinearity, the traditional Cook method is no longer effective. Exploiting the strengths of principal component estimation, we use it as a substitute for least squares estimation and then give a Cook distance measure based on principal component estimation, which can be used in outlier mining. Related theoretical questions and application problems are also investigated.
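
    As a sketch of the idea (not the authors' exact formulation), a Cook-type distance can be computed from a regression carried out on the retained principal components instead of the raw, collinear regressors; the degrees-of-freedom convention and the choice of the number of retained components below are assumptions.

```python
import numpy as np

def cook_distance_pcr(X, y, n_components):
    """Cook-type distances from a principal-component regression fit."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                    # scores on retained components
    beta, *_ = np.linalg.lstsq(Z, yc, rcond=None)   # least squares in the reduced space
    resid = yc - Z @ beta
    n, p = len(y), n_components
    sigma2 = resid @ resid / (n - p)                # residual variance (d.o.f. convention assumed)
    h = np.diag(Z @ np.linalg.inv(Z.T @ Z) @ Z.T)   # leverages of the reduced fit
    return resid**2 * h / (p * sigma2 * (1.0 - h)**2)

# Illustrative use on synthetic, strongly collinear data with one planted outlier
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=50)])   # near-collinear columns
y = 2.0 * x1 + rng.normal(scale=0.1, size=50)
y[10] += 5.0                                                 # planted outlier
d = cook_distance_pcr(X, y, n_components=1)
print(int(np.argmax(d)))                                     # expected: 10
```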

  7. Thermal ageing mechanisms of VVER-1000 reactor pressure vessel steels

    Science.gov (United States)

    Shtrombakh, Yaroslav I.; Gurovich, Boris A.; Kuleshova, Evgenia A.; Maltsev, Dmitry A.; Fedotova, Svetlana V.; Chernobaeva, Anna A.

    2014-09-01

    In this paper a complex of microstructural studies (TEM and SEM) and a comparative analysis of the results of these studies with the data of mechanical tests of temperature sets of VVER-1000 RPV surveillance specimens with exposure times up to ∼200,000 h were conducted. Special annealing of control and temperature sets of SS which provides the dissolution of grain boundary segregation was performed to clarify the mechanisms of thermal ageing. It was demonstrated that during long-term exposures up to 200,000 h at the operating temperature of about 310-320 °C thermal ageing effects reveal themselves only for the weld metal (Ni content ⩾ 1.35%) and are the result of grain boundary segregation accumulation (development of reversible temper brittleness). The obtained results improve the accuracy of prediction of the thermal ageing rate of VVER-1000 materials in case of RPV service life extension up to 60 years.

  8. Methodological studies on the VVER-440 control assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hordosy, G.; Kereszturi, A.; Maraczy, C. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    1995-12-31

    The control assembly regions of VVER-440 reactors are represented by 2-group albedo matrices in the global calculations of the KARATE code system. Some methodological aspects of calculating albedo matrices with the COLA transport code are presented. Illustrations are given how these matrices depend on the relevant parameters describing the boron steel and steel regions of the control assemblies. The calculation of the response matrix for a node consisting of two parts filled with different materials is discussed.

  9. Coolability of ballooned VVER bundles with pellet relocation

    Energy Technology Data Exchange (ETDEWEB)

    Hozer, Z.; Nagy, I.; Windberg, P.; Vimi, A. [AEKI, P.O.box 49, Budapest, H-1525 (Hungary)

    2009-06-15

    During a LOCA incident the high pressure in the fuel rods can lead to clad ballooning and the debris of fuel pellets can fill the enlarged volume. The evaluation of the role of these two effects on the coolability of VVER type fuel bundles was the main objective of the experimental series. The tests were carried out in the modified configuration of the CODEX facility. 19-rod electrically heated VVER type bundle was used. The test section was heated up to 600 deg. C in steam atmosphere and the bundle was quenched from the bottom by cold water. Three series of tests were performed: 1. Reference bundle with fuel rods without ballooning, with uniform power profile. 2. Bundle with 86% blockage rate and with uniform power profile. The blockage rate was reached by superimposing hollow sleeves on all 19 fuel rods. 3. Bundle with 86% blockage rate and with local power peak in the ballooned area. The local power peak was produced by the local reduction the cross section of the internal heater bar inside of the fuel rods. In all three bundle configurations three different cooling water flow-rates were applied. The experimental results confirmed that a VVER bundle with even 86% blockage rate remains coolable after a LOCA event. The ballooned section creates some obstacles for the cooling water during reflood of the bundle, but this effect causes only a short delay in the cooling down of the hot fuel rods. Earlier tests on the coolability of ballooned bundles were performed only with Western type bundles with square fuel lattice. The present test series was the first confirmation of the coolability of VVER type bundles with triangular lattice. The accumulation of fuel pellet debris in the ballooned volume results in a local power peak, which leads to further slowing down of quench front. The first tests indicated that the effect of local power peak was less significant on the delay of cooling down than the effect of ballooning. (authors)

  10. A refinement driven component-based design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter;

    2007-01-01

    to integrate sophisticated checkers, generators and transformations. A feasible approach to ensuring high quality of such add-ins is to base them on sound formal foundations. This paper summarizes our research on the Refinement of Component and Object Systems (rCOS) and illustrates it with experiences from...... the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may be integrated in computer-aided software engineering (CASE) tools for adding formally supported...

  11. On the possibility of using uranium-beryllium oxide fuel in a VVER reactor

    Science.gov (United States)

    Kovalishin, A. A.; Prosyolkov, V. N.; Sidorenko, V. D.; Stogov, Yu. V.

    2014-12-01

    The possibility of using UO2-BeO fuel in a VVER reactor is considered with allowance for the thermophysical properties of this fuel. Neutron characteristics of VVER fuel assemblies with UO2-BeO fuel pellets are estimated.

  12. Diversification of the VVER fuel market in Eastern Europe and Ukraine

    Energy Technology Data Exchange (ETDEWEB)

    Kirst, Michael [Westinghouse EMEA, Brussels (Belgium); Benjaminsson, Ulf; Oenneby, Carina [Westinghouse Electric Sweden AB, Vaesteraes (Sweden)

    2015-03-15

    There are a total of 33 active VVER reactors in the EU and Ukraine, accounting for the largest share of the total electricity supply in the countries operating them. The responsible governments and the utilities operating these units want to see an increased diversification of the nuclear fuel supply. Westinghouse is the only nuclear fuel producer outside Russia that has taken the major steps to develop, qualify and manufacture VVER fuel designs - both for VVER-440 and VVER-1000 reactors. The company has delivered reloads of VVER-440 fuel to Loviisa 2 in Finland, VVER-1000 fuel for both the initial core and follow-on regions to Temelin 1-2 in the Czech Republic, and more recently reloads of VVER-1000 fuel to South Ukraine 2-3. Technical challenges in the form of mechanical interference with the resident fuel have been encountered in Ukraine, but innovative solutions have been developed and successfully implemented, and today Ukraine has, for the first time in its history, a viable VVER-1000 fuel design alternative, representing a tremendous lever in energy security for the country.

  13. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    Full Text Available In this paper, we present a cartoon face generation method that stands on a component-based facial feature extraction approach. Given a frontal face image as an input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of output and the accuracy of the component matching required for cartoon generation. Second, to bring in cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces is collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate the effectiveness of our approach in generating life-like cartoon faces.

  14. Technology of repair of selected equipment in the power plant type VVER 440

    Energy Technology Data Exchange (ETDEWEB)

    Barborka, J.; Magula, V. [Welding Research Inst. (WRI), Bratislava (Slovakia)

    1998-11-01

    This article is divided into two parts. The first part studies the effect of the individual parameters of conventional and pulsed welding of 15CH2MFA steel. It can be concluded that, by use of a mechanized or automatic TIG process in the PC position with the addition of a cold wire with high nickel content, the desired quality of repair welded joints of the pressure vessel of a VVER 440 reactor can be achieved. Based on the results of the second laboratory study of the renovation technology applied to the rotary surfaces of the pressure-tight cover and spindle of the main closing armature type DN 500, it can be concluded that, with the developed technology for surfacing the sealing surfaces by the TIG process with the addition of a high-nickel cold wire, the functional capability of the mentioned parts can be fully restored.

  15. Study of reactor plant disturbed cooling condition modes caused by the VVER reactor secondary circuit

    Directory of Open Access Journals (Sweden)

    V.I. Belozerov

    2016-12-01

    Based on the RELAP-5, TRAC, and TRACE software codes, reactor plant cooling condition malfunction modes caused by the VVER-1000 secondary circuit were simulated and investigated. Experimental data on the mode with the turbine-generator stop valve closing are presented. The obtained dependences made it possible to determine the maximum values of pressure and temperature in the circulation circuit as well as estimate the Minimum Critical Heat Flux Ratio (MCHFR. It has been found that, if any of the initial events occurs, safety systems are activated according to the set points; transient processes are stabilized in time; and the Critical Heat Flux (CHF limit is provided. Therefore, in the event of emergency associated with the considered modes, the reactor plant safety will be ensured.

  16. Retrospective Dosimetry of VVER-440 Reactor Pressure Vessel at the 3rd Unit of Dukovany NPP

    Science.gov (United States)

    Marek, M.; Viererbl, L.; Sus, F.; Klupak, V.; Rataj, J.; Hogel, J.

    2009-08-01

    Reactor pressure vessel (RPV) residual lifetime of the Czech VVER-440 is currently monitored under Surveillance Specimens Programs (SSP) focused on reactor pressure vessel materials. Neutron fluence in the samples and its distribution in the RPV are determined by a combination of calculation results and the experimental data coming from the reactor dosimetry measurements both in the specimen containers and in the reactor cavity. The direct experimental assessment of the neutron flux density incident onto RPV and neutron fluence for the entire period of nuclear power plant unit operation can be based on the evaluation of the samples taken from the inner RPV cladding. The Retrospective Dosimetry was also used at Dukovany NPP at its 3rd unit after the 18th cycle. The paper describes methodology, experimental setup for sample extraction, measurement of activities, and the determination of the neutron flux and fluence averaged over the samples.

  17. Industrialisation of flyash based building components

    Energy Technology Data Exchange (ETDEWEB)

    Rajkumar, C.; Lal, R. [National Council for Cement and Building Materials (India)

    1996-12-31

    There is an acute shortage of housing in India and the prevailing backlog of housing is increasing every year, as the rate of construction has not kept pace with population growth. One way of partially meeting the increasing demand for building materials is to make use of non-conventional materials and technologies based on the use of industrial by-products. Studies conducted have shown that manufacture of building materials or components, particularly bricks and blocks, is the most promising direction of fly ash utilization. 3 figs., 3 tabs.

  18. A systematic approach for component-based software development

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis

    2000-01-01

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This

  19. Mechanisms of radiation embrittlement of VVER-1000 RPV steel at irradiation temperatures of (50-400)°C

    Science.gov (United States)

    Kuleshova, E. A.; Gurovich, B. A.; Bukina, Z. V.; Frolov, A. S.; Maltsev, D. A.; Krikun, E. V.; Zhurko, D. A.; Zhuchkov, G. M.

    2017-07-01

    This work summarizes and analyzes our recent research results on the effect of irradiation temperature within the range of (50-400)°C on microstructure and properties of 15Kh2NMFAA class 1 steel (VVER-1000 reactor pressure vessel (RPV) base metal). The paper considers the influence of accelerated irradiation with different temperature up to different fluences on the carbide and irradiation-induced phases, radiation defects, yield strength changes and critical brittleness temperature shift (ΔTK) as well as on changes of the fraction of brittle intergranular fracture and segregation processes in the steel. Low temperature irradiation resulted solely in formation of radiation defects - dislocation loops of high number density, the latter increased with increase in irradiation temperature while their size decreased. In this regard high embrittlement rate observed at low temperature irradiation is only due to the hardening mechanism of radiation embrittlement. Accelerated irradiation at VVER-1000 RPV operating temperature (∼300 °C) caused formation of radiation-induced precipitates and dislocation loops, as well as some increase in phosphorus grain boundary segregation. The observed ΔTK shift being within the regulatory curve for VVER-1000 RPV base metal is due to both hardening and non-hardening mechanisms of radiation embrittlement. Irradiation at elevated temperature caused more intense phosphorus grain boundary segregation, but no formation of radiation-induced precipitates or dislocation loops in contrast to irradiation at 300 °C. Carbide transformations observed only after irradiation at 400 °C caused increase in yield strength and, along with a contribution of the non-hardening mechanism, resulted in the lowest ΔTK shift in the studied range of irradiation temperature and fluence.

  20. Moessbauer study of EUROFER and VVER steel reactor materials

    Energy Technology Data Exchange (ETDEWEB)

    Kuzmann, E., E-mail: kuzmann@ludens.elte.hu [Eoetvoes University, Laboratory of Nuclear Chemistry, Institute of Chemistry (Hungary); Horvath, A. [Hungarian Academy of Sciences, Centre for Energy Research (Hungary); Alves, L.; Silva, J. F.; Gomes, U.; Souza, C. [Universidade Federal do Rio Grande do Norte (University) (Brazil); Homonnay, Z. [Eoetvoes University, Laboratory of Nuclear Chemistry, Institute of Chemistry (Hungary)

    2013-04-15

    {sup 57}Fe Moessbauer spectroscopy and X-ray diffractometry were used to study EUROFER or VVER ferritic reactor steels mechanically alloyed with TaC or NbC. Significant changes were found in the Moessbauer spectra and in the corresponding hyperfine field distributions between the ball-milled pure steel and the steel alloyed with TaC or NbC. Spectral differences were also found when the same carbides of different origin were used. The observed spectral changes caused by ball milling of the reactor steels with carbides can be associated with a change in the short-range order of the constituents of the steel.

  1. Issues in the use of Weapons-Grade MOX Fuel in VVER-1000 Nuclear Reactors: Comparison of UO2 and MOX Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, J.J.

    2005-05-27

    The purpose of this report is to quantify the differences between mixed oxide (MOX) and low-enriched uranium (LEU) fuels and to assess in reasonable detail the potential impacts of MOX fuel use in VVER-1000 nuclear power plants in Russia. This report is a generic tool to assist in the identification of plant modifications that may be required to accommodate receiving, storing, handling, irradiating, and disposing of MOX fuel in VVER-1000 reactors. The report is based on information from work performed by Russian and U.S. institutions. The report quantifies each issue, and the differences between LEU and MOX fuels are described as accurately as possible, given the current sources of data.

  2. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    Science.gov (United States)

    Sabotinov, Luben; Chevrier, Patrick

    The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for the post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separate loops, the pressurizer, the horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with regard to the fuel cladding temperature, which shows a periodic nature. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.

  3. Electric Vehicle based on standard industrial components

    OpenAIRE

    Fernández Ramos, José; Aghili Kathir, Foroohar

    2013-01-01

    The aim of this paper is to present the complete design of an electric vehicle using standard industrial components such as VRLA batteries, AC induction motors and standard frequency converters. In comparison with dedicated components, the use of standard components has the following advantages: higher reliability, low price, a broad range of products and suppliers, and high availability and technological independence. Besides this, we show that these components allow to ...

  4. Semantic network based component organization model for program mining

    Institute of Scientific and Technical Information of China (English)

    王斌; 张尧学; 陈松乔

    2003-01-01

    Based on the definition of a component ontology, an effective component classification mechanism and a facet named component relationship are proposed. An application-domain-oriented, hierarchical component organization model is then established. Finally, a hierarchical component semantic network (HCSN) described by the ontology interchange language (OIL) is presented and its function is described. Using the HCSN in cooperation with other component retrieval algorithms based on component descriptions, information on other components and their assembly or composition modes related to a key component can be found. Based on the HCSN, the component directory library is catalogued and a prototype system is constructed. The prototype system proves that component library organization based on this model guarantees the reliability of component assembly during program mining.

  5. An experimental study of a VVER reactor's steam generator model operating in the condensing mode

    Science.gov (United States)

    Morozov, A. V.; Remizov, O. V.

    2012-05-01

    Results obtained from an experimental study of a VVER reactor's steam generator model operating in the condensing mode are presented. An empirical dependence for calculating the power of heat exchangers operating in the steam condensation mode is derived.

  6. PRIZMA predictions of in-core detection indications in the VVER-1000 reactor

    Science.gov (United States)

    Kandiev, Yadgar Z.; Kashayeva, Elena A.; Malyshin, Gennady N.; Modestov, Dmitry G.; Khatuntsev, Kirill E.

    2014-06-01

    The paper describes calculations performed with the PRIZMA code (1) to predict the indications of in-core rhodium detectors in the VVER-1000 reactor for some core fragments, with allowance for fuel and rhodium burnup.

  7. Lifestyles Based on Health Components in Iran

    Directory of Open Access Journals (Sweden)

    Babaei

    2016-07-01

    Full Text Available Context: Lifestyle is a way of living employed by people, groups and nations, and it is formed in specific geographical, economic, political, cultural and religious contexts. Health depends on lifestyle, and improving lifestyle is essential to preserve and promote health. Objectives: The present study aimed to investigate lifestyle based on health-oriented components in Iran. Data Sources: The research was conducted through e-banks including the scientific information database (SID), the Iran medical science databank (Iran Medex), the Iran journal databank (Magiran) and other databases such as Elsevier, PubMed and the Google Scholar meta search engine, covering the subject from 2000 to 2014. Moreover, official Iranian statistics and information were used. The search terms included lifestyle, health, health promoting behaviors, health-oriented lifestyle and lifestyle in Iran. Study Selection: In the primary search, many papers were found, out of which 157 (120 in Farsi and 37 in English) were selected. Data Extraction: Following careful study of these papers and exclusion of the unqualified ones, 19 papers with thorough information and higher relevance to the research purpose were selected. Results: After examining articles based on the selected keywords and search strategies, 215 articles (134 in Farsi and 81 in English) were obtained. Publications on lifestyle and health components have been increasing in recent years; 8 (42%) and 11 (58%) of the articles were published during 2005 - 2010 and 2011 - 2014, respectively. Among them, there were 3 (16%) reviews of the literature, 8 (42%) descriptive-analytic, 2 (10.5%) qualitative, 2 (10.5%) analytic and 0 descriptive articles. Conclusions: Due to the positive effect of a healthy lifestyle on the health promotion of individuals, it would be better for the government to provide comprehensive programs and policies in the society to enhance people's awareness of the positive effects of a health-oriented lifestyle on life and

  8. Component-Based Software Reuse on the World Wide Web

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Component-based software reuse (CBSR) has been widely used in software development practice and has an even more brilliant future with the rapid extension of the Internet, because the World Wide Web (WWW) makes large-scale component resources from different vendors available to software developers. In this paper, an abstract component model suitable for representing components on the WWW is proposed, which plays important roles both in achieving interoperability among components and among reusable component libraries (RCLs). Some necessary changes brought by the WWW to many aspects of component management are also discussed, such as the classification of components and the corresponding searching methods, and the certification of components.

  9. The Component-Based Application for GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Fang [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, and this requires more than componentizing functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  10. The development of component-based information systems

    CERN Document Server

    Cesare, Sergio de; Macredie, Robert

    2015-01-01

    This work provides a comprehensive overview of research and practical issues relating to component-based information systems (CBIS). Spanning the organizational, developmental, and technical aspects of the subject, the original research included here provides fresh insights into successful CBIS technology and application. Part I covers component-based development methodologies and system architectures. Part II analyzes different aspects of managing component-based development. Part III investigates component-based development versus commercial off-the-shelf products (COTS), includi

  11. The criterion for blanking-off heat-transfer tubes in the steam generators at VVER-based nuclear power plants based on the results of eddy-current examination

    Science.gov (United States)

    Lunin, V. P.; Zhdanov, A. G.; Chegodaev, V. V.; Stolyarov, A. A.

    2015-05-01

    The problem of defining the criterion for blanking off heat-transfer tubes in the steam generators at nuclear power plants on the basis of signals obtained from the standard multifrequency eddy-current examination is considered. The decision about blanking off one or another tube is presently made with reference to one parameter of the relevant signal at the working frequency, namely, with reference to its phase, which directly depends on the depth of the flaw being detected, i.e., a crack in the tube. The crack depth equal to 60% of the tube wall thickness is regarded to be the critical one, at which a decision about withdrawing such a tube out from operation (blanking off) must be taken. However, since mechanical tensile rupture tests of heat-transfer tubes show the possibility of their further use with such flaws, the secondary parameter of the signal, namely, its amplitude, must be used for determining the blanking-off criterion. The signals produced by the standard flow-type transducers in response to flaws in the form of a longitudinal crack having the depth and length within the limits permitted by the relevant regulations were calculated using 3D finite-element modeling. Based on the obtained results, the values of the eddy-current signal amplitude were determined, which, together with the signal phase value, form a new amplitude-phase criterion for blanking off heat-transfer tubes. For confirming the effectiveness of this technique, the algorithm for revealing the signal indications satisfying the proposed amplitude-phase criterion was tested on real signals obtained from operational eddy-current examination of the state of steam generator heat-transfer tubes carried out within the framework of planned preventive repair.
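
    A minimal sketch of how such a two-parameter (amplitude-phase) plugging decision could be coded is given below; only the 60% depth limit comes from the text, while the phase-to-depth calibration and the amplitude threshold are placeholders that would, in practice, come from calibration standards and the mechanical rupture tests mentioned above.

```python
def plugging_decision(phase_deg, amplitude_v,
                      phase_to_depth, amp_threshold_v,
                      depth_limit_pct=60.0):
    """Amplitude-phase criterion for blanking off (plugging) a SG tube.

    phase_to_depth  -- calibration function mapping the working-frequency signal
                       phase to an estimated crack depth in % of wall thickness
                       (assumed to come from calibration standards).
    amp_threshold_v -- amplitude below which even a 'deep' indication is judged
                       structurally acceptable (placeholder value).
    """
    depth_pct = phase_to_depth(phase_deg)
    deep = depth_pct >= depth_limit_pct        # the former, phase-only criterion
    strong = amplitude_v >= amp_threshold_v    # the added amplitude condition
    return deep and strong                     # plug only if both conditions hold

# Illustrative use with a made-up linear calibration and threshold
def estimate_depth(phase_deg):
    return max(0.0, min(100.0, (phase_deg - 40.0) * 1.2))

print(plugging_decision(95.0, 1.8, estimate_depth, amp_threshold_v=1.0))  # True  -> plug
print(plugging_decision(95.0, 0.4, estimate_depth, amp_threshold_v=1.0))  # False -> keep in service
```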

  12. Component-based Discrete Event Simulation Using the Fractal Component Model

    OpenAIRE

    Dalle, Olivier

    2007-01-01

    In this paper we show that Fractal, a generic component model coming from the Component-Based Software Engineering (CBSE) community, meets most of the functional expectations identified so far in the simulation community for component-based modeling and simulation. We also demonstrate that Fractal offers additional features that have not yet been identified in the simulation community despite their potential usefulness. Eventually we describe our ongoing work on such a new simulation architec...

  13. Study of engine noise based on independent component analysis

    Institute of Scientific and Technical Information of China (English)

    HAO Zhi-yong; JIN Yan; YANG Chen

    2007-01-01

    Independent component analysis was applied to analyze the acoustic signals from a diesel engine. First, the basic principle of independent component analysis (ICA) is reviewed. The diesel engine acoustic signal was decomposed into several independent components (ICs); the Fourier transform and the continuous wavelet transform (CWT) were applied to analyze the independent components. Different noise sources of the diesel engine were separated based on the characteristics of the different components in the time-frequency domain.
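
    A minimal sketch of the separation step on synthetic mixtures is shown below (the actual engine recordings are not available here); it uses scikit-learn's FastICA followed by a spectrogram to characterise each independent component in the time-frequency domain. The sampling rate and the two toy sources are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import spectrogram

fs = 8000                                   # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)

# Two synthetic "sources" standing in for, e.g., combustion noise and a mechanical knock
s1 = np.sin(2 * np.pi * 120 * t)                                        # low-frequency tone
s2 = np.sign(np.sin(2 * np.pi * 13 * t)) * np.sin(2 * np.pi * 900 * t)  # impulsive, higher frequency
S = np.column_stack([s1, s2])

# Linear mixtures, as if recorded by two microphones near the engine
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T

# Separate the mixtures into independent components
ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(X)                  # shape (n_samples, 2)

# Inspect each IC in the time-frequency domain (the FFT/CWT step of the paper)
for i in range(ics.shape[1]):
    f, tt, Sxx = spectrogram(ics[:, i], fs=fs, nperseg=256)
    print(f"IC {i}: dominant frequency ~ {f[Sxx.mean(axis=1).argmax()]:.0f} Hz")
```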

  14. AN EVEN COMPONENT BASED FACE RECOGNITION METHOD

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a novel face recognition algorithm. To provide additional variations to the training data set, even-odd decomposition is adopted, and only the even components (half-even face images) are used for further processing. To tackle the shift-variance problem, the Fourier transform is applied to the half-even face images. To reduce the dimension of an image, PCA (Principal Component Analysis) features are extracted from the amplitude spectrum of the half-even face images. Finally, a nearest neighbor classifier is employed for the task of classification. Experimental results on the ORL database show that the proposed method outperforms, in terms of accuracy, the conventional eigenface method which applies PCA on original images and the eigenface method which uses both the original images and their mirror images as the training set.
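
    The pipeline described above lends itself to a compact sketch: form the even component of each image, take the FFT amplitude spectrum for shift invariance, reduce dimensionality with PCA and classify with a nearest-neighbour rule. The library choices (scikit-learn), the reflection convention and the number of components are illustrative assumptions, not necessarily those of the paper.

```python
# A minimal sketch of the even-component face recognition pipeline,
# assuming equally sized grayscale face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def even_component(img: np.ndarray) -> np.ndarray:
    """Even part of the image under point reflection about its centre."""
    return 0.5 * (img + img[::-1, ::-1])

def features(img: np.ndarray) -> np.ndarray:
    """FFT amplitude spectrum of the half-even image, flattened."""
    return np.abs(np.fft.fft2(even_component(img))).ravel()

def train(images, labels, n_components=40):
    X = np.stack([features(im) for im in images])
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), labels)
    return pca, clf

def predict(pca, clf, img):
    return clf.predict(pca.transform(features(img)[None, :]))[0]
```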

  15. VVER Knowledge Preservation and Transfer within the Frame of CORONA Project Activities

    Directory of Open Access Journals (Sweden)

    Mitev Mladen

    2016-01-01

    Full Text Available The CORONA project is funded by the European Commission under the FP7 programme with the overall objective to establish a Regional Centre of Competence for VVER Technology and Nuclear Applications. The Centre will provide support and services for preservation and transfer of VVER-related nuclear knowledge as well as know-how and capacity building. Specific training schemes aimed at nuclear professionals and researchers, non-nuclear professionals and students are developed and implemented in cooperation with local, national and international training and educational institutions. Pilot trainings are executed for each specific target group to assess the applicability of the training materials. The training scheme implemented for nuclear professionals and researchers is focussed on NPP Lifetime Management. The available knowledge on enhancing safety and performance of nuclear installations with VVER technology is used in the preparation of the training materials. The Online Multimedia Training Course on VVER Reactor Pressure Vessel Embrittlement and Integrity Assessment, developed by the joint effort of JRC-IET and IAEA, is used in the training. The feedback collected from the trainees showed that the tool meets its primary goal of consolidating the existing knowledge on VVER RPV Embrittlement and Integrity Assessment and provides an adequate basis for the transfer of this knowledge.

  16. VVER Knowledge Preservation and Transfer within the Frame of CORONA Project Activities

    Science.gov (United States)

    Mitev, Mladen; Corniani, Enrico; Manolova, Maria; Pironkov, Lybomir; Tsvetkov, Iskren

    2016-02-01

    The CORONA project is funded by the European Commission under the FP7 programme with the overall objective to establish a Regional Centre of Competence for VVER Technology and Nuclear Applications. The Centre will provide support and services for preservation and transfer of VVER-related nuclear knowledge as well as know-how and capacity building. Specific training schemes aimed at nuclear professionals and researchers, non-nuclear professionals and students are developed and implemented in cooperation with local, national and international training and educational institutions. Pilot trainings are executed for each specific target group to assess the applicability of the training materials. The training scheme implemented for nuclear professionals and researchers is focussed on NPP Lifetime Management. The available knowledge on enhancing safety and performance of nuclear installations with VVER technology is used in the preparation of the training materials. The Online Multimedia Training Course on VVER Reactor Pressure Vessel Embrittlement and Integrity Assessment, developed by the joint effort of JRC-IET and IAEA, is used in the training. The feedback collected from the trainees showed that the tool meets its primary goal of consolidating the existing knowledge on VVER RPV Embrittlement and Integrity Assessment and provides an adequate basis for the transfer of this knowledge.

  17. Face Recognition Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ali Javed

    2013-02-01

    Full Text Available The purpose of the proposed research work is to develop a computer system that can recognize a person by comparing the characteristics of the face to those of known individuals. The main focus is on frontal two-dimensional images taken in a controlled environment, i.e., with constant illumination and background. Other methods of identification and verification, such as iris or fingerprint scans, require high-quality and costly equipment, whereas face recognition requires only a normal camera giving a 2-D frontal image of the person to be recognized. The Principal Component Analysis technique has been used in the proposed face recognition system. The purpose is to compare the results of the technique under different conditions and to find the most efficient approach for developing a facial recognition system.
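
    A minimal eigenface-style sketch of the PCA approach is shown below, assuming equally sized grayscale frontal images; the number of components and the nearest-neighbour matching rule are illustrative choices, not necessarily those used in the paper.

```python
# Train a PCA (eigenface) subspace on a gallery and match a probe image to the
# nearest gallery projection. Image sizes and component counts are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def fit_eigenfaces(images, n_components=30):
    X = np.stack([im.ravel().astype(float) for im in images])
    pca = PCA(n_components=n_components, whiten=True).fit(X)
    return pca, pca.transform(X)                    # projections of the gallery images

def recognize(pca, gallery_proj, gallery_ids, probe):
    p = pca.transform(probe.ravel().astype(float)[None, :])
    distances = np.linalg.norm(gallery_proj - p, axis=1)
    return gallery_ids[int(distances.argmin())]     # nearest gallery identity
```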

  18. Evolution of weld metals nanostructure and properties under irradiation and recovery annealing of VVER-type reactors

    Science.gov (United States)

    Gurovich, B.; Kuleshova, E.; Shtrombakh, Ya.; Fedotova, S.; Zabusov, O.; Prikhodko, K.; Zhurko, D.

    2013-03-01

    The results of investigations of the VVER-440 weld metal Sv-10KhMFT and the VVER-1000 weld metal Sv-10KhGNMAA by transmission electron microscopy, scanning electron microscopy, Auger electron spectroscopy and mechanical tests are presented in this paper. Both types of weld metal, with different contents of impurities and alloying elements, were studied after irradiation to fast neutron (E > 0.5 MeV) fluences in a wide range below and beyond the design values, after recovery annealing procedures, and after re-irradiation following the annealing. The distinctive features of the embrittlement kinetics of VVER-440 and VVER-1000 RPV weld metals conditioned by their chemical composition differences were investigated. It is shown that the main contribution to radiation strengthening within the design fluence can be attributed to radiation-induced precipitates; on reaching the design or beyond-design values of fast neutron fluence, the main contribution to VVER-440 weld strengthening is made by radiation-induced dislocation loops, and in the case of VVER-1000 welds by radiation-induced precipitates and grain-boundary phosphorus segregation. Recovery annealing of VVER-440 welds at 475 °C for 100 h causes the disappearance of irradiation-induced defects and the transformation of copper-enriched precipitates into larger copper-rich precipitates with lower number density, and leads to an almost full recovery of mechanical properties, followed by a comparatively slow re-embrittlement rate. The recovery annealing temperature of VVER-1000 welds was higher, 565 °C for 100 h, to avoid temper brittleness. The annealing of VVER-1000 welds leads to an almost full recovery of mechanical properties due to the disappearance of irradiation-induced defects and a decrease in precipitate number density and grain-boundary segregation of phosphorus. The re-embrittlement rate of the VVER-1000 weld during subsequent re-irradiation is at least not higher than the initial rate.

  19. SAT-based verification for timed component connectors

    NARCIS (Netherlands)

    Kemper, S.

    2011-01-01

    Component-based software construction relies on suitable models underlying components, and in particular the coordinators which orchestrate component behaviour. Verifying correctness and safety of such systems amounts to model checking the underlying system model. The model checking techniques not o

  20. Independent component analysis based on adaptive artificial bee colony

    National Research Council Canada - National Science Library

    Shi Zhang; Chao-Wei Bao; Hai-Bin Shen

    2016-01-01

    .... An independent component analysis method based on adaptive artificial bee colony algorithm is proposed in this paper, aiming at the problems of slow convergence and low computational precision...

  1. Component-based event composition modeling for CPS

    Science.gov (United States)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

    In order to combine the event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize CPS’s event processing. Firstly, the formal representations of components and attribute-oriented events are defined. Every component consists of subcomponents and the corresponding event sets. The attribute “type” is added to the attribute-oriented event definition so as to describe its responsiveness to the component. Secondly, the component-based event composition model is constructed. A concept lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, the communication-based train control system is simulated to verify the event composition model. Results show that the event composition model we have constructed can be applied to express composite events correctly and effectively.
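
    A toy rendering of these definitions, with illustrative names only, might look as follows: components own event sets, each attribute-oriented event carries a "type", and composite events are sets of primitive events ordered by inclusion, the relation from which a Hasse diagram can be drawn. The paper's formal definitions are richer than this sketch.

```python
# Minimal sketch: components with event sets, typed events, and an inclusion
# order on composite events. Names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass(frozen=True)
class Event:
    name: str
    type: str            # e.g. "periodic" or "sporadic": responsiveness to the component

@dataclass
class Component:
    name: str
    events: FrozenSet[Event] = frozenset()
    subcomponents: List["Component"] = field(default_factory=list)

Composite = FrozenSet[Event]

def covers(a: Composite, b: Composite) -> bool:
    """a covers b (b sits directly below a in the Hasse diagram) when b is a
    proper subset of a differing by exactly one event."""
    return b < a and len(a - b) == 1

brake = Event("brake_cmd", "sporadic")
beacon = Event("beacon_msg", "periodic")
train_ctrl = Component("train_control", frozenset({brake, beacon}))
print(covers(frozenset({brake, beacon}), frozenset({brake})))   # True
```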

  2. Leveraging Component-Based Software Engineering with Fraclet

    OpenAIRE

    Rouvoy, Romain; Merle, Philippe

    2009-01-01

    International audience; Component-based software engineering has achieved wide acceptance in the domain of software engineering by improving productivity, reusability and composition. This success has also encouraged the emergence of a plethora of component models. Nevertheless, even if the abstract models of most of lightweight component models are quite similar, their programming models can still differ a lot. This drawback limits the reuse and composition of components implemented using di...

  3. Java Applications Development Based on Component and Metacomponent Approach

    OpenAIRE

    Danijel Radošević; Mario Konecki; Tihomir Orehovački

    2008-01-01

    Component based modeling offers a new and improved approach to the design, construction, implementation and evolution of software applications. This kind of software application development is usually represented by an appropriate component model/diagram. UML, for example, offers the component diagram for representing this kind of model. On the other hand, the use of metacomponents offers some new features which could hardly be achieved by using generic components. Firstly, implementation of ...

  4. Si-based RF MEMS components.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  5. FUZZY PRINCIPAL COMPONENT ANALYSIS AND ITS KERNEL BASED MODEL

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Principal Component Analysis (PCA) is one of the most important feature extraction methods, and Kernel Principal Component Analysis (KPCA) is a nonlinear extension of PCA based on kernel methods. In the real world, an input datum may not be fully assigned to one class and may partially belong to other classes. Based on the theory of fuzzy sets, this paper presents Fuzzy Principal Component Analysis (FPCA) and its nonlinear extension model, i.e., Kernel-based Fuzzy Principal Component Analysis (KFPCA). The experimental results indicate that the proposed algorithms have good performance.
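
    The weighted-covariance idea behind fuzzy PCA can be sketched as follows: each sample contributes to the covariance matrix in proportion to its fuzzy membership degree. The membership values and data below are placeholders, the kernel extension (KFPCA) is omitted, and the paper's exact formulation may differ.

```python
# Minimal fuzzy-weighted PCA sketch: membership-weighted mean and covariance,
# then an eigen-decomposition. Data and memberships are synthetic stand-ins.
import numpy as np

def fuzzy_pca(X: np.ndarray, u: np.ndarray, n_components: int):
    """X: (n_samples, n_features); u: fuzzy membership degrees in [0, 1]."""
    w = u / u.sum()
    mean = w @ X                                  # membership-weighted mean
    Xc = X - mean
    cov = (Xc * w[:, None]).T @ Xc                # membership-weighted covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order], vals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
u = rng.uniform(0.2, 1.0, size=200)               # assumed fuzzy class memberships
components, variances = fuzzy_pca(X, u, n_components=2)
```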

  6. Economical aspects of multiple plutonium and uranium recycling in VVER reactors

    Energy Technology Data Exchange (ETDEWEB)

    Alekseev, P.N.; Bobrov, E.A.; Dudnikov, A.A.; Teplov, P.S. [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation)

    2016-09-15

    The basic strategy of Russian nuclear energy development is the formation of a closed fuel cycle based on fast breeder and thermal reactors, as well as the solution of the problems of spent nuclear fuel accumulation and availability of resources. Three options of multiple Pu and U recycling in VVER reactors are considered in this work. A comparison of the MOX and REMIX fuel recycling approaches for a closed fuel cycle involving thermal reactors is presented. REMIX fuel is supposed to be fabricated from the non-separated mixture of uranium and plutonium obtained in spent fuel reprocessing, with further make-up by enriched U. These options make it possible to recycle the total amount of Pu and U obtained from spent fuel several times. The main difference is the full or partial loading of the core with assemblies containing recycled Pu. The third option presents the concept of a heterogeneous arrangement of fuel pins made of enriched uranium and MOX in one fuel assembly. It should be noted that fabrication of all fuel assemblies with Pu requires the use of expensive manufacturing technology. These three options of core loading can be balanced with respect to maximum Pu and U involvement in the fuel cycle. Various physical and economical aspects of Pu and U multiple recycling for the selected options are considered in this work.

  7. Generation and Testing of XS Libraries for VVER Using APOLLO2 and TRIPOLI4

    Science.gov (United States)

    Zheleva, Nonka; Petrov, Nikolay; Todorova, Galina; Kolev, Nikola

    2014-06-01

    MOC based calculation schemes with APOLLO2 were used to generate few-group cross-section libraries for VVER-1000 at the nodal and pin level. This paper presents an overview of the testing of the schemes and the libraries, as well as the computational aspects. Two major ameliorations are considered: application of new developments in APOLLO2 and multicore computation for an acceptable trade-off between accuracy and efficiency. Two-level Pij-MOC industrial calculation schemes were tested against TRIPOLI4 reference results. Benchmarking of the schemes shows that the higher-order linear surface method of characteristics (LS MOC) is an efficient option for cross-section library generation. There is a significant potential for further refinement of the MOC energy mesh and the MOC parameters with the progress in distributed computing. A multi-parameter cross-section library for MSLB analysis with homogenized nodes was tested in 2D core simulation with COBAYA3 vs. whole-core TRIPOLI4 solutions on the CEA CCRT HPC system. Pin-by-pin cross-sections and interface discontinuity factors of Black Box Homogenization type were tested in diffusion calculations with COBAYA3 pin-by-pin against transport reference solutions. Good agreement is displayed.

  8. Coupled neutronic core and subchannel analysis of nanofluids in VVER-1000 type reactor

    Energy Technology Data Exchange (ETDEWEB)

    Zarifi, Ehsan; Sepanloo, Kamran [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of). Reactor and Nuclear Safety School; Jahanfarnia, Golamreza [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Nuclear Engineering, Science and Research Branch

    2017-05-15

    This study aims to perform the coupled thermal-hydraulic/neutronic analysis of nanofluids as the coolant in the hot fuel assembly of the VVER-1000 reactor core. A water-based nanofluid containing various volume fractions of Al2O3 nanoparticles is analyzed. The WIMS and CITATION codes are used for neutronic simulation of the reactor core, calculating the neutron flux and thermal power distribution. In the thermal-hydraulic modeling, the porous media approach is used to analyze the thermal behavior of the reactor core, and subchannel analysis is used to calculate the thermal-hydraulic parameters of the hottest fuel assembly. The derived conservation equations for the coolant and the conduction heat transfer equation for the fuel and clad are discretized by the finite volume method and solved numerically using a Visual Fortran program. Finally, the analysis results for nanofluids and pure water are compared. The results show that at low concentration (0.1 percent volume fraction) alumina is the optimum nanoparticle for normal reactor operation.

  9. Exposure conditions of reactor internals of Rovno VVER-440 NPP units 1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Grytsenko, O.V.; Pugach, S.M.; Diemokhin, V.L.; Bukanov, V.N. [Inst. for Nuclear Research, Kyiv, 03680 (Ukraine); Marek, M.; Vandlik, S. [Nuclear Research Inst. Rez Plc., Rez, 25068 (Czech Republic)

    2011-07-01

    Results of the determination of irradiation conditions for the vessel internals of VVER-440 reactors No. 1 and 2 at the Rovno Nuclear Power Plant, obtained by specialists at the Inst. for Nuclear Research, Kyiv (Ukraine), and the Nuclear Research Inst. Rez (Czech Republic), are presented. To calculate neutron transport, detailed calculation models of these reactors were prepared. The distribution of neutron flux functionals on the surfaces of the VVER-440 reactor baffle and core barrel for different core loads was studied. Agreement between the results obtained by specialists at the Inst. for Nuclear Research and at the Nuclear Research Inst. Rez is shown. (authors)

  10. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes the component-based approach (CBA) for learning management system development. Learning objects as components of e-learning courses and their metadata are considered. The learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements and possibilities, are…

  11. Advances in resonance based NDT for ceramic components

    Science.gov (United States)

    Hunter, L. J.; Jauriqui, L. M.; Gatewood, G. D.; Sisneros, R.

    2012-05-01

    The application of resonance based non-destructive testing methods has been providing benefit to manufacturers of metal components in the automotive and aerospace industries for many years. Recent developments in resonance based technologies are now allowing the application of resonance NDT to ceramic components including turbine engine components, armor, and hybrid bearing rolling elements. Application of higher frequencies and advanced signal interpretation are now allowing Process Compensated Resonance Testing to detect both internal material defects and surface breaking cracks in a variety of ceramic components. Resonance techniques can also be applied to determine material properties of coupons and to evaluate process capability for new manufacturing methods.

  12. 3D face recognition algorithm based on detecting reliable components

    Institute of Scientific and Technical Information of China (English)

    Huang Wenjun; Zhou Xuebing; Niu Xiamu

    2007-01-01

    The Fisherfaces algorithm is a popular method for face recognition. However, there exist some unstable components that degrade recognition performance. In this paper, we propose a method based on detecting reliable components to overcome the problem and introduce it to 3D face recognition. The reliable components are detected within the binary feature vector, which is generated from the Fisherfaces feature vector based on statistical properties, and this is used for 3D face recognition as the final feature vector. Experimental results show that the reliable-components feature vector is much more effective than the Fisherfaces feature vector for face recognition.
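
    One way to picture the reliable-component idea is sketched below: the real-valued feature vector (here a stand-in for the Fisherfaces features) is binarised, and only the positions whose bit is stable across a subject's enrolment samples are retained for matching. The binarisation and stability rules are illustrative assumptions, not the paper's statistical criterion.

```python
# Minimal sketch: binarise feature vectors, keep only positions that agree
# across all enrolment samples, and score a probe on those positions.
import numpy as np

def binarise(features: np.ndarray) -> np.ndarray:
    """Binarise each feature vector against its own mean (rows = samples)."""
    return (features > features.mean(axis=-1, keepdims=True)).astype(np.uint8)

def enrol(subject_features: np.ndarray):
    """subject_features: (n_samples, n_dims) real-valued vectors for one subject.
    Returns the majority-vote bit template and the mask of reliable positions."""
    bits = binarise(subject_features)
    ones_fraction = bits.mean(axis=0)
    template = (ones_fraction >= 0.5).astype(np.uint8)
    reliable = np.maximum(ones_fraction, 1 - ones_fraction) == 1.0  # all samples agree
    return template, reliable

def match(template: np.ndarray, reliable: np.ndarray, probe_features: np.ndarray) -> float:
    probe_bits = binarise(probe_features[None, :])[0]
    return float((template[reliable] == probe_bits[reliable]).mean())
```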

  13. Component-based Control Software Design for Flexible Manufacturing System

    Institute of Scientific and Technical Information of China (English)

    周炳海; 奚立峰; 曹永上

    2003-01-01

    A new method for designing and implementing component-based, distributed and hierarchical flexible manufacturing control software is described in this paper. The proposed method aims at improving the flexibility and reliability of the control system. After describing the concepts of component-based software and distributed object technology, the architecture of the component-based control system software is proposed on the basis of the Common Object Request Broker Architecture (CORBA). A design method for a component-based, distributed and hierarchical flexible manufacturing control system is then proposed. Finally, to verify the software design method, a prototype flexible manufacturing control system software has been implemented in Orbix 2.3c and VC++ 6.0 and has been tested in connection with the physical flexible manufacturing shop at the WuXi Professional Institute.

  14. Optimization of Component Based Software Engineering Model Using Neural Network

    Directory of Open Access Journals (Sweden)

    Gaurav Kumar

    2014-10-01

    Full Text Available The goal of Component Based Software Engineering (CBSE) is to deliver high-quality, more reliable and more maintainable software systems in a shorter time and within a limited budget by reusing and combining existing quality components. A high-quality system can be achieved by using quality components, a framework and an integration process, each of which plays a significant role. Thus, the techniques and methods used for quality assurance and assessment of a component-based system differ from those of traditional software engineering methodology. In this paper, we present a model for optimizing Chidamber and Kemerer (CK) metric values of component-based software. A deep analysis of a series of CK metrics of the software component design patterns is carried out and metric values are drawn from them. Using an unsupervised neural network, the Self-Organizing Map, we propose a model that provides an optimized, reusability-based model for component-based software engineering that depends on CK metric values. Average, standard-deviation and optimized values for the CK metrics are compared and evaluated to show the optimized reusability of the component-based model.
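
    A compact self-organising-map sketch for clustering CK metric vectors (WMC, DIT, NOC, CBO, RFC, LCOM) is shown below; the grid size, learning schedule and synthetic data are illustrative assumptions rather than the configuration used in the paper.

```python
# Minimal NumPy self-organising map: each CK metric vector is mapped to its
# best matching unit and the neighbourhood of that unit is pulled towards it.
import numpy as np

def train_som(X, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.uniform(X.min(), X.max(), size=(rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)              # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)        # shrinking neighbourhood
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)          # best matching unit
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))                # neighbourhood kernel
            W += lr * h[..., None] * (x - W)
    return W

# Six CK metrics per component; synthetic stand-in data.
X = np.random.default_rng(1).normal(size=(100, 6))
som_weights = train_som(X)
```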

  15. Methods for reliability based design optimization of structural components

    OpenAIRE

    Dersjö, Tomas

    2012-01-01

    Cost and quality are key properties of a product, possibly even the two most important. One definition of quality is fitness for purpose. Load-bearing products, i.e. structural components, lose their fitness for purpose if they fail. Thus, the ability to withstand failure is a fundamental measure of quality for structural components. Reliability based design optimization (RBDO) is an approach for development of structural components which aims to minimize the cost while constraining the probabili...

  16. Methodology of Fuel Burn Up Fitting in VVER-1000 Reactor Core by Using New Ex-Vessel Neutron Dosimetry and In-Core Measurements and its Application for Routine Reactor Pressure Vessel Fluence Calculations

    Directory of Open Access Journals (Sweden)

    Borodkin Pavel

    2016-01-01

    Full Text Available The paper describes a new approach to fitting axial fuel burn-up patterns in peripheral fuel assemblies of VVER-1000 type reactors on the basis of ex-core neutron leakage measurements, neutron-physical calculations and in-core SPND measured data. The developed approach uses the results of new ex-vessel measurements on different power units over different reactor cycles, together with their uncertainties, to clarify the influence of a fitted fuel burn-up profile on the RPV neutron fluence calculations. The new methodology may be recommended for inclusion in the routine fluence calculations used in RPV lifetime management and may be taken into account during VVER-1000 core burn-up pattern correction.

  17. Methodology of Fuel Burn Up Fitting in VVER-1000 Reactor Core by Using New Ex-Vessel Neutron Dosimetry and In-Core Measurements and its Application for Routine Reactor Pressure Vessel Fluence Calculations

    Science.gov (United States)

    Borodkin, Pavel; Borodkin, Gennady; Khrennikov, Nikolay

    2016-02-01

    The paper describes a new approach to fitting axial fuel burn-up patterns in peripheral fuel assemblies of VVER-1000 type reactors on the basis of ex-core neutron leakage measurements, neutron-physical calculations and in-core SPND measured data. The developed approach uses the results of new ex-vessel measurements on different power units over different reactor cycles, together with their uncertainties, to clarify the influence of a fitted fuel burn-up profile on the RPV neutron fluence calculations. The new methodology may be recommended for inclusion in the routine fluence calculations used in RPV lifetime management and may be taken into account during VVER-1000 core burn-up pattern correction.

  18. Calculational Benchmark Problems for VVER-1000 Mixed Oxide Fuel Cycle

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, M.B.

    2000-03-17

    Standard problems were created to test the ability of American and Russian computational methods and data in the analysis of the storage and handling of Russian pressurized water reactor (VVER) mixed oxide fuel. Criticality safety and radiation shielding problems were analyzed. American and Russian multiplication factors for fresh fuel storage for low-enriched uranium (UOX), weapons-grade (MOX-W) and reactor-grade (MOX-R) MOX differ by less than 2% for all variations of water density. For shielding calculations for fresh fuel, the ORNL results for the neutron source differ from the Russian results by less than 1% for UOX and MOX-R and by approximately 3% for MOX-W. For shielding calculations for fresh fuel assemblies, neutron dose rates at the surface of the assemblies differ from the Russian results by 5% to 9%; the level of agreement for gamma dose varies depending on the type of fuel, with UOX differing by the largest amount. The use of different gamma group structures and instantaneous versus asymptotic decay assumptions also complicates the comparison. For the calculation of dose rates from spent fuel in a shipping cask, the neutron source for UOX after 3-year cooling is within 1% and for MOX-W within 5% of one of the Russian results, while the MOX-R difference is the largest at over 10%. These studies are a portion of the documentation required by the Russian nuclear regulatory authority, GAN, in order to certify Russian programs and data as being acceptably accurate for the analysis of mixed oxide fuels.

  19. Chemical composition effect on VVER-1000 RPV weld metal thermal aging

    Science.gov (United States)

    Gurovich, B. A.; Chernobaeva, A. A.; Erak, D. Yu; Kuleshova, E. A.; Zhurko, D. A.; Papina, V. B.; Skundin, M. A.; Maltsev, D. A.

    2015-10-01

    Temperature and fast neutron flux simultaneously affect the material of welded joints of reactor pressure vessels under irradiation. Understanding thermal aging effects on the weld metal allows for an explanation of the mechanisms that govern an increase in the ductile-to-brittle transition temperature of the reactor pressure vessel materials under long term irradiation at operation temperature. This paper reports on new results and reassessment of the VVER-1000 weld metal surveillance specimen database performed at the National Research Center "Kurchatov Institute". The current database of VVER-1000 weld metal thermal aging at 310-320 °C includes 50 transition temperature values with the maximum holding time of 208,896 h. The updated database completed with the information on intergranular fracture shear and phosphorous content in the grain boundaries has allowed us to propose a new mechanism of VVER-1000 weld materials thermal aging at 310-320 °C and develop models of ductile-to-brittle transition temperature shift for VVER-1000 weld metal during a long-term exposure at 310-320 °C.

  20. Conformity Between LR-0 Mock-Ups and VVERs NPP RPV Neutron Flux Attenuation

    Science.gov (United States)

    Belousov, Sergey; Ilieva, Krassimira; Kirilova, Desislava

    2009-08-01

    The conformity between the mock-up results and those for the reactor pressure vessel (RPV) of nuclear power plants (NPP) has been evaluated in order to determine whether the mock-up data could be used for benchmarking purposes only, or also for simulating the NPP irradiation conditions. Neutron transport through the vessel has been calculated by the three-dimensional discrete ordinates code TORT with the problem-oriented multigroup energy neutron cross-section library BGL. The neutron flux/fluence and the spectrum shape, represented by normalized group neutron fluxes in the multigroup energy structure for neutrons with energy above 0.5 MeV, have been used for the conformity analysis. It has been demonstrated that the relative difference of the attenuation factor as well as of the group neutron fluxes does not exceed 10% at all considered positions for VVER-440. For VVER-1000, the same consistency has been obtained, except for the location behind the RPV, where the neutron flux attenuation is 18% higher than the mock-up attenuation. It has been shown that this difference arises from the dissimilarity of the biological shielding. The obtained results have demonstrated that the VVER mock-ups are appropriate for simulating the NPP irradiation conditions. The mock-up results for VVER-1000 have to be applied more carefully, i.e., taking into account the peculiarity of the biological shielding and the azimuthal dependence of the RPV attenuation.

  1. Component

    Directory of Open Access Journals (Sweden)

    Tibor Tot

    2011-01-01

    Full Text Available A unique case of metaplastic breast carcinoma with an epithelial component showing tumoral necrosis and neuroectodermal stromal component is described. The tumor grew rapidly and measured 9 cm at the time of diagnosis. No lymph node metastases were present. The disease progressed rapidly and the patient died two years after the diagnosis from a hemorrhage caused by brain metastases. The morphology and phenotype of the tumor are described in detail and the differential diagnostic options are discussed.

  2. Implementing Quality Assurance Features in Component-based Software System

    National Research Council Canada - National Science Library

    Navdeep Batolar; Parminder Kaur

    2016-01-01

    The increasing demand for the component-based development approach (CBDA) gives software developers the opportunity to increase the speed of the software development process and lower its production cost...

  3. Water quality assessment using SVD-based principal component ...

    African Journals Online (AJOL)

    Water quality assessment using SVD-based principal component analysis of hydrological data. ... value decomposition (SVD) of hydrological data was tested for water quality assessment. ...

  4. High-extensible scene graph framework based on component techniques

    Institute of Scientific and Technical Information of China (English)

    LI Qi-cheng; WANG Guo-ping; ZHOU Feng

    2006-01-01

    In this paper, a novel component-based scene graph is proposed, in which all objects in the scene are classified into different entities, and a scene can be represented as a hierarchical graph composed of instances of entities. Each entity contains basic data and its operations, which are encapsulated into the entity component. The entity possesses certain behaviours which are responses to rules and interactions defined by the high-level application. Such behaviours can be described by scripts or behaviour models. The component-based scene graph in this paper is more abstract and higher-level than traditional scene graphs. The contents of a scene can be extended flexibly by adding new entities and new entity components, and behaviour modification can be obtained by modifying the model components or behaviour scripts. Its robustness and efficiency are verified by many examples implemented in the Virtual Scenario developed by Peking University.
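
    The entity/component structure described above can be sketched roughly as follows, with data and operations encapsulated in components attached to entities and events propagated down the scene graph; the class and event names are illustrative, not the Virtual Scenario API.

```python
# Minimal entity-component scene graph sketch: entities form a hierarchy,
# behaviour lives in components, and events are dispatched down the tree.
from __future__ import annotations
from dataclasses import dataclass, field

class Component:
    def update(self, entity: "Entity", event: str) -> None:
        pass

@dataclass
class Transform(Component):
    x: float = 0.0
    y: float = 0.0
    def update(self, entity, event):
        if event == "tick":
            self.x += 1.0                     # trivial behaviour: drift along x

@dataclass
class Entity:
    name: str
    components: list[Component] = field(default_factory=list)
    children: list["Entity"] = field(default_factory=list)

    def dispatch(self, event: str) -> None:
        for c in self.components:
            c.update(self, event)
        for child in self.children:           # propagate down the scene graph
            child.dispatch(event)

root = Entity("scene", children=[Entity("car", components=[Transform()])])
root.dispatch("tick")
```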

  5. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Cees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  6. Issues in Component-Based Development: Towards Specification with ADLs

    Directory of Open Access Journals (Sweden)

    Rafael González

    2006-10-01

    Full Text Available Software development has been accompanied by time and cost problems throughout history. This has motivated the search for flexible, trustworthy, time- and cost-efficient development. In order to achieve this, software reuse appears fundamental, and component-based development is the way towards reuse. This paper discusses the present state of component-based development and some of its critical issues for success, such as: the existence of adequate repositories, component integration within a software architecture, and an adequate specification. This paper suggests ADLs (Architecture Description Languages) as a possible means for this specification.

  7. VVER-440 Ex-Core Neutron Transport Calculations by MCNP-5 Code and Comparison with Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Borodkin, Pavel; Khrennikov, Nikolay [Scientific and Engineering Centre for Nuclear and Radiation Safety (SEC NRS) Malaya Krasnoselskaya ul., 2/8, bld. 5, 107140 Moscow (Russian Federation)

    2008-07-01

    Ex-core neutron transport calculations are needed to evaluate radiation loading parameters (neutron fluence, fluence rate and spectra) on the in-vessel equipment, reactor pressure vessel (RPV) and support structures of VVER type reactors. Because these parameters are used for reactor equipment lifetime assessment, neutron transport calculations should be carried out with precise and reliable calculation methods. In the case of RPVs, especially those of first-generation VVER-440s, the neutron fluence plays a key role in the prediction of RPV lifetime. Most VVER ex-core neutron transport calculations are performed by deterministic and Monte-Carlo methods. This paper deals with precise calculations of the Russian first-generation VVER-440 by the MCNP-5 code. The purpose of this work was the application of this code for expert calculations, verification of the results by comparison with deterministic calculations, and validation against neutron activation measured data. The deterministic discrete ordinates DORT code, widely used for RPV neutron dosimetry and tested many times against experiments, was used for the comparison analyses. Ex-vessel neutron activation measurements at the VVER-440 NPP have provided spatial (in azimuth and height) and neutron energy (different activation reactions) distribution data for experimental (E) validation of the calculated results. The calculational intercomparison (DORT vs. MCNP-5) and the comparison with measured values (MCNP-5 and DORT vs. E) have shown agreement within 10-15% for different spatial points and reaction rates. The paper discusses the results and draws conclusions about the practical use of the MCNP-5 code for ex-core neutron transport calculations in expert analysis. (authors)

  8. Software component composition based on ADL and Middleware

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    How to compose prefabricated components is a key issue in component-based reuse. Research on Software Architecture (SA) and Component-based Software Development (CBSD) provides two hopeful solutions from different perspectives. SA provides a top-down approach to realizing component-based reuse. However, it pays insufficient attention to the refinement and implementation of the architectural descriptions, and does not provide the necessary capability to automate the transformation or composition to form a final executable application. CBSD provides a bottom-up way by using existing middleware infrastructures. However, these technologies do not take into account the systematic methodology that can guide the CBSD process, especially the component composition at higher abstract levels. We argue that it is a natural solution to combine these two approaches.   In this paper, an architecture-based component composition approach is presented. In this way, SA description, using mapping rules and mini-tools to narrow the gap between design and implementation, is used as the blueprint and middleware technology as the runtime scaffold for component composition. Our approach presents an ADL, which supports user-defined connectors and has an extensible framework, to specify software architectures. To map a SA description into implementation, it is necessary to map it first to an OO design model described in UML, then to the final implementation. The architectural description can be mapped into source code or executable code by using some ORB conforming to CORBA standard. Also a toolkit is provided to support this approach efficiently.

  9. Evolution of the nanostructure of VVER-1000 RPV materials under neutron irradiation and post irradiation annealing

    Science.gov (United States)

    Miller, M. K.; Chernobaeva, A. A.; Shtrombakh, Y. I.; Russell, K. F.; Nanstad, R. K.; Erak, D. Y.; Zabusov, O. O.

    2009-04-01

    A high nickel VVER-1000 (15Kh2NMFAA) base metal (1.34 wt% Ni, 0.47% Mn, 0.29% Si and 0.05% Cu), and a high nickel (12Kh2N2MAA) weld metal (1.77 wt% Ni, 0.74% Mn, 0.26% Si and 0.07% Cu) have been characterized by atom probe tomography to determine the changes in the microstructure during neutron irradiation to high fluences. The base metal was studied in the unirradiated condition and after neutron irradiation to fluences between 2.4 and 14.9 × 10^23 m^-2 (E > 0.5 MeV), and the weld metal was studied in the unirradiated condition and after neutron irradiation to fluences between 2.4 and 11.5 × 10^23 m^-2 (E > 0.5 MeV). High number densities of ~2-nm-diameter Ni-, Si- and Mn-enriched nanoclusters were found in the neutron irradiated base and weld metals. No significant copper enrichment was associated with these nanoclusters and no copper-enriched precipitates were observed. The number densities of these nanoclusters correlate with the shifts in the ΔT_41J ductile-to-brittle transition temperature. These nanoclusters were present after a post irradiation anneal of 2 h at 450 °C, but had dissolved into the matrix after 24 h at 450 °C. Phosphorus, nickel, silicon and to a lesser extent manganese were found to be segregated to the dislocations.

  10. Least Dependent Component Analysis Based on Mutual Information

    CERN Document Server

    Stögbauer, H; Astakhov, S A; Grassberger, P; St\\"ogbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter

    2004-01-01

    We propose to use precise estimators of mutual information (MI) to find least dependent components in a linearly mixed signal. On the one hand this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand it has the advantage, compared to other implementations of `independent' component analysis (ICA) some of which are based on crude approximations for MI, that the numerical values of the MI can be used for: (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output, by comparing the pairwise MIs with those of re-mixed components; (iii) clustering the output according to the residual interdependencies. For the MI estimator we use a recently proposed k-nearest neighbor based algorithm. For time sequences we combine this with delay embedding, in order to take into account non-trivial time correlations. After several tests with artificial data, we apply the resulting MILCA (Mutual Information based ...
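
    As a rough illustration of point (i), the sketch below ranks residual dependence between separated components with a k-nearest-neighbour MI estimator; scikit-learn's mutual_info_regression (a Kraskov-style estimator) is used here only as a stand-in for the MILCA estimator, and the data are synthetic.

```python
# Estimate pairwise mutual information between output components to flag
# residual dependencies after a (hypothetical) separation step.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def pairwise_mi(components: np.ndarray, k: int = 3) -> np.ndarray:
    """components: (n_samples, n_components); returns a symmetric MI matrix in nats."""
    n = components.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info_regression(
                components[:, [i]], components[:, j], n_neighbors=k
            )[0]
    return mi

rng = np.random.default_rng(0)
ics = rng.normal(size=(2000, 3))
ics[:, 2] = 0.7 * ics[:, 0] + 0.3 * ics[:, 2]     # inject a residual dependency
print(np.round(pairwise_mi(ics), 3))               # the (0, 2) entry stands out
```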

  11. Java Applications Development Based on Component and Metacomponent Approach

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2008-12-01

    Full Text Available Component based modeling offers a new and improved approach to the design, construction, implementation and evolution of software applications. This kind of software application development is usually represented by an appropriate component model/diagram. UML, for example, offers the component diagram for representing this kind of model. On the other hand, the use of metacomponents offers some new features which could hardly be achieved by using generic components. Firstly, it allows the implementation of program properties that are dispersed across different classes and other program units, i.e. aspects. This implies using an automated process of assembling components and their interconnection for building applications, according to the model offered in this paper, which also supports the use of generic components. The benefits of this hybrid process are higher flexibility achieved by the automated connection process, optimization through selective feature inclusion, and easier application maintenance and development. In this paper we offer an approach to application development based on a hybrid component/metacomponent model. The component model is given by UML diagrams, while the metacomponent model is given by a generator scripting model. We explain this hybrid approach using an example of Java Web application development.

  12. Investigation on Supply Chain Management Based on Component Configuration

    Institute of Scientific and Technical Information of China (English)

    张洁; 陈淮莉; 马登哲

    2004-01-01

    In moving from the supply-push mode to the demand-pull mode, SCM systems face four main points: (1) real-time visibility that covers the whole supply chain, (2) agility in the choice of supply and source, (3) response to diverse customer demands and short delivery deadlines, and (4) rapid introduction of new products following market trends and new designs. Component-based SCM has become a hot spot in research. A multi-layer framework is set up, including a database server layer, an application server layer, a kernel component layer and a user interface layer. Some function components are designed, namely optimal planning arithmetic components, controller components and evaluation index components, in order to suit both discrete and continuous manufacturing. This paper studies a three-dimensional SCM configuration method based on the types of enterprise, manufacturing and products, provides powerful tools for SCM system implementations, and adopts an object-oriented technology to construct a component-based distributed information system to assure the right time, right materials, right place, right quantity and right customers.

  13. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  14. Component-based Systems Reconfigurations Using Graph Grammars

    Directory of Open Access Journals (Sweden)

    O. Kouchnarenko

    2016-01-01

    Full Text Available Dynamic reconfigurations can modify the architecture of component-based systems without incurring any system downtime. In this context, the main contribution of the present article is the establishment of correctness results proving component-based systems reconfigurations using graph grammars. New guarded reconfigurations allow us to build reconfigurations based on primitive reconfiguration operations using sequences of reconfigurations and the alternative and the repetitive constructs, while preserving configuration consistency. A practical contribution consists of the implementation of a component-based model using the GROOVE graph transformation tool. Then, after enriching the model with interpreted configurations and reconfigurations in a consistency compatible manner, a simulation relation is exploited to validate component systems’ implementations. This sound implementation is illustrated on a cloud-based multitier application hosting environment managed as a component-based system.

  15. Component-based integration of chemistry and optimization software.

    Science.gov (United States)

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  16. Reo: A Channel-based Coordination Model for Component Composition

    NARCIS (Netherlands)

    Arbab, F.

    2004-01-01

    In this paper, we present Reo, which forms a paradigm for composition of software components based on the notion of mobile channels. Reo is a channel-based exogenous coordination model in which complex coordinators, called connectors, are compositionally built out of simpler ones. The simplest conne

  17. A channel-based coordination model for component composition

    NARCIS (Netherlands)

    Arbab, F.

    2002-01-01

    In this paper, we present ρεω, a paradigm for composition of software components based on the notion of mobile channels. ρεω is a channel-based exogenous coordination model wherein complex coordinators, called connectors, are compositionally built out of simpler ones.

  18. Isocyanide based multi component reactions in combinatorial chemistry.

    NARCIS (Netherlands)

    Dömling, A.

    1998-01-01

    Although usually regarded as a recent development, the combinatorial approach to the synthesis of libraries of new drug candidates was first described as early as 1961 using the isocyanide-based one-pot multicomponent Ugi reaction. Isocyanide-based multi component reactions (MCR's) markedly differ f

  19. Isocyanide based multi component reactions in combinatorial chemistry.

    NARCIS (Netherlands)

    Dömling, A.

    1998-01-01

    Although usually regarded as a recent development, the combinatorial approach to the synthesis of libraries of new drug candidates was first described as early as 1961 using the isocyanide-based one-pot multicomponent Ugi reaction. Isocyanide-based multi component reactions (MCR's) markedly differ

  20. CCPA: Component-based communication protocol architecture for embedded systems

    Institute of Scientific and Technical Information of China (English)

    DAI Hong-jun; CHEN Tian-zhou; CHEN Chun

    2005-01-01

    To meet the increased and varied communication requirements of modern applications on embedded systems, general-purpose protocol stacks and protocol models are not efficient because they are fixed to execute in a static mode. We present the Component-Based Communication Protocol Architecture (CCPA) to make communication dynamic and configurable. It allows customized components to be developed, tested and stored for flexible reuse. The protocols are implemented by component assembly and supported by configurable environments. This leads to smaller memory footprints, more flexibility, better reconfiguration ability, better concurrency, and multiple data channel support.

  1. Secure Wireless Embedded Systems Via Component-based Design

    DEFF Research Database (Denmark)

    Hjorth, Theis S.; Torbensen, R.

    2010-01-01

    This paper introduces the method secure-by-design as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates design of secure applications through verified, reusable software. Following this method we propose a security framework with a secure...... communication component for distributed wireless embedded devices. The components communicate using the Secure Embedded Exchange Protocol (SEEP), which has been designed for flexible trust establishment so that small, resource-constrained, wireless embedded systems are able to communicate short command messages...

  2. Secure wireless embedded systems via component-based design

    DEFF Research Database (Denmark)

    Hjorth, T.; Torbensen, R.

    2010-01-01

    This paper introduces the method secure-by-design as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates design of secure applications through verified, reusable software. Following this method we propose a security framework with a secure...... communication component for distributed wireless embedded devices. The components communicate using the Secure Embedded Exchange Protocol (SEEP), which has been designed for flexible trust establishment so that small, resource-constrained, wireless embedded systems are able to communicate short command messages...

  3. Algorithms for Synthesizing Priorities in Component-based Systems

    CERN Document Server

    Cheng, Chih-Hong; Chen, Yu-Fang; Yan, Rongjie; Jobstmann, Barbara; Ruess, Harald; Buckl, Christian; Knoll, Alois

    2011-01-01

    We present algorithms to synthesize component-based systems that are safe and deadlock-free using priorities, which define stateless-precedence between enabled actions. Our core method combines the concept of fault-localization (using safety-game) and fault-repair (using SAT for conflict resolution). For complex systems, we propose three complementary methods as preprocessing steps for priority synthesis, namely (a) data abstraction to reduce component complexities, (b) alphabet abstraction and #-deadlock to ignore components, and (c) automated assumption learning for compositional priority synthesis.

  4. The Design of PSB-VVER Experiments Relevant to Accident Management

    Science.gov (United States)

    Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    Experimental programs carried out in integral test facilities are relevant for validating best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well designed experiments. It consists of comparing measured and calculated parameters and determining whether a computer code has an adequate capability to predict the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the EC-financed TACIS 2.03/97 Contract 3.03.03 Part A (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix and ending with the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of the thermal losses to the environment. This issue particularly affects small-scale facilities and has a bearing on the scaling approach related to the power and volume of the facility.

  5. Recommendations on selecting the closing relations for calculating friction pressure drop in the loops of nuclear power stations equipped with VVER reactors

    Science.gov (United States)

    Alipchenkov, V. M.; Belikov, V. V.; Davydov, A. V.; Emel'yanov, D. A.; Mosunova, N. A.

    2013-05-01

    Closing relations describing friction pressure drop in two-phase flows, which are widely applied in thermal-hydraulic codes and in calculations of the water coolant flow parameters in the loops of reactor installations at nuclear power stations and in other thermal power systems, are reviewed. A new formula developed by the authors of this paper is proposed. The above-mentioned relations are implemented in the HYDRA-IBRAE thermal-hydraulic computation code developed at the Nuclear Safety Institute of the Russian Academy of Sciences. A series of verification calculations is carried out for a wide range of pressures, flow rates, and heat fluxes typical for transient and emergency operating conditions of nuclear power stations equipped with VVER reactors. Advantages and shortcomings of the different closing relations are revealed, and recommendations for using them in thermal-hydraulic calculations of coolant flow in the loops of VVER-based nuclear power stations are given.
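
    For orientation, the sketch below evaluates one of the simplest closing relations of this kind, the homogeneous equilibrium model with a Blasius friction factor; it is given purely as an example of the class of correlation being reviewed and is not the new relation proposed by the authors or the model implemented in HYDRA-IBRAE. The property values in the example are rough estimates.

```python
# Homogeneous equilibrium model for the two-phase friction pressure gradient:
# dp/dz = 2 f G^2 / (D * rho_h), with a Fanning friction factor f = 0.079 Re^-0.25.
def homogeneous_friction_gradient(G, x, rho_l, rho_g, mu_l, D):
    """dp/dz [Pa/m] for mass flux G [kg/(m^2 s)], quality x, tube diameter D [m]."""
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous mixture density
    Re = G * D / mu_l                                # Reynolds number (liquid viscosity)
    f = 0.079 * Re ** -0.25                          # Blasius correlation (Fanning form)
    return 2.0 * f * G ** 2 / (D * rho_h)

# Example: saturated water/steam at roughly 7 MPa in a 9 mm channel (approximate properties).
print(homogeneous_friction_gradient(G=1000.0, x=0.1, rho_l=740.0, rho_g=36.5,
                                    mu_l=9.0e-5, D=0.009))
```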

  6. Modeling Component-based Bragg gratings Application: tunable lasers

    Directory of Open Access Journals (Sweden)

    Hedara Rachida

    2011-09-01

    Full Text Available The principal function of a Bragg grating is filtering, and it can be used in optical-fiber-based components and in active or passive semiconductor-based components, as well as in telecommunication systems. Their ideal use is with fiber lasers, fiber amplifiers or laser diodes. In this work, we present the principal results obtained during the analysis of various types of Bragg gratings by the coupled-mode method. We then present the operation of tunable DBR lasers. The use of Bragg gratings in a laser provides single-mode, wavelength-agile sources. The use of sampled gratings increases the tuning range.
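
    A minimal coupled-mode-theory example: for a uniform grating the peak reflectivity is R = tanh²(κL), with the coupling coefficient estimated here as κ ≈ πΔn/λ_B (overlap factor omitted). The numerical values are illustrative and the simplification is an assumption, not the full analysis carried out in the paper.

```python
# Peak reflectivity of a uniform fibre Bragg grating from coupled-mode theory.
import numpy as np

def peak_reflectivity(dn: float, length_m: float, bragg_wavelength_m: float) -> float:
    kappa = np.pi * dn / bragg_wavelength_m          # coupling coefficient [1/m], overlap ~1 assumed
    return float(np.tanh(kappa * length_m) ** 2)     # R = tanh^2(kappa * L)

# Example: 1 cm grating, index modulation 1e-4, Bragg wavelength 1550 nm.
print(peak_reflectivity(dn=1e-4, length_m=0.01, bragg_wavelength_m=1550e-9))
```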

  7. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database system, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04, Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  8. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive yet feasible approach to monitoring system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter...... is a reconfigurable component processing a data structure, representing the state transition diagram of a non-deterministic state machine, i.e. a Buchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns...
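
    A much-simplified sketch of the observer idea follows; the names are hypothetical and the property is a simple bounded-response check evaluated with a counter, rather than the full LTL-to-Buchi-automaton Temporal Evaluator described in the paper.

```python
# Hypothetical Predicate Evaluator: turns a raw sample into atomic propositions.
PE = lambda sample: {"alarm": sample["p"] > 10.0, "ok": sample["p"] <= 10.0}

class TemporalEvaluator:
    """Monitors: after an alarm, 'ok' must hold within `deadline` samples."""

    def __init__(self, deadline=3):
        self.deadline = deadline
        self.counter = None          # None = no pending obligation
        self.violated = False

    def step(self, props):
        if props["alarm"] and self.counter is None:
            self.counter = self.deadline          # obligation starts
        if self.counter is not None:
            if props["ok"]:
                self.counter = None               # obligation discharged
            else:
                self.counter -= 1
                if self.counter < 0:
                    self.violated = True          # property violated

te = TemporalEvaluator()
for sample in [{"p": 12.0}, {"p": 11.5}, {"p": 11.0}, {"p": 10.5}, {"p": 9.0}]:
    te.step(PE(sample))
print("property violated:", te.violated)
```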

  9. Context sensitivity and ambiguity in component-based systems design

    Energy Technology Data Exchange (ETDEWEB)

    Bespalko, S.J.; Sindt, A.

    1997-10-01

    Designers of component-based, real-time systems need to guarantee the correctness of software and its output. The complexity of a system, and thus the propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states which allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or are inherently ambiguous.

  10. Maximum flow-based resilience analysis: From component to system

    Science.gov (United States)

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant capacities of components can improve the system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135
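
    A minimal sketch of the kind of calculation involved is given below. It uses one common form of Zobel's predicted-resilience measure and a naive series/parallel capacity aggregation; it is not the analytic model derived in the paper, and all numbers are illustrative.

```python
# Zobel's predicted resilience for a single disruption:
# R = 1 - X*T / (2*T_star), with X the fraction of performance lost,
# T the recovery time, and T_star the assumed time horizon.

def zobel_resilience(X, T, T_star=100.0):
    return 1.0 - (X * T) / (2.0 * T_star)

def series_capacity(capacities):      # flow limited by the weakest component
    return min(capacities)

def parallel_capacity(capacities):    # flows add up across parallel paths
    return sum(capacities)

nominal = parallel_capacity([5.0, 5.0])
disrupted = parallel_capacity([5.0, 0.0])         # one of two parallel lines lost
loss_fraction = 1.0 - disrupted / nominal
print(zobel_resilience(X=loss_fraction, T=10.0))  # 0.975 for these made-up values
```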

  11. Can Component/Service-Based Systems Be Proved Correct?

    CERN Document Server

    Attiogbe, Christian

    2009-01-01

    Component-oriented and service-oriented approaches have gained strong enthusiasm in industry and academia, with particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider, and used to build other applications within which it is integrated. The service concept and its use in web-based application development have a huge impact on reuse practices. Accordingly, a considerable part of software architectures is influenced; these architectures are moving towards service-oriented architectures. Therefore applications (re)use services that are available elsewhere, and many applications interact, without knowing each other, using services available via service servers and their published interfaces and functionalities. Industries propose, through various consortia, languages, technologies and standards. More academic work is also undertaken concerning the semantics and formalisation of component- and service-based systems. We consider...

  12. Nominal and Structural Subtyping in Component-Based Programming

    DEFF Research Database (Denmark)

    Ostermann, Klaus

    2007-01-01

    type. We analyze structural and different flavors of nominal subtyping from the perspective of component-based programming, where issues such as blame assignment and modular extensibility are important. Our analysis puts various existing subtyping mechanisms into a common frame of reference...

  13. Management of Globally Distributed Component-Based Software Development Projects

    NARCIS (Netherlands)

    J. Kotlarsky (Julia)

    2005-01-01

    textabstractGlobally Distributed Component-Based Development (GD CBD) is expected to become a promising area, as increasing numbers of companies are setting up software development in a globally distributed environment and at the same time are adopting CBD methodologies. Being an emerging area, the

  14. RECEIVE ANTENNA SUBSET SELECTION BASED ON ORTHOGONAL COMPONENTS

    Institute of Scientific and Technical Information of China (English)

    Lan Peng; Liu Ju; Gu Bo; Zhang Wei

    2007-01-01

    A new receive antenna subset selection algorithm with low complexity for wireless Multiple-Input Multiple-Output (MIMO) systems is proposed, which is based on the orthogonal components of the channel matrix. Larger capacity is achieved compared with the existing antenna selection methods. Simulation results of quasi-static flat fading channel demonstrate the significant performance of the proposed selection algorithm.
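
    For orientation only, the sketch below shows the brute-force, capacity-based selection that such low-complexity algorithms are compared against; it is not the proposed orthogonal-component criterion, and the channel matrix is randomly generated for illustration.

```python
import itertools
import numpy as np

# Capacity of a MIMO channel H with equal power allocation at SNR `snr`.
def capacity(H, snr):
    nt = H.shape[1]
    gram = np.eye(H.shape[0]) + (snr / nt) * H @ H.conj().T
    return np.log2(np.linalg.det(gram)).real

rng = np.random.default_rng(1)
# 4 receive antennas, 2 transmit antennas, Rayleigh-like entries (illustrative)
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)

# Exhaustively pick the 2 receive antennas that maximize capacity
best = max(itertools.combinations(range(4), 2),
           key=lambda rows: capacity(H[list(rows)], snr=10.0))
print("selected receive antennas:", best)
```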

  15. Reliability-Based Design of Wind Turbine Components

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2010-01-01

    wind turbine design a deterministic design approach based on partial safety factors is normally used. In the present paper a numerical example demonstrates how information from tests with wind turbine blades can be used to establish a probabilistic basis for reliabilitybased design. It is also......Application of reliability-based design for wind turbines requires a definition of the probabilistic basis for the individual components of the wind turbine. In the present paper reliability-based design of structural wind turbine components is considered. A framework for the uncertainties which...... demonstrated how partial safety factors can be derived for reliability-based design and how the partial safety factors changes dependent on the uncertainty in the test results....

  16. Assessment of computer codes for VVER-440/213-type nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)

    1995-09-01

    Nuclear power plants of the VVER-440/213 type designed in the former USSR have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system is expected to differ from PWR system behaviour. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.

  17. Generation of XS library for the reflector of VVER reactor core using Monte Carlo code Serpent

    Science.gov (United States)

    Usheva, K. I.; Kuten, S. A.; Khruschinsky, A. A.; Babichev, L. F.

    2017-01-01

    A physical model of the radial and axial reflectors of a VVER-1200-like reactor core has been developed. Five types of radial reflector with different material compositions exist for the VVER reactor core, and 1D and 2D models were developed for all of them. The axial top and bottom reflectors are described by the 1D model. A two-group XS library for the diffusion code DYN3D has been generated for all types of reflectors using the Serpent 2 Monte Carlo code. The power distribution in the reactor core calculated by DYN3D is flattened in the core central region to a greater extent with the 2D model of the radial reflector than with its 1D model.

  18. Estimation of Control Rod Worth in a VVER-1000 Reactor using DRAGON4 and DONJON4

    Directory of Open Access Journals (Sweden)

    Saadatian-derakhshandeh Farahnaz

    2014-07-01

    One of the main issues in the design of safety and control systems of power and research reactors is to prevent accidents or reduce the imposed hazard. Control rod worth plays an important role in the safety and control of reactors. In this paper, we developed an approach called D4D4, which utilizes DRAGON4 and DONJON4, to estimate the control rod worth of a VVER-1000 reactor; it enables best-estimate analysis and reduces conservatism. The results are compared with WIMS-D4/CITATION to show the effectiveness and superiority of the developed package in predicting the reactivity worth of the rods and also other reactor physics parameters of the VVER-1000 reactor. The results of this study are in good agreement with the plant's FSAR.
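
    For readers unfamiliar with the quantity being estimated, a minimal sketch of static rod worth obtained from two eigenvalue solutions (e.g. full-core calculations with the rod withdrawn and inserted) is given below; the convention and the numbers are assumed for illustration and are not taken from the paper or the FSAR.

```python
# Static control rod worth in pcm from two multiplication factors.

def rod_worth_pcm(k_out, k_in):
    rho_out = (k_out - 1.0) / k_out      # reactivity with the rod withdrawn
    rho_in = (k_in - 1.0) / k_in         # reactivity with the rod inserted
    return (rho_out - rho_in) * 1.0e5    # worth expressed in pcm

print(rod_worth_pcm(k_out=1.00250, k_in=0.99120))   # ~1137 pcm for these values
```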

  19. High energy pipe line break postulations and their mitigation - examples for VVER nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Zdarek, J.; Pecinka, L.; Kadecka, P.; Dotrel, J. [Nuclear Res. Inst., Rez (Czech Republic)

    1998-11-01

    The concept of and proposals for the protection and reinforcement of equipment against the effects of postulated ruptures of high-energy piping in VVER plants are presented. The most recent version of the US NRC Guidelines has been used. The development of the legislation, the basic approach and the selection of criteria for the assessment of high-energy piping rupture provide the basis for the application of the separation concept in the overall safety philosophy. (orig.)

  20. Floor response spectra for seismic qualification of Kozloduy VVER 440-230 NPP

    Energy Technology Data Exchange (ETDEWEB)

    Kostov, M.K. [Bulgarian Academy of Sciences, Sofia (BG). Central Lab. for Seismic Mechanics and Earthquake Engineering; Ma, D.C. [Argonne National Lab., IL (United States); Prato, C.A. [Univ. of Cordoba (AR); Stevenson, J.D. [Stevenson and Associates, Cleveland, OH (US)

    1993-08-01

    In this paper the floor response spectra generation methodology for Kozloduy NPP, Units 1-2, of the VVER 440-230 type is presented. 2D coupled soil-structure interaction models are used, combined with a simplified correction of the final results to account for torsional effects. Both the time-history and the direct approach to in-structure spectra generation are used, and the results are discussed.

  1. End-to-end calculation of the radiation characteristics of VVER-1000 spent fuel assemblies

    Science.gov (United States)

    Linge, I. I.; Mitenkova, E. F.; Novikov, N. V.

    2012-12-01

    The results of end-to-end calculation of the radiation characteristics of VVER-1000 spent nuclear fuel are presented. Details of formation of neutron and gamma-radiation sources are analyzed. Distributed sources of different types of radiation are considered. A comparative analysis of calculated radiation characteristics is performed with the use of nuclear data from different ENDF/B and EAF files and ANSI/ANS and ICRP standards.

  2. 3D analysis of the reactivity insertion accident in VVER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Abdullayev, A. M.; Zhukov, A. I.; Slyeptsov, S. M. [NSC Kharkov Inst. for Physics and Technology, 1, Akademicheskaya Str., Kharkov 61108 (Ukraine)

    2012-07-01

    Fuel parameters such as peak enthalpy and temperature during a rod ejection accident are calculated. The calculations are performed with the 3D neutron kinetics code NESTLE and the 3D thermal-hydraulic code VIPRE-W. Both hot zero power and hot full power cases were studied for an equilibrium cycle with Westinghouse hex fuel in a VVER-1000. It is shown that the use of the 3D methodology can significantly increase safety margins with respect to current criteria and meet future criteria. (authors)

  3. Irradiation capabilities of LR-0 reactor with VVER-1000 Mock-Up core.

    Science.gov (United States)

    Košťál, Michal; Rypar, Vojtěch; Svadlenková, Marie; Cvachovec, František; Jánský, Bohumil; Milčák, Ján

    2013-12-01

    Even low power reactors, such as zero power reactors, are sufficient for semiconductor radiation hardness effect investigation. This reflects the fact that fluxes necessary for affecting semiconductor electrical resistance are much lower than fluxes necessary to affect material parameters. The paper aims to describe the irradiation possibilities of the LR-0 reactor with a special core arrangement corresponding to VVER-1000 dosimetry Mock-Up.

  4. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  5. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive yet feasible approach to monitoring system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter...... specified properties via simulation. The presented method has been experimentally validated in an industrial case study---a control system for a safety-critical medical ventilator unit....

  6. Lifting a Butterfly – A Component-Based FFT

    Directory of Open Access Journals (Sweden)

    Sibylle Schupp

    2003-01-01

    While modern software engineering, with good reason, tries to establish the idea of reusability and the principles of parameterization and loosely coupled components even for the design of performance-critical software, Fast Fourier Transforms (FFTs) tend to be monolithic and of a very low degree of parameterization. The data structures to hold the input and output data, the element type of these data, the algorithm for computing the so-called twiddle factors, the storage model for a given set of twiddle factors, all are unchangeably defined in the so-called butterfly, restricting its reuse almost entirely. This paper shows a way to a component-based FFT by designing a parameterized butterfly. Based on the technique of lifting, this parameterization includes algorithmic and implementation issues without violating the complexity guarantees of an FFT. The paper demonstrates the lifting process for the Gentleman-Sande butterfly, i.e., the butterfly that underlies the large class of decimation-in-frequency (DIF) FFTs, shows the resulting components, and summarizes the implementation of a component-based, generic DIF library in C++.
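
    A rough sketch of the parameterized-butterfly idea, transposed from C++ to Python for brevity, is shown below: the Gentleman-Sande butterfly is written once and the twiddle-factor policy is passed in as a component. This is only an illustration of the concept, not the generic library described in the paper.

```python
import cmath

def default_twiddle(k, n):
    # Standard twiddle factor; alternative policies (e.g. precomputed tables)
    # could be plugged in here without touching the butterfly itself.
    return cmath.exp(-2j * cmath.pi * k / n)

def dif_fft(x, twiddle=default_twiddle):
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    top, bottom = [], []
    for k in range(half):
        a, b = x[k], x[k + half]
        top.append(a + b)                        # Gentleman-Sande butterfly: sum
        bottom.append((a - b) * twiddle(k, n))   # twiddled difference
    even, odd = dif_fft(top, twiddle), dif_fft(bottom, twiddle)
    out = [0j] * n
    out[0::2], out[1::2] = even, odd             # even outputs from top, odd from bottom
    return out

print([round(abs(v), 6) for v in dif_fft([1, 1, 1, 1, 0, 0, 0, 0])])
```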

  7. The power distribution and neutron fluence measurements and calculations in the VVER-1000 Mock-Up on the LR-0 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kostal, M.; Juricek, V.; Rypar, V.; Svadlenkova, M. [Research Center Rez Ltd., 250 68 Husinec-Rez 130 (Czech Republic); Cvachovec, F. [Univ. of Defence, Kounicova 65, 662 10 Brno (Czech Republic)

    2011-07-01

    The power density distribution in a reactor has a significant influence on the mechanical resistance of the core structures and pressure vessel, as well as on the physical characteristics of the nuclear fuel. This quantity also affects the leakage neutron and photon field. This issue has become of increasing importance, as it touches on topical questions of VVER nuclear power plant lifetime extension. This paper presents a comparison of calculated and experimentally determined pin-by-pin power distributions. The calculations have been performed with deterministic and Monte Carlo approaches. They are accompanied by calculations and measurements of the neutron and photon flux density at different points of the light-water zero-power (LR-0) research reactor mock-up core, the reactor built-in components (core barrel), and the reactor pressure vessel model. The effect of the different data libraries used for the calculations is discussed. (authors)

  8. Issues of intergranular embrittlement of VVER-type nuclear reactors pressure vessel materials

    Science.gov (United States)

    Zabusov, O.

    2016-04-01

    In light of the worldwide tendency towards extension of the service life of operating nuclear power plants - VVER-type in the first place - special attention has recently been concentrated on phenomena taking place in reactor pressure vessel materials that can lead to an increased level of degradation of mechanical characteristics (resistance to brittle fracture) during long-term operation. Formerly, the hardening mechanism of degradation (increase in the yield strength under the influence of irradiation) had mainly been taken into consideration to assess pressure vessel service life limitations, but when extending the service life up to 60 years and more, the non-hardening mechanism (intergranular embrittlement of the steels) must be taken into account as well. In this connection NRC “Kurchatov Institute” has initiated a number of works investigating the contribution of this mechanism to the total embrittlement of reactor pressure vessel steels. The main results of these investigations are described in this article. Results of grain-boundary phosphorus concentration measurements in specimens made of first-generation VVER-type pressure vessel materials as well as VVER-1000 surveillance specimens are presented. An assessment of the non-hardening mechanism contribution to the total ductile-to-brittle transition temperature shift is given.

  9. Study of the flux effect nature for VVER-1000 RPV welds with high nickel content

    Science.gov (United States)

    Kuleshova, E. A.; Gurovich, B. A.; Lavrukhina, Z. V.; Maltsev, D. A.; Fedotova, S. V.; Frolov, A. S.; Zhuchkov, G. M.

    2017-01-01

    This work extends the research on the basic regularities of segregation processes in the grain boundaries (GB) of VVER-1000 reactor pressure vessel (RPV) steels. The paper considers the influence of irradiation with different fast neutron fluxes on the changes in structure, yield strength and ductile-to-brittle transition temperature (TK), as well as on the changes in the share of brittle intergranular fracture and the development of segregation processes in the VVER-1000 RPV weld metal (WM). The obtained experimental results allow the contributions of the hardening and non-hardening mechanisms to the degradation of mechanical properties of material irradiated at the operating temperature to be separated. It is shown that the difference in TK shift in WM irradiated to the same fluence with different fast neutron fluxes is mainly due to the difference in the GB accumulation kinetics of impurities and only to a small extent due to the material hardening. Phosphorus bulk diffusion coefficients were evaluated for temperature exposure, accelerated irradiation and irradiation within surveillance specimens (SS) using a kinetic model of phosphorus GB accumulation in low-alloyed low-carbon steels under the influence of operational factors. The correlation between the GB segregation level of phosphorus and nickel and the TK shift in WM SS was obtained experimentally and indicates the non-hardening mechanism contribution to the total radiation embrittlement of VVER-1000 RPV steels throughout their extended lifetime.

  10. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses...... the geometry, material properties and fixed point characteristics to calculate the dimensions and subsequent feasibility of any architectural design. The proposed conceptual design tool provides the possibility for the architect to work with both the aesthetic as well as the structural aspects of architecture...... with a static determinate roof structure modelled by beam components is given. The example outlines the idea of the tool for conceptual design in early phase of a multidisciplinary design process between architecture and structural engineering....

  11. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses...... without jumping from aesthetics to structural digital design tools and back, but to work with both simultaneously and real time. The engineering level of knowledge is incorporated at a conceptual thinking level, i.e. qualitative information is used instead of quantitative information. An example...... with a static determinate roof structure modelled by beam components is given. The example outlines the idea of the tool for conceptual design in early phase of a multidisciplinary design process between architecture and structural engineering....

  12. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  13. Component Based Effort Estimation During Software Development: Problematic View

    Directory of Open Access Journals (Sweden)

    VINIT KUMAR

    2011-10-01

    Component-based software development (CBD) is an emerging discipline that promises to take software engineering into a new era. Building on the achievements of object-oriented software construction, CBD aims to deliver software engineering from a cottage industry into an industrial age for Information Technology, wherein software can be assembled from components, in the manner that hardware systems are currently constructed from kits of parts. Component-based development (CBD) is a branch of software engineering that emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. This practice aims to bring about an equally wide-ranging degree of benefits in both the short term and the long term for the software itself and for organizations that sponsor such software. Software engineers regard components as part of the starting platform for service-orientation. Components play this role, for example, in Web services, and more recently, in service-oriented architectures (SOA), whereby a component is converted by the Web service into a service and subsequently inherits further characteristics beyond those of an ordinary component. Components can produce or consume events and can be used for event-driven architectures (EDA).

  14. Shaping of the axial power density distribution in the core to minimize the vapor volume fraction at the outlet of the VVER-1200 fuel assemblies

    Science.gov (United States)

    Savander, V. I.; Shumskiy, B. E.; Pinegin, A. A.

    2016-12-01

    The possibility of decreasing the vapor fraction at the VVER-1200 fuel assembly outlet by shaping the axial power density field is considered. The power density field was shaped by axial redistribution of the concentration of the burnable gadolinium poison in the Gd-containing fuel rods. The mathematical modeling of the VVER-1200 core was performed using the NOSTRA computer code.

  15. SA BASED SOFTWARE DEPLOYMENT RELIABILITY ESTIMATION CONSIDERING COMPONENT DEPENDENCE

    Institute of Scientific and Technical Information of China (English)

    Su Xihong; Liu Hongwei; Wu Zhibo; Yang Xiaozong; Zuo Decheng

    2011-01-01

    Reliability is one of the most critical properties of a software system. System deployment architecture is the allocation of system software components on host nodes. Software Architecture (SA) based software deployment models help to analyze the reliability of different deployments. Though many approaches for architecture-based reliability estimation exist, little work has incorporated the influence of system deployment and hardware resources into reliability estimation. There are many factors influencing system deployment. By translating the multi-dimension factors into a degree matrix of component dependence, we provide the definition of component dependence and propose a method of calculating the system reliability of deployments. Additionally, the parameters that influence the optimal deployment may change during system execution. The existing software deployment architecture may be ill-suited for the given environment, and the system needs to be redeployed to improve reliability. An approximate algorithm, A*_D, to increase system reliability is presented. When the number of components and host nodes is relatively large, experimental results show that this algorithm can obtain better deployments than stochastic and greedy algorithms.

  16. State Inspection for Transmission Lines Based on Independent Component Analysis

    Institute of Scientific and Technical Information of China (English)

    REN Li-jia; JIANG Xiu-chen; SHENG Ge-hao; YANG Wei-wei

    2009-01-01

    Monitoring transmission towers is of great importance to prevent severe thefts of their components and to ensure the reliability and safety of power grid operation. Independent component analysis (ICA) is a method for finding underlying factors or components from multivariate statistical data based on dimension-reduction methods, and it is applicable to the extraction of non-stationary signals. In this paper, FastICA based on negentropy is presented to effectively extract and separate the vibration signals caused by human activity. A new method combining the empirical mode decomposition (EMD) technique with an adaptive threshold method is applied to extract the vibration pulses and suppress the interference signals. The practical tests demonstrate that the method proposed in the paper is effective in separating and extracting the vibration signals.
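
    A minimal sketch of the FastICA separation step on synthetic data is given below; it is not the tower vibration measurements and it omits the EMD/threshold pulse extraction. It assumes scikit-learn is available; "logcosh" is the usual negentropy approximation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)                        # background oscillation
s2 = np.sign(np.sin(2 * np.pi * 1 * t))               # pulse-like "activity" signal
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))
A = np.array([[1.0, 0.5], [0.4, 1.0]])                # unknown mixing matrix
X = S @ A.T                                           # observed sensor channels

ica = FastICA(n_components=2, fun="logcosh", random_state=0)
S_est = ica.fit_transform(X)                          # recovered independent components
print(S_est.shape)                                    # (2000, 2)
```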

  17. Education Knowledge System Combination Model Based on the Components

    Institute of Scientific and Technical Information of China (English)

    CHEN Lei; LI Dehua; LI Xiaojian; WU Chunxiang

    2007-01-01

    Resources are the base and core of education information, but current web education resources have no structure, and it is still difficult to reuse them and to make them self-assembling and continually developable. According to the knowledge structure of course and text, and the relations among knowledge points and knowledge units on the three levels of media material, we can build education resource components and build the TKCM (Teaching Knowledge Combination Model) based on resource components. Builders can construct and assemble the knowledge system structure and make knowledge units self-assembling, so that they can be developed and improved continually. Users can have knowledge units self-assembled and renewed, and build an education knowledge system that satisfies their demands.

  18. Component-based analysis of embedded control applications

    DEFF Research Database (Denmark)

    Angelov, Christo K.; Guan, Wei; Marian, Nicolae

    2011-01-01

    presents an analysis technique that can be used to validate COMDES design models in SIMULINK. It is based on a transformation of the COMDES design model into a SIMULINK analysis model, which preserves the functional and timing behaviour of the application. This technique has been employed to develop...... configuration of applications from validated design models and trusted components. This design philosophy has been instrumental for developing COMDES—a component-based framework for distributed embedded control systems. A COMDES application is conceived as a network of embedded actors that are configured from...... instances of reusable, executable components—function blocks (FBs). System actors operate in accordance with a timed multitasking model of computation, whereby I/O signals are exchanged with the controlled plant at precisely specified time instants, resulting in the elimination of I/O jitter. The paper...

  19. The problems of mass transfer and formation of deposits of corrosion products on fuel assemblies of a VVER-1200 reactor

    Science.gov (United States)

    Rodionov, Yu. A.; Kritskii, V. G.; Berezina, I. G.; Gavrilov, A. V.

    2014-03-01

    On the basis of an examination of materials published both in Russia and abroad, as well as their own investigations, the authors explain the reasons for the occurrence of such effects as AOA (Axial Offset Anomalies) and the increase in the coolant pressure difference in the core of VVER-type nuclear reactors. To detect the occurrence of the AOA effect, the authors suggest using the specific activity of 58Co in the coolant. In the VVER-1200 design, the thermohydraulic regime for fuel assemblies in the first year of their service life involves slight boiling of the coolant in the upper part of the core, which may induce the occurrence of the AOA effect, intensification of corrosion of fuel claddings, and an abnormal increase in the deposition of corrosion products. Radiolysis of the water coolant in the boiling section (boiling in pores of deposits) may intensify not only general corrosion but also localized (nodular) corrosion. As a result of intensification of the corrosion processes and growth of deposits, deterioration of the radiation situation in the rooms of the primary circuit of a VVER-1200 reactor, as compared with that at nuclear power plants equipped with VVER-1000 reactors, is possible. Recommendations for preventing the AOA effect at nuclear power plants with VVER-1200 reactors and for the direction of further investigations are given.

  20. Investigation of Burst Pressures in PWR Primary Pressure Boundary Components

    Directory of Open Access Journals (Sweden)

    Ihn Namgung

    2016-02-01

    In the reactor coolant system of a nuclear power plant (NPP), an overpressure protection system keeps the pressure in the loop within 110% of the design pressure. However, if this system does not work properly, the pressure in the loop could rise greatly in a short time. It would be seriously disastrous if a weak point in a pressure boundary component were to burst and release radioactive material within the containment, and it might lead to a leak outside the containment. In this study, the gross deformation that leads to bursting of pressure boundary components was investigated. Major components in the primary pressure boundary that are structurally important were selected based on structural mechanics and then used to study the burst pressure of components by finite element method (FEM) analysis and by a number of closed-form theoretical relations. The burst pressure was also used as a metric of design optimization. The study revealed which component was the weakest and which component had the highest margin to bursting failure. This information is valuable in severe accident progression prediction. The burst pressures of the APR-1400, AP1000 and VVER-1000 reactor coolant systems were evaluated and compared to give relative margins of safety.
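
    As a hedged illustration of what one closed-form burst estimate looks like, the sketch below uses the mean-diameter thin-shell formula with invented input values; it is not one of the specific relations or plant datasets used in the study.

```python
def burst_pressure_thin_shell(sigma_u, thickness, inner_diameter):
    """Approximate burst pressure [MPa] of a cylindrical shell.

    sigma_u        : ultimate tensile strength [MPa]
    thickness      : wall thickness [m]
    inner_diameter : inner diameter [m]
    """
    mean_diameter = inner_diameter + thickness
    return 2.0 * sigma_u * thickness / mean_diameter

# Rough, illustrative RPV-like numbers (not APR-1400/AP1000/VVER-1000 data)
print(burst_pressure_thin_shell(sigma_u=550.0, thickness=0.20, inner_diameter=4.0))
```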

  1. Space cryogenics components based on the thermomechanical (TM) effect

    Science.gov (United States)

    Yuan, S. W. K.; Frederking, T. H. K.

    1988-01-01

    He II vapor-liquid phase separation (VLPS) is discussed, with emphasis on fluid-related transport phenomena. The VLPS system has been studied for both linear and nonlinear regimes, demonstrating that well-defined convection patterns exist in porous plug phase separators. In the second part, other components based on the thermomechanical effect are discussed in the limit of ideal conditions. Examples considered include the heat pipe transfer of zero net mass flow, liquid transfer pumps based on the fountain effect, mechanocaloric devices for cooling purposes, and He II vortex refrigerators.

  2. A generalized GPU-based connected component labeling algorithm

    CERN Document Server

    Komura, Yukihiro

    2016-01-01

    We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.
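
    A serial CPU sketch of the underlying labeling idea is shown below: union-find label merging, which the GPU algorithm parallelises. The small lattice and bond list are invented for illustration.

```python
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving keeps trees shallow
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)   # smaller label wins

def label_bond_clusters(n, bonds):
    """n*n square-lattice sites; bonds is an iterable of (site_a, site_b)."""
    parent = list(range(n * n))
    for a, b in bonds:
        union(parent, a, b)
    return [find(parent, i) for i in range(n * n)]

# 3x3 lattice with a few occupied bonds: sites 0-1-2 and 4-7 form two clusters
print(label_bond_clusters(3, [(0, 1), (1, 2), (4, 7)]))
```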

  3. Likelihood-based CT reconstruction of objects containing known components

    Energy Technology Data Exchange (ETDEWEB)

    Stayman, J. Webster [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Biomedical Engineering; Otake, Yoshito; Uneri, Ali; Prince, Jerry L.; Siewerdsen, Jeffrey H.

    2011-07-01

    There are many situations in medical imaging where there are known components within the imaging volume. Such is the case in diagnostic X-ray CT imaging of patients with implants, in intraoperative CT imaging where there may be surgical tools in the field, or in situations where the patient support (table or frame) or other devices are outside the (truncated) reconstruction FOV. In such scenarios it is often of great interest to image the relation between the known component and the surrounding anatomy, or to provide high-quality images at the boundary of these objects, or simply to minimize artifacts arising from such components. We propose a framework for simultaneously estimating the position and orientation of a known component and the surrounding volume. Toward this end, we adopt a likelihood-based objective function with an image volume jointly parameterized by a known object, or objects, with unknown registration parameters and an unknown background attenuation volume. The objective is solved iteratively using an alternating minimization approach between the two parameter types. Because this model integrates a substantial amount of prior knowledge about the overall volume, we expect a number of advantages including the reduction of metal artifacts, potential for more sparse data acquisition (decreased time and dose), and/or improved image quality. We illustrate this approach using simulated spine CT data that contains pedicle screws placed in a vertebra, and demonstrate improved performance over traditional filtered-backprojection and penalized-likelihood reconstruction techniques. (orig.)

  4. Microstructural behavior of VVER-440 reactor pressure vessel steels under irradiation to neutron fluences beyond the design operation period

    Science.gov (United States)

    Kuleshova, E. A.; Gurovich, B. A.; Shtrombakh, Ya. I.; Nikolaev, Yu. A.; Pechenkin, V. A.

    2005-06-01

    Electron-microscopy and fractographic studies of surveillance specimens from the base and weld metals of VVER-440/213 reactor pressure vessels (RPV) in the original state and after irradiation to different fast neutron fluences, from ~5 × 10^23 n/m^2 (E > 0.5 MeV) up to values beyond the design fluence, have been carried out. The maximum specimen irradiation time was 84 480 h. It is shown that there is an evolution in radiation-induced structural behavior with increasing radiation dose, which causes a change in the relative contribution of the mechanisms responsible for radiation embrittlement of RPV materials. In particular, radiation-induced coalescence of copper-enriched precipitates and an extensive increase in the density of dislocation loops were observed. The increase in dislocation loop density was shown to provide the dominant contribution to radiation hardening at the late irradiation stages (after reaching double the design end-of-life neutron fluence of ~4 × 10^24 n/m^2). The fracture mechanism of the base metal at those stages was observed to change from transcrystalline to intercrystalline.

  5. A Component Based Approach to Scientific Workflow Management

    CERN Document Server

    Le Goff, Jean-Marie; Baker, Nigel; Brooks, Peter; McClatchey, Richard

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta- modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  6. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation...

  7. Crack growth behaviour of low alloy steels for pressure boundary components under transient light water reactor operating conditions (CASTOC)

    Energy Technology Data Exchange (ETDEWEB)

    Foehl, J.; Weissenberg, T. [Materialpruefungsanstalt, Univ. Stuttgart (Germany); Gomez-Briceno, D.; Lapena, J. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) (Spain); Ernestova, M.; Zamboch, M. [Nuclear Research Inst. (NRI) (Czech Republic); Seifert, H.P.; Ritter, S. [Paul Scherrer Inst. (PSI) (Switzerland); Roth, A.; Devrient, B. [Framatome ANP GmbH (F ANP) (Germany); Ehrnsten, U. [Technical Research Centre of Finland (VTT) (Finland)

    2004-07-01

    The CASTOC project addresses environmentally assisted cracking (EAC) phenomena in low alloy steels used for pressure boundary components in both Western-type boiling water reactors (BWR) and Russian-type pressurised water reactors (VVER). It comprises four work packages (WP): inter-laboratory comparison test (WP1); EAC behaviour under static load (WP2); EAC behaviour under cyclic load and load transients (WP3); and evaluation of the results with regard to their relevance for components in practice (WP4). The use of sophisticated test facilities and measurement techniques for the on-line detection of crack advance has provided a more detailed understanding of the mechanisms of environmentally assisted cracking and quantitative data on crack growth rates as a function of loading events and time, respectively. The effect of several major parameters controlling EAC was investigated, with particular emphasis on the transferability of the results to components in service. The obtained crack growth rate data were compared with literature data and with commonly applied prediction curves as presented in the appropriate Code. At relevant stress intensity factors it could be shown that immediate cessation of growing cracks occurs after changing from cyclic to static load in high-purity oxygenated BWR water and oxygen-free VVER water corresponding to steady-state operating conditions. Susceptibility to environmentally assisted cracking under static load was observed for a heat-affected-zone material in oxygenated high-purity water and also in base materials during a chloride transient representing BWR water conditions below Action Level 1 of the EPRI Water Chemistry Guidelines according to the electrical conductivity of the water, but in the range of Action Level 2 according to the chloride content. Time-based crack growth was also observed in one Russian-type base material in oxygenated VVER water and in one Western-type base material in oxygenated high purity BWR

  8. The differential characteristics of control rods of VVER-1000 core simulator at a low number of axial mesh points

    Science.gov (United States)

    Bolsunov, A. A.; Karpov, S. A.

    2013-12-01

    An algorithm for refining the differential characteristics of the control rods (CRs) of the control and protection system (CPS) for a neutronics model of the VVER-1000 simulator at a low number of axial mesh points of the core is described. The problem of determining the constants for a cell with a partially inserted CR is solved. The cell constants obtained using the proposed approach ensure smoothing of the differential characteristics of an absorbing rod. The algorithm was used in the VVER-1000 simulators (Bushehr NPP, unit no. 1; Rostov NPP, unit no. 1; and Balakovo NPP, unit no. 4).

  9. A Component-Based Debugging Approach for Detecting Structural Inconsistencies in Declarative Equation Based Models

    Institute of Scientific and Technical Information of China (English)

    Jian-Wan Ding; Li-Ping Chen; Fan-Li Zhou

    2006-01-01

    Object-oriented modeling with declarative equation based languages often unconsciously leads to structural inconsistencies. Component-based debugging is a new structural analysis approach that addresses this problem by analyzing the structure of each component in a model to separately locate faulty components. The analysis procedure is performed recursively based on the depth-first rule. It first generates fictitious equations for a component to establish a debugging environment, and then detects structural defects by using graph theoretical approaches to analyzing the structure of the system of equations resulting from the component. The proposed method can automatically locate components that cause the structural inconsistencies, and show the user detailed error messages. This information can be a great help in finding and localizing structural inconsistencies, and in some cases pinpoints them immediately.
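
    A minimal sketch of the structural check that underlies such debugging is given below; equation and variable names are invented for illustration. A model is structurally consistent only if every equation can be matched to a distinct unknown, i.e. the equation-variable bipartite graph admits a matching covering all equations.

```python
def try_assign(eq, equations, match, seen):
    """Augmenting-path search: try to give equation `eq` its own unknown."""
    for var in equations[eq]:
        if var in seen:
            continue
        seen.add(var)
        # variable is free, or its current equation can be re-assigned elsewhere
        if var not in match or try_assign(match[var], equations, match, seen):
            match[var] = eq
            return True
    return False

def structurally_consistent(equations):
    """equations: dict mapping equation name -> set of unknowns it mentions."""
    match = {}
    return all(try_assign(eq, equations, match, set()) for eq in equations)

# 'e2' and 'e3' both constrain only 'y', so the three equations cannot be
# matched to distinct unknowns -> structurally singular model
print(structurally_consistent({"e1": {"x", "y"}, "e2": {"y"}, "e3": {"y"}}))  # False
```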

  10. Influence of operation factors on brittle fracture initiation and critical local normal stress in SE(B) type specimens of VVER reactor pressure vessel steels

    Science.gov (United States)

    Kuleshova, E. A.; Erak, A. D.; Kiselev, A. S.; Bubyakin, S. A.; Bandura, A. P.

    2015-12-01

    A complex of mechanical tests and fractographic studies of VVER-1000 RPV SE(B) type surveillance specimens was carried out: the brittle fracture origins were revealed (non-metallic inclusions and structural boundaries) and the correlation between fracture toughness parameters (CTOD) and fracture surface parameters (CID) was established. A computational and experimental method of the critical local normal stress determination for different origin types was developed. The values of the critical local normal stress for the structural boundary origin type both for base and weld metal after thermal exposure and neutron irradiation are lower than that for initial state due to the lower cohesive strength of grain boundaries as a result of phosphorus segregation.

  11. Component-based assistants for MEMS design tools

    Science.gov (United States)

    Hahn, Kai; Brueck, Rainer; Schneider, Christian; Schumer, Christian; Popp, Jens

    2001-04-01

    With this paper a new approach for MEMS design tools will be introduced. An analysis of the design tool market leads to the result that most of the designers work with large and inflexible frameworks. Purchasing and maintaining these frameworks is expensive, and gives no optimum support for MEMS design process. The concept of design assistants, carried out with the concept of interacting software components, denotes a new generation of flexible, small, semi-autonomous software systems that are used to solve specific MEMS design tasks in close interaction with the designer. The degree of interaction depends on the complexity of the design task to be performed and the possibility to formalize the respective knowledge. In this context the Internet as one of today's most important communication media provides support for new tool concepts on the basis of the Java programming language. These modern technologies can be used to set up distributed and platform-independent applications. Thus the idea emerged to implement design assistants using Java. According to the MEMS design model new process sequences have to be defined new for every specific design object. As a consequence, assistants have to be built dynamically depending on the requirements of the design process, what can be achieved with component based software development. Componentware offers the possibility to realize design assistants, in areas like design rule checks, process consistency checks, technology definitions, graphical editors, etc. that may reside distributed over the Internet, communicating via Internet protocols. At the University of Siegen a directory for reusable MEMS components has been created, containing a process specification assistant and a layout verification assistant for lithography based MEMS technologies.

  12. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2016-12-22

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
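
    A minimal sketch of the serial idea on synthetic data is given below; it is not the Tennessee Eastman benchmark, and it omits the monitoring statistics and the similarity-factor fault identification. It assumes scikit-learn is available.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
z = rng.standard_normal((500, 3))
# Synthetic data with both a linear and a nonlinear relationship between variables
X = np.c_[z, z[:, 0] + 0.1 * rng.standard_normal(500), np.sin(z[:, 1])]

# Step 1: linear PCA extracts linear features and defines the residual subspace
pca = PCA(n_components=2).fit(X)
linear_scores = pca.transform(X)
residual = X - pca.inverse_transform(linear_scores)

# Step 2: KPCA on the residual subspace extracts the nonlinear features
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit(residual)
nonlinear_scores = kpca.transform(residual)

print(linear_scores.shape, nonlinear_scores.shape)   # (500, 2) (500, 2)
```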

  13. NONLINEAR DATA RECONCILIATION METHOD BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In the industrial process situation, principal component analysis (PCA) is a general method for data reconciliation. However, PCA is sometimes unsuitable for nonlinear feature analysis and is limited in its application to nonlinear industrial processes. Kernel PCA (KPCA) is an extension of PCA and can be used for nonlinear feature analysis. A nonlinear data reconciliation method based on KPCA is proposed. The basic idea of this method is that, first, the original data are mapped to a high-dimensional feature space by a nonlinear function, and PCA is implemented in the feature space. Then nonlinear feature analysis is carried out and the data are reconstructed by using the kernel. The data reconciliation method based on KPCA is applied to a ternary distillation column. Simulation results show that this method can filter the noise in measurements of a nonlinear process and that the reconciled data can represent the true information of the nonlinear process.

  14. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Support vector classifiers (SVC) have superior advantages for small-sample learning problems with high dimensions, especially better generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be extracted first to improve the performance of the SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and to pre-select the main features efficiently, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of the SVC. Furthermore, a heuristic genetic-algorithm-based automatic model selection is proposed to determine the hyperparameters of the SVC and to evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rates effectively.
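
    A minimal sketch of the PCA-then-SVC idea follows; a public scikit-learn dataset stands in for the Heart/Adult benchmarks, and fixed hyperparameters stand in for the genetic-algorithm model selection described in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scale, reduce the feature dimension with PCA, then classify with an RBF SVC
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(C=1.0, kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```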

  15. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The application of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes to VVER-440 SG modelling during normal conditions and during a transient with secondary water level lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict reasonably the secondary swell level in nominal conditions. Both codes are able to model properly the natural phase separation at the SG water level. 6 refs.

  16. Calculation with MCNP of capture photon flux in VVER-1000 experimental reactor.

    Science.gov (United States)

    Töre, Candan; Ortego, Pedro

    2005-01-01

    The aim of this study is to obtain, by the Monte Carlo method, the high-energy photon flux due to neutron capture in the internals and vessel layers of the experimental reactor LR-0 located in REZ, Czech Republic, and loaded with VVER-1000 fuel. The calculated neutron flux, photon flux and photon-to-neutron flux ratio are compared with experimental measurements performed with a multi-parameter stilbene detector. The results show a clear underestimation of the photon flux in the downcomer and some overestimation at the vessel surface and at 1/4 thickness, but a good fit for deeper points in the vessel.

  17. Rod ejection simulation on VVER 1000/320 core using PARCS/TRACE

    OpenAIRE

    Ruscak, Marek

    2016-01-01

    The rod ejection (RE) is a design basis accident in accordance with NUREG-0800 and usually studied using point kinetics. In this thesis a 3D kinetic model is prepared and coupled with a thermal hydraulic system code for simulating this accident scenario for general VVER 1000 technology. This topic has been defined by the Research Centre Rez of the Czech Republic as a part of a larger project concerning beyond design basis accident focused on the Station Black Out (SBO) and a Lo...

  18. Effect of Ni content on thermal and radiation resistance of VVER RPV steel

    Science.gov (United States)

    Shtrombakh, Ya. I.; Gurovich, B. A.; Kuleshova, E. A.; Frolov, A. S.; Fedotova, S. V.; Zhurko, D. A.; Krikun, E. V.

    2015-06-01

    In this paper the thermal stability and radiation resistance of VVER-type RPV steels with different nickel contents, intended for pressure vessels of advanced reactors, were studied. A complex of microstructural studies and mechanical tests of the steels in different states (after long thermal exposures, a provoking embrittling heat treatment, and accelerated neutron irradiation) was carried out. It is shown that the nickel content (other things being equal) determines the extent of material degradation under the influence of operational factors: steels with a lower nickel concentration demonstrate higher thermal stability and radiation resistance.

  19. Mössbauer study of EUROFER and VVER steel reactor materials

    Science.gov (United States)

    Kuzmann, E.; Horváth, Á.; Alves, L.; Silva, J. F.; Gomes, U.; Souza, C.; Homonnay, Z.

    2013-04-01

    57Fe Mössbauer spectroscopy and X-ray diffractometry were used to study EUROFER and VVER ferritic reactor steels mechanically alloyed with TaC or NbC. Significant changes were found in the Mössbauer spectra and in the corresponding hyperfine field distributions between the ball-milled pure steel and the steel alloyed with TaC or NbC. Spectral differences were also found when the same carbides of different origin were used. The observed spectral changes caused by ball milling of the reactor steels with carbides can be associated with a change in the short-range order of the constituents of the steel.

  20. An Evaluation of Component-Based Software Design Approaches

    OpenAIRE

    Puppin, Diego; Silvestri, Fabrizio; Laforenza, Domenico

    2004-01-01

    There is growing attention for a component-oriented software design of Grid applications. Within this framework, applications are built by assembling together independently developed-software components. A component is a software unit with a clearly defined interface and explicit dependencies. It is designed to be integrated with other components, but independently from them. Unix filters and the pipe composition model, the first successful component-oriented model, allowed more complex appli...

  1. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data into regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by applying the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation are performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)

  2. High Q, Miniaturized LCP-Based Passive Components

    KAUST Repository

    Shamim, Atif

    2014-10-16

    Various methods and systems are provided for high Q, miniaturized LCP-based passive components. In one embodiment, among others, a spiral inductor includes a center connection and a plurality of inductors formed on a liquid crystal polymer (LCP) layer, the plurality of inductors concentrically spiraling out from the center connection. In another embodiment, a vertically intertwined inductor includes first and second inductors including a first section disposed on a side of the LCP layer forming a fraction of a turn and a second section disposed on another side of the LCP layer. At least a portion of the first section of the first inductor is substantially aligned with at least a portion of the second section of the second inductor and at least a portion of the first section of the second inductor is substantially aligned with at least a portion of the second section of the first inductor.

  3. Nonlinear fault diagnosis method based on kernel principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu; Zhang Chunkai; Shao Huihe

    2005-01-01

    To ensure that a system runs in working order, the detection and diagnosis of faults play an important role in industrial processes. This paper proposes a nonlinear fault diagnosis method based on kernel principal component analysis (KPCA). In the proposed method, the essential information of the nonlinear system extracted by KPCA is used to construct a KPCA model of the nonlinear system under normal working conditions. New data are then projected onto the KPCA model; when new data are incompatible with the KPCA model, it can be concluded that the nonlinear system is out of its normal working condition. The proposed method was applied to fault diagnosis on rolling bearings. Simulation results show that it provides an effective means of fault detection and diagnosis for nonlinear systems.
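    A small sketch of the detection step along these lines: a KPCA model is fitted on normal-condition data and new samples are flagged when a T^2-like statistic on their kernel scores exceeds an empirical control limit. The data, kernel parameters and the 99th-percentile threshold are illustrative assumptions, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(2)
    X_normal = rng.normal(size=(400, 6))                       # normal working condition
    X_new = np.vstack([rng.normal(size=(20, 6)),
                       rng.normal(loc=2.0, size=(20, 6))])     # second half simulates a fault

    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.2).fit(X_normal)
    score_var = kpca.transform(X_normal).var(axis=0)

    def t2(X):
        """Sum of squared kernel-PC scores, scaled by their training variances."""
        s = kpca.transform(X)
        return np.sum(s**2 / score_var, axis=1)

    threshold = np.percentile(t2(X_normal), 99)                # empirical control limit
    print("fault flags:", t2(X_new) > threshold)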

  4. Performance of nickel base superalloy components in gas turbines

    DEFF Research Database (Denmark)

    Dahl, Kristian Vinter

    2006-01-01

    The topic of this thesis is the microstructural behaviour of hot section components in the industrial gas turbine...

  5. Radiotoxicity and decay heat power of spent nuclear fuel of VVER type reactors at long-term storage.

    Science.gov (United States)

    Bergelson, B R; Gerasimov, A S; Tikhomirov, G V

    2005-01-01

    Radiotoxicity and decay heat power of the spent nuclear fuel of VVER-1000 type reactors are calculated during storage time up to 300,000 y. Decay heat power of radioactive waste (radwaste) determines parameters of the heat removal system for the safe storage of spent nuclear fuel. Radiotoxicity determines the radiological hazard of radwaste after its leakage and penetration into the environment.

  6. Sensitivity and System Response of Pin Power Peaking in VVER-1000 Fuel Assembly Using TSUNAMI-2D

    Science.gov (United States)

    Frybort, J.

    2014-04-01

    Pin power peaking in a VVER-1000 fuel assembly and its sensitivity and uncertainty were analyzed with the TSUNAMI-2D code. Several types of fuel assemblies were considered, differing in the number and position of gadolinium fuel pins. The calculations were repeated for several fuel compositions obtained by a fuel depletion calculation. The results are quantified sensitivity data, which can be used for enrichment profiling.

  7. Reflooding and boil-off experiments in a VVER-440 like rod bundle and analyses with the CATHARE code

    Energy Technology Data Exchange (ETDEWEB)

    Korteniemi, V.; Haapalehto, T. [Lappeenranta Univ. of Technology (Finland); Puustinen, M. [VTT Energy, Lappeenranta (Finland)

    1995-09-01

    Several experiments were performed with the VEERA facility to simulate reflooding and boil-off phenomena in a VVER-440-like rod bundle. The objective of these experiments was to gain experience of full-scale bundle behaviour and to create a database for the verification of VVER-type core models used with modern thermal-hydraulic codes. The VEERA facility used in the experiments is a scaled-down model of the Russian VVER-440 type pressurized water reactors used in Loviisa, Finland. The test section of the facility consists of one full-scale copy of a VVER-440 reactor rod bundle with 126 full-length electrically heated rod simulators. Bottom and top-down reflooding, different modes of emergency core cooling (ECC) injection, and the effect of heating power on the heat-up of the rods were studied. In this paper the results of calculations simulating two reflood experiments and one boil-off experiment with the French CATHARE2 thermal-hydraulic code are also presented. In particular, the performance of the recently implemented top-down reflood model of the code was studied.

  8. Department of Energy's team's analyses of Soviet designed VVERs (water-cooled water-moderated atomic energy reactors)

    Energy Technology Data Exchange (ETDEWEB)

    1989-09-01

    This document contains appendices A through P of this report. Topics discussed are: acronyms and technical terms; accident analyses and reactivity control; Soviet safety regulations; radionuclide inventory; decay heat; operations and maintenance; steam supply system; concrete and concrete structures; seismicity; site information; neutronic parameters; loss of electric power; diesel generator reliability; Soviet codes and standards; and comparisons of PWR and VVER features. (FI)

  9. A Roadmap and Discussion of Issues for Physics Analyses Required to Support Plutonium Disposition in VVER-1000 Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Primm, R.T.; Drischler, J.D.; Pavlovichev, A.M.; Styrine, Y.A.

    2000-06-01

    The purpose of this report is to document the physics analyses that must be performed to successfully disposition weapons-usable plutonium in VVER-1000 reactors in the Russian Federation. The report is a document to support programmatic and financial planning. It does not include documentation of the technical procedures by which physics analyses are performed, nor are the results of any analyses included.

  10. Post test calculations of a severe accident experiment for VVER-440 reactors by the ATHLET code

    Energy Technology Data Exchange (ETDEWEB)

    Gyoergy, Hunor [Budapest Univ. of Technology and Economics (Hungary). Inst. of Nuclear Techniques (BME NTI); Trosztel, Istvan [Hungarian Academy of Sciences, Budapest (Hungary). Centre for Energy Research (MTA EK)

    2013-09-15

    A severe accident, if no mitigation action is taken, leads to core melt. An effective severe accident management strategy can be external reactor pressure vessel cooling for corium localization and stabilization. For some time there has been discussion of whether in-vessel retention can be applied to VVER-440 type reactors. It had to be demonstrated that the available space between the reactor vessel and the biological protection allows sufficient cooling to keep the molten core in the vessel without the reactor pressure vessel losing its integrity. In order to demonstrate the feasibility of the concept, an experimental facility was realized in Hungary. The facility, called Cooling Effectiveness on the Reactor External Surface (CERES), models the vessel external surface and the biological protection of the Paks NPP. A model of the CERES facility for the ATHLET thermal-hydraulic system code was developed. The results of the ATHLET calculation agree well with the measurements, showing that vessel cooling can be ensured for a long time in a VVER-440 reactor. (orig.)

  11. Qualification of the APOLLO2 lattice physics code of the NURISP platform for VVER hexagonal lattices

    Energy Technology Data Exchange (ETDEWEB)

    Hegyi, Gyoergy; Kereszturi, Andras; Tota, Adam [Hungarian Academy of Sciences, Budapest (Hungary). Reactor Analysis Dept.

    2012-08-15

    The experiments performed at the ZR-6 zero power critical reactor by the Temporary International Collective (TIC) and a burnup benchmark specified for the depletion calculation of a VVER-440 assembly containing Gd burnable poison were used to qualify the APOLLO2.8-3.E (APOLLO2) code as part of its ongoing validation activity. The work is part of the NURISP project, in which KFKI AEKI undertook to develop and qualify calculation schemes for hexagonal problems. Concerning the ZR-6 measurements, single-cell, macro-cell and 2D calculations of selected regular and perturbed experiments are used for the validation. In the 2D cases, the radial leakage is also taken into account by the axial leakage represented by the measured axial buckling. Criticality parameter and reaction rate comparisons are presented. Although various sets of the experiments have been selected for the validation, good agreement of the measured and calculated parameters could be found by using the various options offered by APOLLO2. An additional mathematical benchmark, presented in the paper, also attests to the reliability of APOLLO2. All the test results prove the reliability of APOLLO2 for VVER core calculations. (orig.)

  12. Evaluating the hydrological consistency of satellite based water cycle components

    KAUST Repository

    Lopez Valencia, Oliver M.

    2016-06-15

    Advances in multi-satellite based observations of the earth system have provided the capacity to retrieve information across a wide-range of land surface hydrological components and provided an opportunity to characterize terrestrial processes from a completely new perspective. Given the spatial advantage that space-based observations offer, several regional-to-global scale products have been developed, offering insights into the multi-scale behaviour and variability of hydrological states and fluxes. However, one of the key challenges in the use of satellite-based products is characterizing the degree to which they provide realistic and representative estimates of the underlying retrieval: that is, how accurate are the hydrological components derived from satellite observations? The challenge is intrinsically linked to issues of scale, since the availability of high-quality in-situ data is limited, and even where it does exist, is generally not commensurate to the resolution of the satellite observation. Basin-scale studies have shown considerable variability in achieving water budget closure with any degree of accuracy using satellite estimates of the water cycle. In order to assess the suitability of this type of approach for evaluating hydrological observations, it makes sense to first test it over environments with restricted hydrological inputs, before applying it to more hydrological complex basins. Here we explore the concept of hydrological consistency, i.e. the physical considerations that the water budget impose on the hydrologic fluxes and states to be temporally and spatially linked, to evaluate the reproduction of a set of large-scale evaporation (E) products by using a combination of satellite rainfall (P) and Gravity Recovery and Climate Experiment (GRACE) observations of storage change, focusing on arid and semi-arid environments, where the hydrological flows can be more realistically described. Our results indicate no persistent hydrological

  13. Bonding and Integration Technologies for Silicon Carbide Based Injector Components

    Science.gov (United States)

    Halbig, Michael C.; Singh, Mrityunjay

    2008-01-01

    Advanced ceramic bonding and integration technologies play a critical role in the fabrication and application of silicon carbide based components for a number of aerospace and ground based applications. One such application is a lean direct injector for a turbine engine to achieve low NOx emissions. Ceramic-to-ceramic diffusion bonding and ceramic-to-metal brazing technologies are being developed for this injector application. For the diffusion bonding, titanium interlayers (PVD and foils) were used to aid in the joining of silicon carbide (SiC) substrates. The influence of variables such as surface finish, interlayer thickness (10, 20, and 50 microns), processing time and temperature, and cooling rate was investigated. Microprobe analysis was used to identify the phases in the bonded region. For bonds that were not fully reacted, an intermediate phase, Ti5Si3Cx, formed whose thermal expansion is incompatible with the surrounding material and caused thermal stresses and cracking during the processing cool-down. Thinner titanium interlayers and/or longer processing times resulted in stable and compatible phases that did not contribute to microcracking and gave an optimized microstructure. Tensile tests on the joined materials resulted in strengths of 13-28 MPa depending on the SiC substrate material. Non-destructive evaluation using ultrasonic immersion showed well-formed bonds. For the joining technology of brazing Kovar fuel tubes to silicon carbide, preliminary development of the joining approach has begun. Various technical issues and requirements for the injector application are addressed.

  14. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew; it is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, characterized by the usage frequency of the component. The usage frequency decides the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impact on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps to find the most suitable alternative, i.e. software component, from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and the dimensionless measurement ranks the components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
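    A compact sketch of the crisp MOORA ratio-analysis ranking that underlies the method (the fuzzy extension with triangular numbers is omitted for brevity). The decision matrix, criteria, weights and benefit/cost labels below are made-up placeholders.

    import numpy as np

    # rows = software components, columns = criteria (e.g. usage frequency, failure rate, size)
    X = np.array([[0.8, 0.10, 120.0],
                  [0.6, 0.05,  90.0],
                  [0.9, 0.20, 150.0]])
    weights = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, False, True])   # True = higher is better, False = cost criterion

    # MOORA: vector normalization of each criterion, then the weighted sum of benefit
    # criteria minus the weighted sum of cost criteria gives the assessment value.
    norm = X / np.sqrt((X**2).sum(axis=0))
    scores = (norm * weights * np.where(benefit, 1.0, -1.0)).sum(axis=1)
    ranking = np.argsort(-scores)
    print("component ranking (best first):", ranking, "scores:", scores.round(3))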

  15. WEB SERVICE SELECTION ALGORITHM BASED ON PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    Kang Guosheng; Liu Jianxun; Tang Mingdong; Cao Buqing

    2013-01-01

    Existing Web service selection approaches usually assume that users' preferences have been provided in a quantitative form. However, due to the subjectivity and vagueness of preferences, it may be impractical for users to specify quantitative and exact preferences. Moreover, because Quality of Service (QoS) attributes are often interrelated, existing Web service selection approaches that employ a weighted summation of QoS attribute values to compute the overall QoS of Web services may produce inaccurate results, since they do not take correlations among QoS attributes into account. To resolve these problems, a Web service selection framework considering the user's preference priority is proposed, which incorporates a searching mechanism with QoS range setting to identify services satisfying the user's QoS constraints. With the identified service candidates, and based on the idea of Principal Component Analysis (PCA), an algorithm for Web service selection named PCA-WSS (Web Service Selection based on PCA) is proposed, which can eliminate the correlations among QoS attributes and compute the overall QoS of Web services accurately. After computing the overall QoS for each service, the algorithm ranks the Web service candidates based on their overall QoS and recommends the services with top QoS values to users. Finally, the effectiveness and feasibility of the approach are validated by experiments: the Web services selected by the approach receive higher average evaluations from users than other ones, and the time cost of the PCA-WSS algorithm is not affected acutely by the number of service candidates.
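    An illustrative sketch of the PCA-based overall-QoS idea: correlated QoS attributes are decorrelated with PCA and each candidate service is scored on the principal components weighted by explained variance. The QoS matrix, the attribute orientation and the sign handling are assumptions, not the paper's data; in practice the sign of each component may need to be aligned with the "better" direction.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # rows = candidate services, columns = QoS attributes (response time, cost, availability)
    qos = np.array([[120.0, 0.8, 0.99],
                    [ 90.0, 1.2, 0.97],
                    [200.0, 0.5, 0.95],
                    [110.0, 0.9, 0.99]])
    qos[:, :2] *= -1.0                     # flip "lower is better" attributes so higher = better

    Z = StandardScaler().fit_transform(qos)
    pca = PCA().fit(Z)
    scores = Z @ pca.components_.T         # service scores on each principal component
    overall = scores @ pca.explained_variance_ratio_
    print("ranking (best first):", np.argsort(-overall))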

  16. An experimental substantiation of the design functions imposed on the additional system for passively flooding the core of a VVER reactor

    Science.gov (United States)

    Morozov, A. V.; Remizov, O. V.

    2012-05-01

    Results obtained from a research work on experimentally substantiating the serviceability of the additional system for passively flooding the core of a VVER reactor from the second-stage hydro accumulators are presented.

  17. Architectures: Design patterns for component-based systems

    OpenAIRE

    Bliudze, Simon

    2014-01-01

    Architectures depict design principles, paradigms that can be understood by all, allow thinking on a higher plane and avoiding low-level mistakes. They provide means for ensuring correctness by construction by enforcing global properties characterizing the coordination between components. An architecture can be considered as an operator A that, applied to a set of components B, builds a composite component A(B) meeting a characteristic property P. A theory of architectures must address sever...

  18. Laser repairing surface crack of Ni-based superalloy components

    Institute of Scientific and Technical Information of China (English)

    王忠柯; 叶和清; 许德胜; 黄索逸

    2001-01-01

    Surface cracks in components of a cast nickel-base superalloy were repaired with twin laser beams under suitable technological conditions. One laser beam was used to melt the substrate material at the crack, and the other to feed powder material into the crack region. The experimental results show that surface cracks with widths of 0.1~0.3 mm could be repaired at a laser power of 3 kW and a scanning speed of 6~8 mm/s. The repaired depth of the crack region is less than 6.5 mm. The microstructure of the repaired region changes from cellular crystals through columnar crystals to dendritic crystals, going from the transition region to the top filled layer. The phases in the repaired region mainly consist of supersaturated α-Co rich in Ni with some Cr and Al, Cr23C6, Co2B, Co-Ni-Mo, Ni4B3, TiSi and VSi. The hardness of the filled layer in the repaired region ranged from HV0.2 450 to HV0.2 500, and the hardness decreases gradually from the filled layer to the joined zone.

  19. Service oriented architecture assessment based on software components

    Directory of Open Access Journals (Sweden)

    Mahnaz Amirpour

    2016-01-01

    Full Text Available Enterprise architecture, with its detailed descriptions of the functions of information technology in the organization, tries to reduce the complexity of technology applications, resulting in tools with greater efficiency in achieving the objectives of the organization. Enterprise architecture consists of a set of models describing this technology in terms of the performance of its different components as well as various aspects of the applications in an organization, so that information technology development and maintenance can be managed well within organizations. This study suggests a method to identify the different types of services in the service-oriented architecture analysis step; it applies several previous approaches in an integrated form and, based on the principles of software engineering, provides a simpler and more transparent approach through a detailed expression of the analysis. The advantages and disadvantages of proposals should be evaluated before implementation and cost allocation. Evaluation methods can better identify the strengths and weaknesses of the current situation, apart from selecting an appropriate model out of several suggestions, and can clarify the development path of this technology for organizations in the future. By converting the output of the model to coloured Petri nets, data and process flows within the organization can be simulated, and the architecture can be evaluated and tested with various inputs, in terms of reliability and response time, before it is implemented. An application model has been studied for the proposed approach, and the results can be used to describe and design an architecture for data.

  20. Biological agent detection based on principal component analysis

    Science.gov (United States)

    Mudigonda, Naga R.; Kacelenga, Ray

    2006-05-01

    This paper presents an algorithm, based on principal component analysis for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical method for estimating background biological activity so as to make the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle counts distribution and uses the information to suppress false-alarms. The performance of the method was evaluated using a total of 68 tests including 51 releases of Bacillus Globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 - 65 Agent Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and resulted in an overall probability of detection of 94% for BG without generating any false-alarms for obscurants at a detection threshold of 0.6 on a scale of 0 to 1. Also, the method successfully detected BG in the presence of diesel smoke and salt water fumes. The system successfully responded to all the five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.
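    A sketch of a PCA residual (Q/SPE) detector in the spirit of the description above: a PCA background model is fitted on recent normal particle-count distributions, and an alarm is raised when the reconstruction error of a new distribution exceeds an empirical limit. The synthetic count histograms, the number of components and the 99th-percentile limit are placeholders, not the 4WARN algorithm itself.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    background = rng.poisson(lam=20, size=(300, 16)).astype(float)   # fluorescent count histograms
    release = background[:1].copy()
    release[:, 4:8] += 40.0                 # simulated concentrated increase in a few channels

    pca = PCA(n_components=3).fit(background)

    def spe(X):
        """Squared prediction error after projecting onto the PCA background model."""
        recon = pca.inverse_transform(pca.transform(X))
        return np.sum((X - recon) ** 2, axis=1)

    limit = np.percentile(spe(background), 99)
    print("alarm:", spe(release) > limit)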

  1. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
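    A minimal ZCA ("Zero Component Analysis") whitening routine of the kind used to prepare training patches as described above; the epsilon value and the random patches stand in for real image data.

    import numpy as np

    def zca_whiten(X, eps=1e-5):
        """Whiten the rows of X (n_samples, n_features) with the ZCA transform."""
        X = X - X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
        return X @ W

    patches = np.random.default_rng(3).normal(size=(1000, 64))   # e.g. flattened 8x8 patches
    whitened = zca_whiten(patches)
    print(np.allclose(np.cov(whitened, rowvar=False), np.eye(64), atol=1e-1))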

  2. Development and application of the coupled thermal-hydraulics and neutron-kinetics code ATHLET/BIPR-VVER for safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lizorkin, M.; Nikonov, S. [Kurchatov Institute for Atomic Energy, Moscow (Russian Federation); Langenbuch, S.; Velkov, K. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)

    2006-07-01

    The coupled thermal-hydraulics and neutron-kinetics code ATHLET/BIPR-VVER was developed within a co-operation between the RRC Kurchatov Institute (KI) and GRS. The modeling capability of this coupled code as well as the status of validation by benchmark activities and comparison with plant measurements are described. The paper is focused on the modeling of flow mixing in the reactor pressure vessel including its validation and the application for the safety justification of VVER plants. (authors)

  3. Internet MEMS design tools based on component technology

    Science.gov (United States)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium sized enterprises specialized in products that solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem calls for its own, specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access along with methodologies of flexible licensing, e.g. on a pay-per-use basis. New communication technologies like ADSL, TV cable or satellites as carriers promise to offer a bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation including a graphical editor for process specification. Currently, a new version is being brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set tailored to the requirements of the current problem to be solved.

  4. Modeling QoS Parameters in Component-Based Systems

    Science.gov (United States)

    2004-08-01

    deployed components, begins with the system developer, willing to build a system, by presenting a query to the system generator. The query describes... is built using the system generator. If some of the components are not found, then the system integrator can modify the system query by adding more

  5. Independet Component Analyses of Ground-based Exoplanetary Transits

    Science.gov (United States)

    Silva Martins-Filho, Walter; Griffith, Caitlin Ann; Pearson, Kyle; Waldmann, Ingo; Biddle, Lauren; Zellem, Robert Thomas; Alvarez-Candal, Alvaro

    2016-10-01

    Most observations of exoplanetary atmospheres are conducted when a "Hot Jupiter" exoplanet transits in front of its host star. These Jovian-sized planets have small orbital periods, on the order of days, and therefore a short transit time, making them more amenable to observations. Measurements of Hot Jupiter transits must achieve a 10^-4 level of accuracy in the flux to determine the spectral modulations of the exoplanetary atmosphere. In order to accomplish this level of precision, we need to extract systematic errors, and, for ground-based measurements, the effects of Earth's atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. Currently, the effects of the terrestrial atmosphere and some of the time-dependent systematic errors are treated by dividing the host star by a reference star at each wavelength and time step of the transit. More recently, Independent Component Analyses (ICA) have been used to remove systematic effects from the raw data of space-based observations (Waldmann 2014, 2012; Morello et al., 2015, 2016). ICA is a statistical method born from the ideas of blind-source separation studies, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). One strength of this method is that it requires no additional prior knowledge of the system. Here, we present a study of the application of ICA to ground-based transit observations of extrasolar planets, which are affected by Earth's atmosphere. We analyze photometric data of two extrasolar planets, WASP-1b and GJ3470b, recorded by the 61" Kuiper Telescope at Steward Observatory using the Harris B and U filters. The presentation will compare the light curve depths and their dispersions as derived from the ICA analysis to those derived by analyses that ratio the host star to nearby reference stars. References: Waldmann, I.P. 2012 ApJ, 747, 12; Waldmann, I. P. 2014 ApJ, 780, 23; Morello G. 2015 ApJ, 806
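    A hedged sketch of the blind-source-separation idea: FastICA is applied to a set of simultaneously observed light curves (target plus comparison stars) so that shared systematics and the transit signal end up in separate components. The synthetic box-shaped transit and the toy atmospheric trend below are placeholders, not the Kuiper Telescope data.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(4)
    t = np.linspace(-0.1, 0.1, 400)
    transit = 1.0 - 0.01 * (np.abs(t) < 0.04)                    # toy box-shaped transit
    airmass = 0.02 * np.exp(-((t + 0.05) / 0.1) ** 2)            # toy atmospheric trend

    target = transit + airmass + rng.normal(scale=0.002, size=t.size)
    ref1 = 1.0 + airmass + rng.normal(scale=0.002, size=t.size)
    ref2 = 1.0 + 0.8 * airmass + rng.normal(scale=0.002, size=t.size)

    ica = FastICA(n_components=3, random_state=0)
    sources = ica.fit_transform(np.column_stack([target, ref1, ref2]))
    print(sources.shape)    # each column is one estimated independent source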

  6. Knowledge-based System Prototype in Structural Component Design Based on FM

    Institute of Scientific and Technical Information of China (English)

    JIANG; Tao; LI; Qing-fen; LI; Ming; FU; Wei

    2002-01-01

    A knowledge-based system in structural component design based on fracture mechanics is developed in this paper. The system consists of several functional parts: a general inference engine, a set of knowledge bases and data-bases, an interpretation engine, a bases administration system and the interface. It can simulate a human expert to make analysis and design scheme mainly for four kinds of typical structural components widely used in shipbuilding industry: pressure vessels, huge rotation constructions, pump-rod and welded structures. It is an open system which may be broadened and perfected to cover a wider range of engineering application through the modification and enlargement of knowledge bases and data-bases. It has a natural and friendly interface that may be easily operated. An on-line help service is also provided.

  7. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Abstract 1 Background High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins have to be taken into account. 2 Results We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. 3 Conclusion Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows to take the block structure in different ways into account. As such

  8. An Ensemble Algorithm Based Component for Geomagnetic Data Assimilation

    Directory of Open Access Journals (Sweden)

    Zhibin Sun and Weijia Kuang

    2015-01-01

    Full Text Available Geomagnetic data assimilation is one of the most recent developments in geomagnetic studies. It combines geodynamo model outputs and surface geomagnetic observations to provide more accurate estimates of the core dynamic state and provide accurate geomagnetic secular variation forecasting. To facilitate geomagnetic data assimilation studies, we develop a stand-alone data assimilation component for the geomagnetic community. This component is used to calculate the forecast error covariance matrices and the gain matrix from a given geodynamo solution, which can then be used for sequential geomagnetic data assimilation. This component is very flexible and can be executed independently. It can also be easily integrated with arbitrary dynamo models.
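    A small illustration of the ensemble quantities such a component computes: the forecast error covariance matrix and the corresponding Kalman gain estimated from an ensemble of model states. The state and observation dimensions, the observation operator and the error covariance below are made-up placeholders, not geodynamo output.

    import numpy as np

    rng = np.random.default_rng(5)
    n_state, n_obs, n_ens = 50, 8, 32
    ensemble = rng.normal(size=(n_state, n_ens))       # forecast ensemble (columns = members)
    H = rng.normal(size=(n_obs, n_state))              # linear observation operator (placeholder)
    R = 0.1 * np.eye(n_obs)                            # observation error covariance

    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P_f = anomalies @ anomalies.T / (n_ens - 1)        # forecast error covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    print(K.shape)                                     # (n_state, n_obs)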

  9. Design and Fabrication of SOI-based photonic crystal components

    DEFF Research Database (Denmark)

    Borel, Peter Ingo; Frandsen, Lars Hagedorn; Harpøth, Anders;

    2004-01-01

    We present examples of ultra-compact photonic crystal components realized in silicon-on-insulator material. We have fabricated several different types of photonic crystal waveguide components displaying high transmission features. This includes 60° and 120° bends, different types of couplers......, and splitters. Recently, we have designed and fabricated components with more than 200 nm bandwidths. Design strategies to enhance the performance include systematic variation of design parameters using finite-difference time-domain simulations and inverse design methods such as topology optimization....

  10. A spatial kinetic model for simulating VVER-1000 start-up transient

    Energy Technology Data Exchange (ETDEWEB)

    Kashi, Samira [Department of Nuclear Engineering, Shahid Beheshti University, Tehran (Iran, Islamic Republic of); Moghaddam, Nader Maleki, E-mail: nader.moghaddam@gmail.com [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Shahriari, Majid [Department of Nuclear Engineering, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2011-06-15

    Research highlights: > A spatial kinetic model of a VVER-1000 reactor core is presented. > The reactor power is tracked using the point kinetic equations from 100 W to 612 kW. > The lumped parameter approximation is used for solving the energy balance equations. > The value of reactivity related to feedback effects of core elements is calculated. > The main neutronic parameters during the transient are calculated. - Abstract: An accurate prediction of reactor core behaviour in transients depends on how exactly the thermal feedbacks of the core elements such as fuel, cladding and coolant can be determined. In short-time transients, these feedbacks directly affect the reactor power and determine the reactor response. Such transients commonly occur during the start-up process, which makes it necessary to evaluate the details of the process carefully. Hence this research evaluates a short-time transient occurring during the start-up of a VVER-1000 reactor. The reactor power was tracked using the point kinetic equations from the HZP state (100 W) to 612 kW. The final power (612 kW) was achieved by withdrawing control rods, and the resulting excess reactivity was fed into the dynamic equations to calculate the reactor power. Since reactivity is the most important quantity in the point kinetic equations, energy balance equations were solved in different zones of the core using a Lumped Parameter (LP) approximation. After determining the temperature and the total feedback reactivity in each time step, the exact value of reactivity is obtained and inserted into the point kinetic equations. In the reactor core each zone has a specific temperature and its corresponding thermal feedback. To reduce the effects of the point kinetic approximations, these partial feedbacks in different zones are superposed to give an accurate model of the reactor core dynamics. In this manner the reactor point kinetics can be extended to the whole reactor core, which means 'Reactor spatial
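    A very reduced sketch of point kinetics coupled to a lumped temperature feedback, to illustrate the type of coupling described above (a single delayed-neutron group, explicit Euler integration, and purely illustrative coefficients rather than VVER-1000 data).

    beta, lam, Lam = 0.0065, 0.08, 2.0e-5    # delayed fraction, decay constant (1/s), generation time (s)
    alpha_T = -2.0e-5                        # lumped temperature reactivity coefficient (1/K)
    mC, h = 5.0e4, 1.0e3                     # lumped heat capacity (J/K) and heat-loss coefficient (W/K)
    rho_ext = 0.2 * beta                     # external reactivity step from rod withdrawal

    P = 100.0                                # initial power (W)
    C = P * beta / (lam * Lam)               # equilibrium precursor concentration
    T = 0.0                                  # temperature rise above the initial state (K)
    dt = 1.0e-3
    for _ in range(int(200 / dt)):           # integrate 200 s
        rho = rho_ext + alpha_T * T          # total reactivity = external step + thermal feedback
        dP = ((rho - beta) / Lam) * P + lam * C
        dC = (beta / Lam) * P - lam * C
        dT = (P - h * T) / mC
        P, C, T = P + dP * dt, C + dC * dt, T + dT * dt
    print(f"power ~ {P:.1f} W, temperature rise ~ {T:.2f} K")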

  11. Investigations of the VVER-1000 coolant transient benchmark phase 1 with the coupled code system RELAP5/PARCS

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Espinoza, Victor Hugo

    2008-07-15

    local power distribution within the core. The code's predictions are strongly influenced by the way how coolant mixing is modeled by the analyst by means of 1D-thermal hydraulic codes. Sensitivity evaluations are therefore necessary to identify the most important phenomena and assumptions affecting the numerical predictions of coupled codes. Selected results of these investigations will be presented and discussed. A comparison of the thermal hydraulic data obtained during the tests with the code's predictions will be also given. It can be stated that even though the overall trends of most plant parameters are in a reasonable agreement with the experimental data, these investigations show that multidimensional thermal hydraulic models are needed for a more realistic description of the coolant mixing phenomena within the reactor pressure vessel. Hence the subsequent Phase 2 of this V1000-CT benchmark is focused on CFD-based simulations for transient conditions typical of a main steam line break transients of VVER-1000 reactors. (orig.)

  12. Simulating the corrosion of zirconium alloys in the water coolant of VVER reactors

    Science.gov (United States)

    Kritskii, V. G.; Berezina, I. G.; Motkova, E. A.

    2013-07-01

    A model for predicting the corrosion of cladding zirconium alloys depending on their composition and operating conditions is proposed. Laws of thermodynamics and chemical kinetics of the reactions through which the multicomponent zirconium alloy is oxidized in the reactor coolant constitute the physicochemical heart of the model. The developed version of the model is verified against the results obtained from tests of fuel rod claddings made of commercial-grade and experimental zirconium alloys carried out by different researchers under autoclave and reactor conditions. It is shown that the proposed model adequately describes the corrosion of alloys in coolants used at nuclear power stations. It is determined that, owing to boiling of coolant and its acidification in a VVER-1200 reactor, Zr-1% Nb alloys with additions of iron and oxygen must be more resistant to corrosion than the commercial-grade alloy E110.

  13. Application of spectral tuning on the dynamic model of the reactor VVER 1000 support cylinder

    Directory of Open Access Journals (Sweden)

    Musil A.

    2007-10-01

    Full Text Available The paper deals with the optimization of the parameters of the dynamic model of the VVER 1000 reactor support cylinder. Within the model of the whole reactor, the support cylinder appears to be a significant subsystem, since its modal properties have a dominant influence on the behaviour of the reactor as a whole. Relative sensitivities of the eigenfrequencies to changes of the discrete parameters of the model were determined. The obtained values were applied in the subsequent spectral tuning of the selected discrete parameters. Since past calculations have shown that spectral tuning by changes of the mass parameters is not effective, the present paper demonstrates the results achieved when the set of tuning parameters is extended by the geometric parameters. Tuning itself is then formulated as an optimization problem with inequality constraints.

  14. Fracture mechanical investigation of a thermo shock scenario for a VVER-440 RPV

    Energy Technology Data Exchange (ETDEWEB)

    Altstadt, E.; Abendroth, Martin [Forschungszentrum Dresden-Rossendorf (Germany)

    2008-07-01

    The paper describes the modelling and evaluation of a pressurized thermal shock (PTS) scenario in a VVER-440 reactor pressure vessel due to an emergency cooling. An axially oriented semi-elliptical crack is assumed to be located in the core welding seam. Two variants of fracture mechanical evaluation are performed: the analysis of a sub-cladding crack and of a surface crack. Three-dimensional finite element (FE) models are used to compute the global transient temperature and stress-strain fields. By using a three-dimensional submodel, which includes the crack, the local crack stress-strain field is obtained. Within the subsequent postprocessing using the J-integral technique the stress intensity factors K{sub I} along the crack front are obtained. The FE results are compared to analytical calculations proposed in the VERLIFE code. The stress intensity factors are compared to the fracture toughness curve of the weld material. (orig.)

  15. CATHARE Multi-1D Modeling of Coolant Mixing in VVER-1000 for RIA Analysis

    Directory of Open Access Journals (Sweden)

    I. Spasov

    2010-01-01

    Full Text Available The paper presents validation results for multichannel vessel thermal-hydraulic models in CATHARE used in coupled 3D neutronic/thermal hydraulic calculations. The mixing is modeled with cross flows governed by local pressure drops. The test cases are from the OECD VVER-1000 coolant transient benchmark (V1000CT and include asymmetric vessel flow transients and main steam line break (MSLB transients. Plant data from flow mixing experiments are available for comparison. Sufficient mesh refinement with up to 24 sectors in the vessel is considered for acceptable resolution. The results demonstrate the applicability of such validated thermal-hydraulic models to MSLB scenarios involving thermal mixing, azimuthal flow rotation, and primary pump trip. An acceptable trade-off between accuracy and computational efficiency can be obtained.

  16. Neutron Dosimetry in Edf Experimental Surveillance Programme for VVER-440 Nuclear Power Plants

    Science.gov (United States)

    Brumovsky, Milan; Erben, Oldrich; Zerola, Ladislav; Hogel, Josef; Massoud, Jean-Paul; Trollat, Christophe

    2003-06-01

    Fourteen chains containing experimental surveillance material specimens of VVER 440/213 nuclear power reactor pressure vessels were irradiated in the surveillance channels of the Dukovany Nuclear Power Plant in the Czech Republic. The irradiation periods were one, two or three cycles. For the evaluation of the absolute fluence values, account was taken of the time history of the reactor power, of local changes of the neutron flux along the reactor core height, and of correction factors due to the orientation of the monitors with respect to the reactor core centre. Neutron fluence values above 0.5 MeV and above 1.0 MeV in the container axis at the axial positions of the sample centres, and fluence values in the geometric centre of the samples, were calculated making use of the exponential attenuation model of the incident neutron beam.

  17. IMPROVED MODELS AND METHOD OF POWER CHANGE OF NPP UNIT WITH VVER-1000

    Directory of Open Access Journals (Sweden)

    Tymur Foshch

    2017-05-01

    Full Text Available This study presents an improved mathematical and simulation multi-zone model of the VVER-1000, allocated in space, which differs from the known ones. It allows the energy release from fission of 235U nuclei as well as of 239Pu to be taken into account. Moreover, the model includes sub-models of the simultaneous control action of the boric acid concentration in the coolant of the primary circuit and of the position of the 9th group of control rods, which allows it to be considered a model with distributed parameters and also allows changes in the mentioned technological parameters to be monitored by reactor core symmetry sector, by layer along the reactor core height, and by fuel assembly group within each symmetry sector. In addition, the model allows important process-dependent parameters of the reactor to be calculated, including the axial offset as a quantitative measure of its safety. The improvements of the mathematical and simulation models make it possible to take into account intrinsic properties of the reactor core (including xenon transients) and thus to reduce the error in modelling the static and dynamic properties of the reactor. An automated control method for power changes of an NPP unit with VVER-1000 is proposed for the first time. It uses three control loops: the first maintains the scheduled change of reactor power by regulating the concentration of boric acid in the coolant, the second keeps the required value of the axial offset by changing the position of the control rods, and the third holds the coolant temperature regime constant by regulating the position of the main turbine generator valves. On the basis of this method, two control programs were improved: the first implements a constant coolant temperature in the primary circuit, and the second implements a constant steam pressure in the secondary circuit.

  18. A four-component organogel based on orthogonal chemical interactions.

    Science.gov (United States)

    Luisier, Nicolas; Schenk, Kurt; Severin, Kay

    2014-09-14

    A thermoresponsive organogel was obtained by orthogonal assembly of four compounds using dynamic covalent boronate ester and imine bonds, as well as dative boron-nitrogen bonds. It is shown that the gel state can be disrupted or reinforced by chemicals which undergo exchange reactions with the gel components.

  19. Industrial Component-based Sample Mobile Robot System

    Directory of Open Access Journals (Sweden)

    Péter Kucsera

    2007-12-01

    Full Text Available The mobile robot development can be done in two different ways. The first is to build up an embedded system, the second is to use 'ready to use' industrial components. With the spread of industrial mobile robots there are more and more components on the market which can be used to build up the whole control and sensor system of a mobile robot platform. Using these components, electrical hardware development is not needed, which speeds up the development time and decreases the cost. Using a PLC on board, 'only' constructing the program is needed and the developer can concentrate on the algorithms, not on developing hardware. My idea is to solve the problem of mobile robot localization and obstacle avoidance using industrial components and to focus this topic on mobile robot docking. In factories, mobile robots can be used to deliver parts from one place to another, but there are always two critical points. The robot has to be able to operate in a human environment, and also to reach the target and get to a predefined position where another system can load it or take the delivered product. I would like to construct a mechanically simple robot model which can calculate its position from the rotation of its wheels, and when it reaches a predefined location, with the aid of an image processing system it can dock to an electrical connector. If the robot succeeds, it could also charge its batteries through this connector.
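    A toy dead-reckoning routine of the kind the abstract refers to: the robot pose is updated from incremental wheel rotations of a differential drive. The wheel radius, track width and encoder increments are made-up values, not part of the described system.

    import math

    WHEEL_RADIUS = 0.05      # m
    TRACK_WIDTH = 0.30       # m, distance between the two drive wheels

    def update_pose(x, y, theta, d_phi_left, d_phi_right):
        """Advance the pose (x, y, theta) by one pair of wheel rotation increments [rad]."""
        d_left = WHEEL_RADIUS * d_phi_left
        d_right = WHEEL_RADIUS * d_phi_right
        d_center = 0.5 * (d_left + d_right)          # distance travelled by the robot centre
        d_theta = (d_right - d_left) / TRACK_WIDTH   # change of heading
        x += d_center * math.cos(theta + 0.5 * d_theta)
        y += d_center * math.sin(theta + 0.5 * d_theta)
        return x, y, theta + d_theta

    pose = (0.0, 0.0, 0.0)
    for _ in range(100):                             # constant slight turn to the left
        pose = update_pose(*pose, 0.10, 0.12)
    print(pose)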

  20. A Component-Based Dataflow Framework for Simulation and Visualization

    NARCIS (Netherlands)

    Telea, Alexandru

    1999-01-01

    Reuse in the context of scientific simulation applications has mostly taken the form of procedural or object-oriented libraries. End users of such systems are however often non software experts needing very simple, possibly interactive ways to build applications from domain-specific components and t

  1. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Science.gov (United States)

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  2. Applying full multigroup cell characteristics from MCU code to finite difference calculations of neutron field in VVER core

    Energy Technology Data Exchange (ETDEWEB)

    Gorodkov, S.S.; Kalugin, M.A. [Nuclear Research Centre ' ' Kurchatov Institute' ' , Moscow (Russian Federation)

    2015-09-15

    Up to now, core calculations with Monte Carlo provided only average cross-sections of mesh cells for further use either in finite difference calculations or as benchmarks for approximate spectral algorithms. Now the MCU code is capable of handling functions which may be interpreted as average diffusion coefficients. Subsequently, the results of finite difference calculations using cell characteristic sets obtained in this way can be compared with Monte Carlo results as benchmarks, giving reliable information on the quality of the production code under consideration. As an example of such an analysis, the results of mesh calculations with 1, 2, 4, 8 and 12 neutron groups for a model VVER fuel assembly are presented in comparison with the exact Monte Carlo solution. As a second example, an analysis is presented of an approximate enlargement of the water gap between fuel assemblies, allowing the VVER core region to be covered by a regular mesh.

  3. Calculations of 3D full-scale VVER fuel assembly and core models using MCU and BIPR-7A codes

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Bikeev, Artem S.; Bolshagin, Sergey N.; Kalugin, Mikhail A.; Kosourov, Evgeniy K.; Pavlovichev, Aleksandr M.; Pryanichnikov, Aleksandr V.; Sukhino-Khomenko, Evgenia A.; Shcherenko, Anna I.; Shcherenko, Anastasia I.; Shkarovskiy, Denis A. [Nuclear Research Centre ' ' Kurchatov Institute' ' , Moscow (Russian Federation)

    2015-09-15

    Two types of calculations were made to compare BIPR-7A and MCU results for 3D full-scale models. First, the EPS (emergency protection system) efficiency and in-core power distributions were analyzed for an equilibrium fuel load of a VVER-1000 assuming its operation within an 18-month cycle. Computations were performed without feedbacks and with fuel burnup distributed over the core. Then 3D infinite lattices of full-scale VVER-1000 fuel assemblies with uranium fuel of 4.4% enrichment and uranium-erbium fuel of 4.4% enrichment and 1 wt.% Er{sub 2}O{sub 3} were considered. Computations were performed with feedbacks and fuel burnup at constant power level. For different time moments the effective multiplication factor and power distribution were obtained. EPS efficiency and reactivity effects at chosen time moments were analyzed.

  4. Changes to Irradiation Conditions of VVER-1000 Surveillance Specimens Resulting from Fuel Assemblies with Greater Fuel Height

    Directory of Open Access Journals (Sweden)

    Panferov Pavel

    2016-01-01

    Full Text Available The goal of the work was to obtain experimental data on the influence of new-type fuel assemblies with higher fuel rods on the irradiation conditions of surveillance specimens installed on the baffle of VVER-1000. For this purpose, two surveillance sets with container assemblies of the same design, irradiated in reactors with different fuel assemblies in the core, were investigated. Measurements of neutron dosimeters from these sets and retrospective measurements of 54Mn activity accumulated in each irradiated specimen allow a detailed distribution of the fast neutron flux in the containers to be obtained. Neutron calculations have been done using the 3D discrete ordinate code KATRIN. On the basis of the obtained results, the change of the lead factor due to new-type fuel assemblies was evaluated for all types of VVER-1000 container assemblies.

  5. Changes to Irradiation Conditions of VVER-1000 Surveillance Specimens Resulting from Fuel Assemblies with Greater Fuel Height

    Science.gov (United States)

    Panferov, Pavel; Kochkin, Viacheslav; Erak, Dmitry; Makhotin, Denis; Reshetnikov, Alexandr; Timofeev, Andrey

    2016-02-01

    The goal of the work was to obtain experimental data on the influence of new-type fuel assemblies with higher fuel rods on the irradiation conditions of surveillance specimens installed on the baffle of VVER-1000. For this purpose, two surveillance sets with container assemblies of the same design, irradiated in reactors with different fuel assemblies in the core, were investigated. Measurements of neutron dosimeters from these sets and retrospective measurements of 54Mn activity accumulated in each irradiated specimen allow a detailed distribution of the fast neutron flux in the containers to be obtained. Neutron calculations have been done using the 3D discrete ordinate code KATRIN. On the basis of the obtained results, the change of the lead factor due to new-type fuel assemblies was evaluated for all types of VVER-1000 container assemblies.

  6. Action-based distribution functions for spheroidal galaxy components

    CERN Document Server

    Posti, Lorenzo; Nipoti, Carlo; Ciotti, Luca

    2014-01-01

    We present an approach to the design of distribution functions that depend on the phase-space coordinates through the action integrals. The approach makes it easy to construct a dynamical model of a given stellar component. We illustrate the approach by deriving distribution functions that self-consistently generate several popular stellar systems, including the Hernquist, Jaffe, Navarro, Frenk and White models. We focus on non-rotating spherical systems, but extension to flattened and rotating systems is trivial. Our distribution functions are easily added to each other and to previously published distribution functions for discs to create self-consistent multi-component galaxies. The models this approach makes possible should prove valuable both for the interpretation of observational data and for exploring the non-equilibrium dynamics of galaxies via N-body simulation.

  7. A New Image Steganography Based On First Component Alteration Technique

    Directory of Open Access Journals (Sweden)

    Amanpreet Kaur

    2009-12-01

    Full Text Available In this paper, a new image steganography scheme is proposed which is a kind of spatial domain technique. In order to hide secret data in a cover-image, the first component alteration technique is used. Techniques used so far focus only on two or four bits of a pixel in an image (at most five bits at the edge of an image), which results in a lower peak signal-to-noise ratio and a high root mean square error. In this technique, the 8 bits of the blue component of pixels are replaced with secret data bits. The proposed scheme can embed more data than previous schemes and shows better image quality. To prove this scheme, several experiments are performed, and the experimental results are compared with related previous works. Keywords: image; mean square error; peak signal-to-noise ratio; steganography.
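
    As a minimal illustration of the first-component-alteration idea summarized above (the blue byte of successive pixels carrying the secret data), the sketch below embeds and extracts a message in a NumPy RGB array; the helper names, the random cover image and the explicit length argument are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def embed_blue(cover_rgb, secret: bytes):
    """Replace the blue byte of successive pixels with secret data bytes."""
    stego = cover_rgb.copy()
    flat_blue = stego[..., 2].reshape(-1)            # blue channel, row-major
    payload = np.frombuffer(secret, dtype=np.uint8)
    if payload.size > flat_blue.size:
        raise ValueError("secret too large for this cover image")
    flat_blue[:payload.size] = payload                # 8 secret bits per pixel
    stego[..., 2] = flat_blue.reshape(stego[..., 2].shape)
    return stego

def extract_blue(stego_rgb, length: int) -> bytes:
    """Read back `length` bytes from the blue channel."""
    return stego_rgb[..., 2].reshape(-1)[:length].astype(np.uint8).tobytes()

# Round trip on a random 64x64 RGB "image".
cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
msg = b"hidden message"
stego = embed_blue(cover, msg)
assert extract_blue(stego, len(msg)) == msg
```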

  8. A two-component NZRI metamaterial based rectangular cloak

    Science.gov (United States)

    Islam, Sikder Sunbeam; Faruque, Mohammd Rashed Iqbal; Islam, Mohammad Tariqul

    2015-10-01

    A new two-component, near zero refractive index (NZRI) metamaterial is presented for electromagnetic rectangular cloaking operation in the microwave range. In the basic design, a pi-shaped metamaterial was developed and its characteristics were investigated for wave propagation through the material along the two major axes (x and z). For z-axis wave propagation it shows more than 2 GHz bandwidth and for x-axis wave propagation it exhibits more than 1 GHz bandwidth of NZRI property. The metamaterial was then utilized in designing a rectangular cloak where a metal cylinder was cloaked perfectly in the C-band area of the microwave regime. Experimental results are provided for the metamaterial and the cloak and compared with the simulated results. This is a novel and promising design for its two-component NZRI characteristics and rectangular cloaking operation in the electromagnetic paradigm.

  9. A New Image Steganography Based On First Component Alteration Technique

    CERN Document Server

    Kaur, Amanpreet; Sikka, Geeta

    2010-01-01

    In this paper, a new image steganography scheme is proposed which is a kind of spatial domain technique. In order to hide secret data in a cover-image, the first component alteration technique is used. Techniques used so far focus only on two or four bits of a pixel in an image (at most five bits at the edge of an image), which results in a lower peak signal-to-noise ratio and a high root mean square error. In this technique, the 8 bits of the blue component of pixels are replaced with secret data bits. The proposed scheme can embed more data than previous schemes and shows better image quality. To prove this scheme, several experiments are performed, and the experimental results are compared with related previous works.

  10. QUALITY CONTROL OF SEMICONDUCTOR PACKAGING BASED ON PRINCIPAL COMPONENTS ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Five critical quality characteristics must be controlled in the surface mount and wire-bond process in semiconductor packaging, and these characteristics are correlated with each other. Therefore, principal components analysis (PCA) is first used in the analysis of the sample data, and the process is then controlled with a Hotelling T2 control chart for the first several principal components, which contain sufficient information. Furthermore, a software tool is developed for this kind of problem. With sample data from a surface mounting device (SMD) process, it is demonstrated that the T2 control chart with PCA reaches the same conclusion as without PCA, but the problem is transformed from a high-dimensional one to a lower-dimensional one, i.e., from 5 to 2 in this demonstration.
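
    A minimal sketch of the monitoring scheme described above, assuming synthetic in-control data: PCA is fitted to reference samples of the five correlated characteristics and a Hotelling T2 statistic is tracked on the two leading components; the control limit shown is illustrative rather than derived from the F distribution as a production chart would require.

```python
import numpy as np

rng = np.random.default_rng(0)

# In-control reference data: 200 samples of 5 correlated quality characteristics.
ref = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

# Fit PCA on the reference data and keep the first k components.
mean = ref.mean(axis=0)
cov = np.cov(ref - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
k = 2
load = eigvec[:, order[:k]]            # loading vectors of the k leading PCs
var_k = eigval[order[:k]]              # their variances

def t2(sample):
    """Hotelling T2 of one 5-dimensional sample in the k-dimensional PC space."""
    score = (sample - mean) @ load
    return float(np.sum(score ** 2 / var_k))

# Monitor new samples against an (illustrative) upper control limit.
ucl = 10.6   # would normally come from the chi-square/F distribution
for x in rng.normal(size=(5, 5)) @ rng.normal(size=(5, 5)):
    stat = t2(x)
    print(round(stat, 2), "out of control" if stat > ucl else "in control")
```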

  11. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet fidelity requirements applicable to a cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented into Kozloduy 6 full-scope replica control room simulator.

  12. SANS response of VVER440-type weld material after neutron irradiation, post-irradiation annealing and reirradiation

    OpenAIRE

    Ulbricht, Andreas; Bergner, Frank; Boehmert, Juergen; Valo, Matti; Mathon, Marie-Helene; Heinemann, Andre

    2007-01-01

    Abstract It is well accepted that the reirradiation behaviour of reactor pressure vessel (RPV) steel after annealing can be different from the original irradiation behaviour. We present the first small-angle neutron scattering (SANS) study of neutron irradiated, annealed and reirradiated VVER440-type RPV weld material. The SANS results are analysed both in terms of the size distribution of irradiation-induced defect/solute atom clusters and in terms of the ratio of total and nuclea...

  13. An approach to software development based on heterogeneous component reuse and its supporting system

    Institute of Scientific and Technical Information of China (English)

    杨芙清; 梅宏; 吴穹; 朱冰

    1997-01-01

    Software reuse is considered a practical approach to solving the software crisis. BD-HCRUS, a software development supporting system based on heterogeneous component reuse, is introduced. The system has a reusable component library as its kernel, in charge of the organization, storage and retrieval of the heterogeneous components; an object-oriented integrated language for the specification and composition of the heterogeneous components; and program comprehension tools for reverse-engineering and extracting reusable components from source code and then re-engineering the components. Systematic support is therefore provided for the acquisition, specification, organization, storage, retrieval and composition of reusable components.

  14. Investigation of a Coolant Mixing Phenomena within the Reactor Pressure Vessel of a VVER-1000 Reactor with Different Simulation Tools

    Directory of Open Access Journals (Sweden)

    V. Sánchez

    2010-01-01

    Full Text Available The Institute of Neutron Physics and Reactor Technology (INR) is involved in the qualification of coupled codes for reactor safety evaluations, aiming to improve their prediction capability and acceptability. In the frame of the VVER-1000 Coolant Transient Benchmark Phase 1, RELAP5/PARCS has been extensively assessed. Phase 2 of this benchmark was focused on both multidimensional thermal hydraulic phenomena and core physics. Plant data will be used to qualify the 3D models of TRACE and RELAP5/CFX, which were coupled for this purpose. The developed multidimensional models of the VVER-1000 reactor pressure vessel (RPV) as well as the performed calculations will be described in detail. The predicted results are in good agreement with experimental data. It was demonstrated that the chosen 3D nodalization of the RPV is adequate for the description of the coolant mixing phenomena in a VVER-1000 reactor. Even though only a 3D coarse nodalization is used in TRACE, the integral results are comparable to those obtained by RELAP5/CFX.

  15. The Procedure for Determination of Special Margin Factors to Account for a Bow of the VVER-1000 Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Tsyganov, Sergey V.; Marin, Stanislav V.; Shishkov, Lev K. [Russian Research Center ' Kurchatov Institute' , 1., Kurchatov sq., 123182 Moscow (Russian Federation)

    2008-07-01

    Starting from the 1980s, the problem of bow of the VVER-1000 reactor FAs and its effect on operational safety has been discussed. In the initial period, the extension of the drop time of the control rods of the control and protection system associated with this bow posed the highest threat. Later on, new, more rigid structures were developed for FAs that eliminated the control rod problems. However, bow of the VVER-1000 reactor FAs is still observed. The scale of this bow has been reduced significantly, but it still affects safety. Even a minor bow may lead to a noticeable increase in the power of individual fuel pins associated with the local variation of the coolant amount. This effect must be taken into account when designing fuel loadings to avoid exceeding the set limitations. The introduction of additional special margins is the standard method for taking this effect into account. The present paper describes a conservative technique for the assessment of additional margins for bow of FAs of state-of-the-art designs. This technique is employed in VVER-1000 reactor design. The chosen degree of conservatism is discussed, as well as the method for its assurance and acceptable ways of relaxing it. An example of the margin evaluation for an up-to-date fuel loading is given. (authors)

  16. Authentication Scheme Based on Principal Component Analysis for Satellite Images

    Directory of Open Access Journals (Sweden)

    Ashraf. K. Helmy

    2009-09-01

    Full Text Available This paper presents a multi-band wavelet image content authentication scheme for satellite images by incorporating principal component analysis (PCA). The proposed scheme achieves higher perceptual transparency and stronger robustness. Specifically, the developed watermarking scheme can successfully resist common signal processing such as JPEG compression and geometric distortions such as cropping. In addition, the proposed scheme can be parameterized, thus resulting in more security; that is, an attacker may not be able to extract the embedded watermark without knowing the parameter. In order to meet these requirements, the host image is transformed to YIQ to decrease the correlation between different bands. Then a multi-band wavelet transform (M-WT) is applied to each channel separately, obtaining one approximate sub-band and fifteen detail sub-bands. PCA is then applied to the coefficients corresponding to the same spatial location in all detail sub-bands. The last principal component band represents an excellent domain for inserting the watermark since it represents the lowest correlated features in the high frequency area of the host image. One of the most important aspects of satellite images is the spectral signature, the behavior of different features in different spectral bands; the results of the proposed algorithm show that the spectral stamp of different features is not tainted after inserting the watermark.

  17. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  18. Teacher Perceptions Regarding Portfolio-Based Components of Teacher Evaluations

    Science.gov (United States)

    Nagel, Charles I.

    2012-01-01

    This study reports the results of teachers' and principals' perceptions of the package evaluation process, a process that uses a combination of a traditional evaluation with a portfolio-based assessment tool. In addition, this study contributes to the educational knowledge base by exploring the participants' views on the impact of…

  19. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  20. Face Detection Using Adaboosted SVM-Based Component Classifier

    CERN Document Server

    Valiollahzadeh, Seyyed Majid; Nazari, Mohammad

    2008-01-01

    Recently, Adaboost has been widely used to improve the accuracy of any given learning algorithm. In this paper we focus on designing an algorithm that employs a combination of Adaboost with Support Vector Machines as weak component classifiers to be used in the face detection task. To obtain a set of effective SVM weak-learner classifiers, this algorithm adaptively adjusts the kernel parameter in SVM instead of using a fixed one. The proposed combination outperforms SVM in generalization on imbalanced classification problems. The proposed method is compared, in terms of classification accuracy, to other commonly used Adaboost methods, such as Decision Trees and Neural Networks, on the CMU+MIT face database. Results indicate that the performance of the proposed method is overall superior to previous Adaboost approaches.
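
    A rough sketch of the idea, using scikit-learn's AdaBoostClassifier with an RBF SVC as the weak component classifier on synthetic, mildly imbalanced data; unlike the paper, the kernel parameter is kept fixed here, and the `estimator=` keyword assumes a recent scikit-learn release (older versions call it `base_estimator=`).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, mildly imbalanced two-class data as a stand-in for face/non-face patches.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF SVMs as weak component classifiers; the SAMME variant only needs class
# predictions, so probability estimates are not required from the SVC.
ada_svm = AdaBoostClassifier(estimator=SVC(kernel="rbf", gamma=0.05, C=1.0),
                             n_estimators=10, algorithm="SAMME", random_state=0)
ada_svm.fit(X_tr, y_tr)
print("test accuracy:", ada_svm.score(X_te, y_te))
```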

  1. Action-based distribution functions for spheroidal galaxy components

    Science.gov (United States)

    Posti, Lorenzo; Binney, James; Nipoti, Carlo; Ciotti, Luca

    2015-03-01

    We present an approach to the design of distribution functions that depend on the phase-space coordinates through the action integrals. The approach makes it easy to construct a dynamical model of a given stellar component. We illustrate the approach by deriving distribution functions that self-consistently generate several popular stellar systems, including the Hernquist, Jaffe, and Navarro, Frenk and White models. We focus on non-rotating spherical systems, but extension to flattened and rotating systems is trivial. Our distribution functions are easily added to each other and to previously published distribution functions for discs to create self-consistent multicomponent galaxies. The models this approach makes possible should prove valuable both for the interpretation of observational data and for exploring the non-equilibrium dynamics of galaxies via N-body simulations.

  2. Component Thermodynamical Selection Based Gene Expression Programming for Function Finding

    Directory of Open Access Journals (Sweden)

    Zhaolu Guo

    2014-01-01

    Full Text Available Gene expression programming (GEP), an improved form of genetic programming (GP), has become a popular tool for data mining. However, like other evolutionary algorithms, it tends to suffer from premature convergence and a slow convergence rate when solving complex problems. In this paper, we propose an enhanced GEP algorithm, called CTSGEP, which is inspired by the principle of minimal free energy in thermodynamics. CTSGEP employs a component thermodynamical selection (CTS) operator to quantitatively keep a balance between the selective pressure and the population diversity during the evolution process. Experiments are conducted on several benchmark datasets from the UCI machine learning repository. The results show that the performance of CTSGEP is better than the conventional GEP and some GEP variations.

  3. Feature-Based TAG in place of multi-component adjunction Computational Implications

    CERN Document Server

    Hockey, B A

    1994-01-01

    Using feature-based Tree Adjoining Grammar (TAG), this paper presents linguistically motivated analyses of constructions claimed to require multi-component adjunction. These feature-based TAG analyses permit parsing of these constructions using an existing unification-based Earley-style TAG parser, thus obviating the need for a multi-component TAG parser without sacrificing linguistic coverage for English.

  4. Empirical Evaluation of Fuzzy Synthetic Based Framework for Multifaceted Component Classification and Selection

    Directory of Open Access Journals (Sweden)

    Vinay

    2014-03-01

    Full Text Available Component Based Software Engineering (CBSE) provides an approach to developing high quality software systems at lower cost by using fresh and existing software components. The quality of the software system is based on the quality of the individual software components integrated. The application developer wants good, or the fittest, components to assemble in order to improve the quality of the software product, and specifies the criteria and requirements of the software system to use in selecting the fit components. Component classification and selection is a practical problem and requires complete and predictable input information, which is missing due to uncertainty in judgment and imprecision in calculations. Hence, component fitness evaluation, classification and selection are critical, multi-faceted, fuzzy and vague problems. Many component selection approaches exist, but these lack repeatable, usable, flexible, multi-faceted and automated processes for component selection and filtration, and do not fulfil the objectives of the software industry in terms of cost, quality and precision. So there is a pressing need to devise an intelligent approach for multifaceted component fitness evaluation, classification and selection. In this study, a fuzzy synthetic based approach is proposed for multi-criteria fitness evaluation, classification and selection of software components. For validation of the proposed framework, fifteen black box components of calculators are used. The framework helps the application developer in selecting fit, high quality components, reduces cost and enhances the quality and productivity of software systems.

  5. Component alignment and functional outcome following computer assisted and jig based total knee arthroplasty

    Directory of Open Access Journals (Sweden)

    Dnyanesh G Lad

    2013-01-01

    Conclusions: A significantly improved placement of the tibial component in the coronal and sagittal planes was found with CAS. The placement of the components in the other planes was comparable with the values recorded in the jig-based surgery group. Functional outcome was not significantly different.

  6. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    Directory of Open Access Journals (Sweden)

    Suwicha Jirayucharoensak

    2014-01-01

    Full Text Available Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of the EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance compared to SVM and naive Bayes classifiers.
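
    The following is a minimal sketch of the preprocessing stage only, assuming random stand-in data: PCA reduces the PSD feature vectors and a simple re-standardization of the test-session principal components to the training statistics stands in for the paper's covariate shift adaptation; the deep network itself is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Stand-in data: power spectral density features for 32 EEG channels x 5 bands.
train_psd = rng.normal(size=(400, 160))            # training sessions
test_psd = rng.normal(loc=0.3, size=(100, 160))    # new session, shifted distribution

# 1) PCA keeps the leading components of the highly correlated PSD features.
pca = PCA(n_components=50).fit(train_psd)
z_train = pca.transform(train_psd)
z_test = pca.transform(test_psd)

# 2) Simple covariate shift adaptation: re-standardize each principal component
#    of the test session to the statistics seen during training.
z_test_adapted = (z_test - z_test.mean(axis=0)) / z_test.std(axis=0)
z_test_adapted = z_test_adapted * z_train.std(axis=0) + z_train.mean(axis=0)

print(z_train.shape, z_test_adapted.shape)  # features that would feed the SAE/DLN
```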

  7. Using problem-based learning in web-based components of nurse education.

    Science.gov (United States)

    Crawford, Tonia R

    2011-03-01

    Problem-based learning (PBL) is a student-centred method of teaching, and is initiated by introducing a clinical problem through which learning is fostered by active inquisition (Tavakol and Reicherter, 2003). Using this teaching and learning strategy for web-based environments is examined from the literature for potential implementation in a Bachelor of Nursing program. In view of the evidence, students accessing online nursing subjects would seem to benefit from web-based PBL as it provides flexibility, opportunities for discussion and co-participation, encourages student autonomy, and allows construction of meaning as the problems mirror the real world. PBL also promotes critical thinking and transfer of theory to practice. It is recommended that some components of practice-based subjects such as Clinical Practice or Community Health Nursing, could be implemented online using a PBL format, which should also include a discussion forum to enable group work for problem-solving activities, and tutor facilitation.

  8. Research on the Component-based Software Architecture

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer software has been becoming more and more complex with the development of hardware. Thus, how to efficiently develop extensible, maintainable and adaptable software has become an urgent problem. The component-based software development technique is a better method to solve the problem. In this paper, we first discuss the concept, description method and some familiar styles of software architecture, and then analyze the merits of using the software architecture to guide the software developm...

  9. A Component-Based Software Configuration Management Model and Its Supporting System

    Institute of Scientific and Technical Information of China (English)

    梅宏; 张路; 杨芙清

    2002-01-01

    Software configuration management (SCM) is an important key technology in software development. Component-based software development (CBSD) is an emerging paradigm in software development. However, to apply CBSD effectively in real-world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD are analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents in a system, are managed as the basic configuration items in SCM, and the relationships between/among components are defined and maintained. Based on this model, a configuration management system is implemented.

  10. CURRENT USAGE OF COMPONENT BASED PRINCIPLES FOR DEVELOPING WEB APPLICATIONS WITH FRAMEWORKS: A LITERATURE REVIEW

    OpenAIRE

    Matija Novak; Ivan Švogor

    2016-01-01

    Component based software development has become a very popular paradigm in many software engineering branches. In the early phase of Web 2.0's appearance, it was also popular for web application development. From the analyzed papers, between that period and today, the use of component based techniques for web application development somewhat slowed down; however, recent developments indicate a comeback, most apparent in W3C's web components working group. In this article we wa...

  11. Fast Neutron Transport in the Biological Shielding Model and Other Regions of the VVER-1000 Mock-Up on the LR-0 Research Reactor

    Science.gov (United States)

    Košťál, Michal; Milčák, Ján; Cvachovec, František; Jánský, Bohumil; Rypar, Vojtěch; Juříček, Vlastimil; Novák, Evžen; Egorov, Alexander; Zaritskiy, Sergey

    2016-02-01

    A set of benchmark experiments was carried out in the full scale VVER-1000 mock-up on the reactor LR-0 in order to validate neutron transport calculation methodologies and to perform the optimization of the shape and locations of neutron flux operation monitors channels inside the shielding of the new VVER-1000 type reactors. Compared with previous experiments on the VVER-1000 mock-up on the reactor LR-0, the fast neutron spectra were measured in the extended neutron energy interval (0.1-10 MeV) and new calculations were carried out with the MCNPX code using various nuclear data libraries (ENDF/B VII.0, JEFF 3.1, JENDL 3.3, JENDL 4, ROSFOND 2009, and CENDL 3.1). Measurements and calculations were carried out at different points in the mock-up. The calculation and experimental data are compared.

  12. Fast Neutron Transport in the Biological Shielding Model and Other Regions of the VVER-1000 Mock-Up on the LR-0 Research Reactor

    Directory of Open Access Journals (Sweden)

    Košťál Michal

    2016-01-01

    Full Text Available A set of benchmark experiments was carried out in the full scale VVER-1000 mock-up on the reactor LR-0 in order to validate neutron transport calculation methodologies and to perform the optimization of the shape and locations of neutron flux operation monitors channels inside the shielding of the new VVER-1000 type reactors. Compared with previous experiments on the VVER-1000 mock-up on the reactor LR-0, the fast neutron spectra were measured in the extended neutron energy interval (0.1–10 MeV) and new calculations were carried out with the MCNPX code using various nuclear data libraries (ENDF/B VII.0, JEFF 3.1, JENDL 3.3, JENDL 4, ROSFOND 2009, and CENDL 3.1). Measurements and calculations were carried out at different points in the mock-up. The calculation and experimental data are compared.

  13. Qualification of coupled 3D neutron kinetic/thermal hydraulic code systems by the calculation of a VVER-440 benchmark. Re-connection of an isolated loop

    Energy Technology Data Exchange (ETDEWEB)

    Kotsarev, Alexander; Lizorkin, Mikhail [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation); Bencik, Marek; Hadek, Jan [UJV Rez, a.s., Rez (Czech Republic); Kozmenkov, Yaroslav; Kliem, Soeren [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany)

    2016-09-15

    The 7th AER dynamic benchmark is a continuation of the efforts to validate the codes systematically for the estimation of the transient behavior of VVER type nuclear power plants. The main part of the benchmark is the simulation of the re-connection of an isolated circulation loop with low temperature in a VVER-440 plant. This benchmark was calculated by the National Research Centre ''Kurchatov Institute'' (with the code ATHLET/BIPR-VVER), UJV Rez (with the code RELAP5-3D {sup copyright}) and HZDR (with the code DYN3D/ATHLET). The paper gives an overview of the behavior of the main thermal hydraulic and neutron kinetic parameters in the provided solutions.

  14. Knowledge Based Components of Expertise in Medical Diagnosis.

    Science.gov (United States)

    1981-09-01

    PAPVC. In fact, one variant of PAPVC, 'scimitar syndrome', derives its name from its presentation of such a finding on x-ray (Lucas & Schmidt, 1977)... just sort of guessing right now. I would say just Scimitar Syndrome (PAPVC), primarily based on the chest x-ray and ah, I'm not really sure whether...

  15. Environmental and genetic effects on pigment-based vs. structural component of yellow feather colouration.

    Directory of Open Access Journals (Sweden)

    Jana Matrková

    Full Text Available BACKGROUND: Carotenoid plumage is of widespread use in bird communication. Carotenoid-based feather colouration has recently been shown to be dependent on both pigment concentration and feather structure. If these two components are determined differently, one plumage patch may potentially convey different aspects of individual quality. METHODOLOGY/PRINCIPAL FINDINGS: We evaluated the effects of genetic and environmental factors on carotenoid-based yellow breast colouration of Great Tit (Parus major) nestlings. By partial cross-fostering, we separated the genetic and pre-natal vs. post-natal parental effects on both the structural and the pigment-based component of carotenoid-based plumage colouration. We also simultaneously manipulated the post-hatching environment by brood size manipulation. The structural component of nestling colouration reflected features of female colouration. On the other hand, the pigment-based component was more affected by rearing conditions, presumably representing food quality. While the structural component was related to both origin- and environment-related factors, the pigment-based component seemed to be environment-dependent only. These results support the notion that pigment-based and structural components of feather colouration are determined differently. CONCLUSIONS/SIGNIFICANCE: Chromatic and achromatic components of carotenoid-based feather colouration reflected different aspects of individual quality and history, and thus may potentially form a multicomponent signal.

  16. High performance coated board inspection system based on commercial components

    CERN Document Server

    Barjaktarovic, M; Radunovic, J

    2007-01-01

    This paper presents a vision system for defect (fault) detection on a coated board, developed using three industrial firewire cameras and a PC. The application for image processing and system control was realized with the LabView software package. The software for defect detection is based on a variation of an image segmentation algorithm; standard steps in image segmentation are modified to match the characteristics of the defects. Software optimization was accomplished using the SIMD (Single Instruction Multiple Data) technology available in Intel Pentium 4 processors, which provided real-time inspection capability. The system provides benefits such as improvement of the production process, higher quality of the delivered coated board and reduction of waste. This was proven during successful exploitation of the system for more than a year.

  17. Phase Change-based Fixturing for Arbitrarily Shaped Components

    Institute of Scientific and Technical Information of China (English)

    LI Bei-zhi; YANG Jian-guo; ZHOU Li-bing; XIANG Qian

    2002-01-01

    Issues in the industrialization of RFPE (Reference Free Part Encapsulation) are discussed in this paper. A new method, the adaptable location system (ATLS), is presented. ATLS consists of an array of pins which are controlled manually or automatically by an actuator; the actuation force comes from a shape memory alloy (SMA). The material properties of the filler are very important for RFPE. Experiments have shown that machining error can be reduced by using conservative cutting parameters. Based on finite element analysis, the relationship between the deformation of the workpiece, the filler and the machining parameters has been obtained. A new approach, partial cage with active side wall (PCASW), allows machine tools to easily access any feature of the workpiece from different directions, and is convenient for every new setup.

  18. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    Science.gov (United States)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve the reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
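
    As a simplified stand-in for the reconstruction step (not the paper's iterative-threshold algorithm), the sketch below recovers a reflectance spectrum from camera responses by least squares in a PCA subspace of training spectra; the sensitivity matrix, band counts and training data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 31-band reflectance spectra and a 6-channel camera.
train_spectra = np.clip(rng.normal(0.5, 0.2, size=(200, 31)), 0, 1)
sensitivity = np.abs(rng.normal(size=(6, 31)))           # channel responsivities

# PCA basis of the training spectra (the "principal component space").
mean_spec = train_spectra.mean(axis=0)
u, s, vt = np.linalg.svd(train_spectra - mean_spec, full_matrices=False)
basis = vt[:6]                                            # 6 leading components

def reconstruct(response):
    """Recover a spectrum from camera responses by least squares in the PCA subspace."""
    a = sensitivity @ basis.T                             # responses of the basis vectors
    coeff, *_ = np.linalg.lstsq(a, response - sensitivity @ mean_spec, rcond=None)
    return mean_spec + basis.T @ coeff

true = train_spectra[0]
print(np.max(np.abs(reconstruct(sensitivity @ true) - true)))  # reconstruction error
```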

  19. Fuel Burnup and Fuel Pool Shielding Analysis for Bushehr Nuclear Reactor VVER-1000

    Science.gov (United States)

    Hadad, Kamal; Ayobian, Navid

    The Bushehr Nuclear Power Plant (BNPP) is currently under construction. The VVER-1000 reactor will be loaded with 126 tons of about 4% enriched fuel having a 3-year life cycle. The spent fuel (SF) will be transferred into the spent fuel pool (SFP), where it stays for 8 years before being transferred to Russia. The SFP plays a crucial role during the 8 years when the SF resides in it. This paper investigates the shielding of this structure, as it is designed to shield the SF radiation. In this study, the SF isotope inventory, for different cycles and with different burnups, was calculated using the WIMS/4D transport code. Using the MCNP4C nuclear code, the intensity of γ rays was obtained in different layers of the SFP shields. These layers include the water above the fuel assemblies (FA) in the pool, the concrete wall of the pool and the water laid above transferring fuels. Results show that γ-ray leakage from the shield in the mentioned layers is in agreement with the plant's PSAR data. Finally, we analyzed an accident where the water height above the FA in the pool drops to 47 cm. In this case it was observed that the exposure dose above the pool, 10 and 30 days after the accident, is still high, at levels of 1000 and 758 R/hr.

  20. Simulation of the VVER-Type bundle experiment QUENCH-12 with ATHLET-CD

    Energy Technology Data Exchange (ETDEWEB)

    Bratfisch, Christian; Hoffmann, Mathias; Koch, Marco K. [Bochum Univ. (Germany). Chair of Energy Systems and Energy Economics (LEE)

    2012-11-01

    To ensure the coolability of an overheated core, reflooding of the uncovered fuel elements is an essential accident management measure to terminate a severe accident transient in Light Water Reactors (LWR) and therefore to avoid further core degradation. From analysis of the TMI-2 accident it is known that an enhanced oxidation of the zircaloy cladding may occur before the water succeeds in cooling the fuel rods. This oxidation process results in a sharp temperature increase and hydrogen generation and can, after fuel rod cladding failure, finally lead to fission product release. For further development and validation of the program ATHLET-CD, post-test calculations of experiments creating a reflood scenario in a controlled and defined environment are used. One of these experiments is QUENCH-12, in which Zr1%Nb (E 110) fuel rod claddings are used; fuel rods of VVER reactors are typically made from this material. In this work, the QUENCH-12 experiment and its conduct are presented, followed by explanations of the modeling in ATHLET-CD version 2.2A. Results of ATHLET-CD simulating the test are discussed in order to validate the code's ability to adequately calculate phenomena like hydrogen production and melt oxidation during reflooding of uncovered fuel rods of E 110. (orig.)

  1. Irradiation-induced structural changes in surveillance material of VVER 440-type weld metal

    Science.gov (United States)

    Grosse, M.; Denner, V.; Böhmert, J.; Mathon, M.-H.

    2000-01-01

    The irradiation-induced microstructural changes in surveillance materials of the VVER 440-type weld metal Sv-10KhMFT were investigated by small angle neutron scattering (SANS) and anomalous small angle X-ray scattering (SAXS). Due to the high fluence, a strong effect was found in the SANS experiment. No significant effect of the irradiation is detected by SAXS. The reason for this discrepancy is the different scattering contrast of irradiation-induced defects for neutrons and X-rays. An analysis of the SAXS shows that the scattering intensity is mainly caused by vanadium-containing (VC) precipitates and grain boundaries. Both types of scattering defects are hardly changed by irradiation. Neutron irradiation rather produces additional scattering defects of a few nanometers in size. Assuming these defects are clusters containing copper and other foreign atoms with a composition according to results of atom probe field ion microscopy (APFIM) investigations, both the high SANS and the low SAXS effect can be explained.

  2. Aspects of using a best-estimate approach for VVER safety analysis in reactivity initiated accidents

    Energy Technology Data Exchange (ETDEWEB)

    Ovdiienko, Iurii; Bilodid, Yevgen; Ieremenko, Maksym [State Scientific and Technical Centre on Nuclear and Radiation, Safety (SSTC N and RS), Kyiv (Ukraine); Loetsch, Thomas [TUEV SUED Industrie Service GmbH, Energie und Systeme, Muenchen (Germany)

    2016-09-15

    At the present time, Ukraine faces the problem of small margins to the acceptance criteria in connection with the implementation of a conservative approach for safety evaluations. The problem is particularly topical when conducting feasibility analyses of power up-rating for Ukrainian nuclear power plants. Such a situation requires the implementation of a best-estimate approach on the basis of an uncertainty analysis. For some kinds of accidents, such as the loss-of-coolant accident (LOCA), the best-estimate approach is more or less developed and established. However, for reactivity initiated accident (RIA) analysis, the application of best-estimate methods can be problematic. A regulatory document in Ukraine defines a nomenclature of neutronics calculations and so-called ''generic safety parameters'' which should be used as boundary conditions for all VVER-1000 (V-320) reactors in RIA analysis. In this paper the ideas of uncertainty evaluation of generic safety parameters in RIA analysis, in connection with the use of the 3D neutron kinetic code DYN3D and the GRS SUSA approach, are presented.

  3. Comparison of the radiological hazard of thorium and uranium spent fuels from VVER-1000 reactor

    Science.gov (United States)

    Frybort, Jan

    2014-11-01

    Thorium fuel is considered a viable alternative to the uranium fuel used in the current generation of nuclear power plants. A switch from uranium to thorium means a complete change in the composition of the spent nuclear fuel produced as a result of fuel depletion during operation of a reactor. If the Th-U fuel cycle is implemented, the production of minor actinides in the spent fuel is negligible, which is favourable for spent fuel disposal. On the other hand, thorium fuel utilisation is connected with the production of 232U, which decays via several alpha decays into a strong gamma emitter, 208Tl. The presence of this nuclide might complicate manipulations with the irradiated thorium fuel. The Monte Carlo computation code MCNPX can be used to simulate thorium fuel depletion in a VVER-1000 reactor. The calculated actinide composition will be analysed and the dose rate from the produced gamma radiation will be calculated. The results will be compared to the reference uranium fuel. The dependence of the dose rate on the decay time after the end of irradiation in the reactor will be analysed. This study will compare the radiological hazard of spent thorium and uranium fuel handling.

  4. A Four Group Reference Code for Solving Neutron Diffusion Equation in a VVER-440 Core

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Simo [Fortum Nuclear Services Ltd., P.O. Box 100, 00048 Fortum (Finland)

    2008-07-01

    Nuclear reactor core power calculation is essential in the analysis of a nuclear power plant and especially of the core. Currently, the core power distribution in the Loviisa VVER-440 core is calculated using the nodal code HEXBU-3D and the pin-power reconstruction code ELSI-1440, which solve the two-group neutron diffusion equation. The computer power available has increased significantly during the last decades, allowing us to develop a fine-mesh code, HEXRE, for solving the four-group diffusion equation. The diffusion equations are discretized using piecewise linear polynomials. The core is discretized using one node per fuel pin cell; the axial discretization can be chosen freely. The boundary conditions are described using diffusion theory and albedos. Burnup dependence is modelled by tabulating diffusion parameters at certain burnup values and using interpolation for the intermediate values. A second-degree polynomial is used for the modelling of the feedback effects. Eigenvalue calculation for both boron concentration and multiplication factor control has been formulated. A possibility to perform fuel loading and shuffling operations is implemented. HEXRE has been thoroughly compared with HEXBU-3D and ELSI-1440. The effect of the different energy and space discretizations used is investigated. Some safety criteria for the core calculated with HEXRE and HEXBU-3D/ELSI-1440 have been compared. From these calculations (e.g. the safety criteria) we can estimate whether or not systematic deviations exist in the HEXBU-3D/ELSI-1440 calculations. (author)
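
    HEXRE itself is a four-group, pin-cell-mesh code, but the kind of calculation the abstract describes can be illustrated with a much smaller model. The sketch below, under purely illustrative one-group constants, solves the finite-difference diffusion equation for a 1D slab and finds k-effective by power iteration.

```python
import numpy as np

# One-group, 1D slab reactor with zero-flux boundaries: illustrative constants.
D, sigma_a, nu_sigma_f = 1.3, 0.012, 0.015   # cm, 1/cm, 1/cm
width, n = 200.0, 200                        # slab width (cm), mesh cells
h = width / n

# Finite-difference diffusion operator  -D d2(phi)/dx2 + sigma_a * phi
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

# Power iteration for the fundamental mode and k-effective.
phi = np.ones(n)
k = 1.0
for _ in range(200):
    source = nu_sigma_f * phi / k
    phi_new = np.linalg.solve(A, source)
    k = k * phi_new.sum() / phi.sum()        # update eigenvalue from fission rates
    phi = phi_new / np.linalg.norm(phi_new)

print("k-effective =", round(k, 5))
```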

  5. Evaluating the consequences of loss of flow accident for a typical VVER-1000 nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Mirvakili, S.M.; Safaei, S. [Shiraz Univ., Shiraz (Iran, Islamic Republic of). Dept. of Nuclear Engineering, School of Mechanical Engineering; Faghihi, F. [Shiraz Univ., Shiraz (Iran, Islamic Republic of). Safety Research Center

    2010-07-01

    The loss of coolant flow in a nuclear reactor can result from a mechanical or electrical failure of the coolant pump. If the reactor is not tripped promptly, the immediate effect is a rapid increase in coolant temperature, a decrease in the minimum departure from nucleate boiling ratio (DNBR) and fuel damage. This study evaluated the shaft seizure of a reactor coolant pump in a VVER-1000 nuclear reactor. The locked rotor results in a rapid reduction of flow through the affected reactor coolant loop and in turn leads to an increase in the primary coolant temperature and pressure. The analysis was conducted taking into account a superimposed loss of power to the plant at the initial moment of the accident. The required transient functions of flow, pressure and power were obtained using system transient calculations applied in the COBRA-EN computer code in order to calculate the overall core thermal-hydraulic parameters such as temperature, critical heat flux and DNBR. The study showed that the critical period for the locked rotor accident is the first few seconds, during which the maximum values of pressure and temperature are reached. 10 refs., 1 tab., 3 figs.

  6. Condition Based Monitoring of Gas Turbine Combustion Components

    Energy Technology Data Exchange (ETDEWEB)

    Ulerich, Nancy; Kidane, Getnet; Spiegelberg, Christine; Tevs, Nikolai

    2012-09-30

    The objective of this program is to develop sensors that allow condition based monitoring of critical combustion parts of gas turbines. Siemens teamed with innovative, small companies that were developing sensor concepts that could monitor wearing and cracking of hot turbine parts. A magnetic crack monitoring sensor concept developed by JENTEK Sensors, Inc. was evaluated in laboratory tests. Designs for engine application were evaluated. The inability to develop a robust lead wire to transmit the signal long distances resulted in a discontinuation of this concept. An optical wear sensor concept proposed by K Sciences GP, LLC was tested in proof-of concept testing. The sensor concept depended, however, on optical fiber tips wearing with the loaded part. The fiber tip wear resulted in too much optical input variability; the sensor could not provide adequate stability for measurement. Siemens developed an alternative optical wear sensor approach that used a commercial PHILTEC, Inc. optical gap sensor with an optical spacer to remove fibers from the wearing surface. The gap sensor measured the length of the wearing spacer to follow loaded part wear. This optical wear sensor was developed to a Technology Readiness Level (TRL) of 5. It was validated in lab tests and installed on a floating transition seal in an F-Class gas turbine. Laboratory tests indicate that the concept can measure wear on loaded parts at temperatures up to 800{degrees}C with uncertainty of < 0.3 mm. Testing in an F-Class engine installation showed that the optical spacer wore with the wearing part. The electro-optics box located outside the engine enclosure survived the engine enclosure environment. The fiber optic cable and the optical spacer, however, both degraded after about 100 operating hours, impacting the signal analysis.

  7. Case-Based Reasoning Topological Complexity Calculation of Design for Components

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Directly calculating the topological and geometric complexity from a STEP (standard for the exchange of product model data, ISO 10303) file is a huge task. So a case-based reasoning approach, based on the similarity between a new component and an old one, is presented to calculate the topological and geometric complexity of new components. In order to index and retrieve components in a historical component database, a new way of representing components is put forward, and an algorithm is given to extract the topological graph of a component from its STEP file. A mathematical model, which describes how to compare similarity, is discussed. Finally, an example is given to show the result.

  8. Research and Implementation of Distributed Virtual Simulation Platform Based on Components

    Institute of Scientific and Technical Information of China (English)

    SUN Zhi-xin; WANG Ru-chuan; WANG Shao-di

    2004-01-01

    This paper proposes a combination of systems-theoretic simulation methodology with virtual reality technology as the basis for a component-based virtual simulation framework. The created universal framework can be used in different fields, such as driving training, air combat training, and so on. The result of the synergy is a powerful component-based virtual simulation framework. After briefly introducing the concepts and principles of distributed component objects, the paper describes a software development method based on components. A method of virtual simulation system modeling based on components is then proposed, and the integrated framework supporting distributed virtual simulation and its key technologies are discussed at length. Our experiments indicate that the framework can be widely used in simulation fields such as arms antagonism, driving simulation and so on.

  9. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
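
    A minimal sketch of the coupling pattern discussed above, assuming a toy diffusion law and made-up rate constants rather than the authors' OpenMI components: a fast water component and a slower sediment component exchange a boundary value, and the slower component's state is linearly interpolated when the faster one requests it between sediment time steps.

```python
# Two loosely coupled components exchanging a boundary concentration.
# The water column runs with a 1.0 h step, the sediment with a 2.5 h step,
# so the water component interpolates the sediment value at its request times.

class SedimentComponent:
    def __init__(self, conc=10.0, dt=2.5):
        self.t, self.dt, self.conc = 0.0, dt, conc
        self.history = [(0.0, conc)]

    def step(self, flux_in):
        """Advance one sediment step; flux_in is the mass flux from the water."""
        self.conc += 0.1 * flux_in * self.dt
        self.t += self.dt
        self.history.append((self.t, self.conc))

    def value_at(self, t):
        """Linear interpolation between the last two stored states (clamped)."""
        if len(self.history) < 2:
            return self.history[-1][1]
        (t0, c0), (t1, c1) = self.history[-2], self.history[-1]
        w = min(max((t - t0) / (t1 - t0), 0.0), 1.0)
        return (1 - w) * c0 + w * c1

class WaterComponent:
    def __init__(self, conc=50.0, dt=1.0):
        self.t, self.dt, self.conc = 0.0, dt, conc

    def step(self, sediment_conc):
        """Diffusive exchange across the interface drives the feedback loop."""
        flux = 0.05 * (self.conc - sediment_conc)
        self.conc -= flux * self.dt
        self.t += self.dt
        return flux

water, sediment = WaterComponent(), SedimentComponent()
last_flux = 0.0
for _ in range(10):                       # drive the coupled system for 10 h
    # catch the slower sediment component up to the water clock, reusing the
    # most recent flux (explicit coupling across the shared boundary)
    while sediment.t + sediment.dt <= water.t:
        sediment.step(last_flux)
    last_flux = water.step(sediment.value_at(water.t))
print(round(water.conc, 3), round(sediment.conc, 3))
```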

  10. A Component Based Heuristic Search method with Adaptive Perturbations for Hospital Personnel Scheduling

    CERN Document Server

    Li, Jingpeng; Burke, Edmund

    2008-01-01

    Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with adaptive perturbations, for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then mimic a natural evolutionary process on these components to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs a dynamic evaluation function which evaluates how well each component contributes towards the final objective. Two perturbation steps are then applied: the first perturbation eliminates a number of components that are deemed not worthy to stay in the current schedule; the second perturbation may also throw out, with a low level of probability, some worthy components. The eli...
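
    A schematic of the perturbation loop the abstract outlines, under placeholder data: each component is one nurse's shift pattern, its worth is scored by how much the objective degrades when it is removed, the weakest components are rebuilt each iteration, and worthy components are occasionally rebuilt as well; the objective, rebuild rule and probabilities are assumptions of this sketch, not the paper's model.

```python
import random

random.seed(0)
NURSES, SHIFTS = 8, 14
DEMAND = [3] * SHIFTS                      # nurses required per shift (placeholder)

def random_pattern():
    """A component: one nurse's fortnightly shift pattern (placeholder model)."""
    return [random.randint(0, 1) for _ in range(SHIFTS)]

def coverage_error(schedule):
    """Objective: squared deviation from the demanded coverage over all shifts."""
    return sum((sum(p[s] for p in schedule) - DEMAND[s]) ** 2 for s in range(SHIFTS))

def component_worth(schedule, i):
    """Dynamic evaluation: how much worse the objective gets without component i."""
    reduced = schedule[:i] + schedule[i + 1:]
    return coverage_error(reduced) - coverage_error(schedule)

schedule = [random_pattern() for _ in range(NURSES)]
best = list(schedule)
for it in range(200):
    worth = [component_worth(schedule, i) for i in range(NURSES)]
    threshold = sorted(worth)[len(worth) // 4]       # weakest quarter is "not worthy"
    for i in range(NURSES):
        # first perturbation: rebuild components that do not earn their place;
        # second perturbation: occasionally rebuild a worthy one as well
        if worth[i] <= threshold or random.random() < 0.05:
            schedule[i] = random_pattern()
    if coverage_error(schedule) < coverage_error(best):
        best = list(schedule)
print("best coverage error:", coverage_error(best))
```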

  11. An XML-based Software Component Description Method for Program Mining

    Institute of Scientific and Technical Information of China (English)

    DOU Yuhong; ZHANG Yaoxue; LI Xing

    2004-01-01

    As the Internet is rapidly emerging as a large-scale distributed computing platform, service customization and on-demand computing are becoming important research issues. Program mining is a novel computing paradigm that achieves this goal through dynamic component discovery and composition from on-line component repositories. However, components in different repositories are described and classified in ad hoc ways, laying obstacles for the provision of program mining. In this paper, we present an XML-based component description method, depicting the static properties, interface specification and classification information of software components in a standard way. Based on this description, a distributed component directory can be established to provide a well-organized mining resource for program mining.

  12. Component-Based Model for Single-Plate Shear Connections with Pretension and Pinched Hysteresis.

    Science.gov (United States)

    Weigand, Jonathan M

    2017-02-01

    Component-based connection models provide a natural framework for modeling the complex behaviors of connections under extreme loads by capturing both the individual behaviors of the connection components, such as the bolt, shear plate, and beam web, and the complex interactions between those components. Component-based models also provide automatic coupling between the in-plane flexural and axial connection behaviors, a feature that is essential for modeling the behavior of connections under column removal. This paper presents a new component-based model for single-plate shear connections that includes the effects of pre-tension in the bolts and provides the capability to model standard and slotted holes. The component-based models are exercised under component-level deformations calculated from the connection demands via a practical rigid-body displacement model, so that the results of the presented modeling approach remains hand-calculable. Validation cases are presented for connections subjected to both seismic and column removal loading. These validation cases show that the component-based model is capable of predicting the response of single-plate shear connections for both seismic and column removal loads.

  13. Accumulation of radioactive corrosion products on steel surfaces of VVER type nuclear reactors. I. 110mAg

    CSIR Research Space (South Africa)

    Hirschberg, G

    1999-03-01

    Full Text Available Accumulation of radioactive corrosion products on steel surfaces of VVER type nuclear reactors. I. 110mAg. Gábor Hirschberg, Pál Baradlai, Kálmán Varga, Gerrit Myburg, János Schunk, Péter Tilky, Paul Stoddart (Department of Radiochemistry, University...) ...-cooled nuclear reactors is of great importance for a number of practical reasons. For instance, under normal operating conditions (when there is no fission product release due to fuel cladding failure) the majority of radioactive contamination in the primary...

  14. Absolute determination of power density in the VVER-1000 mock-up on the LR-0 research reactor.

    Science.gov (United States)

    Košt'ál, Michal; Švadlenková, Marie; Milčák, Ján

    2013-08-01

    The work presents a detailed comparison of calculated and experimentally determined net peak areas of selected fission product gamma lines. The fission products were induced during a 2.5 h irradiation at a power level of 9.5 W in selected fuel pins of the VVER-1000 Mock-Up. The calculations were done with deterministic and stochastic (Monte Carlo) methods. The effects of the different nuclear data libraries used for the calculations are discussed as well. The Net Peak Area (NPA) may be used for the determination of the fission density across the mock-up. This fission density is practically identical to the power density.

  15. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Velkov, K. [GRS, Garching (Germany); Lizorkin, M. [Kurchatov-Institute, Moscow (Russian Federation)] [and others

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  16. A Metadata Model Based on Coupling Testing Information to Increase Testability of Component

    Institute of Scientific and Technical Information of China (English)

    MA Liang-li; GUO Fu-liang; WU Zhao-hui

    2008-01-01

    A software component must be tested every time it is reused, in order to assure the quality both of the component itself and of the system into which it is integrated, so increasing component testability has become a key concern in the software engineering community. This paper introduces a method for increasing component testability. The meaning of component testability and effective ways of increasing it are first summarized. Definitions of a component coupling testing criterion, DU-I (Definition-Use Information) and OP-Vs (Observation-Point Values) are then given. Based on these, a definition-use (DU) table containing DU-I and OP-Vs items is introduced to help component testers better understand and observe interior details of the component under test. A framework for testable components based on this DU-table is then given. These facilities provide ways to detect errors and to observe state variables through an observation-point-based monitoring mechanism. The methods are applied to an application developed previously by the authors, and test cases are generated. The method is then compared with the Orso method and the Kan method on the same example, and the comparison results are presented. The results illustrate the validity of the method, which effectively generates test cases and kills more mutants.

  17. A CORBA BASED ARCHITECTURE FOR ACCESSING REUSABLE SOFTWARE COMPONENTS ON THE WEB.

    Directory of Open Access Journals (Sweden)

    R. Cenk ERDUR

    2003-01-01

    Full Text Available In the very near future, as a result of the continuous growth of the Internet and advances in networking technologies, the Internet will become the common software repository for people and organizations that employ a component-based reuse approach in their software development life cycles. In order to use reusable components such as source code, analyses, designs and design patterns during new software development processes, environments that support the identification of components over the Internet are needed. Basic elements of such an environment are the coordinator programs that deliver user requests to appropriate component libraries, user interfaces for querying, and programs that wrap the component libraries. First, a CORBA-based architecture is proposed for such an environment. Then, an alternative architecture based on the Java 2 platform technologies is given for the same environment. Finally, the two architectures are compared.

  18. Methods of Si based ceramic components volatilization control in a gas turbine engine

    Science.gov (United States)

    Garcia-Crespo, Andres Jose; Delvaux, John; Dion Ouellet, Noemie

    2016-09-06

    A method of controlling volatilization of silicon based components in a gas turbine engine includes measuring, estimating and/or predicting a variable related to operation of the gas turbine engine; correlating the variable to determine an amount of silicon to control volatilization of the silicon based components in the gas turbine engine; and injecting silicon into the gas turbine engine to control volatilization of the silicon based components. A gas turbine with a compressor, combustion system, turbine section and silicon injection system may be controlled by a controller that implements the control method.

  19. Seismic Response of Base-Isolated Structures under Multi-component Ground Motion Excitation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    An analysis of a base-isolated structure under multi-component random ground motion is presented. The mean square response of the system is obtained under different parametric variations. The effectiveness of the main parameters and of the torsional component during an earthquake is quantified with the help of the response ratio and the root mean square response with and without base isolation. It is observed that base isolation has a considerable influence on the response and that the effect of the torsional component cannot be ignored.

  20. Study on the dynamic response analysis for evaluating the effectiveness of base isolation for nuclear components

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Kazunari; Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-07-01

    Introduction of the base isolation technique into the seismic design of nuclear power plant components as well as buildings has been expected to be one of the effective countermeasures to reduce the seismic force applied to components. A research program on the base isolation of nuclear components has been carried out at the Japan Atomic Energy Research Institute (JAERI) since 1991. A methodology and a computer code (EBISA: Equipment Base Isolation System Analysis) for evaluating the failure frequency of nuclear components with base isolation were developed. In addition, a test program related to the above development, aimed at improving the failure frequency analysis models in the code, has been conducted since 1996 to investigate the dynamic behavior and to verify the effectiveness of component base isolation systems. In the failure frequency analysis, a methodology for evaluating in detail the actual dynamic responses of nuclear components with base isolation has been examined. In this methodology, the actual responses are computed by considering the scatter in the mechanical properties of the rock masses, reactor building and components under many earthquake motions with various frequency characteristics. The failure frequency of a component is computed as the conditional probability that the actual response exceeds the capacity of the component. It is therefore very important in this methodology to investigate the dynamic response analysis methods for the ground, reactor building and nuclear components, as well as the scattering factors in the dynamic analysis. This report describes the accuracy of the dynamic response analysis method and analysis models, and the influence of scatter in the properties of the rock masses and reactor building on the dynamic response. (author)

  1. HARMONIC COMPONENT EXTRACTION FROM A CHAOTIC SIGNAL BASED ON EMPIRICAL MODE DECOMPOSITION METHOD

    Institute of Scientific and Technical Information of China (English)

    LI Hong-guang; MENG Guang

    2006-01-01

    A novel approach for extracting a harmonic component from a chaotic signal generated by a Duffing oscillator was proposed. Based on empirical mode decomposition (EMD) and the concept that any signal is composed of a series of simple intrinsic modes, the harmonic components were extracted from the chaotic signals. Simulation results show that the approach is satisfactory.
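
    As a rough illustration of the idea above, the sketch below decomposes a synthetic signal (a known 5 Hz harmonic buried in a noise-like, chaos-like component) into intrinsic mode functions and picks the IMF whose dominant frequency matches the harmonic. It assumes the PyEMD package is available; the signal parameters are illustrative and not taken from the paper.

# Minimal sketch of EMD-based harmonic extraction (assumes the PyEMD package
# is installed; signal parameters here are illustrative, not from the paper).
import numpy as np
from PyEMD import EMD

fs = 1000.0                                   # sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
harmonic = np.sin(2 * np.pi * 5.0 * t)        # 5 Hz harmonic component
chaotic_like = np.cumsum(np.random.randn(t.size)) * 0.05  # stand-in for the chaotic response
signal = harmonic + chaotic_like

imfs = EMD().emd(signal, t)                   # decompose into intrinsic mode functions

# Pick the IMF whose dominant FFT frequency is closest to the known 5 Hz line.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = [freqs[np.argmax(np.abs(np.fft.rfft(imf)))] for imf in imfs]
extracted = imfs[int(np.argmin(np.abs(np.array(dominant) - 5.0)))]
print("dominant frequencies of IMFs:", np.round(dominant, 2))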

  2. Slow component of VO2 kinetics: Mechanistic bases and practical applications

    DEFF Research Database (Denmark)

    Jones, Andrew M; Grassi, Bruno; Christensen, Peter Møller

    2011-01-01

    state of knowledge concerning the mechanistic bases of the VO2 slow component and describes practical interventions which can attenuate the slow component and thus enhance exercise tolerance. There is strong evidence that, during CWR exercise, the development of the VO2 slow component is associated...

  3. A Service Component-based Accounting and Charging Architecture to Support Interim Mechanisms across Multiple Domains

    NARCIS (Netherlands)

    Le, van M.; Beijnum, van B.J.F.; Huitema, G.B.

    2004-01-01

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service session

  4. Reducing the Runtime Acceptance Costs of Large-Scale Distributed Component-Based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.

    2008-01-01

    Software Systems of Systems (SoS) are large-scale distributed component-based systems in which the individual components are elaborate and complex systems in their own right. Distinguishing characteristics are their short expected integration and deployment time, and the need to modify their archite

  5. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    Science.gov (United States)

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to study the components of the internal control-based administrative system in secondary schools, and to perform a Confirmatory Factor Analysis (CFA) to confirm the goodness of fit of the empirical data and the component model that resulted from the CFA. The study consisted of three steps: 1) study of principles, ideas, and theories…

  6. A service component-based accounting and charging architecture to support interim mechanisms across multiple domains

    NARCIS (Netherlands)

    Le, M. van; Beijnum, B.J.F. van; Huitema, G.B.

    2004-01-01

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service session

  7. Prediction of Pure Component Adsorption Equilibria Using an Adsorption Isotherm Equation Based on Vacancy Solution Theory

    DEFF Research Database (Denmark)

    Marcussen, Lis; Aasberg-Petersen, K.; Krøll, Annette Elisabeth

    2000-01-01

    An adsorption isotherm equation for nonideal pure component adsorption based on vacancy solution theory and the Non-Random-Two-Liquid (NRTL) equation is found to be useful for predicting pure component adsorption equilibria at a variety of conditions. The isotherm equation is evaluated successfully...... adsorption systems, spreading pressure and isosteric heat of adsorption are also calculated....

  8. Reducing the Runtime Acceptance Costs of Large-Scale Distributed Component-Based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.

    2008-01-01

    Software Systems of Systems (SoS) are large-scale distributed component-based systems in which the individual components are elaborate and complex systems in their own right. Distinguishing characteristics are their short expected integration and deployment time, and the need to modify their

  9. RF Front End Based on MEMS Components for Miniaturized Digital EVA Radio Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, AlphaSense, Inc. and the Carnegie Mellon University propose to develop a RF receiver front end based on CMOS-MEMS components for miniaturized...

  10. Model-Based Design Tools for Extending COTS Components To Extreme Environments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation in this project is model-based design (MBD) tools for predicting the performance and useful life of commercial-off-the-shelf (COTS) components and...

  11. Synchronous Control of Reconfiguration in Fractal Component-based Systems -- a Case Study

    CERN Document Server

    Bouhadiba, Tayeb; Delaval, Gwenaël; Rutten, Éric

    2011-01-01

    In the context of component-based embedded systems, the management of dynamic reconfiguration in adaptive systems is an increasingly important feature. The Fractal component-based framework, and its industrial instantiation MIND, provide support for control operations in the lifecycle of components. Nevertheless, the use of complex and integrated architectures makes these reconfiguration operations difficult for programmers to manage. To address this issue, we propose to use Synchronous languages, which are a complete approach to the design of reactive systems, based on behavior models in the form of transition systems. Furthermore, the design of closed-loop reactive managers of reconfigurations can benefit from formal tools like Discrete Controller Synthesis. In this paper we describe an approach to concretely integrate synchronous reconfiguration managers in Fractal component-based systems. We describe how to model the state space of the control problem, and how to specify the control obj...

  12. RF Front End Based on MEMS Components for Miniaturized Digital EVA Radio Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this proposal, AlphaSense, Inc. (AI) and the Carnegie Mellon University (CMU) detail the development of RF front end based on MEMS components for miniaturized...

  13. Component-based Software Design and Implementation for Network Security System

    Directory of Open Access Journals (Sweden)

    Jianchao Han

    2009-12-01

    Full Text Available A computer network intrusion detection and prevention system consists of collecting network traffic data, discovering user behavior patterns as intrusion detection rules, and applying these rules to prevent malicious and misuse. Many commercial off-the-shelf (COTS products have been developed to perform each of these tasks. In this paper, the component-based software engineering approach is exploited to integrate these COTS products as components into a computerized system to automatically detect intrusion rules from network traffic data and setup IPTables to prevent future potential attacks. The component- based software architecture of this kind of system is designed, COTS components are analyzed and selected, adaptor components to connect COTS products are developed, the system implementation is illustrated, and the preliminary system experiment is presented.
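
    A hypothetical sketch of one such adaptor component is given below: it translates addresses flagged by a detection component into standard IPTables DROP rules. The function name and surrounding interface are invented for illustration; only the iptables rule format itself is standard.

# Hypothetical adaptor component: turn detection output into IPTables DROP rules.
# The component interface is invented for illustration; run with dry_run=True first.
import subprocess

def block_addresses(suspect_ips, dry_run=True):
    """Adaptor: translate detection output into IPTables DROP rules."""
    commands = [["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"] for ip in suspect_ips]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))          # inspect the generated rules first
        else:
            subprocess.run(cmd, check=True)
    return commands

block_addresses(["203.0.113.7", "198.51.100.23"])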

  14. A Component-based Software Development and Execution Framework for CAx Applications

    Directory of Open Access Journals (Sweden)

    N. Matsuki

    2004-01-01

    Full Text Available Digitalization of the manufacturing process and technologies is regarded as the key to increased competitive ability. The MZ-Platform infrastructure is a component-based software development framework, designed for supporting enterprises to enhance digitalized technologies using software tools and CAx components in a self-innovative way. In the paper we show the algorithm, system architecture, and a CAx application example on MZ-Platform. We also propose a new parametric data structure based on MZ-Platform.

  15. Results of international standard problem No. 36 severe fuel damage experiment of a VVER fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Firnhaber, M. [Gesellschaft fuer Anlagen-und Reaktorsicherheit, Koeln (Germany); Yegorova, L. [Nuclear Safety Institute of Russian Research Center, Moscow (Russian Federation); Brockmeier, U. [Ruhr-Univ. of Bochum (Germany)] [and others

    1995-09-01

    International Standard Problems (ISP) organized by the OECD are defined as comparative exercises in which predictions with different computer codes for a given physical problem are compared with each other and with a carefully controlled experimental study. The main goal of ISPs is to increase confidence in the validity and accuracy of analytical tools used in assessing the safety of nuclear installations. In addition, they enable the code user to gain experience and to improve his competence. This paper presents the results and assessment of ISP No. 36, which deals with the early core degradation phase during an unmitigated severe LWR accident in a Russian-type VVER. Representatives of 17 organizations participated in the ISP using the codes ATHLET-CD, ICARE2, KESS-III, MELCOR, SCDAP/RELAP5 and RAPTA. Some participants performed several calculations with different codes. As the experimental basis, the severe fuel damage experiment CORA-W2 was selected. The main phenomena investigated are the thermal behavior of fuel rods, onset of temperature escalation, material behavior and hydrogen generation. In general, the calculations give the right tendency of the experimental results for the thermal behavior, the hydrogen generation and, partly, the material behavior. However, some calculations deviate in important quantities - e.g. some material behavior data - showing remarkable discrepancies between each other and from the experiments. The temperature history of the bundle up to the beginning of significant oxidation was calculated quite well. Deviations seem to be related to the overall heat balance. Since the material behavior of the bundle is to a great extent influenced by the cladding failure criteria, a more realistic cladding failure model should be developed, at least for the detailed, mechanistic codes. Regarding the material behavior and flow blockage, some models for the material interaction as well as for relocation and refreezing require further improvement.

  16. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    Science.gov (United States)

    Dzhalandinov, A.; Tsofin, V.; Kochkin, V.; Panferov, P.; Timofeev, A.; Reshetnikov, A.; Makhotin, D.; Erak, D.; Voloschenko, A.

    2016-02-01

    Usually the synthesis of two-dimensional and one-dimensional discrete ordinate calculations is used to evaluate the neutron fluence on the VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement. But there are some cases in which this approach is not applicable. For example, the latest VVER-1000 projects have an upgraded surveillance program: containers with surveillance specimens are located on the inner surface of the RPV, where the fast neutron flux reaches its maximum. The synthesis approach is therefore not well suited for calculating the local disturbance of the neutron field at the RPV inner surface behind the surveillance specimens because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel heights, and the applicability of the synthesis approach is also questionable for these fuel cycles. The synthesis approach is likewise not sufficiently accurate for estimating the neutron fluence in the RPV region above the core top. For these reasons, only 3D neutron transport codes appear satisfactory for calculating the neutron fluence on the VVER-1000 RPV. Direct 3D calculations are also recommended by modern regulations.

  17. Experimental studies into the fluid dynamic performance of the coolant flow in the mixed core of the Temelin NPP VVER-1000 reactor

    Directory of Open Access Journals (Sweden)

    S.M. Dmitriev

    2015-11-01

    Full Text Available The paper presents the results of studies into the interassembly coolant interaction in the Temelin nuclear power plant (NPP) VVER-1000 reactor core. An aerodynamic test bench was used to study the coolant flow processes in a TVSA-type fuel assembly bundle. To obtain more detailed information on the coolant flow dynamics, a VVER-1000 reactor core fragment was selected as the test model, which comprised two segments of a TVSA-12 PLUS fuel assembly and one segment of a TVSA-T assembly with stiffening angles and an interassembly gap. The studies into the coolant fluid dynamics consisted in measuring the velocity vector both in representative TVSA regions and inside the interassembly gap using a five-channel pneumometric probe. An analysis of the spatial distribution of the absolute flow velocity projections made it possible to detail the flow pattern at the TVSA spacer, mixing and combined spacer grids, identify the regions with the maximum transverse coolant flow, and determine the depth of the coolant flow disturbance propagation and redistribution in adjacent TVSA assemblies. The results of the studies into the interassembly coolant interaction among adjacent TVSA assemblies are used at OKBM Afrikantov to update the VVER-1000 core thermal-hydraulic analysis procedures and have been added to the database for verification of computational fluid dynamics (CFD) codes and for detailed cellwise analyses of VVER-1000 reactor cores.

  18. Validation of 3D Code KATRIN For Fast Neutron Fluence Calculation of VVER-1000 Reactor Pressure Vessel by Ex-Vessel Measurements and Surveillance Specimens Results

    Directory of Open Access Journals (Sweden)

    Dzhalandinov A.

    2016-01-01

    Full Text Available Usually the synthesis of two-dimensional and one-dimensional discrete ordinate calculations is used to evaluate the neutron fluence on the VVER-1000 reactor pressure vessel (RPV) for prognosis of radiation embrittlement. But there are some cases in which this approach is not applicable. For example, the latest VVER-1000 projects have an upgraded surveillance program: containers with surveillance specimens are located on the inner surface of the RPV, where the fast neutron flux reaches its maximum. The synthesis approach is therefore not well suited for calculating the local disturbance of the neutron field at the RPV inner surface behind the surveillance specimens because of their complicated and heterogeneous structure. In some cases the VVER-1000 core loading consists of fuel assemblies with different fuel heights, and the applicability of the synthesis approach is also questionable for these fuel cycles. The synthesis approach is likewise not sufficiently accurate for estimating the neutron fluence in the RPV region above the core top. For these reasons, only 3D neutron transport codes appear satisfactory for calculating the neutron fluence on the VVER-1000 RPV. Direct 3D calculations are also recommended by modern regulations.

  19. A comprehensive approach to selecting the water chemistry of the secondary coolant circuit in the projects of nuclear power stations equipped with VVER-1200 reactors

    Science.gov (United States)

    Tyapkov, V. F.

    2011-05-01

    The paper presents the results obtained from studies on selecting the water chemistry of the secondary coolant circuit carried out for the project of a nuclear power station equipped with a new-generation VVER-1200 reactor on the basis of case calculations and an analysis of field experience gained at operating nuclear power stations.

  20. A New Insight into Energy Distribution of Electrons in Fuel-Rod Gap in VVER-1000 Nuclear Reactor

    Science.gov (United States)

    Fereshteh, Golian; Ali, Pazirandeh; Saeed, Mohammadi

    2015-06-01

    In order to calculate the electron energy distribution in the fuel-rod gap of a VVER-1000 nuclear reactor, the Fokker-Planck equation (FPE) governing the non-equilibrium behavior of electrons passing through the fuel-rod gap as an absorber has been solved in this paper. In addition, the Monte Carlo Geant4 code was employed to simulate the electron migration in the fuel-rod gap, and the energy distribution of the electrons was found. The FPE results were compared with the Geant4 outcomes, and satisfactory agreement was found. Different percentages of the volatile and noble-gas fission fragments produced in fission reactions in the fuel rod, i.e. krypton, xenon, iodine, bromine, rubidium and cesium, were also considered in order to investigate their effect on the electron energy distribution. The present results show that most of the electrons in the fuel-rod gap were within the thermal energy range and that the tail of the electron energy distribution was far from Maxwellian. An interesting outcome was that the electron energy distribution is slightly increased due to the accumulation of fission fragments in the gap. It should be noted that solving the FPE for the energy-straggling electrons penetrating the fuel-rod gap of a VVER-1000 nuclear reactor has been carried out here for the first time using an analytical approach.

  1. The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu

    2000-07-01

    The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark obtained with the recently evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWRs. The cases without burnup profile (BP) were performed with the WIMS/D-4 code. The rest of the cases were carried out with the DOTIII discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) was used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) into a 36-group working library for the 2-D calculations. This work also comprises the results of CB1 (also obtained with ENDF/B-VI Rev. 5 (1999)) and CB3 for cases with a burnup of 30 MWd/TU and cooling times of 1 and 5 years, and for a case with a burnup of 40 MWd/TU and a cooling time of 1 year. (author)

  2. Textile-Based Electronic Components for Energy Applications: Principles, Problems, and Perspective

    Directory of Open Access Journals (Sweden)

    Vishakha Kaushik

    2015-09-01

    Full Text Available Textile-based electronic components have gained interest in the fields of science and technology. Recent developments in nanotechnology have enabled the integration of electronic components into textiles while retaining desirable characteristics such as flexibility, strength, and conductivity. Various materials were investigated in detail to obtain current conductive textile technology, and the integration of electronic components into these textiles shows great promise for common everyday applications. The harvest and storage of energy in textile electronics is a challenge that requires further attention in order to enable complete adoption of this technology in practical implementations. This review focuses on the various conductive textiles, their methods of preparation, and textile-based electronic components. We also focus on fabrication and the function of textile-based energy harvesting and storage devices, discuss their fundamental limitations, and suggest new areas of study.

  3. Investigation of variable spindle speed in slow tool servo-based turning of noncircular optical components

    Science.gov (United States)

    Huang, Weihai; Yu, Deping; Chen, Dongsheng; Zhang, Min; Liu, Jinguang; Yao, Jin

    2016-10-01

    Ultra-precision noncircular optical components, e.g. the hyperbolic quadrupole in a mass spectrometer, can be machined by diamond turning assisted by a slow tool servo (STS). However, the bandwidth of the STS is usually small, which limits its capability to follow the required tool path, leading to a large form error. To reduce the form error, this paper proposes an approach that applies variable spindle speed (VSS) to STS-based turning. The design of the VSS trajectory based on the noncircular profile of the optical component was investigated in detail. To validate the proposed approach, a simulation of VSS applied to the STS-based turning process was established and used for the machining of typical noncircular optical components. Simulation results show that the proposed approach is effective in reducing the requirement on the bandwidth of the STS, resulting in higher form accuracy of the machined noncircular optical components.

  4. Independent component analysis based source number estimation and its comparison for mechanical systems

    Science.gov (United States)

    Cheng, Wei; Lee, Seungchul; Zhang, Zhousuo; He, Zhengjia

    2012-11-01

    It has been challenging to correctly separate mixed signals into source components when the source number is not known a priori. In this paper, we propose a novel source number estimation based on independent component analysis (ICA) and clustering evaluation analysis. We investigate and benchmark three information-based source number estimation criteria: the Akaike information criterion (AIC), minimum description length (MDL) and improved Bayesian information criterion (IBIC). All the above methods are comparatively studied in both numerical and experimental case studies with typical mechanical signals. The results demonstrate that the proposed ICA-based source number estimation with nonlinear dissimilarity measures performs more stably and robustly than the information-based ones for mechanical systems.
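
    For comparison, the information-theoretic criteria benchmarked in the abstract can be computed from the eigenvalues of the sample covariance of the observed mixtures. The sketch below uses the classical Wax-Kailath forms of AIC and MDL on a synthetic mixing problem; the mixing setup and noise level are illustrative assumptions.

# Sketch of eigenvalue-based AIC/MDL source number estimation (Wax-Kailath form),
# two of the information criteria benchmarked in the abstract. Mixing setup is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sensors, n_sources = 5000, 6, 3
S = rng.laplace(size=(n_samples, n_sources))                   # non-Gaussian sources
A = rng.normal(size=(n_sensors, n_sources))                    # unknown mixing matrix
X = S @ A.T + 0.05 * rng.normal(size=(n_samples, n_sensors))   # noisy mixtures

eigvals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

def aic_mdl(lam, n):
    p = lam.size
    aic, mdl = [], []
    for k in range(p):
        tail = lam[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geometric / arithmetic mean
        ll = n * (p - k) * np.log(ratio)
        aic.append(-2 * ll + 2 * k * (2 * p - k))
        mdl.append(-ll + 0.5 * k * (2 * p - k) * np.log(n))
    return np.argmin(aic), np.argmin(mdl)

print(aic_mdl(eigvals, n_samples))   # expected to recover 3 sources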

  5. A Simple Method for Limiting Disclosure in Continuous Microdata Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Calviño Aida

    2017-03-01

    Full Text Available In this article we propose a simple and versatile method for limiting disclosure in continuous microdata based on Principal Component Analysis (PCA). Instead of perturbing the original variables, we propose to alter the principal components, as they contain the same information but are uncorrelated, which permits working on each component separately, reducing processing times. The number and weight of the perturbed components determine the level of protection and distortion of the masked data. The method provides preservation of the mean vector and the variance-covariance matrix. Furthermore, depending on the technique chosen to perturb the principal components, the proposed method can provide masked, hybrid or fully synthetic data sets. Some examples of application and comparison with other methods previously proposed in the literature (in terms of disclosure risk and data utility) are also included.
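
    A minimal sketch of the masking idea, assuming synthetic data, is shown below: the data are rotated into principal component scores, noise is added to a subset of the trailing components, and the scores are mapped back. It illustrates the perturbation step only and does not reproduce the exact variance-preserving schemes of the article.

# Minimal sketch of PCA-based masking: perturb a subset of principal component
# scores with noise, then map back. Data and noise levels are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.multivariate_normal([10.0, 50.0, 3.0],
                            [[4.0, 1.5, 0.2], [1.5, 9.0, 0.5], [0.2, 0.5, 1.0]],
                            size=1000)

pca = PCA()                       # uncorrelated scores carrying the same information as X
scores = pca.fit_transform(X)

perturbed = scores.copy()
k = 2                             # number of trailing components to perturb
sigma = perturbed[:, -k:].std(axis=0)
perturbed[:, -k:] += rng.normal(scale=0.5 * sigma, size=(X.shape[0], k))

X_masked = pca.inverse_transform(perturbed)
print(np.allclose(X.mean(axis=0), X_masked.mean(axis=0), atol=0.1))  # mean roughly preserved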

  6. Gas chromatography/mass spectrometry based component profiling and quality prediction for Japanese sake.

    Science.gov (United States)

    Mimura, Natsuki; Isogai, Atsuko; Iwashita, Kazuhiro; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-10-01

    Sake is a traditional Japanese alcoholic beverage produced by simultaneous saccharification and alcohol fermentation of polished and steamed rice by Aspergillus oryzae and Saccharomyces cerevisiae. About 300 compounds have been identified in sake, and the contribution of individual components to sake flavor has been examined. However, only a few compounds can explain the characteristics on their own, and most of the attributes still remain unclear. The purpose of this study was to examine the relationship between the component profile and the attributes of sake. Gas chromatography coupled with mass spectrometry (GC/MS)-based non-targeted analysis was employed to obtain the low-molecular-weight component profile of Japanese sake, including both nonvolatile and volatile compounds. Sake attributes and overall quality were assessed by an analytical descriptive sensory test, and a model predicting the sensory scores from the component profile was constructed by means of orthogonal projections to latent structures (OPLS) regression analysis. Our results showed that 12 sake attributes [ginjo-ka (aroma of premium ginjo sake), grassy/aldehydic odor, sweet aroma/caramel/burnt odor, sulfury odor, sour taste, umami, bitter taste, body, amakara (dryness), aftertaste, pungent/smoothness and appearance] and overall quality were accurately explained by the component profiles. In addition, we were able to select statistically significant components according to variable importance on projection (VIP). Our methodology clarified the correlation between sake attributes and 200 low-molecular-weight components and indicated the importance of each component, thus providing new insights into the flavor study of sake.

  7. Risk-based damage assessment and maintenance management for turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Fujiyama, Kazunari; Fujiwara, Toshihiro; Nakatani, Yujiro; Sawa, Testu; Ishii, Junji; Horino, Masayoshi; Nishimura, Mariko; Kitayama, Kazuhiro [Industrial and Power Systems and Services Company, Toshiba Corporation, Tokyo (Japan)

    2004-05-15

    A statistical approach for risk-based maintenance of damage-tolerant components is presented. Damage risk is defined here as the expected cost due to repair of damage in the course of component life. Thermomechanical fatigue cracking was studied statistically as the typical damage phenomenon for gas turbine nozzles. Probabilities of cycles to critical crack size and cycles to total amount of cracks were calculated from plant inspection data and experimental results of low cycle fatigue. The life cycle cost of damage-tolerant components was shown to be optimized by considering the failure risk and the damage risk simultaneously. (orig.)

  8. [Thought and application of traditional Chinese medicine multiple drug delivery system based on material basis component].

    Science.gov (United States)

    Sun, E; Jia, Xiaobin; Huang, Yang; Chen, Bin; Hu, Qin; Xiao, Wei

    2012-07-01

    Addressing the current state of research in Chinese drug pharmaceutics, this study proposes a new research idea: a traditional Chinese medicine multiple drug delivery system based on material basis components. The idea follows the holistic concept and syndrome orientation of Chinese medicine and its multi-component, multi-target, multi-effect characteristics. The premise for designing such a multiple drug delivery system is the material basis component, and the purpose is to improve bioavailability. The multi-drug delivery system of tongmai micro-pellets is expounded as an application example. This new research model of Chinese drug pharmaceutics provides new strategies and methods for the development of modern Chinese drug delivery systems.

  9. Mixture gas component concentration analysis based on support vector machine and infrared spectrum

    Institute of Scientific and Technical Information of China (English)

    Peng Bai; Junhua Liu

    2006-01-01

    A novel quantitative analysis method for multi-component mixture gas concentration based on support vector machine (SVM) and spectroscopy is proposed. Through transformation of the kernel function, the seriously overlapped and nonlinear spectrum data are transformed into a high-dimensional space, while the high-dimensional data can still be processed in the original space. Factors such as the kernel function, the wavelength range, and the penalty coefficient are discussed. This method is applied to the quantitative analysis of natural gas component concentrations, and the maximum deviation of the component concentration is 2.28%.
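
    An illustrative sketch of this kind of SVM-based quantitative analysis is given below: one RBF-kernel support vector regressor per gas component is trained on synthetic, overlapped absorbance spectra. The data, kernel choice and penalty coefficient are assumptions standing in for the factors discussed in the abstract.

# Illustrative sketch of SVM-based quantitative analysis: one RBF-kernel support
# vector regressor per gas component, trained on synthetic absorbance spectra.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(2)
n_samples, n_wavelengths, n_gases = 200, 120, 3
conc = rng.uniform(0.0, 1.0, size=(n_samples, n_gases))            # component concentrations
bands = rng.normal(size=(n_gases, n_wavelengths)) ** 2              # overlapped absorption bands
spectra = conc @ bands + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(spectra[:150], conc[:150])

pred = model.predict(spectra[150:])
print("max absolute deviation:", np.max(np.abs(pred - conc[150:])))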

  10. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object...... of Refinement of Component and Object Systems (rCOS) and illustrates it with experiences from the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may...

  11. Life Extension Methodologies and Risk-Based Inspection in the Management of Fracture Critical Aeroengine Components

    Science.gov (United States)

    2003-02-01

    stresses. The life-to-first-crack distributions and the propagation lives obtained in the nickel-base superalloys used in aeroengine discs appear to... aircraft. Turbine and compressor discs and shafts are identified as the major fracture critical components. They experience extreme thermo-mechanical... airworthiness, and integrity of fracture components such as discs and shafts throughout their service operation. Such events are non-random and hence

  12. Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components

    Science.gov (United States)

    Jung-Kubiak, Cecile (Inventor); Reck, Theodore (Inventor); Chattopadhyay, Goutam (Inventor); Perez, Jose Vicente Siles (Inventor); Lin, Robert H. (Inventor); Mehdi, Imran (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Peralta, Alejandro (Inventor)

    2016-01-01

    A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers, waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.

  13. Small Target Extraction Based on Independent Component Analysis for Hyperspectral Imagery

    Institute of Scientific and Technical Information of China (English)

    LU Wei; YU Xuchu

    2006-01-01

    A small target detection approach based on independent component analysis for hyperspectral data is put forward. In this algorithm, fast independent component analysis (FICA) is first used to collect target information hidden in the high-dimensional data and project it into a low-dimensional space. Secondly, the feature images are selected by kurtosis. Finally, small targets are extracted with histogram-based image segmentation labeled by skewness.
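
    The sketch below mimics that pipeline on a synthetic hyperspectral cube: FastICA projects the pixels into a few components, kurtosis selects the candidate feature image, and a simple threshold stands in for the skewness-labeled histogram segmentation. All data and thresholds are illustrative.

# Rough sketch of the described pipeline on synthetic data: ICA projection,
# kurtosis-based selection of the feature image, and a simple threshold.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
rows, cols, bands = 64, 64, 30
cube = rng.normal(size=(rows, cols, bands))
cube[30:33, 40:43, :] += rng.normal(3.0, 0.2, size=bands)    # small synthetic target

pixels = cube.reshape(-1, bands)
ica = FastICA(n_components=5, random_state=0)
components = ica.fit_transform(pixels)                        # feature images as columns

k = np.array([kurtosis(components[:, i]) for i in range(components.shape[1])])
feature = components[:, int(np.argmax(k))].reshape(rows, cols)

mask = np.abs(feature) > np.abs(feature).mean() + 4 * np.abs(feature).std()
print("detected target pixels:", int(mask.sum()))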

  14. Conceptual Framework for Knowledge-Based Decision Migration in Multi-Component Robots

    Directory of Open Access Journals (Sweden)

    Laxmisha Rai

    2013-05-01

    Full Text Available This paper presents a conceptual framework for dynamically migrating the decisions of multi-component robots between different physical subcomponents. At the higher layer, a decision-making distributed migration system embeds the rules written in a knowledge-based system (KBS). The rules are written and decisions are made at one component, and they can be migrated by the underlying distributed system. This eventually results in intelligent migration of the decisions and robot survival, irrespective of the failure of a particular physical component. Moreover, the robot does not need an exclusive control system for each component. The implementation is a two-step process. Initially, component-specific facts are identified and mapped to the respective components, and rules are written to explore different gestures and behaviours of the robot. The second step is the embedding of these rules within the distributed or agent-based system, allowing the decisions to float around the system. The Jess (Java Expert System Shell) expert system is used as the knowledge-based tool, and different gestures are generated to realize the proposed method.

  15. The use of principle component and cluster analyses to differentiate banana pulp flours based on starch and dietary fiber components.

    Science.gov (United States)

    Ramli, Saifullah Bin; Alkarkhi, Abbas F M; Yong, Yeoh Shin; Easa, Azhar Mat

    2009-01-01

    Flours prepared from green and ripe Cavendish and Dream banana fruits were assessed for total starch, digestible starch, resistant starch, total dietary fiber, soluble dietary fiber and insoluble dietary fiber. Principal component analysis identified only one component responsible for explaining 83.83% of the total variance in the starch and dietary fiber component data, indicating that ripe banana flour had different characteristics from the green. Cluster analysis applied to the same data produced two statistically significant clusters of green and ripe banana, indicating differences in behavior according to the stage of ripeness. In conclusion, starch and dietary fiber components could be used to discriminate between flours prepared from fruits at different stages of ripeness. The results also suggest the potential of green as well as ripe banana flour as functional ingredients in food.

  16. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition refers to identifying a person from facial images; it resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and a large computational load in finding the eigenvectors. These drawbacks can be greatly reduced by combining wavelet decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
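
    A hedged sketch of the combination described above follows: a single-level 2-D wavelet decomposition (PyWavelets) reduces each image to its approximation band, PCA represents the result, and a nearest-neighbour rule classifies a probe. The image data are random placeholders rather than a face database, and the component count is an assumption.

# Sketch: wavelet decomposition (approximation band) as features, PCA for
# representation, nearest-neighbour matching. Placeholder images, not real faces.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
train = rng.random((40, 64, 64))               # 40 hypothetical training images
labels = np.repeat(np.arange(10), 4)           # 10 identities, 4 images each

def wavelet_features(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # keep the low-frequency approximation
    return cA.ravel()

F = np.array([wavelet_features(im) for im in train])
pca = PCA(n_components=20)
P = pca.fit_transform(F)

probe = train[7] + 0.05 * rng.random((64, 64))  # noisy version of a known face
q = pca.transform(wavelet_features(probe)[None, :])
predicted = labels[int(np.argmin(np.linalg.norm(P - q, axis=1)))]
print("predicted identity:", predicted)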

  17. Prediction and modeling of the two-dimensional separation characteristic of a steam generator at a nuclear power station with VVER-1000 reactors

    Science.gov (United States)

    Parchevsky, V. M.; Guryanova, V. V.

    2017-01-01

    A computational and experimental procedure is presented for constructing the two-dimensional separation curve (TDSC) of a horizontal steam generator (SG) at a nuclear power station (NPS) with VVER reactors. In contrast to the conventional one-dimensional curve, which describes the wetness of the saturated steam generated in the SG as a function of the boiler water level at one (usually rated) load, the TDSC is a function of two variables, the level and the load of the SG, which enables it to be used for wetness control over a wide load range. The procedure is based on two types of experimental data obtained during rated-load operation: the nonuniformity factor of the steam load at the outlet from the submerged perforated sheet (SPS) and the dependence of the mass water level in the vicinity of the "hot" header on the water level at the "cold" end of the SG. The TDSC prediction procedure is presented as an algorithm that uses SG characteristics such as steam load and water level as the input and gives the calculated steam wetness at the output. A zone-by-zone calculation method is used. The result is presented in an analytical form (as an empirical correlation) suitable for uploading into controllers or other control equipment. The predicted TDSC can be used during real-time operation to implement different wetness control scenarios (for example, if effectiveness is the priority, the minimum water level, minimum wetness, and maximum turbine efficiency should be maintained; if safety is the priority, the maximum level at the allowable wetness and the maximum water inventory should be kept), for operation of the NPS in frequency and power control of the grid, at the design phase (as part of the simulation complex for verification of design solutions), during construction and erection (in developing software for personnel training simulators), during commissioning tests (to reduce the duration and labor intensity of experimental activities), and for training.
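
    As a purely hypothetical illustration of how such a two-variable correlation could be used in a controller, the sketch below tabulates wetness over a (level, load) grid and interpolates at the current operating point. The grid values are invented placeholders, not plant data or the empirical correlation from the paper.

# Hypothetical use of a two-dimensional separation curve: wetness tabulated over
# (water level, steam load) and interpolated at the current operating point.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

level = np.array([2.0, 2.2, 2.4, 2.6])          # boiler water level, m (placeholder)
load = np.array([0.5, 0.75, 1.0])               # relative steam load (placeholder)
wetness = np.array([[0.02, 0.05, 0.10],         # wetness fraction at each (level, load)
                    [0.03, 0.08, 0.15],
                    [0.05, 0.12, 0.22],
                    [0.08, 0.18, 0.30]])

tdsc = RegularGridInterpolator((level, load), wetness)
print(tdsc([[2.3, 0.9]]))    # wetness predicted at level 2.3 m, 90 % load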

  18. Predictive based monitoring of nuclear plant component degradation using support vector regression

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States). Dept. of Human Factors, Controls, Statistics; Alamaniotis, Miltiadis [Purdue Univ., West Lafayette, IN (United States). School of Nuclear Engineering; Tsoukalas, Lefteri H. [Purdue Univ., West Lafayette, IN (United States). School of Nuclear Engineering

    2015-02-01

    Nuclear power plants (NPPs) are large installations comprised of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. The predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component failure threshold. The proposed framework will be tested on turbine blade degradation data.
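
    A minimal sketch of the surveillance idea, under assumed synthetic degradation data, is shown below: an RBF-kernel SVR is trained on a recorded degradation history, produces a one-step-ahead prediction of the monitored quantity, and the prediction is compared with an assumed failure threshold (the fuzzy inference stage is omitted).

# Sketch: RBF-kernel SVR trained on a degradation history gives a one-step-ahead
# prediction that is compared with an assumed failure threshold. Synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(200, dtype=float)
history = 1.0 + 0.004 * t + 0.01 * rng.normal(size=t.size)   # slowly degrading parameter

window = 5                                                     # past samples used as features
X = np.array([history[i:i + window] for i in range(len(history) - window)])
y = history[window:]

svr = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X, y)

latest = history[-window:]                     # most recent sensor measurements
next_value = svr.predict(latest[None, :])[0]   # one-step-ahead prediction
failure_threshold = 1.9                        # assumed component failure limit
print(next_value, next_value >= failure_threshold)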

  19. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    Energy Technology Data Exchange (ETDEWEB)

    Holden, H.; LeDrew, E. [Univ. of Waterloo, Ontario (Canada)

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44 % of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals thus enabling accurate monitoring over time.

  20. Towards a Component Framework for Architecture-Based Self-Adaptive Applications

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yu; MA Xiaoxing; TAO Xianping; LU Jian

    2006-01-01

    Self-adaptive software is an efficient way to cope with the highly dynamic nature of the environment in which it is situated. In this paper, from the perspective of software architecture, we propose a component framework for supporting the architecture-based design and development of self-adaptive applications. It captures some key elements of the research on software architecture and provides more flexible facilities to decouple interacting components. Based on that, a prototype is implemented to demonstrate its feasibility, and finally a case study is presented to illustrate our framework.

  1. Component-Based Development of Runtime Observers in the COMDES Framework

    DEFF Research Database (Denmark)

    Guan, Wei; Li, Gang; Angelov, Christo K.;

    2013-01-01

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive but still feasible approach to monitoring system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers in the context of the COMDES framework, a component-based framework for distributed embedded systems, and its supporting tools. Therefore, runtime verification is facilitated by model...

  2. Forecast method for used number of parts and components based on complex network

    Institute of Scientific and Technical Information of China (English)

    LIU Fu-yun; QI Guo-ning; YANG Qing-hai

    2006-01-01

    A directed complex network is applied to model the main structure of a product family. Based on the in-degree distribution curve in bi-logarithmic coordinates and the distribution rule of the nodes of the network, an in-degree evolution rule for the nodes is presented and an analytic expression for the in-degree probability density of the nodes is derived. Through analysis of the relation between the existing kinds of components and the number of existing products, an expression relating the kinds of components to the number of products is derived. A forecast method for the increment in the number of parts and components based on the increment in the number of products is presented. As an example, the number of components of an industrial steam turbine product family is forecasted, the forecast result is verified, and the forecast error is analyzed.

  3. Prognostic Health Monitoring System: Component Selection Based on Risk Criteria and Economic Benefit Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Binh T. Pham; Vivek Agarwal; Nancy J Lybeck; Magdy S Tawfik

    2012-05-01

    Prognostic health monitoring (PHM) is a proactive approach to monitoring the ability of structures, systems, and components (SSCs) to withstand structural, thermal, and chemical loadings over the SSCs' planned service lifespans. The current efforts to extend the operational license lifetime of the aging fleet of U.S. nuclear power plants from 40 to 60 years and beyond can benefit from a systematic application of PHM technology. Implementing a PHM system would strengthen the safety of nuclear power plants, reduce plant outage time, and reduce operation and maintenance costs. However, a nuclear power plant has thousands of SSCs, so implementing a PHM system that covers all SSCs requires careful planning and prioritization. This paper therefore focuses on component selection based on the analysis of a component's failure probability, risk, and cost. Ultimately, the decision on component selection depends on the overall economic benefits arising from the safety and operational considerations associated with implementing the PHM system.

  4. Microalgae as part of the autotrophic component of life support systems for future planetary bases

    Science.gov (United States)

    Sychev, Vladimir; Levinskikh, Margarita

    Research and development of human life support systems incorporating biospheric components performed in the USSR and Russia for over 50 years resulted in a well-structured and rational step-by-step approach to this area of activities. The development of biological life support systems (BLSS) was based on the theory of biocenology advanced by V.N. Sukachev, according to which organic matter turnover is a result of combined activities of plants, animals and microorganisms. Hence, a BLSS with its semi-closed matter turnover needs to incorporate all the components of natural ecosystems, i.e., plants (photoautotrophic organisms), animals, including humans, and microorganisms (heterotrophic organisms). The photoautotrophic component of the BLSS designed to support humans should meet a number of specific requirements, the most important of which are: high productivity; stability of functional parameters within their normal fluctuation ranges; compatibility with other system components to preclude additional load on them; and a minimum of un-utilizable compounds in the material balance of the component. The photosynthetic component may consist of lower and higher plants, which may function separately or jointly. In either case, microalgae will play a key role, as they do on Earth, in the production of organic compounds and oxygen as well as in the support of BLSS reliability. The construction of a planetary base begins with the assembly of major engineering facilities whereas the construction of a BLSS starts after the assembly is complete and the base interior is fully separated from the outside environment. At early stages of base operation the autotrophic component of the system will consist of algae alone, which will provide photosynthetic regeneration of air and water. At later stages the autotrophic component will progress from lower to higher plants; when the greenhouses reach adequate sizes, higher plants will occupy the major portion of the autotrophic component

  5. Using a combination of weighting factor method and imperialist competitive algorithm to improve speed and enhance process of reloading pattern optimization of VVER-1000 reactors in transient cycles

    Energy Technology Data Exchange (ETDEWEB)

    Rahmani, Yashar, E-mail: yashar.rahmani@gmail.com [Department of Physics, Faculty of Engineering, Islamic Azad University, Sari Branch, Sari (Iran, Islamic Republic of); Shahvari, Yaser [Department of Computer Engineering, Payame Noor University (PNU), P.O. Box 19395-3697, Tehran (Iran, Islamic Republic of); Kia, Faezeh [Golestan Institute of Higher Education, Gorgan 49139-83635 (Iran, Islamic Republic of)

    2017-03-15

    Highlights: • This article was an attempt to optimize reloading pattern of Bushehr VVER-1000 reactor. • A combination of weighting factor method and the imperialist competitive algorithm was used. • The speed of optimization and desirability of the proposed pattern increased considerably. • To evaluate arrangements, a coupling of WIMSD5-B, CITATION-LDI2 and WERL codes was used. • Results reflected the considerable superiority of the proposed method over direct optimization. - Abstract: In this research, an innovative solution is described which can be used with a combination of the new imperialist competitive algorithm and the weighting factor method to improve speed and increase globality of search in reloading pattern optimization of VVER-1000 reactors in transient cycles and even obtain more desirable results than conventional direct method. In this regard, to reduce the scope of the assumed searchable arrangements, first using the weighting factor method and based on values of these coefficients in each of the 16 types of loadable fuel assemblies in the second cycle, the fuel assemblies were classified in more limited groups. In consequence, the types of fuel assemblies were reduced from 16 to 6 and consequently the number of possible arrangements was reduced considerably. Afterwards, in the first phase of optimization the imperialist competitive algorithm was used to propose an optimum reloading pattern with 6 groups. In the second phase, the algorithm was reused for finding desirable placement of the subset assemblies of each group in the optimum arrangement obtained from the previous phase, and thus the retransformation of the optimum arrangement takes place from the virtual 6-group mode to the real mode with 16 fuel types. In this research, the optimization process was conducted in two states. In the first state, it was tried to obtain an arrangement with the maximum effective multiplication factor and the smallest maximum power peaking factor. In

  6. A new process monitoring method based on noisy time structure independent component analysis

    Institute of Scientific and Technical Information of China (English)

    Lianfang Cai; Xuemin Tian

    2015-01-01

    Conventional process monitoring method based on fast independent component analysis (FastICA) cannot take the ubiquitous measurement noises into account and may exhibit degraded monitoring performance under the adverse effects of the measurement noises. In this paper, a new process monitoring approach based on noisy time structure ICA (NoisyTSICA) is proposed to solve such problem. A NoisyTSICA algorithm which can consider the measurement noises explicitly is firstly developed to estimate the mixing matrix and extract the independent components (ICs). Subsequently, a monitoring statistic is built to detect process faults on the basis of the recursive kurtosis estimations of the dominant ICs. Lastly, a contribution plot for the monitoring statistic is constructed to identify the fault variables based on the sensitivity analysis. Simulation studies on the continuous stirred tank reactor system demonstrate that the proposed NoisyTSICA-based monitoring method outperforms the conventional FastICA-based monitoring method.
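
    The sketch below is illustrative only: sklearn's FastICA stands in for the NoisyTSICA algorithm (which is not reproduced here), and the data, the number of retained components and the forgetting factor are assumed values. It shows the general idea of a monitoring statistic built from recursively estimated kurtosis of the dominant ICs.

    # Minimal, illustrative sketch of an ICA-based monitoring statistic built on
    # recursively estimated kurtosis of the dominant independent components.
    # FastICA is used as a stand-in for the NoisyTSICA algorithm of the record.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))      # normal operating data (samples x variables)
    X_test = rng.normal(size=(200, 4))
    X_test[100:] += 2.0                      # simulated fault after sample 100

    ica = FastICA(n_components=3, random_state=0).fit(X_train)

    lam, m2, m4, stat = 0.95, None, None, []
    for x in X_test:
        s = ica.transform(x.reshape(1, -1)).ravel()   # IC scores (training mean removed by ICA)
        m2 = s**2 if m2 is None else lam * m2 + (1 - lam) * s**2
        m4 = s**4 if m4 is None else lam * m4 + (1 - lam) * s**4
        k = m4 / (m2**2 + 1e-12) - 3.0                # recursive kurtosis estimate per IC
        stat.append(float(np.sum(k**2)))              # combined monitoring statistic

    print("mean statistic before/after fault:", np.mean(stat[:100]), np.mean(stat[100:]))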

  7. A new rolling bearing fault diagnosis method based on GFT impulse component extraction

    Science.gov (United States)

    Ou, Lu; Yu, Dejie; Yang, Hanjian

    2016-12-01

    Periodic impulses are vital indicators of rolling bearing faults. The extraction of impulse components from rolling bearing vibration signals is of great importance for fault diagnosis. In this paper, vibration signals are taken as the path graph signals in a manifold perspective, and the Graph Fourier Transform (GFT) of vibration signals are investigated from the graph spectrum domain, which are both introduced into the vibration signal analysis. To extract the impulse components efficiently, a new adjacency weight matrix is defined, and then the GFT of the impulse component and harmonic component in the rolling bearing vibration signals are analyzed. Furthermore, as the GFT graph spectrum of the impulse component is mainly concentrated in the high-order region, a new rolling bearing fault diagnosis method based on GFT impulse component extraction is proposed. In the proposed method, the GFT of a vibration signal is firstly performed, and its graph spectrum coefficients in the high-order region are extracted to reconstruct different impulse components. Next, the Hilbert envelope spectra of these impulse components are calculated, and the envelope spectrum values at the fault characteristic frequency are arranged in order. Furthermore, the envelope spectrum with the maximum value at the fault characteristic frequency is selected as the final result, from which the rolling bearing fault can be diagnosed. Finally, an index KR, which is the product of the kurtosis and Hilbert envelope spectrum fault feature ratio of the extracted impulse component, is put forward to measure the performance of the proposed method. Simulations and experiments are utilized to demonstrate the feasibility and effectiveness of the proposed method.
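
    As a concrete illustration of the GFT step described above, the following sketch treats a 1-D vibration signal as a signal on a path graph, computes its Graph Fourier Transform via the graph Laplacian eigenvectors, and reconstructs an impulse-like component from the high-order spectral coefficients. The unit-weight adjacency and the cutoff fraction are simplifying assumptions; the record defines its own adjacency weight matrix.

    # Illustrative sketch: GFT of a vibration signal on a path graph and
    # reconstruction of the impulse component from high-order coefficients.
    import numpy as np

    n = 512
    t = np.arange(n) / 1000.0
    signal = np.sin(2 * np.pi * 50 * t)                  # harmonic component
    signal[::100] += 3.0                                 # periodic impulses (simulated fault)

    # Path-graph adjacency and Laplacian (unit weights as a simplification)
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    L = np.diag(A.sum(axis=1)) - A

    # GFT basis: Laplacian eigenvectors ordered by increasing graph frequency
    eigvals, U = np.linalg.eigh(L)
    gft = U.T @ signal                                   # graph spectrum of the signal

    cut = int(0.6 * n)                                   # keep only high-order coefficients
    gft_high = np.zeros_like(gft)
    gft_high[cut:] = gft[cut:]
    impulse_component = U @ gft_high                     # inverse GFT

    print("largest impulse samples:", np.sort(np.argsort(np.abs(impulse_component))[-5:]))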

  8. Issues on Component Based Architectures Utilization for Real Time Control Applications

    Directory of Open Access Journals (Sweden)

    ZMARANDA Doina

    2011-05-01

    Full Text Available Generally, real-time embedded control systems are very demanding from the timing point of view. The increasing complexity and criticality of such systems pose a challenge for their design and programming model. Several development models have been proposed in the literature, all of which can be grouped into two categories: models based on the event-triggered approach and models based on the time-triggered approach. This paper focuses on two of the best-known programming models that exhibit a component architecture: Giotto, based on the time-triggered approach, and timed multitasking, based on the event-triggered approach. Based on a survey of each model's capabilities and component structure, the advantages and drawbacks of their use for real-time embedded systems are analyzed in the paper and several conclusions are drawn.

  9. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.

  10. Progress of a research program on seismic base isolation of nuclear components

    Energy Technology Data Exchange (ETDEWEB)

    Ebisawa, K.; Ando, K.; Shibata, K. [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan)

    2000-05-01

    Development of an evaluation code and a related test program has been conducted to provide the technical basis for the seismic base isolation of nuclear components. In Phase I (FY1991-FY1995) of the research, a methodology and a computer code Ver.1 for evaluating the effect of seismic base isolation of nuclear components were developed. A case study was carried out on the effectiveness of base isolation for an emergency transformer. Differences in input earthquake motion, type of isolation device and influence of the soil properties were studied. A case study of cost/benefit analysis for introducing base isolation to the emergency transformer was carried out as an application of the computer code. In Phase II (FY1996-FY2000), in order to obtain test data on component base isolation systems, a verification test program, in which a test utilizing real earthquakes and a test on a shaking table are to be carried out, has been under way since FY1996. In the tests, the dynamic response and failure modes of base isolation systems will be examined. This paper overviews the progress of the Phase I and II research. (orig.)

  11. A study of different cases of VVER reactor core flooding in a large break loss of coolant accident

    Directory of Open Access Journals (Sweden)

    Bezrukov Yury Alekseevich

    2016-01-01

    Full Text Available The paper covers the results of VVER core reflooding studies in a fuel assembly (FA) mockup of 126 fuel rod simulators with axial power peaking. The experiments were performed for two types of flooding. The first type is top flooding of the empty (steamed) FA mockup. The second type is bottom flooding of the FA mockup with a boiling water level. The test parameters are as follows: the range of the supplied power to the bundle is from 40 to 320 kW, the cooling water flow rate is from 0.04 to 1.1 kg/s, the maximum temperature of the fuel rod simulator is 800 °C and the linear heat flux is from 0.1 to 1.0 kW/m. The test results were used for computer code validation.

  12. Effective Web Design and Core Communication Issues: The Missing Components in Web-Based Distance Education.

    Science.gov (United States)

    Burch, Randall O.

    2001-01-01

    Discussion of Web-based distance education focuses on communication issues. Highlights include Internet communications; components of a Web site, including site architecture, user interface, information delivery method, and mode of feedback; elements of Web design, including conceptual design, sensory design, and reactive design; and a Web…

  13. A model for determining condition-based maintenance policies for deteriorating multi-component systems

    NARCIS (Netherlands)

    Hontelez, J.A.M.; Wijnmalen, D.J.D.

    1993-01-01

    We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so

  14. The performance of a component-based allergen microarray for the diagnosis of kiwifruit allergy

    NARCIS (Netherlands)

    Bublin, M.; Dennstedt, S.; Buchegger, M.; Ciardiello, M. Antonietta; Bernardi, M. L.; Tuppo, L.; Harwanegg, C.; Hafner, C.; Ebner, C.; Ballmer-Weber, B. K.; Knulst, A.; Hoffmann-Sommergruber, K.; Radauer, C.; Mari, A.; Breiteneder, H.

    Background: Allergy to kiwifruit is increasingly reported across Europe. Currently, the reliability of its diagnosis by the measurement of allergen-specific IgE with extracts or by skin testing with fresh fruits is unsatisfying. Objective: To evaluate the usefulness of a component-based allergen

  15. Prototypic implementations of the building block for component based open Hypermedia systems (BB/CB-OHSs)

    DEFF Research Database (Denmark)

    Mohamed, Omer I. Eldai

    2005-01-01

    In this paper we describe the prototypic implementations of the BuildingBlock (BB/CB-OHSs) that was proposed to address some of the Component-based Open Hypermedia Systems (CB-OHSs) issues, including distribution and interoperability [4, 11, 12]. Four service implementations are described below...

  16. Quantitative characterization of the carbon/carbon composites components based on video of polarized light microscope.

    Science.gov (United States)

    Li, Yixian; Qi, Lehua; Song, Yongshan; Chao, Xujiang

    2017-02-13

    The components of carbon/carbon (C/C) composites have a significant influence on the thermal and mechanical properties, so a quantitative characterization of the components is necessary to study the microstructure of C/C composites and, further, to improve their macroscopic properties. Considering that the extinction crosses of the pyrocarbon matrix show distinctive motion features, polarized light microscope (PLM) video is used to characterize C/C composites quantitatively because it contains sufficient dynamic and structural information. The optical flow method is then introduced to compute the optical flow field between adjacent frames and to segment the components of the C/C composites from the PLM image by image processing. Meanwhile, the matrix regions with different textures are re-segmented by the length difference of the motion vectors, and the fraction of each component and the extinction angle of the pyrocarbon matrix are then calculated directly. Finally, the C/C composites are successfully characterized in terms of carbon fiber, pyrocarbon, and pores by a series of image processing operators based on the PLM video, and the errors of the component fractions are less than 15%.
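
    A minimal sketch of the optical-flow step is given below, assuming OpenCV is available: a dense flow field is computed between two adjacent frames and regions with significant motion (e.g., the rotating extinction crosses) are segmented by thresholding the motion-vector length. The two synthetic frames, the Farneback parameters and the threshold are assumptions standing in for real PLM video frames.

    # Sketch: dense optical flow between two frames and segmentation by motion length.
    import numpy as np
    import cv2

    frame1 = np.zeros((128, 128), dtype=np.uint8)
    frame2 = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(frame1, (60, 64), 10, 255, -1)            # bright feature in frame 1
    cv2.circle(frame2, (64, 64), 10, 255, -1)            # same feature shifted in frame 2

    # args after the flow placeholder: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)             # length of the motion vector per pixel

    moving_mask = (magnitude > 0.5).astype(np.uint8)     # crude segmentation of moving regions
    print("moving pixels:", int(moving_mask.sum()))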

  17. The Change of Austenitic Stainless Steel Elements Content in the Inner Parts of VVER-440 Reactor during Operation

    Science.gov (United States)

    Smutný, Vladimír; Hep, Jaroslav; Novosad, Petr

    2009-08-01

    Neutron activation induces element transmutation in materials surrounding the reactor active core. The objective of the present paper is to calculate and evaluate the change in the element content of austenitic stainless steel 08Ch18N10T through neutron-induced activation in the inner parts of the VVER-440 - the baffle and the barrel - with particular attention to the change in Mn content. The neutron flux density and then the neutron activation of the austenitic stainless steel elements in parts near the core are calculated. Neutron activation represents a measure of the transmutation of the steel elements. The power distribution is determined as the average of the mid-cycle power distributions of several cycles of the NPP Dukovany. The power distribution is calculated with the code MOBY-DICK [1]. The neutron flux density is calculated with the code TORT [2]. The neutron activation of the austenitic stainless steel elements in the baffle and in the barrel is calculated with the EASY-2007 system containing the code FISPACT-2007 [3]. The change in element content is calculated up to the assumed end of reactor operation at 40 years. Monitoring and benchmarking of the element content change are also necessary, because the neutron flux calculation, particularly in the thermal region, shows considerable uncertainty, e.g. [4]. The motivation for this work is the study of stress corrosion cracking of austenitic stainless steels induced by radiation inside PWRs and BWRs, e.g. [5]. The paper could serve as a basis for estimating neutron-induced corrosion damage of austenitic stainless steel in the inner parts of the VVER-440 reactor.

  18. A Network Framework Based on MechanicalComponent Design and Manufacturing

    Institute of Scientific and Technical Information of China (English)

    顾立志; 陈光军; 杨康

    2004-01-01

    Network manufacturing has been rapidly developed and is going to play an important role in modern industry. The core of network manufacturing of mechanical products is design and manufacturing based on computer network technology. A network framework is introduced for manufacturing mechanical components at two main levels. On the design level, features of the component are initially studied based on the structure and functions of the component. Details of the design procedure and contents are then analyzed for three main kinds of components. In this stage, selection of materials, calculation of stress and deflection under load, and determination of size are carried out using CAD. On the manufacturing level, various aspects of CAPP are discussed, including the principle and modes of positioning the component, the exerted clamping forces, cutting engagement and input parameters, the machine tools used, and machining fluids if necessary. Finally, a prototype of the network framework is presented with several pieces of data terminal equipment connected through a local area network, covering the topological structure and data sharing and security, without addressing concurrent engineering techniques, virtual manufacturing, or virtual measuring techniques.

  19. Batch process monitoring based on multiple-phase online sorting principal component analysis.

    Science.gov (United States)

    Lv, Zhaomin; Yan, Xuefeng; Jiang, Qingchao

    2016-09-01

    Existing phase-based batch or fed-batch process monitoring strategies generally have two problems: (1) the phase number is difficult to determine, and (2) the data have uneven lengths. In this study, a multiple-phase online sorting principal component analysis modeling strategy (MPOSPCA) is proposed to monitor multiple-phase batch processes online. Based on all batches of off-line normal data, a new multiple-phase partition algorithm is proposed, where k-means and a defined average Euclidean radius are employed to determine the multiple-phase data set and phase number. Principal component analysis is then applied to build the model in each phase, and all the components are retained. In online monitoring, the Euclidean distance is used to select the monitoring model. All the components undergo online sorting through a parameter defined by Bayesian inference (BI). The first several components are retained to calculate the T(2) statistics. Finally, the respective probability indices of [Formula: see text] are obtained using BI as the moving average strategy. The feasibility and effectiveness of MPOSPCA are demonstrated through a simple numerical example and the fed-batch penicillin fermentation process.
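
    The following sketch illustrates the phase-partition plus per-phase PCA idea in its simplest form: k-means partitions off-line normal data into phases, a PCA model is fitted per phase, and an on-line sample is scored with the T(2) statistic of the nearest phase model. The Bayesian component sorting and the average-Euclidean-radius criterion of the record are omitted; data, phase count and component count are assumptions.

    # Minimal sketch of multiple-phase PCA monitoring with a T^2 statistic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    phase_a = rng.normal(0.0, 1.0, size=(300, 5))
    phase_b = rng.normal(5.0, 1.0, size=(300, 5))
    X_train = np.vstack([phase_a, phase_b])          # off-line normal data with two phases

    n_phases = 2
    km = KMeans(n_clusters=n_phases, n_init=10, random_state=0).fit(X_train)
    models = [PCA(n_components=3).fit(X_train[km.labels_ == p]) for p in range(n_phases)]

    def t2(x):
        # select the phase model whose centre is closest, then compute Hotelling's T^2
        p = int(np.argmin(np.linalg.norm(km.cluster_centers_ - x, axis=1)))
        scores = models[p].transform(x.reshape(1, -1)).ravel()
        return float(np.sum(scores**2 / models[p].explained_variance_))

    print("normal sample T2 :", t2(phase_a[0]))
    print("faulty sample T2 :", t2(phase_a[0] + 4.0))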

  20. Topology optimization based on moving deformable components: A new computational framework

    OpenAIRE

    2014-01-01

    In the present work, a new computational framework for structural topology optimization based on the concept of moving deformable components is proposed. Compared with the traditional pixel or node point-based solution framework, the proposed solution paradigm can incorporate more geometry and mechanical information into topology optimization directly and therefore render the solution process more flexible. It also has the great potential to reduce the computational burden associated with top...

  1. Protection algorithm for a wind turbine generator based on positive- and negative-sequence fault components

    DEFF Research Database (Denmark)

    Zheng, Tai-Ying; Cha, Seung-Tae; Crossley, Peter A.;

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on positive- and negative-sequence fault components is proposed in the paper. The relay uses the magnitude of the positive-sequence component in the fault current to detect a fault on a parallel WTG, connected to the same power collection...... feeder, or a fault on an adjacent feeder; but for these faults, the relay remains stable and inoperative. A fault on the power collection feeder or a fault on the collection bus, both of which require an instantaneous tripping response, are distinguished from an inter-tie fault or a grid fault, which...

  2. Layer-component-based communication stack framework for wireless residential control systems

    DEFF Research Database (Denmark)

    Torbensen, R.; Hjorth, Theis S.

    2011-01-01

    This paper describes methods to lower the entry barrier for creating products that interoperate in the emerging heterogeneous residential control network domain. For designing reconfigurable, layer-component-based communication stacks, a flexible framework is proposed that supports several types... of nodes such as bridges, controllers, sensor/actuators – as well as secure communication between them. A special messaging system facilitates inter-component communication, and a Virtual Port Service protocol enables resource addressing. The end-devices in the heterogeneous network are made accessible... It is shown how the framework facilitates fast prototyping and makes developing secure wireless control systems less complex.

  3. Layer-component-based communication stack framework for wireless residential control systems

    DEFF Research Database (Denmark)

    Torbensen, Rune Sonnich; Hjorth, Theis

    2010-01-01

    This paper describes methods to lower the entry barrier for creating products that interoperate in the emerging heterogeneous residential control network domain. For designing reconfigurable, layer-component-based communication stacks, a flexible framework is proposed that supports several types... of nodes such as bridges, controllers, sensor/actuators - as well as secure communication between them. A special messaging system facilitates inter-component communication, and a Virtual Port Service protocol enables resource addressing. The end-devices in the heterogeneous network are made accessible... It is shown how the framework facilitates fast prototyping and makes developing secure wireless control systems less complex. © 2010 IEEE.

  4. Layer-component-based communication stack framework for wireless residential control systems

    DEFF Research Database (Denmark)

    Torbensen, R.; Hjorth, Theis S.

    2011-01-01

    This paper describes methods to lower the entry barrier for creating products that interoperate in the emerging heterogeneous residential control network domain. For designing reconfigurable, layer-component-based communication stacks, a flexible framework is proposed that supports several types...... of nodes such as bridges, controllers, sensor/actuators – as well as secure communication between them. A special messaging system facilitates inter-component communication, and a Virtual Port Service protocol enables resource addressing. The end-devices in the heterogeneous network are made accessible...

  5. Independent component feature-based human activity recognition via Linear Discriminant Analysis and Hidden Markov Model.

    Science.gov (United States)

    Uddin, Md; Lee, J J; Kim, T S

    2008-01-01

    In proactive computing, human activity recognition from image sequences is an active research area. This paper presents a novel approach to human activity recognition based on Linear Discriminant Analysis (LDA) of Independent Component (IC) features from shape information. With the extracted features, a Hidden Markov Model (HMM) is applied for training and recognition. The recognition performance using LDA of IC features has been compared to other approaches including Principal Component Analysis (PCA), LDA of PC, and ICA. The preliminary results show much improved performance in the recognition rate with our proposed method.
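
    A minimal sketch of the feature pipeline is given below: IC features are extracted from shape vectors and LDA is then used to project and classify them. The HMM training and recognition stage of the record is omitted, and the synthetic vectors are stand-ins for binary silhouette features; all names and sizes are illustrative assumptions.

    # Sketch: Independent Component features followed by Linear Discriminant Analysis.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (100, 50)),      # activity class 0 (synthetic shape vectors)
                   rng.normal(1, 1, (100, 50))])     # activity class 1
    y = np.repeat([0, 1], 100)

    ica = FastICA(n_components=10, random_state=0)
    Z = ica.fit_transform(X)                         # IC features of the shape vectors

    lda = LinearDiscriminantAnalysis().fit(Z[::2], y[::2])     # train on every other sample
    print("held-out accuracy:", lda.score(Z[1::2], y[1::2]))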

  6. Classifying sEMG-based Hand Movements by Means of Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    M. S. Isaković

    2015-06-01

    Full Text Available In order to improve surface electromyography (sEMG) based control of hand prostheses, we applied Principal Component Analysis (PCA) for feature extraction. The sEMG data from a group of healthy subjects (downloaded from the free Ninapro database) comprised the following sets: three grasping, eight wrist, and eleven finger movements. We tested the accuracy of a simple quadratic classifier for two sets of features derived from PCA. Preliminary results suggest that the first two principal components do not guarantee successful hand movement classification. The hand movement classification accuracy increased significantly when using three instead of two features, in all three sets of movements and for all subjects.
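
    The comparison described above can be sketched as follows: features are projected onto the first two versus the first three principal components, and a simple quadratic classifier is scored on each projection. Synthetic clusters stand in for the Ninapro sEMG features; the class means, sample counts and cross-validation setup are assumptions.

    # Sketch: 2 vs. 3 principal components with a quadratic classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(m, 1.0, (80, 8)) for m in (0.0, 1.5, 3.0)])   # three "movements"
    y = np.repeat([0, 1, 2], 80)

    for n_pc in (2, 3):
        Z = PCA(n_components=n_pc).fit_transform(X)
        acc = cross_val_score(QuadraticDiscriminantAnalysis(), Z, y, cv=5).mean()
        print(n_pc, "principal components: mean CV accuracy =", round(acc, 2))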

  7. Experimental investigations of thermal-hydraulic processes arising during operation of the passive safety systems used in new projects of nuclear power plants equipped with VVER reactors

    Science.gov (United States)

    Morozov, A. V.; Remizov, O. V.; Kalyakin, D. S.

    2014-05-01

    The results obtained from experimental investigations into thermal-hydraulic processes that take place during operation of the passive safety systems used in new-generation reactor plants constructed on the basis of VVER technology are presented. The experiments were carried out on the model rigs available at the Leipunskii Institute for Physics and Power Engineering. The processes through which interaction occurs between the opposite flows of saturated steam and cold water moving in the vertical steam line of the additional system for passively flooding the core from the second-stage hydro accumulators are studied. The specific features pertinent to undeveloped boiling of liquid on a single horizontal tube heated by steam and a steam-gas mixture, which is typical of the condensing operating mode of a VVER reactor steam generator, are investigated.

  8. An Automated Video Object Extraction System Based on Spatiotemporal Independent Component Analysis and Multiscale Segmentation

    Directory of Open Access Journals (Sweden)

    Zhang Xiao-Ping

    2006-01-01

    Full Text Available Video content analysis is essential for efficient and intelligent utilization of vast multimedia databases over the Internet. In video sequences, object-based extraction techniques are important for content-based video processing in many applications. In this paper, a novel technique is developed to extract objects from video sequences based on spatiotemporal independent component analysis (stICA) and multiscale analysis. The stICA is used to extract the preliminary source images containing moving objects in video sequences. The source image data obtained after stICA analysis are further processed using wavelet-based multiscale image segmentation and region detection techniques to improve the accuracy of the extracted object. An automated video object extraction system is developed based on these new techniques. Preliminary results demonstrate great potential for the new stICA and multiscale-segmentation-based object extraction system in content-based video processing applications.

  9. Multi-polarization reconstruction from compact polarimetry based on modified four-component scattering decomposition

    Institute of Scientific and Technical Information of China (English)

    Junjun Yin; Jian Yang

    2014-01-01

    An improved algorithm for multi-polarization reconstruction from compact polarimetry (CP) is proposed. According to two fundamental assumptions in compact polarimetric reconstruction, two improvements are proposed. Firstly, the four-component model-based decomposition algorithm is modified with a new volume scattering model. The decomposed helix scattering component is then used to deal with the non-reflection symmetry condition in compact polarimetric measurements. Using the decomposed power and considering the scattering mechanism of each component, an average relationship between co-polarized and cross-polarized channels is developed over the original polarization state extrapolation model. E-SAR polarimetric data acquired over the Oberpfaffenhofen area and JPL/AIRSAR polarimetric data acquired over San Francisco are used for verification, and good reconstruction results are obtained, demonstrating the effectiveness of the proposed algorithm.

  10. Integrated ultracompact and broadband wavelength demultiplexer based on multi-component nano-cavities.

    Science.gov (United States)

    Lu, Cuicui; Liu, Yong-Chun; Hu, Xiaoyong; Yang, Hong; Gong, Qihuang

    2016-06-06

    Integrated nanoscale photonic devices have wide applications ranging from optical interconnects and optical computing to optical communications. Wavelength demultiplexer is an essential on-chip optical component which can separate the incident wavelength into different channels; however, the experimental progress is very limited. Here, using a multi-component nano-cavity design, we realize an ultracompact, broadband and high-contrast wavelength demultiplexer, with 2.3 μm feature size, 200 nm operation bandwidth (from 780 nm to 980 nm) and a contrast ratio up to 13.7 dB. The physical mechanism is based on the strong modulation of the surface plasmon polaritons induced by the multi-component nano-cavities, and it can be generalized to other nanoscale photonic devices. This provides a strategy for constructing on-chip photon routers, and also has applications for chip-integrated optical filter and optical logic gates.

  11. A new three-dimensional topology optimization method based on moving morphable components (MMCs)

    Science.gov (United States)

    Zhang, Weisheng; Li, Dong; Yuan, Jie; Song, Junfu; Guo, Xu

    2017-04-01

    In the present paper, a new method for solving three-dimensional topology optimization problem is proposed. This method is constructed under the so-called moving morphable components based solution framework. The novel aspect of the proposed method is that a set of structural components is introduced to describe the topology of a three-dimensional structure and the optimal structural topology is found by optimizing the layout of the components explicitly. The standard finite element method with ersatz material is adopted for structural response analysis and the shape sensitivity analysis only needs to be carried out along the structural boundary. Compared to the existing methods, the description of structural topology is totally independent of the finite element/finite difference resolution in the proposed solution framework and therefore the number of design variables can be reduced substantially. Some widely investigated benchmark examples, in the three-dimensional topology optimization designs, are presented to demonstrate the effectiveness of the proposed approach.

  12. Structure Analysis of Network Traffic Matrix Based on Relaxed Principal Component Pursuit

    CERN Document Server

    Wang, Zhe; Xu, Ke; Yin, Baolin

    2011-01-01

    The network traffic matrix is a kind of flow-level Internet traffic data and is widely applied to network operation and management. Analyzing the composition and structure of the traffic matrix is a crucial problem; mathematical approaches such as Principal Component Analysis (PCA) have been used to handle it. In this paper, we first argue that PCA performs poorly for analyzing traffic matrices polluted by large volume anomalies, and then propose a new composition model of the network traffic matrix. According to our model, structure analysis can be formally defined as decomposing a traffic matrix into low-rank, sparse, and noise sub-matrices, which is equivalent to the Robust Principal Component Analysis (RPCA) problem defined in [13]. Based on the Relaxed Principal Component Pursuit (Relaxed PCP) method and the Accelerated Proximal Gradient (APG) algorithm, an iterative algorithm for decomposing a traffic matrix is presented, and our experimental results demonstrate its efficiency and flexibility. At last, f...
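
    The decomposition idea can be sketched with a basic inexact augmented Lagrange multiplier iteration for principal component pursuit, shown below; this generic scheme from the PCP literature illustrates the low-rank plus sparse plus noise split but is not the record's exact Relaxed PCP / APG algorithm, and the traffic matrix, the parameter choices and the iteration count are assumptions.

    # Sketch: low-rank + sparse decomposition of a traffic matrix (generic PCP iteration).
    import numpy as np

    def soft(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def rpca(M, n_iter=200):
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
        L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            # low-rank update by singular value thresholding
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * soft(s, 1.0 / mu)) @ Vt
            # sparse (anomaly) update by elementwise soft thresholding
            S = soft(M - L + Y / mu, lam / mu)
            Y = Y + mu * (M - L - S)        # dual update; the residual M - L - S is the noise part
        return L, S

    rng = np.random.default_rng(4)
    routine = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # low-rank routine traffic
    anomaly = np.zeros((50, 40)); anomaly[10, 5] = 25.0             # one large volume anomaly
    L, S = rpca(routine + anomaly)
    print("largest recovered anomaly at:", np.unravel_index(np.argmax(np.abs(S)), S.shape))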

  13. Robust multi-stream speech recognition based on weighting the output probabilities of feature components

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun; WEI Gang; YU Hua; NING Genxin

    2009-01-01

    In the traditional multi-stream fusion methods of speech recognition, all the feature components in a data stream share the same stream weight, while their distortion levels are usually different when the speech recognizer works in noisy environments. To overcome this limitation of the traditional multi-stream frameworks, the current study proposes a new stream fusion method that weights not only the stream outputs, but also the output probabilities of feature components. How the stream and feature component weights in the new fusion method affect the decision is analyzed, and two stream fusion schemes based on the marginalisation and soft decision models in the missing data techniques are proposed. Experimental results on the hybrid sub-band multi-stream speech recognizer show that the proposed schemes can adjust the stream influences on the decision adaptively and outperform the traditional multi-stream methods in various noisy environments.

  14. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example of a gates control system was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The experimental results demonstrated the effectiveness of our approach.

  15. Determination of power distribution in the VVER-440 core on the basis of data from in-core monitors by means of a metric analysis

    Science.gov (United States)

    Kryanev, A. V.; Udumyan, D. K.; Kurchenkov, A. Yu.; Gagarinskiy, A. A.

    2014-12-01

    Problems associated with determining the power distribution in the VVER-440 core on the basis of a neutron-physics calculation and data from in-core monitors are considered. A new mathematical scheme based on a metric analysis is proposed for this purpose. Compared with existing mathematical schemes, the proposed scheme improves the accuracy and reliability of the resulting power distribution.

  16. A RSM Method for Nonlinear Probabilistic Analysis of the Reinforced Concrete Structure Failure of a Nuclear Power Plant - Type VVER 440

    OpenAIRE

    Králik, Juraj

    2011-01-01

    This paper describes the reliability analysis of a concrete containment for VVER 440 under a high internal overpressure. The probabilistic safety assessment (PSA) level 3 aims at an assessment of the probability of the concrete structure failure under the excessive overpressure. The non-linear analysis of the concrete structures was considered. The uncertainties of the loads level (long-time temperature and dead loads), the material model (concrete cracking and crushing, behavior of the reinf...

  17. A novel prediction method about single components of analog circuits based on complex field modeling.

    Science.gov (United States)

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies address prediction for analog circuits. The existing methods lack correlation with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. At last, it uses a particle filter (PF) to update the parameters of the model and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, the accuracy of the prediction is improved to some extent. Finally, the foregoing conclusions are verified by experiments.

  18. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    Directory of Open Access Journals (Sweden)

    Jingyu Zhou

    2014-01-01

    Full Text Available Few studies address prediction for analog circuits. The existing methods lack correlation with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. At last, it uses a particle filter (PF) to update the parameters of the model and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, the accuracy of the prediction is improved to some extent. Finally, the foregoing conclusions are verified by experiments.
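
    The particle-filter step described in the two records above can be sketched as a minimal bootstrap filter that tracks a drifting fault indicator and extrapolates the estimated drift to a failure threshold. The linear drift model, the noise levels and the threshold are illustrative assumptions, not the model of the record.

    # Minimal bootstrap particle filter: track a drifting fault indicator (FI) and
    # extrapolate it to a failure threshold as a crude remaining-useful-performance estimate.
    import numpy as np

    rng = np.random.default_rng(5)
    observations = 1.0 + 0.02 * np.arange(100) + rng.normal(0, 0.05, 100)   # measured FI

    n_particles = 1000
    drift = rng.normal(0.0, 0.05, n_particles)           # particles over the unknown drift rate
    level = np.full(n_particles, observations[0])

    for k in range(1, 100):
        level = level + drift + rng.normal(0, 0.01, n_particles)        # propagate state
        w = np.exp(-0.5 * ((observations[k] - level) / 0.05) ** 2)      # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)                 # resample
        level, drift = level[idx], drift[idx]

    threshold = 4.0                                       # assumed failure threshold for the FI
    remaining = (threshold - level.mean()) / max(drift.mean(), 1e-6)
    print("estimated drift", round(drift.mean(), 3), "steps to threshold", round(remaining))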

  19. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user’s training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  20. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  1. Biochemical component identification by light scattering techniques in whispering gallery mode optical resonance based sensor

    Science.gov (United States)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-03-01

    Experimental data on the detection and identification of a variety of biochemical agents, such as proteins (albumin, interferon, C-reactive protein), microelements (Na+, Ca+), and antibiotics of different generations, in both single- and multi-component solutions with concentrations varied over a wide range, are presented. The analysis is based on the light scattering parameters of a whispering gallery mode (WGM) optical resonance based sensor with dielectric microspheres of glass and PMMA as sensitive elements, fixed by spin-coating techniques in an adhesive layer on the substrate surface or directly on the coupling element. The sensitive layer was integrated into a fluidic cell developed with a digital syringe. Light from a tunable laser, tightly focused on and scattered by a single microsphere, was detected by a CMOS camera. The image was filtered for noise reduction and integrated over two coordinates to evaluate the integrated energy of the measured signal. The following signal parameters were used as input data: the spectral shift of the WGM optical resonance frequency in the microsphere, relative to a free spectral range, and the relative efficiency of WGM excitation within a free spectral range, both of which depend on the type and concentration of the investigated agents. Multiplexing over parameters and components was realized using the spatial and spectral parameters of the light scattered by the microsphere, together with the developed data processing. Classification and identification of the biochemical agents under investigation were performed by network analysis techniques based on a probabilistic network and a multilayer perceptron. The developed approach is demonstrated to be applicable to both single-agent and multi-component biochemical analysis.

  2. Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study

    Science.gov (United States)

    Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas

    2009-02-01

    We aim to capture and apply 3-dimensional bone fragility features for fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigen images) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject, and used the fracture risk indices to discriminate the fracture patients and controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the AUC (area under curve). When using the first group as the training group and the second as the test group, the AUC was 0.880, compared to conventional fracture risk estimation methods based on bone densitometry, which had AUC values ranging between 0.782 and 0.871. When using the second group as the training group, the AUC was 0.839, compared to densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from hip QCT atlas are associated with hip fracture. Use of such features may provide new quantitative measures of interest to osteoporosis.
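
    The train/test workflow described above can be sketched as follows: PCA is fitted on the training images, a simple classifier on the component scores yields a fracture risk index, and discrimination is evaluated by the ROC AUC. Random vectors stand in for the registered QCT images, and the logistic-regression risk model and component count are assumptions, not the record's exact model.

    # Sketch: PCA of atlas-space images, a risk index from the scores, and ROC AUC evaluation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)
    controls = rng.normal(0.0, 1.0, size=(38, 500))      # flattened "images", control group
    fractures = rng.normal(0.3, 1.0, size=(37, 500))     # fracture group with a shifted pattern
    X = np.vstack([controls, fractures])
    y = np.r_[np.zeros(38), np.ones(37)]

    train = rng.permutation(75)[:38]                      # random split into training and test sets
    test = np.setdiff1d(np.arange(75), train)

    pca = PCA(n_components=10).fit(X[train])
    clf = LogisticRegression(max_iter=1000).fit(pca.transform(X[train]), y[train])
    risk_index = clf.predict_proba(pca.transform(X[test]))[:, 1]
    print("test-set AUC:", round(roc_auc_score(y[test], risk_index), 3))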

  3. Assessment of the TiO2/water nanofluid effects on heat transfer characteristics in VVER-1000 nuclear reactor using CFD modeling

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad Mousavizadeh

    2015-12-01

    Full Text Available The most important advantage of nanoparticles is the increased thermal conductivity coefficient and convection heat transfer coefficient so that, as a result of using a 1.5% volume concentration of nanoparticles, the thermal conductivity coefficient would increase by about twice. In this paper, the effects of a nanofluid (TiO2/water) on heat transfer characteristics such as the thermal conductivity coefficient, heat transfer coefficient, fuel clad, and fuel center temperatures in a VVER-1000 nuclear reactor are investigated. To this end, the cell equivalent of a fuel rod and its surrounding coolant fluid were obtained in the hexagonal fuel assembly of a VVER-1000 reactor. Then, a fuel rod was simulated in the hot channel using Computational Fluid Dynamics (CFD) simulation codes and thermohydraulic calculations (maximum fuel temperature, fluid outlet, Minimum Departure from Nucleate Boiling Ratio (MDNBR), etc.) were performed and compared with a VVER-1000 reactor without nanoparticles. One of the most important results of the analysis was that heat transfer and the thermal conductivity coefficient increased, and usage of the nanofluid reduced MDNBR.

  4. Assessment of the TiO{sub 2}/water nanofluid effects on heat transfer characteristics in VVER-1000 nuclear reactor using CFD modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mousavizadeh, Seyed Mohammad; Ansarifar, Gholam Reza; Talebi, Mansour [Dept. of Nuclear Engineering, Faculty of Advanced Sciences and Technology, University of Isfahan, Isfahan (Iran, Islamic Republic of)

    2015-12-15

    The most important advantage of nanoparticles is the increased thermal conductivity coefficient and convection heat transfer coefficient so that, as a result of using a 1.5% volume concentration of nanoparticles, the thermal conductivity coefficient would increase by about twice. In this paper, the effects of a nanofluid (TiO2/water) on heat transfer characteristics such as the thermal conductivity coefficient, heat transfer coefficient, fuel clad, and fuel center temperatures in a VVER-1000 nuclear reactor are investigated. To this end, the cell equivalent of a fuel rod and its surrounding coolant fluid were obtained in the hexagonal fuel assembly of a VVER-1000 reactor. Then, a fuel rod was simulated in the hot channel using Computational Fluid Dynamics (CFD) simulation codes and thermohydraulic calculations (maximum fuel temperature, fluid outlet, Minimum Departure from Nucleate Boiling Ratio (MDNBR), etc.) were performed and compared with a VVER-1000 reactor without nanoparticles. One of the most important results of the analysis was that heat transfer and the thermal conductivity coefficient increased, and usage of the nanofluid reduced MDNBR.

  5. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    Full Text Available A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO)-model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.

  6. Improved gene prediction by principal component analysis based autoregressive Yule-Walker method.

    Science.gov (United States)

    Roy, Manidipa; Barman, Soma

    2016-01-10

    Spectral analysis using Fourier techniques is popular in gene prediction because of its simplicity. Model-based autoregressive (AR) spectral estimation gives better resolution even for small DNA segments, but the selection of an appropriate model order is a critical issue. In this article a technique is proposed in which a Yule-Walker autoregressive (YW-AR) process is combined with principal component analysis (PCA) for dimensionality reduction. The spectral peaks of the DNA signal are used to detect protein-coding regions based on the 1/3 frequency component. Here optimal model order selection is no longer critical, as noise is removed by PCA prior to power spectral density (PSD) estimation. The eigenvalue ratio is used to find the threshold between the signal and noise subspaces for data reduction. The superiority of the proposed method over the fast Fourier transform (FFT) method and the autoregressive method combined with the wavelet packet transform (WPT) is established with the help of receiver operating characteristics (ROC) and a discrimination measure (DM), respectively.
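
    The Yule-Walker AR spectral step can be sketched as below: AR coefficients are estimated from the autocorrelation of a numerically mapped DNA segment and the AR power spectral density is evaluated near the 1/3 frequency that marks protein-coding regions. The PCA denoising stage of the record is omitted, and the synthetic period-3 signal, the numeric mapping and the model order are assumptions.

    # Sketch: Yule-Walker AR power spectral density of a period-3 signal.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(7)
    n = 300
    signal = np.cos(2 * np.pi * np.arange(n) / 3.0) + 0.5 * rng.normal(size=n)  # period-3 pattern

    def yule_walker_psd(x, order, freqs):
        x = x - x.mean()
        r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)    # biased autocorrelation
        a = solve_toeplitz(r[:order], r[1:order + 1])                # Yule-Walker AR coefficients
        sigma2 = r[0] - np.dot(a, r[1:order + 1])                    # driving-noise variance
        psd = []
        for f in freqs:
            denom = 1.0 - np.sum(a * np.exp(-2j * np.pi * f * np.arange(1, order + 1)))
            psd.append(sigma2 / np.abs(denom) ** 2)
        return np.array(psd)

    freqs = np.linspace(0.01, 0.5, 200)
    psd = yule_walker_psd(signal, order=10, freqs=freqs)
    print("PSD peak near f = 1/3:", round(freqs[np.argmax(psd)], 3))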

  7. Reliability Analysis of Component Software in Wireless Sensor Networks Based on Transformation of Testing Data

    Directory of Open Access Journals (Sweden)

    Chunyan Hou

    2009-08-01

    Full Text Available We develop an approach to component software reliability analysis that combines the benefits of both time-domain and structure-based approaches. This approach overcomes the deficiency of existing NHPP techniques, which fall short of addressing repair and internal system structures simultaneously. Our solution adopts a method of transformation of the testing data to cover both methods and is expected to improve reliability prediction. This paradigm allows the component-based software testing process not to meet the assumptions of NHPP models, and it accounts for software structures by modeling the testing process. According to the testing model, it builds the mapping relation from the testing profile to the operational profile, which enables the transformation of the testing data to build the reliability dataset required by NHPP models. Finally, an example is evaluated to validate and show the effectiveness of this approach.

  8. Dependent component analysis based approach to robust demarcation of skin tumors

    Science.gov (United States)

    Kopriva, Ivica; Peršin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2009-02-01

    A method for robust demarcation of the basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of the red-green-blue (RGB) fluorescent image of the BCC. It exploits spectral diversity between the BCC and the surrounding tissue. DCA represents an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by spectral similarity between the BCC and the surrounding tissue. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization and ICA, we experimentally demonstrate good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image has been varied by almost two orders of magnitude.

  9. Spectral network based on component cells under the SOPHIA European project

    Energy Technology Data Exchange (ETDEWEB)

    Núñez, Rubén, E-mail: ruben.nunez@ies-def.upm.es; Antón, Ignacio; Askins, Steve; Sala, Gabriel [Instituto de Energía Solar - Universidad Politécnica de Madrid, Ciudad Universitaria, 28040 Madrid (Spain); Domínguez, César; Voarino, Philippe [CEA-INES, 50 avenue du Lac Léman, 73375 Le Bourget-du-Lac (France); Steiner, Marc; Siefer, Gerald [Fraunhofer ISE, Heidenhofstr. 2, 79110 Freiburg (Germany); Fucci, Rafaelle; Roca, Franco [ENEA, P.le E.Fermi 1, Località Granatello, 80055 Portici (Italy); Minuto, Alessandro; Morabito, Paolo [RSE, Via Rubattino 54, 20134 Milan (Italy)

    2015-09-28

    Within the framework of the European project SOPHIA, a spectral network based on component (also called isotype) cells has been created. Among the members of this project, several spectral sensors based on component cells and collimating tubes, so-called spectroheliometers, were installed in recent years, allowing the collection of minute-resolution spectral data useful for CPV systems characterization across Europe. The use of spectroheliometers has proved useful to establish the necessary spectral conditions to perform power rating of CPV modules and systems. If enough data in a given period of time is collected, ideally a year, it is possible to characterize spectrally the place where measurements are taken, in the same way that hours of annual irradiation can be estimated using a pyrheliometer.

  10. Principal component analysis-based inversion of effective temperatures for late-type stars

    CERN Document Server

    Paletou, F; Houdebine, E R; Watson, V

    2015-01-01

    We show how the range of application of the principal component analysis-based inversion method of Paletou et al. (2015) can be extended to late-type star data. Besides being an extension of its original application domain, for FGK stars, we also used synthetic spectra for our learning database. We discuss our results on effective temperatures against previous evaluations made available from the Vizier and Simbad services at CDS.

  11. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiuvlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  12. [Qualitative analysis of chemical constituents in Si-Wu Decoction based on TCM component database].

    Science.gov (United States)

    Wang, Zhen-fang; Zhao, Yang; Fan, Zi-quan; Kang, Li-ping; Qiao, Li-rui; Zhang, Jie; Gao, Yue; Ma, Bai-ping

    2015-10-01

    In order to clarify the chemical constituents of Si-Wu Decoction rapidly and holistically, we analyzed the ethanol extract of Si-Wu Decoction by UPLC/Q-TOF-MSE and UNIFI, which is based on a traditional Chinese medicine component database; the probable structures of 113 compounds were identified. The results show that this method can rapidly and effectively characterize the chemical compounds of Si-Wu Decoction and provide a new solution for the identification of components from complex TCM extracts.

  13. Prototypic implementations of the building block for component based open Hypermedia systems (BB/CB-OHSs)

    DEFF Research Database (Denmark)

    Mohamed, Omer I. Eldai

    2005-01-01

    In this paper we describe the prototypic implementations of the BuildingBlock (BB/CB-OHSs) that was proposed to address some of the Component-based Open Hypermedia Systems (CB-OHSs) issues, including distribution and interoperability [4, 11, 12]. Four service implementations are described below. The....... These are the math service, navigational service, naming and location service and the storage service, in addition to two communication protocols (TCP/IP and JAVA RMI)....

  14. FORECASTING THE FINANCIAL RETURNS FOR USING MULTIPLE REGRESSION BASED ON PRINCIPAL COMPONENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    Nop Sopipan

    2013-01-01

    Full Text Available The aim of this study was to forecast the returns of the Stock Exchange of Thailand (SET) Index by adding some explanatory variables and a stationary autoregressive process of order p (AR(p)) to the mean equation of returns. In addition, we used Principal Component Analysis (PCA) to remove possible complications caused by multicollinearity. Results showed that the multiple regression based on PCA has the best performance.
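
    A minimal sketch of multiple regression based on PCA (principal component regression) is given below: correlated explanatory variables are first projected onto a few principal components, removing multicollinearity, and the regression is then fitted on the scores. Synthetic data stand in for the SET Index returns and predictors, and the component count is an assumption.

    # Sketch: principal component regression with an out-of-sample check.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)
    base = rng.normal(size=(300, 3))
    X = np.hstack([base, base[:, :2] + 0.05 * rng.normal(size=(300, 2))])  # collinear predictors
    y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)               # returns to forecast

    pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X[:200], y[:200])
    print("out-of-sample R^2:", round(pcr.score(X[200:], y[200:]), 3))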

  15. Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach

    OpenAIRE

    Bunse, C.; Gross, H.G.; Peper, C. (Claudia)

    2008-01-01

    Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quality improvement, also for embedded systems. Along a similar line of argumentation, component-based software engineering is advocated. In order to investigate these claims, the MARMOT method was applied...

  16. Analysis and Evaluating Security of Component-Based Software Development: A Security Metrics Framework

    Directory of Open Access Journals (Sweden)

    Irshad Ahmad Mir

    2012-10-01

    Full Text Available Evaluating the security of software systems is a complex problem for the research community because of the multifaceted and complex operational environments involved. Many efforts towards secure system development methodologies, such as Microsoft's secSDLC, have been made, but scales on which security can be measured have had little success. As the nature of software development shifts from standalone applications to distributed environments with many potential adversaries and threats, security must be specified and incorporated at the architectural level of the system, and there is a corresponding need to evaluate and measure the level of security achieved. In this paper we present a framework for security evaluation at the design and architectural phase of system development. We outline the security objectives based on the security requirements of the system and analyze the behavior of various software architecture styles. Component-based development (CBD) is an important and widely used model for developing new large-scale software, owing to benefits such as increased reuse and reduced time to market and cost. Our emphasis is therefore on CBD: we propose a framework for the security evaluation of component-based software designs and derive security metrics for the three main pillars of security, confidentiality, integrity and availability, based on component composition, dependency and inter-component data/information flow. The proposed framework and derived metrics are flexible enough that the system developer can adapt the metrics to the situation, and they are applicable both during the development phases and after development.

  17. A Component-Based Conference Control Model and Implementation for Loosely Coupled Sessions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Conference control is a core part of a complete Internet multimedia conferencing system and has been a hot research area over the years, but there are currently no widely accepted robust and scalable solutions or standards. This paper proposes a component-based conference control model for loosely coupled sessions in which media applications can collaborate with a Session Controller (SC) to provide conference control. An SC prototype has been built.

  18. Deformable known component model-based reconstruction for coronary CT angiography

    Science.gov (United States)

    Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.

    2017-03-01

    Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts present in the standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm⁻¹ with STF-dKCR, from 0.0166 mm⁻¹ with FBP and from 0.0301 mm⁻¹ with FBP-MAR, much closer to the expected 0.0414 mm⁻¹. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.

  19. Novel Ontologies-based Optical Character Recognition-error Correction Cooperating with Graph Component Extraction

    Directory of Open Access Journals (Sweden)

    Sarunya Kanjanawattana

    2017-01-01

    Full Text Available … literature. Extracting graph information clearly contributes to readers who are interested in graph information interpretation, because significant information presented in the graph can be obtained. A typical tool used to transform image-based characters into computer-editable characters is optical character recognition (OCR). Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This is a serious problem because misrecognition provides misleading information to readers and causes miscommunication. In this study, we present a novel method for OCR-error correction on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used the graph component extraction proposed in our previous study to omit irrelevant parts from graph components; it was applied to clean and prepare the input data for this OCR-error correction. The main objectives of this paper are to extract significant information from graphs using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance with the highest accuracies and F-measures. Moreover, we confirmed that our input data contained less noise because of the efficiency of our graph component extraction. Based on this evidence, we conclude that our solution to the OCR problem achieves its objectives.

  20. Selection of Component Codes for Turbo Coding Based on Convergence properties

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1999-01-01

    The turbo decoding is a sub-optimal decoding, i.e. it is not a maximum likelihood decoding. It is important to be aware of this fact when the parameters for the scheme are chosen. This goes especially for the selection of component codes, where the selection often has been based solely on the performance at high SNRs. We will show that it is important to base the choice on the performance at low SNRs, i.e. the convergence properties, as well. Further, the study of the performance with different component codes may lead to an understanding of the convergence process in the turbo codes.

  1. Models and frameworks: a synergistic association for developing component-based applications.

    Science.gov (United States)

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies, one using an existing framework and one using an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  2. Nonlinear Statistical Process Monitoring Based on Control Charts with Memory Effect and Kernel Independent Component Analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A novel nonlinear combined process monitoring method was proposed based on a technique with memory effect (multivariate exponentially weighted moving average (MEWMA)) and kernel independent component analysis (KICA). The method was developed for dealing with nonlinear issues and detecting small or moderate drifts in one or more process variables with autocorrelation. MEWMA charts use additional information from the past history of the process to retain the memory effect of the process behavior trend. KICA is a recently developed statistical technique for revealing hidden, nonlinear, statistically independent factors that underlie sets of measurements, and it is a two-phase algorithm: whitened kernel principal component analysis (KPCA) plus independent component analysis (ICA). The application to a simulated fluid catalytic cracking unit (FCCU) process indicates that the proposed combined method based on MEWMA and KICA can effectively capture the nonlinear relationship and detect small drifts in process variables. Its performance significantly outperforms monitoring methods based on ICA, MEWMA-ICA and KICA, especially for long-term performance deterioration.
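
    The memory-effect part of the scheme described above is the MEWMA statistic. The snippet below is a generic, textbook-style MEWMA computation on whitened scores (for example, the outputs of a kernel ICA stage); it is not the paper's implementation, and the data are synthetic.

    ```python
    # Sketch of a multivariate EWMA (MEWMA) T^2 statistic with the usual
    # time-dependent covariance factor lam*(1-(1-lam)^(2t))/(2-lam).
    import numpy as np

    def mewma_t2(X, lam=0.2):
        """Return the MEWMA T^2 statistic for each row of X (n x p)."""
        n, p = X.shape
        sigma = np.cov(X, rowvar=False)
        z = np.zeros(p)
        t2 = np.empty(n)
        for t in range(n):
            z = lam * X[t] + (1.0 - lam) * z
            factor = lam * (1.0 - (1.0 - lam) ** (2 * (t + 1))) / (2.0 - lam)
            t2[t] = z @ np.linalg.solve(factor * sigma, z)
        return t2

    scores = np.random.default_rng(2).normal(size=(200, 3))
    print(mewma_t2(scores)[:5])
    ```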

  3. MLAA-based attenuation correction of flexible hardware components in hybrid PET/MR imaging.

    Science.gov (United States)

    Heußer, Thorsten; Rank, Christopher M; Berker, Yannick; Freitag, Martin T; Kachelrieß, Marc

    2017-12-01

    Accurate PET quantification demands attenuation correction (AC) for both patient and hardware attenuation of the 511 keV annihilation photons. In hybrid PET/MR imaging, AC for stationary hardware components such as patient table and MR head coil is straightforward, employing CT-derived attenuation templates. AC for flexible hardware components such as MR-safe headphones and MR radiofrequency (RF) surface coils is more challenging. Registration-based approaches, aligning CT-based attenuation templates with the current patient position, have been proposed but are not used in clinical routine. Ignoring headphone or RF coil attenuation has been shown to result in regional activity underestimation values of up to 18%. We propose to employ the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm to estimate the attenuation of flexible hardware components. Starting with an initial attenuation map not including flexible hardware components, the attenuation update of MLAA is applied outside the body outline only, allowing to estimate hardware attenuation without modifying the patient attenuation map. Appropriate prior expectations on the attenuation coefficients are incorporated into MLAA. The proposed method is investigated for non-TOF PET phantom and (18)F-FDG patient data acquired with a clinical PET/MR device, using headphones or RF surface coils as flexible hardware components. Although MLAA cannot recover the exact physical shape of the hardware attenuation maps, the overall attenuation of the hardware components is accurately estimated. Therefore, the proposed algorithm significantly improves PET quantification. Using the phantom data, local activity underestimation when neglecting hardware attenuation was reduced from up to 25% to less than 3% under- or overestimation as compared to reference scans without hardware present or to CT-derived AC. For the patient data, we found an average activity underestimation of 7.9% evaluated in the full

  4. Minor Component Analysis-based Landing Forecast System for Ship-borne Helicopter

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bo; SHI Ai-guo; WAN Lin; YANG Bao-zhang

    2005-01-01

    The general structure of a ship-borne helicopter landing forecast system is presented, and a novel ship motion prediction model based on minor component analysis (MCA) is built to improve forecast effectiveness. To validate the feasibility of this landing forecast system, time series of roll, pitch and heave are generated by simulation and then forecast based on MCA. Simulation results show that, with the landing forecast system, ship-borne helicopters can land safely in higher sea conditions while carrying out rescue or replenishment tasks at sea.
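
    For readers unfamiliar with MCA, its core operation is extracting the direction of smallest variance, i.e. the eigenvector of the smallest eigenvalue of the data covariance. The toy snippet below shows only that operation on synthetic data, not the paper's ship-motion prediction model.

    ```python
    # Illustrative sketch: the "minor component" is the eigenvector associated with
    # the smallest eigenvalue of the covariance matrix; the data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.1])
    cov = np.cov(X, rowvar=False)

    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    minor_component = eigvecs[:, 0]          # direction of smallest variance
    print("smallest eigenvalue:", eigvals[0])
    print("minor component:", minor_component)
    ```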

  5. Spectral characterisation of colour printer based on a novel grey component replacement method

    Institute of Scientific and Technical Information of China (English)

    Jinyi Guo; Haisong Xu; M.Ronnier Luo; Binyu Wang

    2011-01-01

    Conventional printer characterisation models are generally based on the assumption that the densities of primary colours are additive. However, additivity failure frequently occurs in practice. In this letter we propose a novel grey component replacement (GCR) method based on spectral density sub-additivity equations for the spectral characterisation of a 4-ink colour printer. The method effectively corrects for additivity failure; experiments were carried out to verify the feasibility of the proposed method and to evaluate the model performance. Finally, a GCR model for characterising the colour printer with high spectral and colorimetric prediction accuracy is established.

  6. Non-negative matrix factorization based unmixing for principal component transformed hyperspectral data

    Institute of Scientific and Technical Information of China (English)

    Xiu-rui GENG; Lu-yan JI; Kang SUN

    2016-01-01

    Non-negative matrix factorization (NMF) has been widely used in mixture analysis for hyperspectral remote sensing. When used for spectral unmixing analysis, however, it has two main shortcomings: (1) since the dimensionality of hyperspectral data is usually very large, NMF tends to suffer from large computational complexity for the popular multiplicative iteration rule; (2) NMF is sensitive to noise (outliers), and thus the corrupted data will make the results of NMF meaningless. Although principal component analysis (PCA) can be used to mitigate these two problems, the transformed data will contain negative numbers, hindering the direct use of the multiplicative iteration rule of NMF. In this paper, we analyze the impact of PCA on NMF, and find that multiplicative NMF can also be applicable to data after principal component transformation. Based on this conclusion, we present a method to perform NMF in the principal component space, named ‘principal component NMF’ (PCNMF). Experimental results show that PCNMF is both accurate and time-saving.

  7. GOMMA: a component-based infrastructure for managing and analyzing life science ontologies and their evolution

    Science.gov (United States)

    2011-01-01

    Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205

  8. GOMMA: a component-based infrastructure for managing and analyzing life science ontologies and their evolution

    Directory of Open Access Journals (Sweden)

    Kirsten Toralf

    2011-09-01

    Full Text Available Abstract Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches.

  9. Identifying and Analyzing Strong Components of an Industrial Network Based on Cycle Degree

    Directory of Open Access Journals (Sweden)

    Zhiying Zhang

    2016-01-01

    Full Text Available In the era of big data and cloud computing, data research focuses not only on describing the individual characteristics but also on depicting the relationships among individuals. Studying dependence and constraint relationships among industries has aroused significant interest in the academic field. From the network perspective, this paper tries to analyze industrial relational structures based on cycle degree. The cycle degree of a vertex, that is, the number of cycles through a vertex in an industrial network, can describe the roles of the vertices of strong components in industrial circulation. In most cases, different vertices in a strong component have different cycle degrees, and the one with a larger cycle degree plays more important roles. However, the concept of cycle degree does not involve the lengths of the cycles, which are also important for circulations. The more indirect the relationship between two industries is, the weaker it is. In order to analyze strong components thoroughly, this paper proposes the concept of circular centrality taking into consideration the influence by two factors: the lengths and the numbers of cycles through a vertex. Exemplification indicates that a profound analysis of strong components in an industrial network can reveal the features of an economy.
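
    The cycle-degree idea in the record above can be prototyped directly with a graph library. The sketch below, which assumes networkx is available, counts simple cycles through each vertex of a toy directed industry network; the industries named are hypothetical.

    ```python
    # Sketch: strong components and cycle degree (number of simple cycles through
    # a vertex) on a toy directed industry network.
    import networkx as nx

    G = nx.DiGraph([("steel", "machinery"), ("machinery", "mining"),
                    ("mining", "steel"), ("machinery", "chemicals"),
                    ("chemicals", "machinery"), ("steel", "construction")])

    # Strong components group industries linked by mutual (circular) dependence.
    print(list(nx.strongly_connected_components(G)))

    cycle_degree = {v: 0 for v in G}
    for cycle in nx.simple_cycles(G):
        for v in cycle:
            cycle_degree[v] += 1
    print(cycle_degree)
    ```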

  10. Scientific bases of biomass processing into basic component of aviation fuel

    Science.gov (United States)

    Kachalov, V. V.; Lavrenov, V. A.; Lishchiner, I. I.; Malova, O. V.; Tarasov, A. L.; Zaichenko, V. M.

    2016-11-01

    A combination of feedstock pyrolysis and cracking of the volatile pyrolysis products on charcoal at 1000 °C makes it possible to obtain a tar-free synthesis gas which contains 90 vol% or more of carbon monoxide and hydrogen in approximately equal proportions. The basic component of aviation fuel was synthesized in a two-stage process from gas obtained by pyrolytic processing of biomass. Methanol and dimethyl ether can be efficiently produced over a two-layer loading of a methanol-synthesis catalyst and γ-Al2O3. The total conversion of CO per pass was 38.2% when using, for the synthesis of oxygenates, a synthesis gas with an unfavourable ratio of H2/CO = 0.96. The conversion of CO to CH3OH was 15.3% and the conversion of CO to dimethyl ether was 20.9%. A high yield of the basic component per mass of oxygenates (44.6%) was obtained during conversion. High selectivity of the synthesis process for liquid hydrocarbons was observed. An optimal recipe for aviation fuel B-92 based on the synthesized basic component was developed. The prototype aviation fuel meets the requirements for B-92 when straight-run fractions of 50-100 °C (up to 35 wt%), isooctane (up to 10 wt%) and ethyl fluid (2.0 g/kg calculated as tetraethyl lead) are added to the basic component.

  11. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretical model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and a clear advantage over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  12. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    Full Text Available In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of the measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combined with a nonlinear vehicle model of three degrees of freedom (3 DOF: longitudinal, lateral, and yaw motion), this paper applies the method to the soft sensing of the vehicle states. The simulation results of the double lane change, tested by Matlab/SIMULINK cosimulation, prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear region. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.
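
    A rough sketch of the soft-sensor pipeline described above is given below; sklearn's MLPRegressor stands in for the improved Elman network, and the signals are synthetic placeholders rather than real vehicle measurements.

    ```python
    # Hedged sketch: kernel PCA feature extraction feeding a neural-network
    # regressor used as a soft sensor for a hard-to-measure state.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 10))      # measurable signals (steering, accelerations, ...)
    y = np.tanh(X[:, 0] * X[:, 1]) + 0.05 * rng.normal(size=500)  # hidden state, e.g. sideslip

    soft_sensor = make_pipeline(
        KernelPCA(n_components=6, kernel="rbf"),
        MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
    soft_sensor.fit(X, y)
    print("training R^2:", soft_sensor.score(X, y))
    ```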

  13. A blind separation method of overlapped multi-components based on time varying AR model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A method utilizing single-channel recordings to blindly separate multiple components overlapped in the time and frequency domains is proposed in this paper. Based on the time-varying AR model, the instantaneous frequency and amplitude of each signal component are estimated, respectively, and thus signal component separation is achieved. By using prolate spheroidal sequences as basis functions to expand the time-varying parameters of the AR model, the method turns the problem of linear time-varying parameter estimation into a linear time-invariant parameter estimation problem; the parameters are then estimated by a recursive algorithm. The computation of this method is simple, and no prior knowledge of the signals is needed. Simulation results demonstrate the validity and excellent performance of this method.

  14. Analysis of Active Components in Salvia Miltiorrhiza Injection Based on Vascular Endothelial Cell Protection

    Directory of Open Access Journals (Sweden)

    Shen Jie

    2014-09-01

    Full Text Available Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurement of antioxidant properties were used to describe the active ingredients of Salvia miltiorrhiza injection (SMI. HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of serum powder of SMI caused differences in vascular endothelial cell protection. When bivariate correlation was carried out it was found that salvianolic acid B, tanshinol and protocatechuic aldehyde were active components of SMI because they were correlated to antioxidant properties.

  15. Physics-Based Design Tools for Lightweight Ceramic Composite Turbine Components with Durable Microstructures

    Science.gov (United States)

    DiCarlo, James A.

    2011-01-01

    Under the Supersonics Project of the NASA Fundamental Aeronautics Program, modeling and experimental efforts are underway to develop generic physics-based tools to better implement lightweight ceramic matrix composites into supersonic engine components and to assure sufficient durability for these components in the engine environment. These activities, which have a crosscutting aspect for other areas of the Fundamental Aero program, are focusing primarily on improving the multi-directional design strength and rupture strength of high-performance SiC/SiC composites by advanced fiber architecture design. This presentation discusses progress in tool development with particular focus on the use of 2.5D-woven architectures and state-of-the-art constituents for a generic un-cooled SiC/SiC low-pressure turbine blade.

  16. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    Science.gov (United States)

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations.

  17. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and uses the function-approximation capability of the neural network to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective, accurate, indirect productivity forecast method and is suitable for field application.

  18. Analysis of active components in Salvia miltiorrhiza injection based on vascular endothelial cell protection.

    Science.gov (United States)

    Shen, Jie; Yang, Kai; Sun, Caihua; Zheng, Minxia

    2014-09-01

    Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurement of antioxidant properties were used to describe the active ingredients of Salvia miltiorrhiza injection (SMI). HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of serum powder of SMI caused differences in vascular endothelial cell protection. When bivariate correlation was carried out it was found that salvianolic acid B, tanshinol and protocatechuic aldehyde were active components of SMI because they were correlated to antioxidant properties.

  19. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    Science.gov (United States)

    Volkov, V. A.; Gordeev, D. A.; Ivanov, S. I.; Lavrov, A. P.; Saenko, I. I.

    2016-08-01

    The model of a photonic beamformer for a wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model, based on the true-time-delay technique, DWDM technology and fiber chromatic dispersion, are briefly analyzed. The performance characteristics of the key components of the photonic beamformer for a phased array antenna in receive mode are examined. A beamformer model composed of components available on the market for fiber-optic analog communication links was designed and preliminarily investigated. Experimental demonstration of the model's beamforming features includes measurement of 5-element microwave linear array antenna far-field patterns in the 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good agreement with the calculated estimates.

  20. Mapping of Core Components Based e-Business Standards into Ontology

    Science.gov (United States)

    Magdalenić, Ivan; Vrdoljak, Boris; Schatten, Markus

    A mapping of Core Components specification based e-business standards to an ontology is presented. The Web Ontology Language (OWL) is used for ontology development. In order to preserve the existing hierarchy of the standards, an emphasis is put on the mapping of Core Components elements to specific constructs in OWL. The main purpose of developing an e-business standards' ontology is to create a foundation for an automated mapping system that would be able to convert concepts from various standards in an independent fashion. The practical applicability and verification of the presented mappings is tested on the mapping of Universal Business Language version 2.0 and Cross Industry Invoice version 2.0 to OWL.

  1. Fabrication of directional solidification components of nickel-base superalloys by laser metal forming

    Institute of Scientific and Technical Information of China (English)

    Liping Feng; Weidong Huang; Darong Chen; Xin Lin; Haiou Yang

    2004-01-01

    Straight plates, hollow columns, ear-like blade tips, and twisted plates with a directional solidification microstructure, made of Rene 95 superalloy, were successfully fabricated on nickel-base superalloy and DD3 substrates. The processing conditions for producing parts with the corresponding shapes were obtained. The fabrication precision was high and the components were compact. The solidification microstructure of the parts was analyzed by optical microscopy. The results show that the solidification microstructure is composed of columnar dendrites formed by epitaxial growth on the directionally solidified substrates. The crystallographic orientation of the parts was parallel to that of the substrates. The primary arm spacing was about 10 μm, which is in the range of superfine dendrites, and the secondary arms were small or even degenerate. It is concluded that the laser metal forming technique provides a method to manufacture directionally solidified components.

  2. Multi-component LFM signal detection and parameter estimation based on Radon-HHT

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A new method is proposed to analyze multi-component linear frequency modulated (LFM) signals, which eliminates cross terms in conventional Wigner-Ville distribution (WVD). The approach is based on Radon transform and Hilbert-Huang transform (HHT), which is a recently developed method adaptive to non-linear and non-stationary signals. The complicated signal is decomposed into several intrinsic mode functions (IMF) by the empirical mode decomposition (EMD), which makes the consequent instantaneous frequency meaningful. After the instantaneous frequency and Hilbert spectrum are computed, multi-component LFM signals detection and parameter estimation are obtained using Radon transform on the Hilbert spectrum plane. The simulation results show its feasibility and effectiveness.

  3. Research on multi-component gas optical detection system based on conjugated interferometer

    Science.gov (United States)

    Gui, Xin; Tong, Yuheng; Wang, Honghai; Yu, Haihu; Li, Zhengying

    2017-09-01

    An optical multi-component gas detection system based on a conjugated interferometer (CI) is proposed and experimentally demonstrated. It can detect the concentrations of mixed gases in the environment. The CI transforms the absorption spectrum of the target gases into a conjugated emission spectrum; when the CI is combined with a broadband light source, the spectrum of the output light matches well with the absorption spectrum of the target gases. The CI design for different target gases can be achieved by replacing the target absorbing gas in the CI filter. Traditional fiber gas sensor systems require multiple light sources to detect several kinds of gases; this problem is solved by using the CI filter combined with the broadband light source. The experimental results show that the system can detect the concentration of multi-component gases, demonstrated here with a mixture of C2H2 and NH3. The experimental results also show good linearity in concentration sensing.

  4. Three-Component Power Decomposition for Polarimetric SAR Data Based on Adaptive Volume Scatter Modeling

    Directory of Open Access Journals (Sweden)

    Sang-Eun Park

    2012-05-01

    Full Text Available In this paper, a three-component power decomposition for polarimetric SAR (PolSAR) data with an adaptive volume scattering model is proposed. The volume scattering model is assumed to be reflection-symmetric but parameterized. For each image pixel, the decomposition first determines the adaptive parameter based on a matrix similarity metric. Then, the respective scattering power components are retrieved with the established procedure. It has been shown that the proposed method leads to complete elimination of negative powers as a result of the adaptive volume scattering model. Experiments with PolSAR data from both the NASA/JPL (National Aeronautics and Space Administration/Jet Propulsion Laboratory) Airborne SAR (AIRSAR) and the JAXA (Japan Aerospace Exploration Agency) ALOS-PALSAR also demonstrate that the proposed method not only obtains similar or better results in vegetated areas compared to the existing Freeman-Durden decomposition but also helps to improve discrimination of urban regions.

  5. Framework based on MDA and ontology for the representation and validation of components model

    Directory of Open Access Journals (Sweden)

    Nemury Silega-Martínez

    2014-05-01

    Full Text Available Model Driven Architecture is one of the most prominent proposals in the area of software development, accepted by both the research community and the software development industry. Moreover, recent years have shown the potential of ontologies for representing a particular domain; the results in the semantic web are an example of this. In this paper we present a proposal based on the Model Driven Architecture paradigm, complemented with an ontology, to represent and validate component models. The component model is restricted to the development of business management systems, so it includes concepts from that domain. The use of the framework will reduce the number of errors made during the development of the system architecture and will increase standardization and productivity at this stage.

  6. Multi-Level, Multi-Component Approaches to Community Based Interventions for Healthy Living

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Novotny, Rachel; Gittelsohn, Joel

    2016-01-01

    There is increasing interest in integrated and coordinated programs that intervene in multiple community settings/institutions at the same time and involve policy and system changes. The purpose of the paper is to analyse three comparable cases of Multi Level, Multi Component intervention programs … the potential of ML-MC community-based public health nutrition interventions to create sustainable change. The paper proposes methodology, guidelines and directions for future research through analysis and examination of strengths and weaknesses in the programs. Similarities are that they engage and commit local … stakeholders in a structured approach to integrate intervention components in order to create dose and intensity. In that way, they all make provisions for post-intervention impact sustainability. All programs target the child and family members' knowledge, attitudes and behavior, and the policy level…

  7. FMEA Based Risk Assessment of Component Failure Modes in Industrial Radiography

    CERN Document Server

    Pandey, Alok; Sonawane, A U; Rawat, Prashant S

    2016-01-01

    Industrial radiography has an inimitable role in non-destructive examination. Industrial radiography devices, containing radioisotopes of significantly high activity, are operated manually by a remotely held control unit. Malfunctioning of these devices may cause potential exposure to the operator and nearby public, and radiography should therefore be practiced under systematic risk control. To ensure radiation safety, proactive risk assessment should be implemented. In this study, risk assessment in industrial radiography using the Failure Modes & Effect Analysis (FMEA) was carried out for the design and operation of industrial radiography exposure devices. In total, 56 component failure modes were identified and Risk Priority Numbers (RPNs) were assigned by the FMEA expert team, based on field experience and reported failure data for the various components. Results show that all the identified failure modes have RPNs in the range of 4 to 216, and most of the higher RPNs are due to low detectability and high severity...
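
    The RPN bookkeeping behind such an FMEA is simple enough to show in a few lines. The sketch below assumes the conventional definition RPN = severity × occurrence × detectability on 1-10 scales; the failure modes listed are illustrative, not the study's actual 56 modes.

    ```python
    # Minimal sketch of FMEA Risk Priority Number ranking (RPN = S x O x D).
    failure_modes = [
        # (description, severity, occurrence, detectability), all on 1-10 scales
        ("drive cable wear", 7, 4, 3),
        ("source assembly disconnection", 9, 2, 6),
        ("guide tube crushing", 6, 3, 2),
    ]

    ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes), reverse=True)
    for rpn, name in ranked:
        print(f"RPN {rpn:3d}  {name}")
    ```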

  8. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    Science.gov (United States)

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using the statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each time this produced three factors that contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework underlying the Bobath approach to treatment.

  9. A component-based FPGA design framework for neuronal ion channel dynamics simulations.

    Science.gov (United States)

    Mak, Terrence S T; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2006-12-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field-programmable gate array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) and N-methyl-D-aspartate (NMDA) synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution, as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired.

  10. A connected component-based method for efficiently integrating multiscale N-body systems

    CERN Document Server

    Jänes, Jürgen; Zwart, Simon F Portegies

    2014-01-01

    We present a novel method for efficient direct integration of gravitational N-body systems with a large variation in characteristic time scales. The method recursively and adaptively partitions the system according to the connected components of the graph generated by the particle distribution, combined with an interaction-specific time step criterion. It uses an explicit and approximately time-symmetric time step criterion, and conserves linear and angular momentum to machine precision. In numerical tests on astrophysically relevant setups, the method compares favourably to both alternative Hamiltonian-splitting integrators and recently developed block-time-step-based GPU-accelerated Hermite codes. Our reference implementation is incorporated in the HUAYNO code, which is freely available as part of the AMUSE framework.
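
    The grouping step that the record above builds on can be illustrated with a toy snippet: connect particles closer than a cutoff and treat each connected component as its own integration group. This shows only the partitioning idea, not the HUAYNO scheme itself, and it assumes networkx is available.

    ```python
    # Sketch: partition particles into connected components of a proximity graph.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    pos = rng.uniform(0.0, 10.0, size=(30, 3))   # particle positions (arbitrary units)
    cutoff = 1.5                                 # illustrative interaction cutoff

    G = nx.Graph()
    G.add_nodes_from(range(len(pos)))
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                G.add_edge(i, j)

    groups = [sorted(c) for c in nx.connected_components(G)]
    print(f"{len(groups)} groups; sizes: {[len(g) for g in groups]}")
    ```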

  11. A Distributed Web GIS Application Based on Component Technology and Fractal Image Compression

    Institute of Scientific and Technical Information of China (English)

    HAN Jie

    2006-01-01

    Geographic information system (GIS) technology combines computer graphics and databases to store and process spatial information. According to users' demands, GIS delivers exact geographic information and related information as maps and descriptions by associating geographic places with related attributes. Based on existing popular technology, this paper presents a distributed web GIS application based on component technology and fractal image compression. It first presents the basic framework of the proposed system, then discusses the key technologies for implementing the system, and finally designs a three-layer web GIS instance using VC++ ATL based on Geo Beans. The example suggests the proposed design is correct, feasible and valid.

  12. A Stock Market Prediction Method Based on Support Vector Machines (SVM) and Independent Component Analysis (ICA)

    Directory of Open Access Journals (Sweden)

    Hakob GRIGORYAN

    2016-08-01

    Full Text Available The research presented in this work focuses on the financial time series prediction problem. An integrated prediction model based on support vector machines (SVM) with independent component analysis (ICA), called SVM-ICA, is proposed for stock market prediction. The presented approach first uses the ICA technique to extract important features from the research data, and then applies the SVM technique to perform time series prediction. The results obtained from the SVM-ICA technique are compared with the results of an SVM-based model without any pre-processing step. In order to show the effectiveness of the proposed methodology, two different research datasets are used as illustrative examples. In the experiments, the root mean square error (RMSE) measure is used to evaluate the performance of the proposed models. The comparative analysis leads to the conclusion that the proposed SVM-ICA model outperforms the simple SVM-based model in the forecasting task for nonstationary time series.
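
    The SVM-ICA pipeline can be sketched with standard tools: FastICA extracts independent features from lagged inputs and an SVR predicts the next value, with RMSE as the evaluation measure used in the paper. The series and hyperparameters below are synthetic and illustrative, not the paper's data or settings.

    ```python
    # Hedged sketch of an ICA-then-SVM regression pipeline evaluated by RMSE.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(6)
    series = np.cumsum(rng.normal(size=600))     # synthetic price-like series
    lags = 5
    X = np.column_stack([series[i:i - lags] for i in range(lags)])  # lagged inputs
    y = series[lags:]
    split = 500

    model = make_pipeline(FastICA(n_components=3, random_state=0), SVR(C=10.0))
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("test RMSE:", np.sqrt(mean_squared_error(y[split:], pred)))
    ```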

  13. Staphyloferrin A as siderophore-component in fluoroquinolone-based Trojan horse antibiotics.

    Science.gov (United States)

    Milner, Stephen J; Seve, Alexandra; Snelling, Anna M; Thomas, Gavin H; Kerr, Kevin G; Routledge, Anne; Duhme-Klair, Anne-Kathrin

    2013-06-07

    A series of fluoroquinolone conjugates was synthesised by linking the carboxylic acid functionality of the carboxylate-type siderophore staphyloferrin A and its derivatives to the piperazinyl nitrogen of ciprofloxacin and norfloxacin via amide bond formation. Four siderophore-drug conjugates were screened against a panel of bacteria associated with infection in humans. Whilst no activity was found against ciprofloxacin- or norfloxacin-resistant bacteria, one of the conjugates retained antibacterial activity against fluoroquinolone-susceptible strains although the structure of its lysine-based siderophore component differs from that of the natural siderophore staphyloferrin A. In contrast, three ornithine-based siderophore conjugates showed significantly reduced activity against strains that are susceptible to their respective parent fluoroquinolones, regardless of the type of fluoroquinolone attached or chirality at the ornithine Cα-atom. The loss of potency observed for the (R)- and (S)-ornithine-based ciprofloxacin conjugates correlates with their reduced inhibitory activity against the target enzyme DNA gyrase.

  14. MULTI-VIEW FACE DETECTION BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS AND KERNEL SUPPORT VECTOR TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2011-05-01

    Full Text Available Detecting faces across multiple views is more challenging than in a frontal view. To address this problem, an efficient kernel-machine-based approach is presented in this paper for learning such nonlinear mappings to provide an effective view-based representation for multi-view face detection. In this paper, Kernel Principal Component Analysis (KPCA) is used to project data into the view subspaces, which are then computed as view-based features. Multi-view face detection is performed by classifying each input image into the face or non-face class using a two-class Kernel Support Vector Classifier (KSVC). Experimental results demonstrate successful face detection over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.

  15. Ferroelectromagnetic solid solutions on the base piezoelectric ceramic materials for components of micromechatronics

    Science.gov (United States)

    Bochenek, Dariusz; Zachariasz, Radosław; Niemiec, Przemysław; Ilczuk, Jan; Bartkowska, Joanna; Brzezińska, Dagmara

    2016-10-01

    In the presented work, ferroelectromagnetic solid solutions based on PZT and ferrite powders have been obtained. The main aim of combining ferroelectric and magnetic powders was to obtain a material showing both electric and magnetic properties. The ferroelectric ceramic powder (in an amount of 90%) was based on a doped PZT-type solid solution, while the magnetic component was the nickel-zinc ferrite Ni1-xZnxFe2O4 (in an amount of 10%). The synthesis of the components of the ferroelectromagnetic solid solutions was performed using solid-phase sintering. Final densification of the synthesized powder was achieved by free sintering. The aim of the work was to obtain and examine, first, multicomponent PZT-type ceramics admixed with chromium, with the chemical composition Pb0.94Sr0.06(Zr0.46Ti0.54)O3+0.25 at% Cr2O3, and, next, a ferroelectromagnetic solid solution based on the PZT-type ferroelectric powder (Pb0.94Sr0.06(Zr0.46Ti0.54)O3+0.25 at% Cr2O3) and nickel-zinc ferrite (Ni0.64Zn0.36Fe2O4), from the point of view of their mechanical and electric properties, such as: electric permittivity, ε; dielectric loss, tanδ; mechanical losses, Q-1; and Young's modulus, E.

  16. Power Transformer Differential Protection Based on Neural Network Principal Component Analysis, Harmonic Restraint and Park's Plots

    Directory of Open Access Journals (Sweden)

    Manoj Tripathy

    2012-01-01

    Full Text Available This paper describes a new approach for power transformer differential protection which is based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA with back-propagation learning is proposed for digital differential protection of power transformer. The principal component analysis is used to preprocess the data from power system in order to eliminate redundant information and enhance hidden pattern of differential current to discriminate between internal faults from inrush and overexcitation conditions. This algorithm has been developed by considering optimal number of neurons in hidden layer and optimal number of neurons at output layer. The proposed algorithm makes use of ratio of voltage to frequency and amplitude of differential current for transformer operating condition detection. This paper presents a comparative study of power transformer differential protection algorithms based on harmonic restraint method, NNPCA, feed forward back propagation neural network (FFBPNN, space vector analysis of the differential signal, and their time characteristic shapes in Park’s plane. The algorithms are compared as to their speed of response, computational burden, and the capability to distinguish between a magnetizing inrush and power transformer internal fault. The mathematical basis for each algorithm is briefly described. All the algorithms are evaluated using simulation performed with PSCAD/EMTDC and MATLAB.

  17. Medical Image Segmentation Using Independent Component Analysis-Based Kernelized Fuzzy c-Means Clustering

    Directory of Open Access Journals (Sweden)

    Yao-Tien Chen

    2017-01-01

    Full Text Available Segmentation of brain tissues is an important but inherently challenging task in that different brain tissues have similar grayscale values and the intensity of a brain tissue may be confused with that of another one. The paper accordingly develops an ICKFCM method based on kernelized fuzzy c-means clustering with ICA analysis for extracting regions of interest in MRI brain images. The proposed method first removes the skull region using a skull stripping algorithm. Through ICA, three independent components are then extracted from multimodal medical images containing T1-weighted, T2-weighted, and PD-weighted MRI images. As MRI signals can be regarded as a combination of the signals from brain matters, ICA can be used for contrast enhancement of MRI images. Finally, the three independent components are utilized as inputs by KFCM algorithm to extract different brain tissues. Relying on the decomposition of a multivariate signal into independent non-Gaussian components and using a more appropriate kernel-induced distance for fuzzy clustering, the proposed method is capable of achieving greater reliability in both theory and practice than other segmentation approaches. According to the experiment results, the proposed method is capable of accurately extracting the complicated shapes of brain tissues and still remaining robust against various types of noises.

  18. Dynamic Reliability Analysis Method of Degraded Mechanical Components Based on Process Probability Density Function of Stress

    Directory of Open Access Journals (Sweden)

    Peng Gao

    2014-01-01

    Full Text Available It is necessary to develop dynamic reliability models when considering the strength degradation of mechanical components. The instant probability density function (IPDF) of stress and the process probability density function (PPDF) of stress, which are obtained via different statistical methods, are defined, respectively. In practical engineering, the probability density function (PDF) used for mechanical components is mostly the PPDF, such as the PDF acquired via the rain-flow counting method. For convenience of application, the IPDF is often approximated by the PPDF when using existing dynamic reliability models. However, this approximation of the IPDF by the PPDF may cause errors in the reliability calculation. Therefore, dynamic reliability models directly based on the PPDF of stress are developed in this paper. Furthermore, the proposed models can be used for reliability assessment in the case of a small number of stress process samples by employing fuzzy set theory. In addition, the mechanical components in the solar array of satellites are chosen as representative examples to illustrate the proposed models. The results show that errors are caused by the approximation of the IPDF by the PPDF and that the proposed models are accurate in the reliability computation.
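
    The stress-strength flavour of such models can be illustrated with a small Monte Carlo sketch in which the stress PDF is represented empirically by process samples (for instance, from rain-flow counting) and the mean strength degrades with time. This is only an illustration of the general idea, not the paper's model, and all numbers are made up.

    ```python
    # Illustrative stress-strength reliability with degrading strength:
    # R(t) = P(strength(t) > stress), stress drawn from empirical process samples.
    import numpy as np

    rng = np.random.default_rng(7)
    stress_samples = rng.gamma(shape=5.0, scale=40.0, size=10_000)  # "process" stress [MPa]

    def reliability(t_years, s0=450.0, degradation=8.0, scatter=20.0, n=10_000):
        strength = rng.normal(s0 - degradation * t_years, scatter, size=n)
        stress = rng.choice(stress_samples, size=n, replace=True)
        return np.mean(strength > stress)

    for t in (0, 5, 10, 15):
        print(f"t = {t:2d} yr  R ~ {reliability(t):.3f}")
    ```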

  19. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. There exist many methods for forecasting the trend of the crude oil price; however, traditional models often fail to predict it accurately. To address this, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict the future crude oil price. The major steps are as follows. Firstly, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA can forecast the crude oil price more accurately.
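
    Only the ICA and ARIMA stages of the hybrid are sketched below; the VMD step is assumed to have produced the mode matrix (here a crude placeholder), since VMD is not part of the standard scientific Python stack. The series is synthetic, not actual crude oil prices.

    ```python
    # Partial sketch of the ICA + ARIMA stages of a VMD-ICA-ARIMA-style pipeline.
    import numpy as np
    from sklearn.decomposition import FastICA
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)
    price = np.cumsum(rng.normal(0.0, 1.0, 400)) + 60.0              # synthetic "price"
    modes = np.column_stack([price, np.sin(np.arange(400) / 20.0)])  # placeholder for VMD modes

    components = FastICA(n_components=2, random_state=0).fit_transform(modes)
    # `components` would be inspected to analyse independent drivers of the price.

    fit = ARIMA(price, order=(1, 1, 1)).fit()
    print(fit.forecast(steps=10))   # 10-step-ahead forecast
    ```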

  20. A configurable component-based software system for magnetic field measurements

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J.M.; DiMarco, J.; Kotelnikov, S.; Trombly-Freytag, K.; Walbridge, D.; Tartaglia, M.; /Fermilab

    2005-09-01

    A new software system to test accelerator magnets has been developed at Fermilab. The magnetic measurement technique involved employs a single stretched wire to measure alignment parameters and magnetic field strength. The software for the system is built on top of a flexible component-based framework, which allows for easy reconfiguration and runtime modification. Various user interface, data acquisition, analysis, and data persistence components can be configured to form different measurement systems that are tailored to specific requirements (e.g., involving magnet type or test stand). The system can also be configured with various measurement sequences or tests, each of them controlled by a dedicated script. It is capable of working interactively as well as executing a preselected sequence of tests. Each test can be parameterized to fit the specific magnet type or test stand requirements. The system has been designed with portability in mind and is capable of working on various platforms, such as Linux, Solaris, and Windows. It can be configured to use a local data acquisition subsystem or a remote data acquisition computer, such as a VME processor running VxWorks. All hardware-oriented components have been developed with a simulation option that allows for running and testing measurements in the absence of data acquisition hardware.

  1. Multi-component vertical profile retrievals for ground-based MAX-DOAS

    Science.gov (United States)

    Irie, Hitoshi; Kanaya, Yugo; Takashima, Hisahiro; van Roozendael, Michel; Wittrock, Folkard; Piters, Ankie

    2010-05-01

    We attempt to retrieve lower-tropospheric vertical profile information for 8 components from ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) measurements. The components retrieved include aerosol extinction coefficients (AEC) at two wavelengths, 357 and 476 nm, and NO2, HCHO, CHOCHO, H2O, SO2, and O3 volume mixing ratios (VMRs). This method was applied to MAX-DOAS observations performed at Cabauw, the Netherlands (52.0°N, 4.9°E) in June-July 2009 during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI). For the lowest layer of retrieved profiles, at 0-1 km, the AEC values from the two channels reveal consistent variations. NO2 showed typical diurnal variations with a maximum in the early morning and a minimum in the afternoon. Positive correlations between HCHO and CHOCHO were often seen. H2O VMR agreed well with that derived from NCEP surface data, and was used to identify cloudy cases after conversion to relative humidity. All these results support the applicability of MAX-DOAS observations to various air quality studies. Similar multi-component retrievals applied to observations in Japan are also presented in this talk.

  2. Time Series of EIT Measurements and Images During Lung Ventilation Based on Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    范文茹; 王化祥; 杨程屹; 马世文

    2010-01-01

    The aim of this paper is to propose a useful method for exploring regional ventilation and perfusion in the chest and also separation of pulmonary and cardiac changes. The approach is based on estimating both electrical impedance tomography (EIT) measurements and reconstructed images by means of principal component analysis (PCA). In the experiments in vivo, 43 cycles of heart-beat rhythm could be detected by PCA when the volunteer held breath; 9 breathing cycles and 50 heart-beat cycles could be detected by PCA ...
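
    An illustrative sketch of this use of PCA is shown below, using a simulated EIT measurement time series in which the leading principal components are expected to separate the slow ventilation rhythm from the faster cardiac rhythm; the frame rate, channel count and signal model are assumptions, not the authors' setup.

```python
# Illustrative sketch: PCA on simulated EIT measurement time series to separate
# ventilation-related and cardiac-related components. Parameters are assumed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
fs, seconds, n_channels = 25.0, 60, 208              # assumed frame rate and channels
t = np.arange(int(fs * seconds)) / fs
ventilation = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths per minute
cardiac = 0.2 * np.sin(2 * np.pi * 1.2 * t)          # ~72 beats per minute

frames = np.outer(ventilation, rng.normal(size=n_channels))
frames += np.outer(cardiac, rng.normal(size=n_channels))
frames += 0.05 * rng.normal(size=frames.shape)

pca = PCA(n_components=3)
scores = pca.fit_transform(frames)                   # component time courses
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Counting sign changes of each time course estimates the number of cycles
cycles = [(np.diff(np.sign(scores[:, i])) != 0).sum() // 2 for i in range(3)]
print("estimated cycles per component over 60 s:", cycles)
```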

  3. Dynamic behavior and effectiveness of three-dimensional component base isolation system

    Energy Technology Data Exchange (ETDEWEB)

    Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Fujimoto, Shigeru [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-11-01

    A three-dimensional component base isolation system (3-DIS: horizontal natural frequency = 0.4 Hz, vertical natural frequency = 2 Hz), which consists of multi-layer rubber bearings and coil springs, was fabricated. A verification test on the dynamic behavior and effectiveness of the 3-DIS with and without oil dampers was carried out on a shaking table. From the test results, it is found that the amplification of the rocking motion and the vertical acceleration response of the 3-DIS against various large seismic motions can be suppressed by using oil dampers with a total damping factor of 35%. (author)

  4. Independent Component Analysis of Complex Valued Signals Based on First-order Statistics

    Directory of Open Access Journals (Sweden)

    P.C. Xu

    2013-12-01

    Full Text Available This paper proposes a novel method based on first-order statistics that aims to solve the problem of independent component extraction of complex-valued signals in instantaneous linear mixtures. Single-step and iterative algorithms are proposed and discussed from the standpoint of engineering practice. Theoretical performance analyses of the asymptotic interference-to-signal ratio (ISR) and the probability of correct support estimation (PCE) are presented. Simulation examples validate the theoretical analysis and demonstrate that the single-step algorithm is extremely effective. Moreover, the iterative algorithm is more efficient than complex FastICA under certain circumstances.

  5. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue.

    Science.gov (United States)

    Beaurepaire, P; Valdebenito, M A; Schuëller, G I; Jensen, H A

    2012-05-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subjected to alternating stress.

  6. On risk-based operation and maintenance of offshore wind turbine components

    DEFF Research Database (Denmark)

    Nielsen, Jannie Jessen; Sørensen, John Dalsgaard

    2011-01-01

    Operation and maintenance are significant contributors to the cost of energy for offshore wind turbines. Optimal planning could rationally be based on Bayesian pre-posterior decision theory, and all costs through the lifetime of the structures should be included. This paper contains a study...... of a generic case where the costs are evaluated for a single wind turbine with a single component. Costs due to inspections, repairs, and lost production are included in the model. The costs are compared for two distinct maintenance strategies, namely with and without inclusion of periodic imperfect...

  7. Characteristics of multi-component MI-based hydrogen storage alloys and their hydride electrodes

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A series of multi-component MI-based hydrogen storage alloys with a cobalt atomic ratio of 0.40-0.75 were prepared. The electrochemical properties under different charge-discharge conditions and PCT characteristics measured by electrochemical method were investigated. The addition of other alloying elements for partial substitution of Co lowers the hydrogen equilibrium pressure and discharge capacity, but improves the cycling stability and makes the alloys keep nearly the same rate discharge capability and high-temperature discharge capability as those of the compared alloy. The reasons were discussed.

  8. Prediction of the Functional Performance of Machined Components Based on Surface Topography: State of the Art

    Science.gov (United States)

    Grzesik, Wit

    2016-10-01

    This survey overviews the functional performance of manufactured components produced by typical finishing machining operations in terms of their topographical characteristics. Surface topographies were characterized using both profile (2D) and 3D (areal) surface roughness parameters. The prediction of typical functional properties such as fatigue, friction, wear, bonding and corrosion is discussed based on appropriate surface roughness parameters. Some examples of real 3D surface topographies produced with desired functional characteristics are provided. This survey highlights technological possibilities of producing surfaces with enhanced functional properties by machining processes.

  9. Principal component cluster analysis of ECG time series based on Lyapunov exponent spectrum

    Institute of Scientific and Technical Information of China (English)

    WANG Nai; RUAN Jiong

    2004-01-01

    In this paper we propose an approach to principal component cluster analysis based on the Lyapunov exponent spectrum (LES) to analyze ECG time series. Analysis results of 22 sample files of ECG from the MIT-BIH database confirmed the validity of our approach. Another technique, named the improved teacher selecting student (TSS) algorithm, is presented to analyze unknown samples by means of some known ones and achieves better accuracy. This technique combines the advantages of both statistical and nonlinear dynamical methods and is shown to be significant for the analysis of nonlinear ECG time series.

  10. A component analysis based on serial results analyzing performance of parallel iterative programs

    Energy Technology Data Exchange (ETDEWEB)

    Richman, S.C. [Dalhousie Univ. (Canada)

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of the parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiments, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from those cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of basic functions and the weighting distributions of iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, the parallel machine and the number of processors.
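
    A toy numerical illustration of this composition is given below: the weights come from a hypothetical serial profile, the per-function speedups are invented, and the overall speedup follows from the weighted (Amdahl-style) combination.

```python
# Toy illustration of the component analysis: overall parallel speedup of an
# iterative solver composed from serial-profile weights and per-function
# speedups of its basic functions. All numbers are invented.
weights = {"linked_triads": 0.45, "inner_products": 0.20, "triangular_solves": 0.35}
speedups = {"linked_triads": 14.0, "inner_products": 6.0, "triangular_solves": 3.5}

# Weighted harmonic composition of the per-component speedups
overall_speedup = 1.0 / sum(w / speedups[name] for name, w in weights.items())
print(f"predicted overall parallel speedup: {overall_speedup:.2f}x")
```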

  11. The 8-component retrievals from ground-based MAX-DOAS observations

    Science.gov (United States)

    Irie, H.; Takashima, H.; Kanaya, Y.; Boersma, F.; Gast, L.; Wittrock, F.; van Roozendael, M.

    2010-12-01

    We first attempt to retrieve lower-tropospheric vertical profile information on 8 components from ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations. Components retrieved are aerosol extinction coefficients (AEC) at two wavelengths 357 and 476 nm, NO2, HCHO, CHOCHO, H2O, SO2, and O3 volume mixing ratios (VMRs). A Japanese MAX-DOAS profile retrieval algorithm version 1 (JM1) is applied to observations performed at Cabauw, the Netherlands (51.97N, 4.93E) in June-July 2009 during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI). Of retrieved profiles, we focus here on the lowest layer data (mean values at altitudes 0-1 km), where the sensitivity is usually highest owing to the longest light path. In support of the capability of the multi-component retrievals, overall we find reasonable agreement with independent data sets, including a regional chemical transport model (CHIMERE) and in situ observations performed at 3- and 200-m height levels of a tower placed in Cabauw. Enhanced HCHO and SO2 plumes were likely affected by biogenic and ship emissions, respectively, but an improvement in their emission strengths was suggested for better agreement. Analysis of air mass factors indicates that the horizontal representativeness of MAX-DOAS observation is about 3-15 km, comparable to or better than the spatial resolution of relevant UV-visible satellite observations and model calculations. These demonstrate that MAX-DOAS provides multi-component data useful for evaluation of satellite observations and model calculations and plays a role in bridging different data sets having different spatial resolutions.

  12. Three-dimensional NDE of VHTR core components via simulation-based testing. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Guzina, Bojan [Univ. of Minnesota, Minneapolis, MN (United States); Kunerth, Dennis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-30

    A next generation, simulation-driven-and-enabled testing platform is developed for the 3D detection and characterization of defects and damage in nuclear graphite and composite structures in Very High Temperature Reactors (VHTRs). The proposed work addresses the critical need for the development of high-fidelity Non-Destructive Examination (NDE) technologies for as-manufactured and replaceable in-service VHTR components. Centered around the novel use of elastic (sonic and ultrasonic) waves, this project deploys a robust, non-iterative inverse solution for the 3D defect reconstruction together with a non-contact, laser-based approach to the measurement of experimental waveforms in VHTR core components. In particular, this research (1) deploys three-dimensional Scanning Laser Doppler Vibrometry (3D SLDV) as a means to accurately and remotely measure 3D displacement waveforms over the accessible surface of a VHTR core component excited by a mechanical vibratory source; (2) implements a powerful new inverse technique, based on the concept of Topological Sensitivity (TS), for non-iterative elastic waveform tomography of internal defects that permits robust 3D detection, reconstruction and characterization of discrete damage (e.g. holes and fractures) in nuclear graphite from limited-aperture NDE measurements; (3) implements a state-of-the-art computational (finite element) model that caters for accurately simulating elastic wave propagation in 3D blocks of nuclear graphite; (4) integrates the SLDV testing methodology with the TS imaging algorithm into a non-contact, high-fidelity NDE platform for the 3D reconstruction and characterization of defects and damage in VHTR core components; and (5) applies the proposed methodology to VHTR core component samples (both two- and three-dimensional) with a priori induced, discrete damage in the form of holes and fractures. Overall, the newly established SLDV-TS testing platform represents a next-generation NDE tool that surpasses

  13. Novel Component-Based Development Model for SIP-Based Mobile Application (1202)

    CERN Document Server

    Barnawi, Ahmed; Qureshi, M Rizwan Jameel; Khan, Asif Irshad

    2012-01-01

    Universities and institutions these days deal with issues related to the assessment of large numbers of students. Various evaluation methods have been adopted by examiners in different institutions to examine the ability of an individual, ranging from manual means using paper and pencil to electronic, from oral to written, and from practical to theoretical, among many others. There is a need to expedite the process of examination in order to meet the increasing enrolment of students at universities and institutes. The SIP Based Mass Mobile Examination System (SiBMMES) expedites the examination process by automating various activities in an examination, such as exam paper setting, scheduling and allocating examination time, and evaluation (auto-grading for objective questions). SiBMMES uses the IP Multimedia Subsystem (IMS), an IP communications framework providing an environment for the rapid development of innovative and reusable services. Session Initiation Protocol (SIP) is a signalling (request-response)...

  14. Novel Component Based Development Model For Sip-Based Mobile Application

    CERN Document Server

    Barnawi, Ahmed; Qureshi, M Rizwan Jameel; Khan, Asif Irshad; 10.5121/ijsea.2012.3107

    2012-01-01

    Universities and institutions these days deal with issues related to the assessment of large numbers of students. Various evaluation methods have been adopted by examiners in different institutions to examine the ability of an individual, ranging from manual means using paper and pencil to electronic, from oral to written, and from practical to theoretical, among many others. There is a need to expedite the process of examination in order to meet the increasing enrolment of students at universities and institutes. The SIP Based Mass Mobile Examination System (SiBMMES) expedites the examination process by automating various activities in an examination, such as exam paper setting, scheduling and allocating examination time, and evaluation (auto-grading for objective questions). SiBMMES uses the IP Multimedia Subsystem (IMS), an IP communications framework providing an environment for the rapid development of innovative and reusable services. Session Initiation Protocol (SIP) is a signalling (request-response)...

  15. Site-Specific Incorporation of Functional Components into RNA by an Unnatural Base Pair Transcription System

    Directory of Open Access Journals (Sweden)

    Rie Kawai

    2012-03-01

    Full Text Available Toward the expansion of the genetic alphabet, an unnatural base pair between 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) functions as a third base pair in replication and transcription, and provides a useful tool for the site-specific, enzymatic incorporation of functional components into nucleic acids. We have synthesized several modified-Pa substrates, such as alkylamino-, biotin-, TAMRA-, FAM-, and digoxigenin-linked PaTPs, and examined their transcription by T7 RNA polymerase using Ds-containing DNA templates with various sequences. The Pa substrates modified with relatively small functional groups, such as alkylamino and biotin, were efficiently incorporated into RNA transcripts at the internal positions, except for those less than 10 bases from the 3′-terminus. We found that the efficient incorporation into a position close to the 3′-terminus of a transcript depended on the natural base contexts neighboring the unnatural base, and that pyrimidine-Ds-pyrimidine sequences in templates were generally favorable, relative to purine-Ds-purine sequences. The unnatural base pair transcription system provides a method for the site-specific functionalization of large RNA molecules.

  16. Time-invariant component-based normalization for a simultaneous PET-MR scanner.

    Science.gov (United States)

    Belzunce, M A; Reader, A J

    2016-05-07

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which are needed to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this

  17. Time-invariant component-based normalization for a simultaneous PET-MR scanner

    Science.gov (United States)

    Belzunce, M. A.; Reader, A. J.

    2016-05-01

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which are needed to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this
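
    The product-of-factors idea described in the two records above can be written down compactly; the following sketch is a toy model with invented detector dimensions and factor values, not the Biograph mMR implementation.

```python
# Toy sketch of component-based normalization: the sensitivity of a line of
# response (LOR) is modelled as a product of factors. Dimensions are invented.
import numpy as np

rng = np.random.default_rng(3)
n_crystals_per_ring, n_rings = 64, 8

crystal_eff = rng.normal(1.0, 0.05, size=n_crystals_per_ring * n_rings)  # time-variant
axial_factors = rng.normal(1.0, 0.02, size=n_rings)                      # proposed fixed factors
geometric = 0.9                                                           # time-invariant (per-LOR in practice)

def lor_norm_factor(i, j):
    """Normalization factor of the LOR between detector crystals i and j."""
    ring_i, ring_j = i // n_crystals_per_ring, j // n_crystals_per_ring
    return (geometric * axial_factors[ring_i] * axial_factors[ring_j]
            * crystal_eff[i] * crystal_eff[j])

print("example LOR factor:", round(lor_norm_factor(10, 300), 4))
```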

  18. Processing and characterization of Nickel-base superalloy micro-components and films for MEMS applications

    Science.gov (United States)

    Burns, Devin E.

    Microelectromechanical (MEMS) devices are not capable of withstanding harsh operating environments, which may include high temperatures, pressures and corrosive agents. Ni-base superalloys have been used successfully in the hot stages of jet turbine engines despite the presence of these conditions. In my thesis work, I developed two techniques compatible with micro-processing methods to produce Ni-base superalloy micro-components for MEMS applications. The mechanical properties of these materials were assessed at room and elevated temperatures. Microstructural studies were performed, linking microstructural features to mechanical properties. The first technique modified LIGA Ni (LIGA is a German acronym for lithography, electroplating and molding) microtensile specimens using a vapor phase aluminization process. A subsequent homogenization heat treatment produced a two-phase Ni-Ni3Al microstructure characteristic of modern Ni-base superalloys. Al composition was used to tailor both the precipitate size and volume fraction. Aluminized LIGA Ni micro-components exhibited room temperature yield and ultimate strengths 3 to 4 times those of LIGA Ni micro-components subject to the same heat treatment. The second technique involved sputtering a commercial Ni-base superalloy, Haynes 718, to produce thick sputtered foils (up to 20 μm) on silicon and brass substrates. The as-deposited foils were nanocrystalline solid solutions with chemical compositions similar to the bulk material. Foils subject to ageing heat treatments exhibited unique precipitation mechanisms and good thermal stability. Strengths as high as 750 MPa at 700°C were observed with several percent ductility. This is a significant improvement over state-of-the-art metallic MEMS materials. Furthermore, a new high temperature microtensile testing technique was developed. The technique embeds a displacement-based force sensor into the hot zone of a furnace. This arrangement ensures temperature uniformity during testing

  19. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  20. The ultraviolet detection component based on Te-Cs image intensifier

    Science.gov (United States)

    Qian, Yunsheng; Zhou, Xiaoyu; Wu, Yujing; Wang, Yan; Xu, Hua

    2017-05-01

    Ultraviolet detection technology has attracted wide attention and been adopted in the fields of ultraviolet warning and corona detection for its significant value and practical meaning. The component structure of the ultraviolet ICMOS, the imaging drive and the photon counting algorithm are studied in this paper. Firstly, the one-inch, wide-dynamic-range CMOS chip with a coupling optical fiber panel is coupled to the ultraviolet image intensifier. The photocathode material in the ultraviolet image intensifier is Te-Cs, which contributes to the solar blind characteristic, and the dual micro-channel plate (MCP) structure ensures sufficient gain to achieve single-photon counting. Then, in consideration of the ultraviolet detection demand, the drive circuit of the CMOS chip is designed and the corresponding program based on the Verilog language is written. According to the characteristics of ultraviolet imaging, the histogram equalization method is applied to enhance the ultraviolet image and the connected-component labeling method is utilized for ultraviolet single-photon counting. Moreover, one visible light video channel is reserved in the ultraviolet ICMOS camera, which can be used for the fusion of ultraviolet and visible images. Based upon the module, the ultraviolet optical lens and the deep cut-off solar blind filter are adopted to construct the ultraviolet detector. Finally, a detection experiment on the single-photon signal is carried out, and the test results are given and analyzed.
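
    The two image-processing steps mentioned above can be sketched as follows in Python; the frame is synthetic, and the thresholds and blob sizes are illustrative assumptions rather than the parameters of the actual detector.

```python
# Sketch of the two processing steps: histogram equalization for display
# enhancement and connected-component labelling for single-photon counting.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
frame = rng.poisson(2.0, size=(256, 256)).astype(float)      # dark background
for _ in range(20):                                           # sprinkle 3x3 photon events
    y, x = rng.integers(5, 251, size=2)
    frame[y - 1:y + 2, x - 1:x + 2] += 40.0

# Histogram equalization (simple CDF mapping) for display enhancement
hist, bins = np.histogram(frame, bins=256)
cdf = hist.cumsum() / hist.sum()
enhanced = np.interp(frame, bins[:-1], cdf)
print("enhanced image range:", round(enhanced.min(), 3), "-", round(enhanced.max(), 3))

# Photon counting: threshold the raw frame and label connected blobs
binary = frame > 20.0
labels, n_blobs = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=np.arange(1, n_blobs + 1))
n_events = int((sizes >= 4).sum())                            # ignore isolated noise pixels
print("counted photon events:", n_events)
```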

  1. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  2. A novel BCI based on ERP components sensitive to configural processing of human faces

    Science.gov (United States)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, till now the configural processing of human faces has not been applied to BCI but widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) have reflected early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  3. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    Full Text Available This paper contains some ideas concerning Enterprise Information Systems (EIS) development. It combines known elements from the software engineering domain with original elements that the author has conceived and experimented with. The author has followed two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining some models that describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute-domain. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain and finally on the automatic generation of the system components. The proposed methods do not depend on a special programming language or a database management system. They are general and may be applied to any combination of such technologies.

  4. Credit Risk Assessment Model Based Using Principal component Analysis And Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hamdy Abeer

    2016-01-01

    Full Text Available Credit risk assessment for bank customers has gained increasing attention in recent years. Several models for credit scoring have been proposed in the literature for this purpose. The accuracy of the model is crucial for any financial institution's profitability. This paper provides a high-accuracy credit scoring model that can be used with small and large datasets, utilizing a principal component analysis (PCA) based breakdown of the significance of the attributes commonly used in credit scoring models. The proposed credit scoring model applies PCA to acquire the main attributes of the credit scoring data and then an artificial neural network (ANN) classifier to determine the creditworthiness of an individual applicant. The performance of the proposed model was compared to that of other models in terms of accuracy and training time. Results based on the German dataset showed that the proposed model is superior to the others and computationally cheaper. Thus it is a potential candidate for future credit scoring systems.
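
    An illustrative scikit-learn sketch of the pipeline, PCA for attribute reduction followed by a small neural-network classifier, is shown below; it runs on synthetic data rather than the German credit dataset, and all model settings are assumptions.

```python
# Illustrative PCA + neural-network credit-scoring pipeline on synthetic data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for credit attributes and good/bad labels
X, y = make_classification(n_samples=1000, n_features=24, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                               # keep the main attribute directions
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("hold-out accuracy:", round(model.score(X_test, y_test), 3))
```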

  5. Sensor fault diagnosis of nonlinear processes based on structured kernel principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Kechang FU; Liankui DAI; Tiejun WU; Ming ZHU

    2009-01-01

    A new sensor fault diagnosis method based on structured kernel principal component analysis (KPCA) is proposed for nonlinear processes. By performing KPCA on subsets of variables, a set of structured residuals, i.e., scaled powers of KPCA, can be obtained in the same way as partial PCA. The structured residuals are utilized in composing an isolation scheme for sensor fault diagnosis, according to a properly designed incidence matrix. Sensor fault sensitivity and critical sensitivity are defined, based on which an incidence matrix optimization algorithm is proposed to improve the performance of the structured KPCA. The effectiveness of the proposed method is demonstrated on the simulated continuous stirred tank reactor (CSTR) process.

  6. Combustion engine diagnosis model-based condition monitoring of gasoline and diesel engines and their components

    CERN Document Server

    Isermann, Rolf

    2017-01-01

    This book first offers a short introduction to advanced supervision, fault detection and diagnosis methods. It then describes model-based methods of fault detection and diagnosis for the main components of gasoline and diesel engines, such as the intake system, fuel supply, fuel injection, combustion process, turbocharger, exhaust system and exhaust gas aftertreatment. Additionally, model-based fault diagnosis of electrical motors, electric, pneumatic and hydraulic actuators and fault-tolerant systems is treated. In general, series-production sensors are used. It includes abundant experimental results showing the detection and diagnosis quality of implemented faults. Written for automotive engineers in practice, it is also of interest to graduate students of mechanical and electrical engineering and computer science. Contents: Introduction.- I SUPERVISION, FAULT DETECTION AND DIAGNOSIS METHODS.- Supervision, Fault-Detection and Fault-Diagnosis Methods - a short Introduction.- II DIAGNOSIS OF INTERNAL COMBUST...

  7. Final report on LDRD project 52722 : radiation hardened optoelectronic components for space-based applications.

    Energy Technology Data Exchange (ETDEWEB)

    Hargett, Terry W. (L& M Technologies, Inc.); Serkland, Darwin Keith; Blansett, Ethan L.; Geib, Kent Martin; Sullivan, Charles Thomas; Hawkins, Samuel D.; Wrobel, Theodore Frank; Keeler, Gordon Arthur; Klem, John Frederick; Medrano, Melissa R.; Peake, Gregory Merwin; Karpen, Gary D.; Montano, Victoria A. (L& M Technologies, Inc.)

    2003-12-01

    This report describes the research accomplishments achieved under the LDRD Project 'Radiation Hardened Optoelectronic Components for Space-Based Applications.' The aim of this LDRD has been to investigate the radiation hardness of vertical-cavity surface-emitting lasers (VCSELs) and photodiodes by looking at both the effects of total dose and of single-event upsets on the electrical and optical characteristics of VCSELs and photodiodes. These investigations were intended to provide guidance for the eventual integration of radiation hardened VCSELs and photodiodes with rad-hard driver and receiver electronics from an external vendor for space applications. During this one-year project, we have fabricated GaAs-based VCSELs and photodiodes, investigated ionization-induced transient effects due to high-energy protons, and measured the degradation of performance from both high-energy protons and neutrons.

  8. A Method of Clustering Components into Modules Based on Products' Functional and Structural Analysis

    Institute of Scientific and Technical Information of China (English)

    MENG Xiang-hui; JIANG Zu-hua; ZHENG Ying-fei

    2006-01-01

    Modularity is the key to improving the cost-variety trade-off in product development. To achieve functional independence and structural independence of modules, a method of clustering components to identify modules based on functional and structural analysis was presented. Two stages were included in the method. In the first stage the product's function was analyzed to determine the primary level of modules. Then the objective function for module identification was formulated to achieve functional independence of modules. Finally, a genetic algorithm was used to solve the combinatorial optimization problem of module identification and form the primary modules of the product. In the second stage the cohesion degree of modules and the coupling degree between modules were analyzed. Based on this structural analysis, the modular scheme was refined with a view to structural independence. A case study on a gear reducer was conducted to illustrate the validity of the presented method.

  9. Anomaly Detection System Based on Principal Component Analysis and Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    LI Zhanchun; LI Zhitang; LIU Bin

    2006-01-01

    This article presents an anomaly detection system based on principal component analysis (PCA) and a support vector machine (SVM). The system first creates a profile defining normal behavior by a frequency-based scheme, and then compares the similarity of the current behavior with the created profile to decide whether the input instance is normal or anomalous. In order to avoid overfitting and reduce the computational burden, the principal features of normal behavior are extracted by the PCA method. The SVM is used to classify user behavior as normal or anomalous after the training procedure has been completed. In the experiments for performance evaluation, the system achieved a correct detection rate of 92.2% and a false detection rate of 2.8%.
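
    A minimal sketch of this two-stage scheme, with synthetic behaviour features standing in for the frequency-based profile, is given below; it is not the authors' system, and the rate it prints is not comparable to those reported above.

```python
# Minimal PCA + SVM anomaly detection sketch on synthetic behaviour features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(5)
normal = rng.normal(0.0, 1.0, size=(800, 20))        # frequency-based profile features
anomaly = rng.normal(2.5, 1.5, size=(80, 20))

pca = PCA(n_components=5).fit(normal)                # principal features of normal behaviour
X = pca.transform(np.vstack([normal, anomaly]))
y = np.concatenate([np.zeros(len(normal)), np.ones(len(anomaly))])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("detection rate on the anomalous instances:", round(clf.predict(X[y == 1]).mean(), 3))
```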

  10. INCREMENTAL PRINCIPAL COMPONENT ANALYSIS BASED OUTLIER DETECTION METHODS FOR SPATIOTEMPORAL DATA STREAMS

    Directory of Open Access Journals (Sweden)

    A. Bhushan

    2015-07-01

    Full Text Available In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
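
    One simple way to use incremental PCA for this purpose is sketched below (it is not necessarily either of the two methods proposed in the record above): scikit-learn's IncrementalPCA is updated batch by batch and points with large reconstruction error are flagged; the stream and thresholds are synthetic assumptions.

```python
# Sketch: IncrementalPCA fitted batch by batch; points whose reconstruction
# error is far above the batch mean are flagged as outliers. Data is synthetic.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(6)
ipca = IncrementalPCA(n_components=3, batch_size=200)

def stream_batches(n_batches=10, n_points=200, n_sensors=12):
    for _ in range(n_batches):
        batch = rng.normal(size=(n_points, n_sensors))
        batch[rng.random(n_points) < 0.01] += 8.0    # inject rare outliers
        yield batch

for i, batch in enumerate(stream_batches()):
    if i > 0:                                        # score once the model has seen data
        recon = ipca.inverse_transform(ipca.transform(batch))
        err = np.linalg.norm(batch - recon, axis=1)
        flagged = err > err.mean() + 3.0 * err.std()
        print(f"batch {i}: flagged {int(flagged.sum())} outliers")
    ipca.partial_fit(batch)
```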

  11. Almost twenty years' search of transuranium isotopes in effluents discharged to air from nuclear power plants with VVER reactors.

    Science.gov (United States)

    Hölgye, Z; Filgas, R

    2006-04-01

    Airborne effluents of 5 stacks (stacks 1-5) of three nuclear power plants, with 9 pressurized water reactors VVER of 4,520 MWe total power, were searched for transuranium isotopes in different time periods. The search started in 1985. The subject of this work is a presentation of discharge data for the period of 1998-2003 and a final evaluation. It was found that 238Pu, 239,240Pu, 241Am, 242Cm, and 244Cm can be present in airborne effluents. Transuranium isotope contents in most of the quarterly effluent samples from stacks 2, 4 and 5 were not measurable. Transuranium isotopes were present in the effluents from stack 1 during all 9 years of the study and from stack 3 since the 3rd quarter of 1996 as a result of a defect in the fuel cladding. A relatively high increase of transuranium isotopes in effluents from stack 3 occurred in the 3rd quarter of 1999, and a smaller increase occurred in the 3rd quarter of 2003. In each instance 242Cm prevailed in the transuranium isotope mixtures. 238Pu/239,240Pu, 241Am/239,240Pu, 242Cm/239,240Pu, and 244Cm/239,240Pu ratios in fuel for different burn-up were calculated, and comparison of these ratios in fuel and effluents was performed.

  12. Neutron and gamma field investigations in the VVER-1000 mock-up concrete shielding on the reactor LR-0

    Energy Technology Data Exchange (ETDEWEB)

    Zaritsky, S.; Egorov, A. [National Research Center, Kurchatov Inst., Moscow 123182 (Russian Federation); Osmera, B.; Marik, M.; Rypar, V. [Research Centre Rez Ltd., Rez 25068 (Czech Republic); Cvachovec, F. [Univ. of Defense, Brno 61200 (Czech Republic); Kolros, A. [Czech Technical Univ., Prague 18000 (Czech Republic)

    2011-07-01

    Two sets of neutron and gamma field investigations were carried out in the dismountable model of radiation shielding of the VVER-1000 mock-up on the LR-0 reactor. First, measurements and calculations of the {sup 3}He(n,p)T reaction rate and fast neutron and gamma flux spectra in the operational neutron monitor channel inside the concrete shielding for different shapes and locations of the channel (a cylindrical channel in concrete, channels with a collimator in concrete, and a cylindrical channel in graphite). In all cases measurements and calculations of the {sup 3}He(n,p)T reaction rate were done with and without an additional moderator (a polyethylene insert) inside the channel. Second, measurements and calculations of the {sup 3}He(n,p)T reaction rate spatial distribution inside the concrete. The {sup 3}He(n,p)T reaction rate measurements and calculations were carried out to explore the relative thermal neutron density in the channels and its spatial distribution in the concrete. Fast neutron and gamma measurements were carried out with a stilbene (45 x 45 mm) scintillation spectrometer in the energy regions 0.5-10 MeV (neutrons) and 0.2-10 MeV (gammas). (authors)

  13. Uncertainties in the Fluence Determination in the Surveillance Samples of VVER-440

    Directory of Open Access Journals (Sweden)

    Konheiser Joerg

    2016-01-01

    Full Text Available The reactor pressure vessel (RPV) represents one of the most important safety components in a nuclear power plant. Therefore, surveillance specimen (SS) programs for the RPV material exist to deliver a reliable assessment of RPV residual lifetime. This report will present neutron fluence calculations for SS. These calculations were carried out by the codes TRAMO [1] and DORT [2]. This study was accompanied by ex-vessel neutron dosimetry experiments at Kola NPP. The main neutron activation monitoring reactions were 54Fe(n,p)54Mn and 58Ni(n,p)58Co. Good agreement was found between the deterministic and stochastic calculation results and between the calculations and the ex-vessel measurements. The different influences on the monitors were studied. In order to exclude the possible healing effects of the samples due to excessive temperatures, the heat release in the surveillance specimens was determined based on the calculated gamma fluences. Under comparatively realistic conditions, the heat increased by 6 K.

  14. Uncertainties in the Fluence Determination in the Surveillance Samples of VVER-440

    Science.gov (United States)

    Konheiser, Joerg; Grahn, Alexander; Borodkin, Pavel; Borodkin, Gennady

    2016-02-01

    The reactor pressure vessel (RPV) represents one of the most important safety components in a nuclear power plant. Therefore, surveillance specimen (SS) programs for the RPV material exist to deliver a reliable assessment of RPV residual lifetime. This report will present neutron fluence calculations for SS. These calculations were carried out by the codes TRAMO [1] and DORT [2]. This study was accompanied by ex-vessel neutron dosimetry experiments at Kola NPP. The main neutron activation monitoring reactions were 54Fe(n,p)54Mn and 58Ni(n,p)58Co. Good agreement was found between the deterministic and stochastic calculation results and between the calculations and the ex-vessel measurements. The different influences on the monitors were studied. In order to exclude the possible healing effects of the samples due to excessive temperatures, the heat release in the surveillance specimens was determined based on the calculated gamma fluences. Under comparatively realistic conditions, the heat increased by 6 K.

  15. COMDES-II: A Component-Based Framework for Generative Development of Distributed Real-Time Control Systems

    DEFF Research Database (Denmark)

    Ke, Xu; Sierszecki, Krzysztof; Angelov, Christo K.

    2007-01-01

    The paper presents a generative development methodology and component models of COMDES-II, a component-based software framework for distributed embedded control systems with real-time constraints. The adopted methodology allows for rapid modeling and validation of control software at a higher lev...... methodology for COMDES-II from a general perspective, describes the component models in details and demonstrates their application through a DC-Motor control system case study....

  16. Effects of intravenous solutions on acid-base equilibrium: from crystalloids to colloids and blood components.

    Science.gov (United States)

    Langer, Thomas; Ferrari, Michele; Zazzeron, Luca; Gattinoni, Luciano; Caironi, Pietro

    2014-01-01

    Intravenous fluid administration is a medical intervention performed worldwide on a daily basis. Nevertheless, only a few physicians are aware of the characteristics of intravenous fluids and their possible effects on plasma acid-base equilibrium. According to Stewart's theory, pH is independently regulated by three variables: partial pressure of carbon dioxide, strong ion difference (SID), and total amount of weak acids (ATOT). When fluids are infused, plasma SID and ATOT tend toward the SID and ATOT of the administered fluid. Depending on their composition, fluids can therefore lower, increase, or leave pH unchanged. As a general rule, crystalloids having a SID greater than plasma bicarbonate concentration (HCO₃-) cause an increase in plasma pH (alkalosis), those having a SID lower than HCO₃- cause a decrease in plasma pH (acidosis), while crystalloids with a SID equal to HCO₃- leave pH unchanged, regardless of the extent of the dilution. Colloids and blood components are composed of a crystalloid solution as solvent, and the abovementioned rules partially hold true also for these fluids. The scenario is however complicated by the possible presence of weak anions (albumin, phosphates and gelatins) and their effect on plasma pH. The present manuscript summarises the characteristics of crystalloids, colloids, buffer solutions and blood components and reviews their effect on acid-base equilibrium. Understanding the composition of intravenous fluids, along with the application of simple physicochemical rules best described by Stewart's approach, are pivotal steps to fully elucidate and predict alterations of plasma acid-base equilibrium induced by fluid therapy.
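
    The general rule stated in this record can be expressed as a small decision function; the sketch below is a toy encoding with illustrative SID values and is not clinical guidance.

```python
# Toy encoding of the rule: a crystalloid's effect on plasma pH depends on how
# its strong ion difference (SID) compares with plasma bicarbonate. Illustrative only.
def expected_ph_effect(fluid_sid_meq_l: float, plasma_hco3_meq_l: float = 24.0) -> str:
    if fluid_sid_meq_l > plasma_hco3_meq_l:
        return "alkalosis (pH tends to rise)"
    if fluid_sid_meq_l < plasma_hco3_meq_l:
        return "acidosis (pH tends to fall)"
    return "pH unchanged"

# Example SIDs (approximate): 0.9% saline has SID = 0; lactated Ringer's has an
# effective SID of roughly 28 mEq/L once the lactate is metabolized.
print("0.9% saline:", expected_ph_effect(0.0))
print("lactated Ringer's:", expected_ph_effect(28.0))
```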

  17. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    Directory of Open Access Journals (Sweden)

    Magenes Giovanni

    2010-04-01

    Full Text Available Abstract Background Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi degrees of freedom (DoF) hand for achieving hand dexterity implies selectively modulating many different EMG signals in order to make each joint move independently, and this could require significant cognitive effort from the user. Methods A Principal Components Analysis (PCA) based algorithm is used to drive a 16 DoFs underactuated prosthetic hand prototype (called CyberHand) with a two dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). This set of principal components has been derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Results Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis.
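
    A conceptual sketch of the control idea follows: principal components are learned from recorded grasp postures and a two-dimensional input is mapped back to a full joint posture. The data, dimensions and mapping below are synthetic placeholders, not the CyberHand implementation.

```python
# Conceptual sketch: learn two principal grasp synergies from recorded postures,
# then map a 2-D control input back to a 16-DoF joint posture. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_grasps, n_dofs = 50, 16
postures = rng.normal(size=(n_grasps, 3)) @ rng.normal(size=(3, n_dofs))  # correlated joint angles

pca = PCA(n_components=2).fit(postures)              # two principal grasp synergies

def command_to_posture(c1: float, c2: float) -> np.ndarray:
    """Map a 2-D control input (e.g. derived from EMG) to a joint posture."""
    return pca.inverse_transform(np.array([[c1, c2]]))[0]

print("posture for input (1.0, -0.5):", np.round(command_to_posture(1.0, -0.5), 2))
```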

  18. Time course based artifact identification for independent components of resting state fMRI

    Directory of Open Access Journals (Sweden)

    Christian eRummel

    2013-05-01

    Full Text Available In functional magnetic resonance imaging (fMRI) coherent oscillations of the blood oxygen level dependent (BOLD) signal can be detected. These arise when brain regions respond to external stimuli or are activated by tasks. The same networks have been characterized during wakeful rest when functional connectivity of the human brain is organized in generic resting state networks (RSN). Alterations of RSN emerge as neurobiological markers of pathological conditions such as altered mental state. In single-subject fMRI data the coherent components can be identified by blind source separation of the pre-processed BOLD data using spatial independent component analysis (ICA) and related approaches. The resulting maps may represent physiological RSNs or may be due to various artifacts. In this methodological study, we propose a conceptually simple and fully automatic time course based filtering procedure to detect obvious artifacts in the ICA output for resting state fMRI. The filter is trained on six and tested on 29 healthy subjects, yielding mean filter accuracy, sensitivity and specificity of 0.80, 0.82 and 0.75 in out-of-sample tests. To estimate the impact of clearly artifactual single-subject components on group resting state studies we analyze unfiltered and filtered output with a second level ICA procedure. Although the automated filter does not reach performance values of visual analysis by human raters, we propose that resting state compatible analysis of ICA time courses could be very useful to complement the existing map or task/event oriented artifact classification algorithms.
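
    A much simplified stand-in for such a time course based filter is sketched below: a component is labelled an artifact when most of its spectral power lies above an assumed frequency cut-off. The TR, cut-off and threshold are illustrative assumptions, not the trained filter from the record above.

```python
# Simplified time-course filter: label an ICA component as artifact when most
# of its power lies above a frequency cut-off. All parameters are assumptions.
import numpy as np

def high_freq_fraction(tc, tr=2.0, cutoff_hz=0.1):
    """Fraction of a component time course's power above cutoff_hz (TR in seconds)."""
    freqs = np.fft.rfftfreq(tc.size, d=tr)
    power = np.abs(np.fft.rfft(tc - tc.mean())) ** 2
    return power[freqs > cutoff_hz].sum() / power.sum()

rng = np.random.default_rng(8)
t = np.arange(200) * 2.0                              # 200 volumes, TR = 2 s
network_like = np.sin(2 * np.pi * 0.03 * t) + 0.3 * rng.normal(size=t.size)
artifact_like = rng.normal(size=t.size)               # broadband, noise-dominated course

for name, tc in [("network-like", network_like), ("artifact-like", artifact_like)]:
    frac = high_freq_fraction(tc)
    print(f"{name}: high-frequency power fraction = {frac:.2f} ->",
          "artifact" if frac > 0.5 else "signal")
```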

  19. Eight-component retrievals from ground-based MAX-DOAS observations

    Directory of Open Access Journals (Sweden)

    H. Irie

    2011-06-01

    Full Text Available We attempt for the first time to retrieve lower-tropospheric vertical profile information for 8 quantities from ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations. The components retrieved are the aerosol extinction coefficients at two wavelengths, 357 and 476 nm, and NO2, HCHO, CHOCHO, H2O, SO2, and O3 volume mixing ratios. A Japanese MAX-DOAS profile retrieval algorithm, version 1 (JM1), is applied to observations performed at Cabauw, the Netherlands (51.97° N, 4.93° E), in June–July 2009 during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI). Of the retrieved profiles, we focus here on the lowest-layer data (mean values at altitudes 0–1 km), where the sensitivity is usually highest owing to the longest light path. In support of the capability of the multi-component retrievals, we find reasonable overall agreement with independent data sets, including a regional chemical transport model (CHIMERE) and in situ observations performed near the surface (2–3 m) and at the 200-m height level of the tall tower in Cabauw. Plumes of enhanced HCHO and SO2 were likely affected by biogenic and ship emissions, respectively, and an improvement in their emission strengths is suggested for better agreement between CHIMERE simulations and MAX-DOAS observations. Analysis of air mass factors indicates that the horizontal spatial representativeness of MAX-DOAS observations is about 3–15 km (depending mainly on aerosol extinction), comparable to or better than the spatial resolution of current UV-visible satellite observations and model calculations. These demonstrate that MAX-DOAS provides multi-component data useful for the evaluation of satellite observations and model calculations and can play an important role in bridging different data sets having different spatial resolutions.

  20. A global distribution of the ignitability component of flammability based on climatic drivers

    Science.gov (United States)

    Karali, Anna; Fyllas, Nikolaos M.; Hatzaki, Maria; Giannakopoulos, Christos; Nastos, Panagiotis

    2017-04-01

    Fire regime is the result of complex interactions among ignition, topography, weather and vegetation. Even though the influence of vegetation varies regionally, it remains the only component that can be directly managed in order to reduce the negative impacts of wildland fires. Therefore, reliable information on vegetation flammability is required, making it one of the essential components of fire risk assessment and management. Specific Leaf Area (SLA [cm2 g-1], the ratio of leaf area to leaf dry mass) has received little attention regarding its relationship with ignitability and, thus, flammability. However, recent studies on a regional scale have shown that leaves of higher SLA are more ignitable. Thus, in the framework of the current study, the ignitability, as a function of SLA on global scale, is explored. In order to calculate SLA, a linear regression model combining SLA and climate data has been used (Maire et al., 2015). The climate data used for its calculation include the maximum monthly fractional sunshine duration, the maximum monthly temperature and the number of days with daily mean temperature above 0°C for each grid cell, obtained from the ERA-Interim gridded observations database. Subsequently, the ignitability component of flammability is calculated on a global scale using a bivariate regression relationship with SLA based on experimental burns of leaf materials (Grootemaat et al., 2015). The global distribution of ignitability can subsequently be combined with fire weather index (FWI) values for the development of an integrated index of forest fire vulnerability for the current and future climate, using CMIP5 climate model outputs. This will enable the integration of functional biogeographic data with widely applied fire risk assessment methodologies at regional to global spatial scales.

  1. Eight-component retrievals from ground-based MAX-DOAS observations

    Directory of Open Access Journals (Sweden)

    H. Irie

    2011-01-01

    Full Text Available We attempt for the first time to retrieve lower-tropospheric vertical profile information for 8 quantities from ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations. The components retrieved are the aerosol extinction coefficients at two wavelengths, 357 and 476 nm, and NO2, HCHO, CHOCHO, H2O, SO2, and O3 volume mixing ratios. A Japanese MAX-DOAS profile retrieval algorithm, version 1 (JM1), is applied to observations performed at Cabauw, the Netherlands (51.97° N, 4.93° E), in June–July 2009 during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI). Of the retrieved profiles, we focus here on the lowest-layer data (mean values at altitudes 0–1 km), where the sensitivity is usually highest owing to the longest light path. In support of the capability of the multi-component retrievals, we find reasonable overall agreement with independent data sets, including a regional chemical transport model (CHIMERE) and in situ observations performed at the 3 and 200 m height levels of the tall tower in Cabauw. Plumes of enhanced HCHO and SO2 were likely affected by biogenic and ship emissions, respectively, and an improvement in their emission strengths is suggested for better agreement between CHIMERE simulations and MAX-DOAS observations. Analysis of air mass factors indicates that the horizontal spatial representativeness of MAX-DOAS observations is about 3–15 km (depending mainly on aerosol extinction), comparable to or better than the spatial resolution of current UV-visible satellite observations and model calculations. These demonstrate that MAX-DOAS provides multi-component data useful for the evaluation of satellite observations and model calculations and can play an important role in bridging different data sets having different spatial resolutions.

  2. Eight-component retrievals from ground-based MAX-DOAS observations

    Science.gov (United States)

    Irie, H.; Takashima, H.; Kanaya, Y.; Boersma, K. F.; Gast, L.; Wittrock, F.; Brunner, D.; Zhou, Y.; van Roozendael, M.

    2011-06-01

    We attempt for the first time to retrieve lower-tropospheric vertical profile information for 8 quantities from ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations. The components retrieved are the aerosol extinction coefficients at two wavelengths, 357 and 476 nm, and NO2, HCHO, CHOCHO, H2O, SO2, and O3 volume mixing ratios. A Japanese MAX-DOAS profile retrieval algorithm, version 1 (JM1), is applied to observations performed at Cabauw, the Netherlands (51.97° N, 4.93° E), in June-July 2009 during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI). Of the retrieved profiles, we focus here on the lowest-layer data (mean values at altitudes 0-1 km), where the sensitivity is usually highest owing to the longest light path. In support of the capability of the multi-component retrievals, we find reasonable overall agreement with independent data sets, including a regional chemical transport model (CHIMERE) and in situ observations performed near the surface (2-3 m) and at the 200-m height level of the tall tower in Cabauw. Plumes of enhanced HCHO and SO2 were likely affected by biogenic and ship emissions, respectively, and an improvement in their emission strengths is suggested for better agreement between CHIMERE simulations and MAX-DOAS observations. Analysis of air mass factors indicates that the horizontal spatial representativeness of MAX-DOAS observations is about 3-15 km (depending mainly on aerosol extinction), comparable to or better than the spatial resolution of current UV-visible satellite observations and model calculations. These demonstrate that MAX-DOAS provides multi-component data useful for the evaluation of satellite observations and model calculations and can play an important role in bridging different data sets having different spatial resolutions.

  3. Voigt waves in homogenized particulate composites based on isotropic dielectric components

    CERN Document Server

    Mackay, Tom G

    2011-01-01

    Homogenized composite materials (HCMs) can support a singular form of optical propagation, known as Voigt wave propagation, while their component materials do not. This phenomenon was investigated for biaxial HCMs arising from nondissipative isotropic dielectric component materials. The biaxiality of these HCMs stems from the oriented spheroidal shapes of the particles which make up the component materials. An extended version of the Bruggeman homogenization formalism was used to investigate the influence of component particle orientation, shape and size, as well as volume fraction of the component materials, upon Voigt wave propagation. Our numerical studies revealed that the directions in which Voigt waves propagate are highly sensitive to the orientation of the component particles and to the volume fraction of the component materials, but less sensitive to the shape of the component particles and less sensitive still to the size of the component particles. Furthermore, whether or not such an HCM supports Vo...

  4. Accurate neural network-based modeling for RF MEMS component synthesizing

    Science.gov (United States)

    Mohamed, Firas; Affour, Bachar

    2004-01-01

    In contrast to traditional analysis flows based on expensive FEM simulation tools or inaccurate electrical-model extractors, we developed MemsCompiler, which implements a new synthesis approach for RF MEMS. The new flow starts from system designer requirements and generates, in a one-click operation, a ready-to-fabricate layout (GDSII) and a fitted passive equivalent Spice circuit. Concerning the circuit, physical considerations give us an equivalent schematic in which circuit parameter values must be adjusted to fit the required performances. As to the GDSII, which constitutes the main contribution of this work, the Design of Experiments technique used in the first version of the synthesizer gave about 11% dispersion and was found to be unsatisfactory in some cases. More accurate modeling was therefore indispensable. Thus, we developed neural network-based models for circular inductors, which designers consider among the most stubborn components. This new modeling has proven very accurate: MemsCompiler produced about 3% dispersion compared to the equivalent circuit and about 6% dispersion for generated geometries. The modeling is flexible and could be rapidly generalized to other components.
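
    A minimal sketch of the kind of neural-network surrogate described above, assuming a scikit-learn MLP trained on a hypothetical table of circular-inductor geometries and a toy response; the feature set, network size and the physics-like target law are illustrative assumptions, not the MemsCompiler implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training set: circular-inductor geometries (outer diameter [um],
# trace width, spacing, number of turns) and a simulated/measured response
# (e.g. inductance in nH). Real data would come from EM simulation or tests.
X = rng.uniform([100, 5, 2, 1.5], [400, 20, 10, 6.5], size=(500, 4))
y = 0.01 * X[:, 0] * X[:, 3] ** 1.8 / (X[:, 1] + X[:, 2])   # toy physics-like law

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Predict the response of a new candidate geometry proposed by the synthesizer
print(model.predict([[250.0, 10.0, 5.0, 4.0]]))
```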

  5. A Corpus-based Evaluation of Lexical Components of a Domainspecific Text to Knowledge Mapping Prototype

    CERN Document Server

    Shams, Rushdi

    2012-01-01

    The aim of this paper is to evaluate the lexical components of a Text to Knowledge Mapping (TKM) prototype. The prototype is domain-specific; its purpose is to map instructional text onto a knowledge domain. The context of the knowledge domain of the prototype is physics, specifically DC electrical circuits. During development, the prototype was tested with a limited data set from the domain. The prototype has now reached a stage where it needs to be evaluated with a representative linguistic data set called a corpus. A corpus is a collection of text drawn from typical sources which can be used as a test data set to evaluate NLP systems. As there is no available corpus for the domain, we developed a representative corpus and annotated it with linguistic information. The evaluation of the prototype considers one of its two main components: the lexical knowledge base. With the corpus, the evaluation enriches the lexical knowledge resources such as the vocabulary and grammar structure. This leads the prototype to p...

  6. Adaptive blind separation of underdetermined mixtures based on sparse component analysis

    Institute of Scientific and Technical Information of China (English)

    YANG ZuYuan; HE ZhaoShui; XIE ShengLi; FU YuLi

    2008-01-01

    The independence prior is very often used in conventional blind source separation (BSS). Naturally, independent component analysis (ICA) is also frequently employed to perform BSS. However, ICA is difficult to use in some challenging cases, such as underdetermined BSS or blind separation of dependent sources. Recently, sparse component analysis (SCA) has attracted much attention because it is theoretically applicable to underdetermined BSS and sometimes even to blind separation of dependent sources. However, SCA has not yet been developed very fully. Up to now, there are only a few existing algorithms, and they are also not perfect in practice. For example, although Lewicki-Sejnowski's natural gradient for SCA is superior to K-means clustering, it is just an approximation without a rigorous theoretical basis. To overcome these problems, a new natural gradient formula is proposed in this paper. This formula is derived directly from the cost function of SCA through matrix theory. Mathematically, it is more rigorous. In addition, a new and robust adaptive BSS algorithm is developed based on the new natural gradient. Simulations illustrate that this natural gradient formula is more robust and reliable than Lewicki-Sejnowski's gradient.
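
    The sketch below illustrates the underdetermined mixing model and the clustering-based estimation of the mixing matrix that SCA relies on, using the K-means baseline mentioned above rather than the proposed natural-gradient algorithm; source sparsity, dimensions and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Underdetermined mixture: 3 sparse sources, 2 observed channels (x = A s)
n_sources, n_obs, n_samples = 3, 2, 5000
S = rng.laplace(size=(n_sources, n_samples)) * (rng.random((n_sources, n_samples)) < 0.1)
A = rng.standard_normal((n_obs, n_sources))
A /= np.linalg.norm(A, axis=0)           # unit-norm mixing columns
X = A @ S

# Discard near-zero samples; with sparse sources most remaining samples are
# dominated by a single source, so their directions cluster around the
# mixing columns (the basic SCA assumption).
active = np.abs(X).sum(axis=0) > 1e-3
D = X[:, active] / np.linalg.norm(X[:, active], axis=0)
D *= np.sign(D[0])                       # fold antipodal directions together

A_hat = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit(D.T).cluster_centers_.T
print(A_hat / np.linalg.norm(A_hat, axis=0))   # compare to columns of A (up to order/sign)
```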

  7. Cardiac autonomic changes in middle-aged women: identification based on principal component analysis.

    Science.gov (United States)

    Trevizani, Gabriela A; Nasario-Junior, Olivassé; Benchimol-Barbosa, Paulo R; Silva, Lilian P; Nadal, Jurandir

    2016-07-01

    The purpose of this study was to investigate the application of the principal component analysis (PCA) technique to the power spectral density function (PSD) of consecutive normal RR intervals (iRR), aiming at assessing its ability to discriminate healthy women according to age group: a young group (20-25 years old) and a middle-aged group (40-60 years old). Thirty healthy, non-smoking female volunteers were investigated (13 young [mean ± SD (median): 22·8 ± 0·9 years (23·0)] and 17 middle-aged [51·7 ± 5·3 years (50·0)]). The iRR sequence was collected over ten minutes, breathing spontaneously, in the supine position and in the morning, using a heart rate monitor. After selecting an iRR segment (5 min) with the smallest variance, an autoregressive model was used to estimate the PSD. Five principal component coefficients, extracted from the PSD signals, were retained for analysis according to the Mahalanobis distance classifier. A threshold established by logistic regression allowed the separation of the groups with 100% specificity, 83·2% sensitivity and 93·3% total accuracy. The PCA appropriately classified the two groups of women in relation to age (young and middle-aged) based on PSD analysis of consecutive normal RR intervals.
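
    A minimal sketch of the processing chain (PSD of the iRR series, principal-component scores, logistic threshold), assuming synthetic stand-in data and a Welch PSD estimate in place of the autoregressive model used in the study; a Mahalanobis-distance classifier could be substituted for the logistic step.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical stand-in for a 5-min RR-interval (iRR) series resampled at 4 Hz;
# the study used an autoregressive PSD estimate, approximated here by Welch.
def psd_features(rr_series, fs=4.0):
    _, pxx = welch(rr_series, fs=fs, nperseg=256)
    return pxx

# Toy data set: 30 subjects, label 0 = young, 1 = middle-aged
labels = rng.integers(0, 2, 30)
X = np.array([psd_features(rng.standard_normal(1200) * (1.0 - 0.4 * lab))
              for lab in labels])

# Five principal-component scores of the (log) PSDs, then a logistic threshold
scores = PCA(n_components=5).fit_transform(np.log(X + 1e-12))
clf = LogisticRegression().fit(scores, labels)
print(clf.score(scores, labels))
```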

  8. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    Science.gov (United States)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    Genetic algorithms (GA) have a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over empirically selected spectral regions were established in order to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to these 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, with modeling results showing r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the empirically selected bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by PLS, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
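
    The sketch below shows GA-based band selection wrapped around a PLS model, in the spirit of the GA-PLS approach described above; the hyperspectral data, GA settings (population size, generations, mutation rate) and fitness definition are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical hyperspectral data: 120 leaves x 200 bands, SPAD-like target
n_samples, n_bands = 120, 200
X = rng.random((n_samples, n_bands))
y = X[:, [30, 70, 150]] @ np.array([2.0, -1.0, 1.5]) + 0.05 * rng.standard_normal(n_samples)

def fitness(mask):
    """Cross-validated R^2 of a PLS model restricted to the selected bands."""
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=min(5, int(mask.sum())))
    return cross_val_score(pls, X[:, mask], y, cv=5, scoring="r2").mean()

# Minimal binary GA: tournament selection, uniform crossover, bit-flip mutation
pop = rng.random((20, n_bands)) < 0.1
for _ in range(15):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: fit[i]) for _ in range(20)]]
    cross = rng.random((20, n_bands)) < 0.5
    children = np.where(cross, parents, parents[rng.permutation(20)])
    children ^= rng.random((20, n_bands)) < 0.01      # mutation
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected bands:", np.flatnonzero(best))
```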

  9. Fault detection of excavator's hydraulic system based on dynamic principal component analysis

    Institute of Scientific and Technical Information of China (English)

    HE Qing-hua; HE Xiang-yu; ZHU Jian-xin

    2008-01-01

    In order to improve the reliability of the excavator's hydraulic system, a fault detection approach based on dynamic principal component analysis (PCA) was proposed. Dynamic PCA is an extension of PCA which can effectively extract the dynamic relations among process variables. With this approach, normal samples were used as training data to develop a dynamic PCA model in the first step. Secondly, the dynamic PCA model decomposed the testing data into projections onto the principal component subspace (PCS) and the residual subspace (RS). Thirdly, the T2 statistic and Q statistic served as fault detection indices in the PCS and RS, respectively. Several simulated faults were introduced to validate the approach. The results show that the dynamic PCA model developed is able to detect overall faults by using the T2 statistic and Q statistic. By simulation analysis, the proposed approach achieves an accuracy of 95% for 20 test sample sets, which shows that the fault detection approach can be effectively applied to the excavator's hydraulic system.
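
    A minimal sketch of dynamic PCA fault detection as described above: the data matrix is augmented with lagged samples, a PCA model is fitted on normal-operation data, and the T2 and Q (SPE) statistics are computed for test data; the hydraulic variables and the simulated fault are synthetic stand-ins, and control limits are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

def lagged(X, lags=2):
    """Augment each sample with its `lags` previous samples (dynamic PCA)."""
    return np.hstack([X[lags - k : len(X) - k] for k in range(lags + 1)])

# Hypothetical normal-operation data: 6 hydraulic process variables
X_train = rng.standard_normal((1000, 6))
Z_train = lagged(X_train)
mu, sd = Z_train.mean(0), Z_train.std(0)
Z_train = (Z_train - mu) / sd

pca = PCA(n_components=5).fit(Z_train)

def t2_q(Z):
    """T2 statistic in the principal-component subspace, Q (SPE) in the residual subspace."""
    scores = pca.transform(Z)
    t2 = ((scores ** 2) / pca.explained_variance_).sum(axis=1)
    resid = Z - pca.inverse_transform(scores)
    q = (resid ** 2).sum(axis=1)
    return t2, q

# Testing data with a simulated sensor fault (bias on variable 0 after sample 200)
X_test = rng.standard_normal((400, 6))
X_test[200:, 0] += 4.0
t2, q = t2_q((lagged(X_test) - mu) / sd)
print(t2[190:210].round(1), q[190:210].round(1))
```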

  10. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    Science.gov (United States)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping, and there is a need to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smoothness-constrained NMF (CNMF) algorithms for feature component extraction and identification in terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of the NMF and CNMF methods, based on sparseness, independence and smoothness constraints on the resulting nonnegative matrix factors, are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments, including a binary system and a ternary system, simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust to random initialization than NMF. The investigated method is promising for resolving THz data and thus contributes to the identification of unknown mixtures.
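
    The sketch below unmixes a simulated binary THz spectral system with plain NMF (V ≈ WH); it does not include the smoothness constraint of CNMF, and the Gaussian-shaped component spectra and mixing fractions are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)

# Hypothetical binary mixture: two Gaussian-shaped THz absorption features
freq = np.linspace(0.2, 2.5, 300)                     # frequency axis [THz]
comp = np.vstack([np.exp(-(freq - 0.9) ** 2 / 0.01),
                  np.exp(-(freq - 1.6) ** 2 / 0.02)]) # pure-component spectra
conc = rng.dirichlet([1, 1], size=40)                 # mixing fractions per sample
V = conc @ comp + 0.01 * rng.random((40, 300))        # observed spectra (nonnegative)

# Rank-2 NMF: V ~ W H, W = estimated abundances, H = extracted component spectra
model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(V)
H = model.components_

print(W[:3].round(2))    # estimated abundances for the first three samples
```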

  11. Identifying coordinative structure using principal component analysis based on coherence derived from linear systems analysis.

    Science.gov (United States)

    Wang, Xinguang; O'Dwyer, Nicholas; Halaki, Mark; Smith, Richard

    2013-01-01

    Principal component analysis is a powerful and popular technique for capturing redundancy in muscle activity and kinematic patterns. A primary limitation of the correlations or covariances between signals on which this analysis is based is that they do not account for dynamic relations between signals, yet such relations, such as that between neural drive and muscle tension, are widespread in the sensorimotor system. Low correlations may thus be obtained and signals may appear independent despite a dynamic linear relation between them. To address this limitation, linear systems analysis can be used to calculate the matrix of overall coherences between signals, which measures the strength of the relation between signals taking dynamic relations into account. Using ankle, knee, and hip sagittal-plane angles from 6 healthy subjects during walking, the first component of the conventional correlation-based analysis accounted for ~50% of total variance in the data set, while with overall coherence matrices the first component accounted for > 95% of total variance. The results demonstrate that the dimensionality of the coordinative structure can be overestimated using conventional correlation, whereas a more parsimonious structure is identified with overall coherence.
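
    A minimal sketch of the idea, assuming synthetic joint-angle signals and taking the frequency-averaged magnitude-squared coherence as a stand-in for the overall coherence derived from the full linear-systems analysis; the eigenvalues of the resulting matrix play the role of the principal-component variances.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)

# Hypothetical joint-angle signals: the hip signal drives knee/ankle through
# simple linear (low-pass, lagged) dynamics, so ordinary correlation
# underestimates their relation while coherence does not.
fs, n = 100.0, 3000
hip = rng.standard_normal(n)
knee = np.convolve(hip, np.ones(25) / 25, mode="same") + 0.1 * rng.standard_normal(n)
ankle = np.convolve(hip, np.ones(50) / 50, mode="same") + 0.1 * rng.standard_normal(n)
signals = [hip, knee, ankle]

# "Overall coherence" matrix: here taken as the mean magnitude-squared
# coherence over frequency (one plausible summary of the dynamic relation).
m = len(signals)
C = np.eye(m)
for i in range(m):
    for j in range(i + 1, m):
        _, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=256)
        C[i, j] = C[j, i] = cxy.mean()

eigvals = np.linalg.eigvalsh(C)[::-1]
print("variance accounted for by first component:", eigvals[0] / eigvals.sum())
```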

  12. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time-consuming and very expensive. In addition, multiple defects are sometimes misinterpreted as single defects in X-ray images. Therefore, to address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost-effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that Principal Component Analysis (PCA) can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection in comparison with the X-ray technique.
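
    The sketch below mirrors the described chain on synthetic A-scans: wavelet sub-band energies as features, PCA for feature selection, and a small neural-network classifier; the wavelet choice, decomposition level and the simulated defect signature are illustrative assumptions (PyWavelets and scikit-learn are assumed available).

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

def subband_energies(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of an ultrasonic A-scan."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical A-scans: defective tiles carry an extra mid-frequency echo
def a_scan(defective):
    t = np.arange(1024)
    s = np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 400) + 0.1 * rng.standard_normal(1024)
    if defective:
        s += 0.5 * np.sin(2 * np.pi * 0.15 * t) * np.exp(-((t - 500) / 60) ** 2)
    return s

labels = rng.integers(0, 2, 100)
features = np.array([subband_energies(a_scan(d)) for d in labels])

X = PCA(n_components=3).fit_transform(features)       # feature selection via PCA
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```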

  13. A Robust MEMS Based Multi-Component Sensor for 3D Borehole Seismic Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Paulsson Geophysical Services

    2008-03-31

    The objective of this project was to develop, prototype and test a robust multi-component sensor that combines both fiber-optic and MEMS technology for use in a borehole seismic array. The use of such FOMEMS-based sensors allows a dramatic increase in the number of sensors that can be deployed simultaneously in a borehole seismic array. Therefore, denser sampling of the seismic wave field can be afforded, which in turn allows us to efficiently and adequately sample P-waves as well as S-waves for high-resolution imaging purposes. Design, packaging and integration of the multi-component sensors and deployment system will target a maximum operating temperature of 350-400 F and a maximum pressure of 15000-25000 psi, thus allowing operation under conditions encountered in deep gas reservoirs. This project aimed at using existing pieces of deployment technology as well as MEMS and fiber-optic technology. A sensor design and analysis study has been carried out, and a laboratory prototype of an interrogator for a robust borehole seismic array system has been assembled and validated.

  14. Siloxane based Organic-Inorganic Hybrid Polymers and their Applications for Nanostructured Optical/Photonic Components

    Directory of Open Access Journals (Sweden)

    Rahmat Hidayat

    2014-11-01

    Full Text Available We have studied the preparation of organic-inorganic hybrid polymer precursors by the sol-gel technique and their use in nanostructured optical components for photonic applications. The gel polymer precursors were prepared from siloxane modified by polymerizable acrylate groups, which can be processed further by photopolymerization. Molecular structure characterization by means of FTIR measurements indicates the conversion of C=C bonds into C-C bonds after photopolymerization. This bond conversion produces high cross-linking between the organic and inorganic moieties, resulting in a thermally stable and chemically resistant thin polymer layer, which provides the unique advantages of this material for particular optical/photonic applications. By employing a laser interference technique, gratings with periodicities between 400 and 1000 nm have been successfully fabricated. The application of these sub-micron-period grating structures as active elements in an optically pumped polymer laser system and in a Surface Plasmon Resonance (SPR)-based measurement system has also been explored. The experimental results therefore also show the potential of this hybrid polymer as a building material for micro/nano-photonic components.

  15. Novel M-component based biomarkers in Waldenström's macroglobulinemia.

    Science.gov (United States)

    Leleu, Xavier; Koulieris, Efstathios; Maltezas, Dimitrios; Itzykson, Raphael; Xie, Wanling; Manier, Salomon; Dulery, Remy; Boyle, Eilleen; Gauthier, Jordan; Poulain, Stéphanie; Tatiana, Tzenou; Panayiotidis, Panayiotis; Bradwell, Arthur R; Harding, Stephen; Leblond, Veronique; Kyrtsonis, Marie-Christine; Ghobrial, Irene M

    2011-02-01

    Waldenström's macroglobulinemia (WM) is an indolent B-cell lymphoma of the lymphoplasmacytic type accompanied by a serum IgM component. However, conventional IgM quantification lacks sensitivity, does not precisely reflect the tumor burden of WM and, although it is the main marker for monitoring response to treatment, may not be accurate. New serum M-component based biomarkers have been developed for routine practice in recent years, such as the Freelite® test and, more recently, the Hevylite® test. Studies have shown that Freelite is a prognostic marker for time to treatment in WM and helps in monitoring disease response or progression. Hevylite measures IgM-kappa and IgM-lambda separately and might provide a true quantitative measurement of the IgM M-spike. Although current data are preliminary, Hevylite® might replace the current technique for measuring the IgM M-spike in the years to come. We summarize herein the studies conducted to delineate the role of these tests in WM.

  16. Reactor plasma facing component designs based on liquid metal concepts supported in porous systems

    Science.gov (United States)

    Tabarés, F. L.; Oyarzabal, E.; Martin-Rojo, A. B.; Tafalla, D.; de Castro, A.; Soleto, A.

    2017-01-01

    The use of liquid metals (LMs) as plasma facing components in fusion devices was proposed as early as 1970 for a field-reversed concept and for inertial fusion reactors. The idea was extensively developed during the APEX Project at the turn of the century, and it is at present the subject of the biennial International Symposium on Lithium Applications (ISLA), whose fourth meeting took place in Granada, Spain, at the end of September 2015. While flowing liquid-metal concepts were especially addressed in US research projects, the idea of embedding the metal in a capillary porous system (CPS) was put forward by Russian teams in the 1990s, thus opening the possibility of static concepts. Since then, many ideas and accompanying experimental tests in fusion devices and laboratories have been produced, involving a large fraction of the countries within the international fusion community. Within the EUROfusion Roadmap, these activities are encompassed in the working programmes of the plasma facing components (PFC) and divertor tokamak test (DTT) packages. In this paper, the state of the art in concepts based on the CPS set-up for a fusion reactor divertor target, aimed at preventing ejection of the liquid metal by the electromagnetic (EM) forces generated under plasma operation, is reviewed, and the required R+D activities on the topic, including ongoing work at CIEMAT specifically oriented to filling the remaining gaps, are stressed.

  17. An information system for the building industries: A communication approach based on industrial components

    Directory of Open Access Journals (Sweden)

    A F Cutting-Decelle

    2006-01-01

    Full Text Available This paper presents the SYDOX/MATCOMP/Xi project, funded by the French Ministry of Industry. The goal of the project was to provide construction professionals with an on-line aid for component specification and selection at different levels of the construction life cycle. This two-year project started in 1997 and involved several partners. This paper describes the main features of the information system: the databases, query and communication systems. SYDOX (SYstème de DOnnées compleXes) is aimed at defining and demonstrating a prototype to access information about MATerials and COMPonents used in construction, implemented on a WWW server. Though the objective is general, the work focused on a restricted sub-section of the construction domain. We describe the domain and the scope of the project, the starting point and the lessons learnt from the development of the prototype. We also propose some important ideas on which this research is based.

  18. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influence, and the cluster analysis based on the six orbital elements reduces the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM. DCPC-based OM also, to some degree, reduces the influence of the human factors present in WSGA-based HM.
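
    A simplified sketch of the two-stage idea, assuming toy objective functions for an Earth-observation orbit: candidate orbits are generated randomly, their objectives are projected onto a principal component, a first cluster analysis selects a promising group, and a second cluster analysis of the six orbital elements shrinks the search region; treating the lowest principal-component score as "best" is a simplification of the published method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

# Hypothetical objectives (to be minimized): revisit-time and ground-resolution
# proxies, written as simple functions of the orbital elements for illustration.
def objectives(elems):
    a, e, i, *_ = elems.T                       # six classical orbital elements
    revisit = (a - 6878.0) / 500.0 + 2.0 * e
    resolution = (a - 6878.0) / 300.0 + np.abs(i - np.radians(98.0))
    return np.column_stack([revisit, resolution])

# Step 1: random candidate orbits, objective evaluation, PCA of the objectives
lo = np.array([6878.0, 0.0, np.radians(90.0), 0.0, 0.0, 0.0])
hi = np.array([7378.0, 0.05, np.radians(105.0), 2 * np.pi, 2 * np.pi, 2 * np.pi])
cand = rng.uniform(lo, hi, size=(2000, 6))
scores = PCA(n_components=1).fit_transform(objectives(cand))

# First cluster analysis: group candidates by principal-component score and keep
# the lowest-scoring cluster (sign handling is simplified in this toy example).
k1 = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
best_cluster = cand[k1.labels_ == np.argmin(k1.cluster_centers_.ravel())]

# Second cluster analysis: cluster the six orbital elements of that subset to
# shrink the search box for the next round of random generation.
k2 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(best_cluster)
print("refined search centres (a[km], e, i[rad], ...):")
print(k2.cluster_centers_.round(3))
```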

  19. GIS COMPONENT BASED 3D LANDSLIDE HAZARD ASSESSMENT SYSTEM: 3DSLOPEGIS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, based on a new Geographic Information System (GIS) grid-based three-dimensional (3D) deterministic model and taking the slope unit as the study object, the landslide hazard is mapped using the 3D safety factor as the index. Compared with the one-dimensional (1D) infinite-slope model, which is now widely used for deterministic-model-based landslide hazard assessment in GIS, the GIS grid-based 3D model is more acceptable and more adaptable to three-dimensional landslides. Assuming the initial slip surface to be the lower part of an ellipsoid, the 3D critical slip surface in the 3D slope stability analysis is obtained by minimizing the 3D safety factor using Monte Carlo random simulation. Using a hydraulic model tool for watershed analysis in GIS, an automatic process has been developed for identifying the slope unit from digital elevation model (DEM) data. Compared with the grid-based landslide hazard mapping method, the slope unit possesses clear topographical meaning, so its results are more credible. All the calculations are implemented by a computational program, 3DSlopeGIS, in which a GIS component fulfils the GIS spatial analysis function, and all the data for the 3D slope safety factor calculation are in the form of GIS data (vector and grid layers). Owing to these merits of the GIS-based 3D landslide hazard mapping method, the complex algorithms and iteration procedures of the 3D problem can also be implemented effectively.
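
    A minimal sketch of the Monte Carlo minimization of the 3D safety factor over trial ellipsoidal slip surfaces; the safety_factor function is a hypothetical smooth stand-in for the GIS grid-based limit-equilibrium calculation performed by 3DSlopeGIS.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical safety-factor function for one slope unit: in 3DSlopeGIS this is
# the GIS grid-based 3D limit-equilibrium calculation; here a smooth stand-in
# with a known minimum is used purely to illustrate the Monte Carlo search.
def safety_factor(cx, cy, depth, radius):
    return (1.2 + 0.5 * ((cx - 40.0) / 50.0) ** 2
                + 0.5 * ((cy - 60.0) / 50.0) ** 2
                + 0.3 * ((depth - 8.0) / 10.0) ** 2
                + 0.2 * ((radius - 25.0) / 30.0) ** 2)

# Monte Carlo random simulation over the ellipsoid parameters of the trial slip
# surface (centre position, depth and radius), keeping the minimum safety factor.
best_fs, best_params = np.inf, None
for _ in range(20000):
    params = rng.uniform([0.0, 0.0, 2.0, 5.0], [100.0, 100.0, 20.0, 60.0])
    fs = safety_factor(*params)
    if fs < best_fs:
        best_fs, best_params = fs, params

print("minimum 3D safety factor:", round(best_fs, 3), "at", best_params.round(1))
```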

  20. Estimating age from the pubic symphysis: A new component-based system.

    Science.gov (United States)

    Dudzik, Beatrix; Langley, Natalie R

    2015-12-01

    The os pubis is one of the most widely used areas of the skeleton for age estimation. Current pubic symphyseal aging methods for adults combine the morphology associated with the developmental changes that occur into the mid-30s with the degenerative changes that span the latter portion of the age spectrum. The most popular methods are phase-based; however, the definitions currently used to estimate age intervals may not be adequately defined and/or accurately understood by burgeoning researchers and seasoned practitioners alike. This study identifies patterns of growth and maturation in the pubic symphysis to derive more precise age estimates for individuals under 40 years of age. Emphasis is placed on young adults to provide more informative descriptions of epiphyseal changes associated with the final phases of skeletal maturation before degeneration commences. This study investigated macroscopic changes in forensically relevant modern U.S. samples of known age, sex, and ancestry from the Maricopa County Forensic Science Center in Phoenix, Arizona as well as donated individuals from the William M. Bass Forensic and Donated Collections at the University of Tennessee, Knoxville (n=237). Age-related traits at locations with ontogenetic and biomechanical relevance were broken into components and scored. The components included the pubic tubercle, the superior apex of the face, the ventral and dorsal demifaces, and the ventral and dorsal symphyseal margins. Transition analysis was applied to elucidate the transition ages between the morphological states of each component. The categorical scores and transition analysis ages were subjected to multinomial logistic regression and decision tree analysis to derive accurate age interval estimates. Results of these analyses were used to construct a decision tree-style flow chart for practitioner use. High inter-rater agreement of the individual component traits (linear weighted kappa values ≥0.665 for all traits in the