SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages
Russel, E. [Lawrence Livermore National Lab., CA (United States)]
1997-11-01
This report contains viewgraphs on the software quality assurance (SQA) of finite element method codes used for analyses of pit storage and transport packages. The methodology applies ISO 9000-3, the guideline for applying ISO 9001 to the development, supply, and maintenance of software, to establish well-defined software engineering processes that consistently maintain a high-quality management approach.
User's Manual for FEM-BEM Method. 1.0
Butler, Theresa; Deshpande, M. D. (Technical Monitor)
2002-01-01
This user's manual describes a FORTRAN code for electromagnetic analysis of arbitrarily shaped material cylinders using a hybrid method that combines the finite element method (FEM) and the boundary element method (BEM). In this method, the material cylinder is enclosed by a fictitious boundary; Maxwell's equations are solved by FEM inside the boundary and by BEM outside it. As examples, the electromagnetic scattering from several arbitrarily shaped material cylinders is computed with the code.
Integrity evaluation for stud female threads on pressure vessel according to ASME code using FEM
Kim, Moon Young; Chung, Nam Yong
2003-01-01
The extension of design life among power plants is increasingly becoming a worldwide trend. The Kori unit 1 plant in Korea is operating in its second cycle. Its steam generator, one of the most important components in a nuclear power plant, has two man-ways for tube inspection. In particular, the stud bolts for the man-way covers have been damaged by repeated disassembly and reassembly and by degradation of the bolt material over long-term operation, and their integrity must be evaluated against ASME code criteria. The integrity evaluation criteria established by the manufacturer cannot be applied directly to the stud bolts of nuclear pressure vessels, because those bolts are governed by the yield-stress limits of the ASME code. Evaluation criteria can instead be applied through FEM analysis of the damaged female threads, and the safety of the helical-coil repair method, used in accordance with Code Case N-496-1, can be assessed. The analysis showed that, for female threads damaged by more than 10%, the stress intensities obtained from FEM analysis satisfy both the manufacturer's integrity criterion and the 2/3-yield-strength criterion of the ASME code. It was also confirmed that the helical-coil repair method would be safe.
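The acceptance check described in the abstract reduces to comparing a computed stress intensity against an allowable stress of two-thirds of the yield strength. A minimal sketch of that comparison follows; the numeric values are illustrative placeholders, not data from the paper.

```python
# Hedged sketch of an ASME-style 2/3-yield-strength acceptance check.
# Yield strength and stress values below are hypothetical examples.

def allowable_stress(yield_strength_mpa: float) -> float:
    """Membrane stress limit taken as 2/3 of the yield strength."""
    return 2.0 / 3.0 * yield_strength_mpa

def is_acceptable(stress_intensity_mpa: float, yield_strength_mpa: float) -> bool:
    """True if the computed stress intensity is within the allowable."""
    return stress_intensity_mpa <= allowable_stress(yield_strength_mpa)

# Example: a hypothetical bolt steel with 600 MPa yield strength
limit = allowable_stress(600.0)     # 400 MPa
print(is_acceptable(350.0, 600.0))  # within the limit -> True
print(is_acceptable(450.0, 600.0))  # exceeds the limit -> False
```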
Sasaki, Y [Kyushu University, Fukuoka (Japan). Faculty of Engineering]
1996-10-01
Analytical methods that account for 3-D resistivity distributions, in particular the finite element method (FEM), were studied to improve the reliability of electromagnetic exploration. Integral-equation, finite-difference, FEM, and hybrid methods are generally used for computational 3-D modeling. FEM is widely used in many fields because it easily handles complicated shapes and boundaries. In electromagnetic modeling, however, the assumption of a continuous electric field poses an important problem: the normal component of current density must be continuous at the boundary between media of different conductivities, which means that the normal component of the electric field is discontinuous there. In standard FEM this implies that current channeling is not properly represented, resulting in poor accuracy, and unless this problem is solved, FEM modeling is not practical. One promising solution is to incorporate the interior boundary conditions explicitly into the element equations. 4 refs., 11 figs.
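The interface condition described above can be illustrated numerically: since the normal current density J_n = sigma * E_n is continuous across a conductivity contrast, the normal electric field must jump by the conductivity ratio. The values below are illustrative.

```python
# Sketch of the interface condition: continuity of normal current density
# J_n = sigma * E_n forces a jump in the normal electric field E_n.
# All values are illustrative.

sigma1, sigma2 = 0.01, 0.1   # S/m, conductivities of the two media
Jn = 2.0e-6                  # A/m^2, continuous normal current density

En1 = Jn / sigma1            # normal E-field just inside medium 1
En2 = Jn / sigma2            # normal E-field just inside medium 2

# J_n is the same on both sides by construction
assert abs(sigma1 * En1 - sigma2 * En2) < 1e-12

print(En1 / En2)             # jump ratio equals sigma2 / sigma1 = 10
```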
Coupled FEM-DBEM method to assess crack growth in magnet system of Wendelstein 7-X
R. Citarella
2013-10-01
The fivefold symmetric modular stellarator Wendelstein 7-X (W7-X) is currently under construction in Greifswald, Germany. The superconducting coils of the magnet system are bolted onto a central support ring and interconnected with five so-called lateral support elements (LSEs) per half module. After welding of the LSE hollow boxes to the coil cases, cracks were found in the vicinity of the welds that could potentially limit the allowed number N of electromagnetic (EM) load cycles of the machine. In response to the appearance of the first cracks during assembly, the stress intensity factors (SIFs) were calculated, and the corresponding growth rates of theoretical semi-circular cracks of measured sizes, in potentially critical positions and orientations, were predicted using Paris' law, whose parameters were calibrated in fatigue tests at cryogenic temperature. In this paper the Dual Boundary Element Method (DBEM) is applied in a coupled FEM-DBEM approach to analyze the propagation of multiple cracks with different shapes. For this purpose, the crack path is assessed with the minimum strain energy density criterion and SIFs are calculated by the J-integral approach. The Finite Element Method (FEM) is adopted to model the overall component, using the commercial codes Ansys or Abaqus, whereas the submodel analysis of the volume surrounding the cracked area is performed either by FEM ("FEM-FEM approach") or by DBEM ("FEM-DBEM approach"). The "FEM-FEM approach" considers a FEM submodel extracted from the FEM global model; the latter provides the boundary conditions for the submodel. This approach is subject to some restrictions in the crack propagation phase, whereas with the "FEM-DBEM approach" the crack propagation simulation is straightforward. In that case the submodel is created in a DBEM environment with boundary conditions provided by the global FEM analysis; the crack is then introduced and a crack propagation analysis is performed.
FEM BASED PARAMETRIC DESIGN STUDY OF TIRE PROFILE USING DEDICATED CAD MODEL AND TRANSLATION CODE
Nikola Korunović
2014-12-01
In this paper a finite element method (FEM) based parametric design study of the tire profile shape and belt width is presented. One of the main obstacles such studies face is how to change the finite element mesh after the tire geometry is modified. To overcome this problem, a new approach is proposed: the finite element mesh is updated automatically, following the change of geometric design parameters on a dedicated CAD model. The mesh update is facilitated by an originally developed mapping and translation code. In this way, the performance of a large number of geometrically different tire design variations may be analyzed in a very short time. Although a pilot study, the work presented here has also led to improvement of the existing tire design.
Development of seismic analysis model for HTGR core on commercial FEM code
Tsuji, Nobumasa; Ohashi, Kazutaka
2015-01-01
The aftermath of the Great East Japan Earthquake prompted a severe revision of the design-basis earthquake intensity. In the aseismic design of a block-type HTGR, securing the structural integrity of the core blocks and other graphite structures becomes more important, and it is necessary to predict the motion of core blocks as they collide with adjacent blocks. Several seismic analysis codes were developed in the 1970s, but they are special-purpose codes with poor interoperability with other structural analysis codes. We developed a vertical two-dimensional analytical model on a multi-purpose commercial FEM code that accounts, via contact elements, for multiple impacts and friction between block interfaces and for rocking motion on contact with the dowel pins of the HTGR core. The model is verified by comparison with the experimental results of a 12-column vertical-slice vibration test. (author)
Numerical Modelling of the Special Light Source with Novel R-FEM Method
Pavel Fiala
2008-01-01
This paper presents new directions in the modelling of lighting systems and an overview of methods for modelling such systems. The novel R-FEM method is described, which is a combination of the radiosity method and the finite element method (FEM). The paper contains modelling results and their verification by experimental measurements and by Matlab simulation of the R-FEM method.
Generalized multiscale finite element methods (GMsFEM)
Efendiev, Yalchin R.; Galvis, Juan; Hou, Thomas Yizhao
2013-01-01
In this paper, we propose a general approach called Generalized Multiscale Finite Element Method (GMsFEM) for performing multiscale simulations for problems without scale separation over a complex input space. As in multiscale finite element methods (MsFEMs), the main idea of the proposed approach is to construct a small dimensional local solution space that can be used to generate an efficient and accurate approximation to the multiscale solution with a potentially high dimensional input parameter space. In the proposed approach, we present a general procedure to construct the offline space that is used for a systematic enrichment of the coarse solution space in the online stage. The enrichment in the online stage is performed based on a spectral decomposition of the offline space. In the online stage, for any input parameter, a multiscale space is constructed to solve the global problem on a coarse grid. The online space is constructed via a spectral decomposition of the offline space and by choosing the eigenvectors corresponding to the largest eigenvalues. The computational saving is due to the fact that the construction of the online multiscale space for any input parameter is fast and this space can be re-used for solving the forward problem with any forcing and boundary condition. Compared with the other approaches where global snapshots are used, the local approach that we present in this paper allows us to eliminate unnecessary degrees of freedom on a coarse-grid level. We present various examples in the paper and some numerical results to demonstrate the effectiveness of our method. © 2013 Elsevier Inc.
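The offline/online idea described above can be sketched numerically: from an "offline" snapshot matrix, form a spectral decomposition and keep the eigenvectors with the largest eigenvalues as a reduced "online" space. This is a generic POD-style illustration of the spectral truncation step, not the GMsFEM local spectral construction itself (which uses local snapshot spaces on coarse regions).

```python
# Generic sketch of spectral truncation of an offline snapshot space:
# keep the eigenvectors associated with the largest eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((100, 20))   # offline snapshots (dofs x samples)

# Spectral decomposition of the snapshot correlation matrix
corr = snapshots.T @ snapshots
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order

k = 5                                        # dimension of the online space
basis = snapshots @ eigvecs[:, -k:]          # modes for the k largest eigenvalues
basis, _ = np.linalg.qr(basis)               # orthonormalize the reduced basis

print(basis.shape)                           # (100, 5)
```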
Nicolae APOSTOLESCU
2010-12-01
The main objective of this paper is to describe a code for calculating an equivalent system of concentrated loads for a FEM analysis. The tables from the Aerodynamic Department contain the pressure field for a whole bearing surface, and integrated quantities both for the whole surface and for its fixed and mobile parts. In a FEM analysis, the external loads are usually introduced as concentrated loads equivalent to the distributed pressure field. These concentrated forces can also be used in static tests. Commercial codes provide solutions for this problem, but what we intend to develop is a code adapted to the user's specific needs.
Application of FEM analytical method for hydrogen migration behaviour in Zirconium alloys
Arioka, K; Ohta, H [Takasago Research and Development Center, Mitsubishi Heavy Industries Ltd, Hyogo-ken (Japan)]
1997-02-01
It is well recognized that the hydriding behaviour of zirconium alloys is a significant safety issue. It is also well known that the diffusion of hydrogen in zirconium alloys is driven not only by the concentration gradient but also by the temperature gradient. In actual components, especially heat-transfer tubes such as fuel rods, some degree of temperature gradient cannot be avoided. It is therefore very useful to develop a computer code that can analyze hydrogen diffusion and precipitation behaviour under a temperature gradient as a function of the fuel-rod structure. For this purpose, we have developed a computer code for hydrogen migration behaviour using FEM analytical methods. The following items are presented and discussed: the analytical method and conditions; the correlation between computed and test results; and application to design studies. (author). 8 refs, 4 figs, 2 tabs.
Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos
Ragusa, J.C.
2003-01-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism with OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS, and the opportunity of mixing parallelism paradigms is discussed. The isotopic depletion module was parallelized using domain decomposition and MPI; an attempt at using OpenMP there was unsuccessful and is explained. This paper is organized as follows: the first section recalls the different types of parallelism; the mixed dual flux solver and its parallelization are then presented; the third section describes the isotopic depletion solver and its parallelization; and we conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP; an efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI, reaching efficiencies greater than 90%. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the MINOS solver is only the first step towards fully exploiting the SMP cluster's potential with mixed-mode parallelism, which can be achieved by combining message passing between cluster nodes with OpenMP implicit parallelism within a node.
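The efficiency figures quoted above follow the standard definition: efficiency = speedup / thread count, with speedup = T1 / Tp. A minimal sketch, using hypothetical timings consistent with the reported 80% on 2 threads:

```python
# Parallel efficiency: speedup (serial time / parallel time) divided by
# the number of threads. Timings below are hypothetical illustrations.

def efficiency(t_serial: float, t_parallel: float, n_threads: int) -> float:
    """Fraction of ideal linear speedup actually achieved."""
    return (t_serial / t_parallel) / n_threads

# 100 s serial, 62.5 s on 2 threads -> speedup 1.6 -> efficiency 0.8
print(efficiency(100.0, 62.5, 2))   # 0.8
```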
Fracture Capabilities in Grizzly with the extended Finite Element Method (X-FEM)
Dolbow, John [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Ziyu [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spencer, Benjamin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
Efforts are underway to develop fracture mechanics capabilities in the Grizzly code to enable it to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). A capability was previously developed to calculate three-dimensional interaction integrals to extract mixed-mode stress intensity factors. That capability requires a finite element mesh that conforms to the crack geometry. The eXtended Finite Element Method (X-FEM) provides a means to represent a crack geometry without explicitly fitting the finite element mesh to it. This is effected by enriching the element kinematics to represent jump discontinuities at arbitrary locations inside an element, and by incorporating asymptotic near-tip fields to better capture crack singularities. In this work, the use of only the discontinuous enrichment functions was examined to see how accurately stress intensity factors could still be calculated. This report documents the following work to enhance Grizzly's engineering fracture capabilities by introducing arbitrary jump discontinuities for prescribed crack geometries. X-FEM mesh cutting in 3D: to enhance the kinematics of elements intersected by arbitrary crack geometries, a mesh-cutting algorithm was implemented in Grizzly; the algorithm introduces new virtual nodes, creates partial elements, and builds a new mesh connectivity. Interaction integral modifications: the existing code for evaluating the interaction integral in Grizzly assumed a mesh fitted to the crack geometry; modifications were made to allow for a crack front that passes arbitrarily through the mesh. Benchmarking for 3D fracture: the new capabilities were benchmarked against mixed-mode three-dimensional fracture problems with known analytical solutions.
Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe, E-mail: yaqi.wang@inl.gov, E-mail: cristian.rabiti@inl.gov, E-mail: giuseppe.palmiotti@inl.gov [Idaho National Laboratory, Idaho Falls, ID (United States)
2011-07-01
This paper proposes a new set of Krylov solvers, CG and GMRes, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (finite element method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented to show that they can be a good supplement for solving the PN-HFEM system. (author)
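As a minimal illustration of the Krylov-solver idea, the sketch below solves a model symmetric positive definite system with SciPy's CG. The tridiagonal matrix is a generic stand-in; the paper's PN-HFEM matrices and RB-based preconditioner (which would be passed through the `M` argument) are not reproduced here.

```python
# Generic CG sketch on a model SPD system (1-D Laplacian stand-in).
# A problem-specific preconditioner would be supplied via the M= argument.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = spla.cg(A, b, maxiter=5000)   # info == 0 indicates convergence
print(info == 0, np.allclose(A @ x, b, atol=1e-3))
```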
Zaidi, N. A.; Rosli, Muhamad Farizuan; Effendi, M. S. M.; Abdullah, Mohamad Hariri
2017-09-01
For injection-molding applications of polyethylene terephthalate (PET) plastic, the strength, durability, and stiffness of a jointing system for wood furniture were analyzed using the finite element method (FEM). The FEM was used to analyze PET jointing systems with oak and pine as the wood-based furniture materials. Different PET joint pattern designs give different values of furniture strength. The results show that the wood specimen with grooves and an eclipse-pattern PET joint gives a lower global estimated error of 28.90%, compared with 63.21% for the rectangular, non-grooved wood specimen.
Superczyńska M.
2016-09-01
The paper presents results of numerical calculations of a diaphragm wall model constructed in the Poznań clay formation. Two selected FEM codes were applied, Plaxis and Abaqus. A geological description of the Poznań clay formation in Poland and the geotechnical conditions at a construction site in the Warsaw city area are presented. The constitutive models of clay implemented in both Plaxis and Abaqus are discussed. The parameters of the Poznań clay constitutive models were determined from the authors' experimental tests. The results of the numerical analyses were compared, taking into account the measured values of horizontal displacements.
Numerical calculation of acoustic radiation from band-vibrating structures via FEM/FAQP method
GAO Honglin
2017-08-01
The finite element method (FEM) combined with the frequency averaged quadratic pressure (FAQP) method is used to calculate the acoustic radiation of structures excited over a frequency band. The surface particle velocity of stiffened cylindrical shells under frequency-band excitation is calculated using finite element software; the normal vibration velocity is obtained from the surface particle velocity to calculate the frequency-averaged energy sources (frequency-averaged intensity, pressure, and velocity), and the FAQP method is used to calculate the average sound pressure level within the bandwidth. The average sound pressure levels are then compared with results obtained over the bandwidth with finite element and boundary element software. The results show that FEM combined with FAQP is better suited to high frequencies and can be used to calculate the average sound pressure level in the 1/3-octave band with good stability, presenting an alternative to frequency-by-frequency calculation followed by averaging. The FEM/FAQP method can be used to predict acoustic radiation while taking into account the randomness of vibration at medium and high frequencies.
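A band-averaged sound pressure level of the kind discussed above is an energy average: the squared pressures are averaged over the band, then converted to decibels. The sketch below uses the generic formula with illustrative per-frequency values, not the FAQP implementation itself.

```python
# Energy-average of squared RMS pressures over a band, converted to dB.
# Frequencies and pressures are illustrative values, not data from the paper.
import numpy as np

p_ref = 20e-6                                          # Pa, reference pressure in air
freqs = np.array([891.0, 944.0, 1000.0, 1059.0, 1122.0])  # Hz, inside a 1 kHz 1/3-octave band
p_rms = np.array([0.11, 0.09, 0.10, 0.12, 0.08])          # Pa, per-frequency RMS pressures

mean_square = np.mean(p_rms ** 2)                      # band-averaged squared pressure
spl_band = 10.0 * np.log10(mean_square / p_ref ** 2)   # band-averaged SPL in dB
print(round(spl_band, 1))
```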
Gu Fangyu; Zeng Xiao
1990-01-01
It is generally considered impossible to detect such flaws with ordinary mechanical measuring methods. In this paper, it is found that the stress and strain distortions of a pressure vessel with a deep 2-D linear crack exhibit the 'cat effect' on the surface of the structure, and that the location and size of the crack can be determined by strain measurement and FEM according to the 'cat effect' of the strain distortion.
FEM-2D - Input description and performance
Schmidt, F.A.R.
1975-03-01
FEM-2D solves the 2-D diffusion equation by the finite element method. This version of the code was written for x-y geometry and triangular elements with first- and second-order flux approximations, and has a solution routine based on a modified Cholesky procedure. FEM-2D is fully integrated into the modular system RSYST. However, we have developed a simulation program, RSIMK, which simulates some of the functions of RSYST and allows FEM-2D to be run independently. (orig.) [de]
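The Cholesky-based solve mentioned above can be sketched on a small model problem: a 1-D linear-element stiffness matrix for a diffusion equation, factored and solved with SciPy. This is a generic illustration of a Cholesky FEM solve, not the RSYST/FEM-2D code.

```python
# Cholesky solve of a 1-D FEM diffusion (Poisson) system: -u'' = 1 on (0,1),
# u(0) = u(1) = 0, linear elements. Nodal values are exact for this problem.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n = 50
h = 1.0 / (n + 1)
# Linear-element stiffness matrix (SPD, tridiagonal)
K = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h
f = np.full(n, h)                       # consistent load vector for f(x) = 1

c, low = cho_factor(K)                  # Cholesky factorization
u = cho_solve((c, low), f)              # back-substitution

x = np.linspace(h, 1.0 - h, n)
print(np.allclose(u, 0.5 * x * (1.0 - x)))   # exact solution is x(1-x)/2
```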
Rao, Chengping; Zhang, Youlin; Wan, Decheng
2017-12-01
Fluid-structure interaction (FSI) caused by fluid impacting a flexible structure occurs commonly in naval architecture and ocean engineering, and research on wave-structure interaction is important to ensure the safety of offshore structures. This paper presents the moving particle semi-implicit and finite element coupled method (MPS-FEM) for simulating FSI problems. The moving particle semi-implicit (MPS) method is used for the fluid domain, while the finite element method (FEM) is used for the structure domain. The scheme for coupling MPS and FEM is introduced first. Numerical validation and convergence studies are then performed to verify the accuracy of the solver for solitary-wave generation and FSI problems. Finally, the interaction between a solitary wave and an elastic structure is investigated using the MPS-FEM coupled method.
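The essence of a partitioned coupling scheme like MPS-FEM can be sketched with a toy fixed-point loop: the "fluid" solver produces an interface load from the current wall position, the "structure" solver returns a displacement, and the two are iterated with under-relaxation until the interface converges. Both solvers below are hypothetical one-line stand-ins, purely for illustration.

```python
# Toy partitioned FSI coupling loop with under-relaxation.
# Both "solvers" are hypothetical linear stand-ins.

def fluid_load(displacement: float) -> float:
    # hypothetical: interface load decreases as the wall deflects away
    return 100.0 - 20.0 * displacement

def structure_displacement(load: float) -> float:
    # hypothetical linear structure: u = F / k with stiffness k = 50
    return load / 50.0

u, omega = 0.0, 0.5            # initial interface displacement, relaxation factor
for _ in range(100):
    u_new = structure_displacement(fluid_load(u))
    if abs(u_new - u) < 1e-10:  # interface displacement has converged
        break
    u = u + omega * (u_new - u)

print(round(u, 6))              # fixed point of the coupled toy system
```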
Modelling of WWER fuel rod during LOCA conditions using FEM code ANSYS
Bogatyr, S. M.; Krupkin, A. V.; Kuznetsov, V. I.; Novikov, V. V.; Petrov, O. M.; Shestopalov, A. A.
2013-01-01
The report presents the results of a computer simulation of the IFA-650.6 experiment, the sixth test in the Halden LOCA test series, performed on May 18, 2007 with pre-irradiated WWER-440 fuel with a maximum burnup of 56 MWd/kgU. The thermo-mechanical analysis was performed with the licensed finite element code package ANSYS. The calculation was carried out with both 2D axisymmetric and 3D problem definitions. Analysis of the calculation results shows that the ANSYS code can adequately simulate the thermo-mechanical behavior of cladding under IFA-650.6 test conditions. (authors)
Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling
Fink, P. W.; Wilton, D. R.; Dobbins, J. A.
2002-01-01
In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation, and several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require higher order geometry models, and a number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown, in an EFIE formulation applied to scattering by a PEC sphere, that quadratic-order elements may be insufficient to prevent modeling errors from dominating. In fact, on a PEC sphere with radius r = 0.58 Lambda(sub 0), a quartic-order geometry representation was required to obtain a convergence benefit from quadratic bases compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, the requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned; for many real applications, a good preconditioner is required.
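An ILUT-style preconditioner of the kind mentioned above can be sketched with SciPy's `spilu` (an incomplete LU with drop tolerance and fill control) wrapped as a `LinearOperator` for GMRES. The matrix here is a generic sparse stand-in, not a BEM/FEM system.

```python
# Incomplete-LU (threshold/drop-tolerance) preconditioned GMRES sketch
# on a generic nonsymmetric, diagonally dominant sparse model matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 300
rng = np.random.default_rng(1)
A = sp.diags([-1.0, 4.0, -2.0], [-1, 0, 1], shape=(n, n), format="csc")
b = rng.standard_normal(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # ILUT-style factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # apply M^{-1} via triangular solves

x, info = spla.gmres(A, b, M=M, maxiter=1000)        # info == 0 indicates convergence
print(info == 0, np.allclose(A @ x, b, atol=1e-3))
```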
Numerical modelling of pressure suppression pools with CFD and FEM codes
Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))
2011-06-15
Experiments on a large-break loss-of-coolant accident for a BWR are modeled with computational fluid dynamics (CFD) and finite element calculations. In the CFD calculations, direct-contact condensation in the pressure suppression pool is studied. Heat transfer in the liquid phase is modeled with the Hughes-Duffey correlation, based on the surface renewal model, in which the heat transfer is proportional to the square root of the turbulence kinetic energy. The condensation models are implemented as user-defined functions in the Euler-Euler two-phase model of the Fluent 12.1 CFD code. The rapid collapse of a large steam bubble and the resulting pressure source are studied analytically and numerically. The pressure source obtained from simplified calculations is used to study the structural effects and FSI in a realistic BWR containment. The collapse results in a volume acceleration, which induces pressure loads on the pool walls. For a spherical bubble, the velocity term of the volume acceleration is responsible for the largest pressure load. As the amount of air in the bubble decreases, the peak pressure increases; however, when water compressibility is accounted for, the finite speed of sound becomes a limiting factor. (Author)
FEM simulation of friction testing method based on combined forward rod-backward can extrusion
Nakamura, T; Bay, Niels; Zhang, Z. L
1997-01-01
A new friction testing method based on combined forward rod-backward can extrusion is proposed in order to evaluate the frictional characteristics of lubricants in forging processes. By this method the friction coefficient mu and the friction factor m can be estimated along the container wall and the conical... curves are obtained by rigid-plastic FEM simulations of a combined forward rod-backward can extrusion process for reductions in area R_b = 25, 50 and 70 percent in the backward can extrusion. It is confirmed that the friction factor m_p on the punch nose in the backward can extrusion has almost... in a mechanical press with aluminium alloy A6061 as the workpiece material and different kinds of lubricants. They confirm the analysis, resulting in reasonable values for the friction coefficient and the friction factor.
Stress and displacement analysis of a modern design lathe body by the finite element method (FEM)
R. Staniek
2012-01-01
The finite element method (FEM) was used in this study for the analysis of the strain and stress of a turning machine body. The final design decisions were made on the basis of stress and displacement field analyses of various design versions of the structure of the considered machine tool. The results presented in this paper will be helpful for practical static and dynamic strength evaluation as well as for the appropriate design of machine tools using the FEM.
Analysis of submerged implant towards mastication load using 3D finite element method (FEM)
Widia Hafsyah Sumarlina Ritonga
2016-11-01
Introduction: Surgical implant placement comprises one stage for the nonsubmerged implant design and two stages for the submerged design. The submerged implant design is often used in the Faculty of Dentistry, Universitas Padjadjaran, because it is safer for achieving osseointegration. This study was conducted to evaluate dental implant failure based on the location and magnitude of internal tensions in the implant and supporting tissues under mastication load, using the 3D finite element method (FEM). Methods: This study used a CBCT radiograph of a patient's mandible and a micro-CT scan of one submerged implant. The radiographic images were converted into a computerized 3D finite element model; material properties and supports were assigned, and an occlusion load of 87 N and a frictional load of 29 N were simulated. Results: The maximum tension on the implant was located exactly at the contact area between the implant and the alveolar crest. The maximum tension value was 193.31 MPa, on the implant body; this value is below the limit of the titanium alloy's ability to withstand fracture (860 MPa). Conclusion: The location of the maximum tension on the implant body was exactly at the contact area between the implant-abutment and the alveolar crest. Under mastication load, this implant design showed no failure.
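The acceptance check in the abstract is a direct comparison of the computed maximum tension (193.31 MPa) against the quoted titanium-alloy fracture limit (860 MPa); the resulting safety factor follows immediately:

```python
# Comparing the abstract's computed maximum tension against the quoted
# titanium-alloy fracture limit, and the implied safety factor.
max_stress_mpa = 193.31   # maximum tension on the implant body (from the abstract)
limit_mpa = 860.0         # titanium-alloy fracture limit (from the abstract)

safety_factor = limit_mpa / max_stress_mpa
print(max_stress_mpa < limit_mpa, round(safety_factor, 2))
```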
Uchibori, Akihiro; Ohshima, Hiroyuki
2008-01-01
A numerical analysis method for melting/solidification phenomena has been developed to evaluate the feasibility of several candidate techniques in the nuclear fuel cycle. Our method is based on the eXtended Finite Element Method (X-FEM), which has been used for moving boundary problems. The key technique of the X-FEM is to incorporate a signed distance function into the finite element interpolation to represent the discontinuous gradient of the temperature at a moving solid-liquid interface. The construction of the finite element equation, the quadrature technique and the method for solving the equation are reported here. The numerical solutions of the one-dimensional Stefan problem, solidification in a two-dimensional square corner and melting of pure gallium are compared to the exact solutions or to experimental data. Through these analyses, the validity of the newly developed numerical analysis method has been demonstrated. (author)
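For reference, the one-dimensional Stefan problem used for validation above has a closed-form similarity solution. A minimal stdlib-only sketch (the Stefan number, diffusivity and time arguments are generic inputs for illustration, not values from the paper):

```python
import math

def stefan_lambda(stefan_number, tol=1e-12):
    """Solve the one-phase Stefan transcendental equation
    lam * exp(lam^2) * erf(lam) = St / sqrt(pi) by bisection
    (the left side is strictly increasing for lam > 0)."""
    target = stefan_number / math.sqrt(math.pi)
    def f(lam):
        return lam * math.exp(lam * lam) * math.erf(lam) - target
    lo, hi = 1e-12, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, diffusivity, stefan_number):
    """Melting-front position s(t) = 2 * lam * sqrt(alpha * t)."""
    lam = stefan_lambda(stefan_number)
    return 2.0 * lam * math.sqrt(diffusivity * t)
```

For a Stefan number of 1 the root is lambda ≈ 0.620, and the front advances proportionally to the square root of time — the behavior a moving-interface solver must reproduce.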
A FEM-based method to determine the complex material properties of piezoelectric disks.
Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C
2014-08-01
Numerical simulations allow modeling piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured with an impedance analyzer. The method consists of finding the material properties that minimize the error between the experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter on a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution, in order to prevent the search from being trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a finite element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface. The agreement between the numerical and experimental displacement profiles shows that, although only the electrical impedance curve is used in the fitting, the method also reproduces the mechanical response of the disk.
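The core of the method — adjust a loss parameter until the simulated impedance matches the measured curve — can be illustrated with a deliberately simplified stand-in model. The one-mode resonator below, its assumed resonance frequency F_R and the single quality factor q are illustrative only; the paper fits the full set of complex elastic, dielectric and piezoelectric constants against a FEM forward model:

```python
F_R = 2.0e6  # assumed resonance frequency of the toy resonator, Hz

def model_impedance(f, q):
    """Toy one-mode resonator: the real part of the denominator sets the
    resonance position, the imaginary part (controlled by q) its amplitude,
    mirroring the roles of real vs. imaginary material constants."""
    x = f / F_R
    return 1.0 / complex(1.0 - x * x, x / q)

def misfit(freqs, z_meas, q):
    """Sum of squared deviations between measured and modeled impedance."""
    return sum(abs(model_impedance(f, q) - z) ** 2 for f, z in zip(freqs, z_meas))

def fit_q(freqs, z_meas, q_grid):
    """Coarse grid search (to avoid local minima) followed by a local
    refinement, echoing the preliminary-algorithm-then-minimization idea."""
    best = min(q_grid, key=lambda q: misfit(freqs, z_meas, q))
    fine = [best * (1.0 + 0.001 * k) for k in range(-100, 101)]
    return min(fine, key=lambda q: misfit(freqs, z_meas, q))
```

With synthetic "measurements" generated at q = 80, the two-stage search recovers the loss parameter from the impedance curve alone.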
Automatic coding method of the ACR Code
Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi
1993-01-01
The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself; the upper- and lower-level codes of the selected entry were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of data fields. Because this program was written in 'user-defined function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, it can be used to automate routine work in the department of radiology.
Analysis on the geometrical shape of T-honeycomb structure by finite element method (FEM)
Zain, Fitri; Rosli, Muhamad Farizuan; Effendi, M. S. M.; Abdullah, Mohamad Hariri
2017-09-01
Geometry in design is closely related to our lives. Geometrical structures interact with one another: the overall shape of an object contains other shapes inside it, and these shapes create relationships with each other in space. Besides that, how the geometry relates to the function of the object has to be considered. In this project, the main purpose was to design the geometrical shape of modular furniture with a shrink-type Polyethylene Terephthalate (PET) jointing system that retains good strength under applied load. The goal of this paper, however, is the static-case FEM analysis of the hexagonal structure to obtain its strength under load. A review of existing products provided much useful information for this work. The project focuses on a hexagonal shape, repeated to form a shelf inspired by the honeycomb structure: natural-looking, simple in shape, and, being modular, easy to separate and recombine. The methodology chapter discusses the method used to analyse the strength of the structure under the applied load; the analysis was performed with the finite element tools of the CATIA V5R21 software. A bending test was carried out on the joint between the edges of the hexagonal shape using a Universal Testing Machine (UTM). The obtained data were reduced with the bending-test formulae to plot flexural stress versus flexural strain. The material selection for the furniture focused on wood; three types (balsa, pine and oak) were considered, and the properties of the joints are also described. Honeycomb-shaped structural designs already exist in the market, but the main objective of this design is good strength, withstanding the maximum load while offering more potential as furniture.
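The reduction of the UTM data to flexural stress and strain mentioned above is normally done with the standard three-point-bending formulae; the abstract does not state the fixture geometry, so three-point loading is assumed here:

```python
def flexural_stress(force_n, span_mm, width_mm, depth_mm):
    """Three-point bending: sigma = 3 F L / (2 b d^2); with N and mm
    inputs the result is in MPa."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

def flexural_strain(deflection_mm, span_mm, depth_mm):
    """Three-point bending: eps = 6 D d / L^2 (dimensionless),
    where D is the midspan deflection."""
    return 6.0 * deflection_mm * depth_mm / span_mm ** 2

def flexural_modulus(stress_mpa, strain):
    """Slope of the initial linear part of the stress-strain curve;
    shown here as a single-point estimate."""
    return stress_mpa / strain
```

Plotting flexural_stress against flexural_strain over the recorded force-deflection pairs gives the curve described in the text.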
Kurz, S
1999-01-01
In this paper a new technique for the accurate calculation of magnetic fields in the end regions of superconducting accelerator magnets is presented. The method couples Boundary Elements (BEM), which discretize the surface of the iron yoke, with Finite Elements (FEM) for modelling the nonlinear interior of the yoke. The BEM-FEM method is therefore specially suited to the calculation of 3-dimensional effects in the magnets, as the coils and the air regions do not have to be represented in the finite element mesh, and discretization errors only influence the calculation of the magnetization (reduced field) of the yoke. The method has recently been implemented in the CERN-ROXIE program package for the design and optimization of the LHC magnets. The field shape and multipole errors in the two-in-one LHC dipoles, whose coil ends stick out of the common iron yoke, are presented.
Structural reliability methods: Code development status
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state-of-the-art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module, NESSUS/FEM, is used to model the structure and obtain structural sensitivities; some of its capabilities are shown. A Fast Probability Integration module, NESSUS/FPI, estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI modules. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Dodig, H.
2017-11-01
This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section (RCS) of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute the edge element coefficients associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far-field transformation (NTFFT), which is a common step in RCS computations. The paper demonstrates that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method remains accurate even for a dielectrically coated PEC sphere at an interior resonance frequency, a common problem for computational electromagnetics codes.
Evaluating DEM results with FEM perspectives of load : soil interaction
Tadesse, D.
2004-01-01
Keywords: load-soil interaction, soil structure, soil mechanical properties, FEM (Finite Element Method), Plaxis (finite element code), granular particles, shear stress, DEM (Distinct Element Method)
The spammed code offset method
Skoric, B.; Vreede, de N.
2013-01-01
Helper data schemes are a security primitive used for privacy-preserving biometric databases and Physical Unclonable Functions. One of the oldest known helper data schemes is the Code Offset Method (COM). We propose an extension of the COM: the helper data is accompanied by many instances of fake helper data.
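For context, the baseline Code Offset Method that the paper extends can be sketched with a repetition code; the "spamming" extension itself (surrounding the genuine helper data with fakes) is the paper's contribution and is not shown:

```python
N = 5  # repetition factor: majority voting corrects up to 2 flips per block

def encode(key_bits):
    """Repetition-code encoding: each key bit becomes N copies."""
    return [b for b in key_bits for _ in range(N)]

def decode(code_bits):
    """Majority vote over each block of N bits."""
    return [1 if sum(code_bits[i:i + N]) > N // 2 else 0
            for i in range(0, len(code_bits), N)]

def enroll(x, key_bits):
    """Enrollment: publish helper data w = x XOR encode(key), where x is
    the reference biometric/PUF reading."""
    return [xi ^ ci for xi, ci in zip(x, encode(key_bits))]

def reconstruct(x_noisy, w):
    """Reconstruction: x' XOR w = encode(key) XOR e; decoding strips the
    noise e as long as each block has fewer than N/2 flipped bits."""
    return decode([xi ^ wi for xi, wi in zip(x_noisy, w)])
```

A fresh noisy reading combined with the public helper data yields the enrolled key as long as the noise stays within the code's correction radius.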
Dubcova, Lenka; Solin, Pavel; Hansen, Glen; Park, HyeongKae
2011-01-01
Multiphysics solution challenges are legion within the field of nuclear reactor design and analysis. One major issue concerns the coupling between heat and neutron flow (neutronics) within the reactor assembly. These phenomena are usually very tightly interdependent, as large amounts of heat are quickly produced with an increase in fission events within the fuel, which raises the temperature that affects the neutron cross section of the fuel. Furthermore, there typically is a large diversity of time and spatial scales between mathematical models of heat and neutronics. Indeed, the different spatial resolution requirements often lead to the use of very different meshes for the two phenomena. As the equations are coupled, one must take care in exchanging solution data between them, or significant error can be introduced into the coupled problem. We propose a novel approach to the discretization of the coupled problem on different meshes based on an adaptive multimesh higher-order finite element method (hp-FEM), and compare it to popular interpolation and projection methods. We show that the multimesh hp-FEM method is significantly more accurate than the interpolation and projection approaches considered in this study.
Modification of the FEM3 model to ensure mass conservation
Gresho, P.M.
1987-01-01
The problem of global mass conservation (or the lack thereof) in the current anelastic equations solved by FEM3 is described and its cause explained. The additional equations necessary to solve the problem are presented, and methods for their incorporation into the current code are suggested. 14 refs.
Development of a 2-D Simplified P3 FEM Solver for Arbitrary Geometry Applications
Ryu, Eun Hyun; Joo, Han Gyu [Seoul National University, Seoul (Korea, Republic of)
2010-10-15
In the calculation of power distributions and multiplication factors in a nuclear reactor, the Finite Difference Method (FDM) and nodal methods are primarily used. These methods are, however, limited to particular geometries and lack general applicability to arbitrary geometries. The Finite Element Method (FEM) can be employed for arbitrary geometries, and there are numerous FEM codes to solve the neutron diffusion equation or the Sn transport equation. The diffusion-based FEM codes have the drawback of inferior accuracy, while the Sn-based ones require considerable computing time. The present work seeks a compromise between the two by employing the simplified P3 (SP3) method for arbitrary geometry applications. Sufficient accuracy with affordable computing time and resources can be achieved with this choice of approximate transport solution when compared to full FEM-based Pn or Sn solutions. For now, only a 2-D solver is considered.
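As a minimal illustration of the diffusion-FEM building block referred to above (a sketch only — not the SP3 solver, which couples two moment equations), a linear-element solve of the one-group 1-D fixed-source diffusion equation -D u'' + Sigma_a u = S with zero-flux boundaries:

```python
def solve_diffusion_fem(d_coef, sig_a, src, length, n_elem):
    """Linear-FEM solution of -D u'' + Sigma_a u = S on (0, L) with
    u(0) = u(L) = 0.  Returns the n_elem + 1 nodal flux values."""
    h = length / n_elem
    n = n_elem + 1
    sub, diag, sup, rhs = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    for e in range(n_elem):  # assemble element stiffness, mass and load
        k = d_coef / h       # stiffness factor: (D/h) * [[1,-1],[-1,1]]
        m = sig_a * h / 6.0  # mass factor: (Sigma_a h/6) * [[2,1],[1,2]]
        i, j = e, e + 1
        diag[i] += k + 2.0 * m
        diag[j] += k + 2.0 * m
        sup[i] += -k + m
        sub[j] += -k + m
        rhs[i] += src * h / 2.0
        rhs[j] += src * h / 2.0
    diag[0] = diag[-1] = 1.0  # Dirichlet rows
    sup[0] = sub[-1] = 0.0
    rhs[0] = rhs[-1] = 0.0
    for i in range(1, n):  # Thomas algorithm: forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return u
```

For D = 1, Sigma_a = 0.25, S = 1 on a slab of length 10, the midline flux approaches the analytic value (S/Sigma_a)(1 - 1/cosh(kappa L/2)) ≈ 3.35 as the mesh is refined.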
Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V
2015-12-01
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters developed on a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients vary between lyophilizer scales; hence, we present an approach for applying appropriate factors when scaling up from lab to commercial scale. As a result, one can predict the commercial-scale primary drying time from these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply-chain continuity. The approach presented here provides a robust lyophilization scale-up strategy and, because of its simple and minimalistic nature, is also a less capital-intensive path with minimal use of expensive drug substance/active material.
Nakamura, T; Bay, Niels
1998-01-01
A new friction testing method based on combined forward conical can-backward straight can extrusion is proposed in order to evaluate friction characteristics in severe metal forming operations. By this method the friction coefficient along the conical punch surface is determined knowing the friction coefficient along the die wall. The latter is determined by a combined forward and backward extrusion of straight cans. Calibration curves determining the relationship between punch travel, can heights, and friction coefficient for the two tests are calculated based on a rigid-plastic FEM analysis. Experimental friction tests are carried out in a mechanical press with aluminium alloy A6061 as the workpiece material and different kinds of lubricants. They confirm that the theoretical analysis yields reasonable values for the friction coefficient.
Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.
Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao
2016-01-01
Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with a relatively low scanning time compared with narrow-beam X-ray luminescence tomography. However, it suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using a nanophosphor material. Then, a hybrid reconstruction algorithm based on the KA-FEM method, whose advantages have previously been demonstrated in fluorescence tomography imaging, was applied to cone beam X-ray luminescence tomography of small animals to overcome the ill-posed reconstruction problem. An in vivo mouse experiment proved the feasibility of the proposed method.
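The abstract does not detail the KA-FEM algorithm, so the sketch below shows only a generic remedy for this class of ill-posed linear reconstructions — Tikhonov regularization via the normal equations — and is not the paper's method:

```python
def tikhonov(a, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 by forming the normal
    equations (A^T A + lam I) x = A^T b; small dense problems only.
    a is a list of rows, b the measurement vector, lam > 0."""
    m, n = len(a), len(a[0])
    ata = [[sum(a[k][i] * a[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    atb = [sum(a[k][i] * b[k] for k in range(m)) for i in range(n)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        p = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[p] = ata[p], ata[col]
        atb[col], atb[p] = atb[p], atb[col]
        for r in range(col + 1, n):
            fac = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= fac * ata[col][c]
            atb[r] -= fac * atb[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (atb[i] - sum(ata[i][j] * x[j] for j in range(i + 1, n))) / ata[i][i]
    return x
```

The regularization weight lam trades data fidelity against solution norm; a small value reproduces the least-squares solution on well-conditioned data.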
Romppanen, A.-J.; Immonen, E.
2013-12-01
The residual stresses formed as a result of electron beam welding (EB welding) in copper are investigated by Posiva. In the present study, the residual stresses of EB-welded copper plates were studied with the contour method. In this method, eleven copper plates (X436 - X440 and X453 - X458) were cut in half with wire electric discharge machining (EDM), after which the deformation due to stress relaxation was measured with a coordinate measurement system. The measured data were then used as boundary displacement data for the FEM analyses, in which the corresponding residual stresses were calculated. Before applying the displacement boundary conditions to the FE models, the deformation data were processed and smoothed appropriately. The residual stress levels of the copper plates were found to be around 40 - 55 MPa at maximum. This corresponds to other reported residual stress measurements and the current state of knowledge for this material at Posiva. (orig.)
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization; with this basis, the solution corresponding to that realization is approximated by the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used within multilevel Monte Carlo methods (Giles 2008a, b; Oper. Res. 56(3):607-617). In multilevel Monte Carlo methods, more accurate (and more expensive) simulations are run with a small number of samples, while cheaper, less accurate simulations are run with many samples, and the results are combined in a telescoping sum that reduces the overall cost for a given accuracy.
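The multilevel Monte Carlo telescoping sum described above can be sketched with a toy level hierarchy in which a truncated Taylor series of exp stands in for a PDE solved on an increasingly fine mesh (all names and sample counts here are illustrative):

```python
import math
import random

def p_level(u, level):
    """Level-l approximation of exp(u): a Taylor series with level + 2
    terms, standing in for a solver on mesh level l."""
    return sum(u ** k / math.factorial(k) for k in range(level + 2))

def mlmc_estimate(max_level, n0, rng):
    """E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: many cheap samples on
    the coarse level, few expensive samples on the fine levels.  The
    same random draw feeds both levels of each difference (coupling),
    which is what keeps Var[P_l - P_{l-1}] small."""
    total = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 2 ** level, 10)
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            fine = p_level(u, level)
            coarse = p_level(u, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

For U uniform on (0, 1) the exact answer is E[exp(U)] = e - 1 ≈ 1.718; the estimator reaches it with far fewer fine-level evaluations than plain Monte Carlo run entirely at the finest level.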
Parametric FEM for geometric biomembranes
Bonito, Andrea; Nochetto, Ricardo H.; Sebastian Pauletti, M.
2010-05-01
We consider geometric biomembranes governed by an L2-gradient flow for bending energy subject to area and volume constraints (Helfrich model). We give a concise derivation of a novel vector formulation, based on shape differential calculus, and corresponding discretization via parametric FEM using quadratic isoparametric elements and a semi-implicit Euler method. We document the performance of the new parametric FEM with a number of simulations leading to dumbbell, red blood cell and toroidal equilibrium shapes while exhibiting large deformations.
Gao, Kai; Fu, Shubin; Gibson, Richard L.; Chung, Eric T.; Efendiev, Yalchin
2015-01-01
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that it can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Chang, H. Y.; Lim, B. T.; Kim, K. S.; Kim, J. W.; Park, H. B. [KEPCO Engineering and Construction Company, Gimcheon (Korea, Republic of); Kim, Y. S.; Kim, K. T. [Andong National University, Andong (Korea, Republic of)
2017-06-15
Coal tar-coated pipes buried at a domestic nuclear power plant have operated under cathodic protection. This work simulated the coating performance of these pipes using FEM. The pipes, made of ductile cast iron, have been subjected to cathodic protection levels considerably beyond the appropriate range; nevertheless, the cathodic potential measured at the site revealed a non-protected status. The plant's 3D CAD data were converted into a form appropriate for FEM simulation, and the cathodic potential under the applied voltage and current was calculated using primary and secondary current distributions and the physical conditions. The FEM simulation for coal tar-coated pipes without defects indicated an over-protection condition if the pipes were well coated. However, the simulation for coal tar-coated pipes with many defects predicted that the coated pipes may be severely degraded. Therefore, for high-risk pipes, direct examination and repair or renewal are strongly recommended.
Applications of ATILA FEM software to smart materials case studies in designing devices
Uchino, Kenji
2013-01-01
ATILA Finite Element Method (FEM) software facilitates the modelling and analysis of applications using piezoelectric, magnetostrictive and shape memory materials. It allows entire designs to be constructed, refined and optimized before production begins. Through a range of instructive case studies, Applications of ATILA FEM software to smart materials provides an indispensable guide to the use of this software in the design of effective products. Part one provides an introduction to the ATILA FEM software, beginning with an overview of the software code, its new capabilities and loss integration.
Application of FEM analysis methods to a cylinder-cylinder intersection structure
Xue Liping; Widera, G.E.O.; Sang Zhifu
2005-01-01
The objective of this paper is to study a particular cylindrical shell intersection (d/D=0.526) by use of both linear elastic and elastic-plastic stress analyses via the finite element method using the FEA software ANSYS. The former mainly focuses on the calculation of the stress concentration and flexibility factors in the intersection area before the structure experiences plastic behavior. When an elastic-plastic analysis method is employed, the limit load and burst pressure need to be determined. In this study, two different methods, the 'double elastic-slope method' and the 'tangent intersection method' are both employed to determine the limit pressure. To predict the burst pressure and failure location, the 'arc-length method' in ANSYS is used to solve the nonlinear problem. Finally, the FEA results are compared to experimental data and the agreement is shown to be good. (authors)
Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices
Polstyanko, Sergey V.; Lee, Jin-Fa
1998-03-01
In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
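The two-level V-cycle with a Gauss-Seidel smoother can be sketched in the simpler h-version finite-difference setting (1D Poisson with a direct coarse solve); the paper's p-version, indefinite setting is analogous in structure but not shown:

```python
def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel smoothing for -u'' = f with u = 0 at both ends."""
    n = len(u) - 1
    for _ in range(sweeps):
        for i in range(1, n):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    """r = f - A u at the interior points."""
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def poisson_direct(f, h):
    """Thomas-algorithm solve of the tridiagonal FD system for -u'' = f."""
    n = len(f) - 1
    cp, dp = [0.0] * n, [0.0] * n
    for i in range(1, n):
        denom = 2.0 + (cp[i - 1] if i > 1 else 0.0)
        cp[i] = -1.0 / denom
        dp[i] = (h * h * f[i] + (dp[i - 1] if i > 1 else 0.0)) / denom
    u = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def two_grid_cycle(u, f, h):
    """Pre-smooth, restrict the residual, solve the coarse-grid error
    equation exactly, prolongate the correction, post-smooth."""
    n = len(u) - 1
    nc = n // 2
    gauss_seidel(u, f, h, 3)
    r = residual(u, f, h)
    rc = [0.0] * (nc + 1)
    for i in range(1, nc):  # full-weighting restriction
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = poisson_direct(rc, 2.0 * h)
    for i in range(nc):     # linear prolongation and correction
        u[2 * i] += ec[i]
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    gauss_seidel(u, f, h, 3)
    return u
```

On a 64-interval grid with f = 1 a handful of cycles drives the iterate to the discrete solution, far faster than Gauss-Seidel sweeps alone, because the coarse solve removes exactly the smooth error components that the smoother damps slowly.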
Gollub, Erica L; Cyrus, Elena; Dévieux, Jessy G; Jean-Gilles, Michèle; Neptune, Sandra; Pelletier, Valerie; Michel, Hulda; Sévère, Marie; Pierre, Laurinus
2015-01-01
Worldwide, women report the need for safe, non-hormonal, woman-initiated methods of family planning. Cervical barriers provide such technology but are under-researched and under-promoted. In the USA, there are few studies of cervical barriers among women at high unmet need for contraception. A feasibility study of the FemCap™ was conducted among US women of Haitian origin. Participants were heterosexual and seeking to avoid pregnancy. At first visit, participants completed baseline assessments, underwent group counselling and were fitted with FemCap™. Women were asked to insert or use the cap at home. The second visit (2-3 weeks) included an interviewer-administered questionnaire and a focus-group discussion. Participants (n = 20) were Haitian-born (70%), married (55%) and parous (85%). Their mean age was 32.6 years. Seventy percent reported recent unprotected sex. All women inserted the device at home and 9 women used it during intercourse, including 5 without prior partner negotiation. Of 20 women, 11 liked FemCap™ very much or somewhat; 7 considered it 'OK'; 2 disliked it. Best-liked attributes were comfort, discreet wear and reusability. Difficulties with removal abated over time. Qualitative data revealed a high value placed on lack of systemic side effects. Use of FemCap™ was feasible and acceptable, supporting expansion of research, particularly among relevant populations with unmet need.
FEM-based Printhead Intelligent Adjusting Method for Printing Conduct Material
Liang Xiaodan
2017-01-01
Ink-jet printing of circuit boards has advantages such as non-contact manufacture, high manufacturing accuracy and low pollution. In order to improve the printing precision, finite element technology is adopted to model the piezoelectric print heads, and a new bacteria foraging algorithm with a lifecycle strategy is proposed to optimize the parameters of the driving waveforms so as to obtain the desired droplet characteristics. Numerical simulation results show that the algorithm performs well. Additionally, the droplet jetting simulation results and measured results confirm that the method precisely achieves the desired droplet characteristics.
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points one obtains the matrix A_N (h = 1/N).
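As a minimal illustration of the operator A described above, the sketch below assembles a second-order finite-difference discretization on the unit square with homogeneous Dirichlet conditions (the talk itself concerns Hermite cubic spline collocation, so this is only the FDM analogue, not the collocation matrix A_N):

```python
import numpy as np

def assemble_fd(N, a1=0.0, a2=0.0, a0=0.0):
    """Second-order FD discretization of Au = -Laplace(u) + a1*u_x
    + a2*u_y + a0*u on the unit square, homogeneous Dirichlet
    conditions, h = 1/N, interior points ordered row by row."""
    h = 1.0 / N
    n = N - 1                       # interior points per direction
    A = np.zeros((n * n, n * n))
    for j in range(n):
        for i in range(n):
            k = j * n + i
            A[k, k] = 4.0 / h**2 + a0
            if i > 0:
                A[k, k - 1] = -1.0 / h**2 - a1 / (2 * h)   # west
            if i < n - 1:
                A[k, k + 1] = -1.0 / h**2 + a1 / (2 * h)   # east
            if j > 0:
                A[k, k - n] = -1.0 / h**2 - a2 / (2 * h)   # south
            if j < n - 1:
                A[k, k + n] = -1.0 / h**2 + a2 / (2 * h)   # north
    return A

# Pure-Laplacian check: u = x(1-x)y(1-y) gives -Laplace(u) = 2(x(1-x)+y(1-y)),
# and the 5-point stencil reproduces this low-degree polynomial exactly.
N = 8
xs = np.arange(1, N) / N
X, Y = np.meshgrid(xs, xs)
u = np.linalg.solve(assemble_fd(N), (2 * (X * (1 - X) + Y * (1 - Y))).ravel())
err = np.max(np.abs(u - (X * (1 - X) * Y * (1 - Y)).ravel()))
```

Because the chosen test solution is quadratic in each variable, the discrete solution matches it to round-off, which makes a convenient sanity check for the assembly.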
Kikinis Ron
2006-03-01
Introduction: Mitral valve (MV) 3D structural data can be easily obtained using standard transesophageal echocardiography (TEE) devices, but quantitative pre- and intraoperative volume analysis of the MV is presently not feasible in the cardiac operating room (OR). Finite element method (FEM) modelling is necessary to carry out precise, individual volume analysis and in the future will form the basis for simulation of cardiac interventions. Method: With the present retrospective pilot study we describe a method to transfer MV geometric data to 3D Slicer 2 software, an open-source medical visualization and analysis software package. A newly developed software program (ROIExtract) allowed selection of a region of interest (ROI) from the TEE data and data transformation for use in 3D Slicer. FEM models for quantitative volumetric studies were generated. Results: ROI selection permitted the visualization and calculations required to create a sequence of volume-rendered models of the MV, allowing time-based visualization of regional deformation. Quantitation of tissue volume, especially important in myxomatous degeneration, can be carried out. Rendered volumes are shown in 3D as well as in time-resolved 4D animations. Conclusion: The visualization of the segmented MV may significantly enhance clinical interpretation. This method provides an infrastructure for the study of image-guided assessment of clinical findings and surgical planning. For complete pre- and intraoperative 3D MV FEM analysis, three input elements are necessary: 1. time-gated, reality-based structural information, 2. continuous MV pressure and 3. instantaneous tissue elastance. The present process makes the first of these elements available. Volume defect analysis is essential to fully understand functional and geometrical dysfunction of, but not limited to, the valve. 3D Slicer was used for semi-automatic valve border detection and volume-rendering of clinical 3D echocardiographic
New Channel Coding Methods for Satellite Communication
J. Sebesta
2010-04-01
This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key contributions are the modification and implementation of a new turbo code, together with methods for bit error rate estimation and an algorithm for output message reconstruction. These methods allow error-free communication at very low Eb/N0 ratios and have been adapted for satellite communication; however, they can be applied to other systems working at very low Eb/N0 ratios.
Sudoh, Takashi
1981-06-01
The objectives of this study are: 1) to evaluate the capability of an electrical heater to simulate the fuel rod during the reflood phase, and 2) to investigate the effect of the clad-fuel gap in the fuel rod on the clad thermal response during the reflood phase. HETFEM, a two-dimensional transient heat conduction analysis code based on the finite element method, was developed for analysing the thermal responses of heaters and fuel rods. Two kinds of electrical heaters and a fuel rod were calculated with simple boundary conditions: 1) a direct heater (former JAERI reflood test heater), 2) an indirect heater (FLECHT test heater), and 3) a fuel rod (15 x 15 type in a Westinghouse PWR). The comparison of the clad temperature responses shows that the quench time is influenced by the thermal diffusivity and gap conductance. In conclusion, the FLECHT heater shows atypicality in the clad temperature response and heat release rate, but the direct heater responses are similar to those of the fuel rod. Regarding the gap effect on fuel rod behavior, a lower gap conductance causes earlier quench and a lower heat release rate. This calculation does not consider the precursory cooling, which is affected by the heat release rate at and below the quench front. Therefore, a two-dimensional calculation with heat transfer related to the local fluid conditions will be needed. (author)
Huan, Huiting; Mandelis, Andreas; Liu, Lixian
2018-04-01
Determining and keeping track of a material's mechanical performance is very important for safety in the aerospace industry. The mechanical strength of alloy materials is precisely quantified in terms of its stress-strain relation. It has been proven that frequency-domain photothermoacoustic (FD-PTA) techniques are effective methods for characterizing the stress-strain relation of metallic alloys. PTA methodologies include photothermal (PT) diffusion and laser thermoelastic photoacoustic ultrasound (PAUS) generation which must be separately discussed because the relevant frequency ranges and signal detection principles are widely different. In this paper, a detailed theoretical analysis of the connection between thermoelastic parameters and stress/strain tensor is presented with respect to FD-PTA nondestructive testing. Based on the theoretical model, a finite element method (FEM) was further implemented to simulate the PT and PAUS signals at very different frequency ranges as an important analysis tool of experimental data. The change in the stress-strain relation has an impact on both thermal and elastic properties, verified by FEM and results/signals from both PT and PAUS experiments.
Simulation of HMA compaction by using FEM
ter Huerne, H.L.; van Maarseveen, M.F.A.M.; Molenaar, A.A.A.; van de Ven, M.F.C.
2008-01-01
This paper introduces a simulation tool for the compaction process of Hot Mix Asphalt (HMA) using a roller under varying external conditions. The focus is on the use of the Finite Element Model (FEM) with code DiekA, on its necessary requirements and on the presentation of simulation results. The
Kwak, Nam-su; Kim, Jae-Yeol
2012-04-01
Entering the 21st century, the world is preparing for a new revolution: a knowledge-based society following the industrial society. Worldwide interest is concentrated on information technology, nanotechnology and biotechnology. In particular, nanotechnology, the study of which originally started as an alternative for overcoming the limits of semiconductor micro-technology, can be applied to most industrial fields such as electronics, information and communication, machinery, chemistry, bioengineering and energy, and is emerging as a technology that can change human civilization. In the field of machinery, ultra-precision machining is quickly being applied to nanotechnology. Lately, with the rapid development of the electronics and optics industries, there is a need for super-precision finishing of various core parts required in the related apparatuses. This paper addresses the stability of a super-precision micro-cutting machine, which is a core unit of such a super-precision finisher, and analyzes the results depending on the hinge type and material change, using FEM analysis. By reviewing the stability, it is possible to collect basic data for unit control and to reduce trial and error in unit design and manufacturing.
Method for coding low entropy data
Yeh, Pen-Shu (Inventor)
1995-01-01
A method of lossless data compression for efficient coding of an electronic signal from information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set {s_0, s_1, s_2, ..., s_(N-1)} of N symbols with s_i = i. The difference between binary digital data is mapped into symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Gamma, a non-negative symbol set containing the symbols {gamma_m} obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code, defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows direct coding and decoding of the n-bit positive integer digital data differences without the use of codebooks.
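The scheme just described can be sketched as follows; the zigzag mapping of differences to non-negative symbols and the pair index m = i*N + j are assumptions for illustration, since the record does not spell out the patent's exact mappings:

```python
def zigzag(d):
    """Map an integer difference to a non-negative symbol (assumed
    mapping): 0 -> 0, 1 -> 1, -1 -> 2, 2 -> 3, ..."""
    return 2 * d - 1 if d > 0 else -2 * d

def unzigzag(s):
    """Inverse of zigzag."""
    return (s + 1) // 2 if s % 2 else -s // 2

def comma_encode(diffs, N=4):
    """Pair consecutive symbols from S = {0..N-1} into the extended
    set Gamma (index m = i*N + j) and emit each pair as a comma code:
    m zeros terminated by the comma pattern '1'."""
    syms = [zigzag(d) for d in diffs]
    if len(syms) % 2:               # pad to complete the last pair
        syms.append(0)
    words = []
    for i, j in zip(syms[::2], syms[1::2]):
        m = i * N + j
        words.append('0' * m + '1')
    return ''.join(words)

def comma_decode(bitstream, N=4):
    """Invert comma_encode: every '1' closes a codeword, so no
    codebook is needed."""
    out = []
    for word in bitstream.split('1')[:-1]:
        i, j = divmod(len(word), N)
        out.extend([i, j])
    return [unzigzag(s) for s in out]
```

Low-rate sources produce mostly zero differences, so most pairs map to very short codewords, which is what makes the comma code efficient here.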
Numerical method improvement for a subchannel code
Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology
2016-07-15
Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix solution methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the computational-efficiency requirements. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix, as well as a nonzero small matrix. After that, the LU algorithm is applied to solve the matrix until convergence. In order to compare the computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80% of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
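The idea of ILU-preconditioned iterative solution of a penta-diagonal system can be sketched with SciPy; note this uses SciPy's generic ILU rather than the Stone-type block variant designed in the paper, and a scalar test matrix stands in for the block system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A scalar penta-diagonal test matrix standing in for the block
# penta-diagonal system assembled from the conservation equations
n = 200
A = sp.diags(
    [np.full(n, 6.0), np.full(n - 1, -1.0), np.full(n - 1, -1.0),
     np.full(n - 2, -0.5), np.full(n - 2, -0.5)],
    [0, 1, -1, 2, -2], format='csc')
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner for GMRES
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)
x, info = spla.gmres(A, b, M=M)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The preconditioner applies the approximate factors L and U by forward/backward substitution, which is cheap compared with an exact factorization of the full matrix.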
A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software
Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.
2016-01-01
This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle and biofluidic studies. As a result, a series of Python scripts was developed to bridge the shortcomings of the available open-source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source software packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and preparing boundary specifications for finite element analysis. To fully enable FEM modeling, an in-house developed Python script allowed the assignment of material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensity of the parent medical image, correlated to published relations between image intensity and material properties such as ash density. A graphical user interface combined the Python scripts and other software into a user-friendly interface. The work using Python scripts provides a potential alternative to expensive commercial software and inadequate, limited open-source freeware programs for the creation of 3D computational models. More work
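The per-element property assignment can be sketched as below; the two-point intensity-to-ash-density calibration table is purely hypothetical (the project relied on published correlations, which this sketch does not reproduce):

```python
import numpy as np

# Hypothetical calibration table: image intensity (e.g. Hounsfield
# units) versus ash density in g/cm^3 -- placeholder values only
hu_points = np.array([0.0, 1600.0])
ash_points = np.array([0.0, 1.2])

def element_ash_density(voxel_values, weights=None):
    """Assign one element's ash density by a weighted interpolation of
    the intensities of the voxels the element overlaps, then map the
    result through the intensity-to-density calibration."""
    mean_hu = np.average(np.asarray(voxel_values, float), weights=weights)
    return float(np.interp(mean_hu, hu_points, ash_points))
```

In the workflow described above this mapping would run once per element, with the weights taken from the voxel/element overlap fractions.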
Subband Coding Methods for Seismic Data Compression
Kiely, A.; Pollara, F.
1995-01-01
This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
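A one-level subband split already illustrates the progressive-transmission idea: send the coarse band first, then the refinement. A Haar filter pair is used here for brevity; the paper's actual filter bank and quantization differ:

```python
import numpy as np

def haar_split(x):
    """One-level Haar subband split: low-pass (coarse) and high-pass
    (detail) bands, each at half the original rate."""
    x = np.asarray(x, float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_merge(lo, hi):
    """Inverse of haar_split (perfect reconstruction)."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

# Progressive transmission: a coarse waveform first (detail band
# zeroed), then the refinement that restores the signal exactly.
wave = np.sin(np.linspace(0, 4 * np.pi, 64))
lo, hi = haar_split(wave)
coarse = haar_merge(lo, np.zeros_like(hi))   # cheap first look
full = haar_merge(lo, hi)                    # after refinement arrives
```

A seismologist would inspect `coarse` at half the channel cost and request `hi` only where the waveform looks interesting.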
Investigation of Ice-PVC separation under Flexural Loading using FEM Analysis
H Xue
2016-08-01
This paper presents the FEM technique applied in the study of ice separation from a polyvinyl chloride (PVC) surface. A two-layer model of ice and PVC is analysed theoretically using Euler-Bernoulli beam theory and the rule of mixtures. The physical samples are prepared by freezing ice over the PVC surfaces. The samples are tested experimentally in a four-point loading setup. The experimental results contain strain data gathered through a data acquisition system using the LabView software. The data is collected at a rate of 1 kHz per load step. A model is also coded in MATLAB® and simulated using the finite element method (FEM) in ANSYS® Multiphysics. The FEM model of the ice and PVC sample is built using solid elements. The mesh is tested for sensitivity. Good agreement is found between the theoretical, experimental and numerical simulation results.
FAST PALMPRINT AUTHENTICATION BY SOBEL CODE METHOD
Jyoti Malik
2011-05-01
The ideal real-time personal authentication system should be fast and accurate in automatically identifying a person's identity. In this paper, we propose a palmprint-based biometric authentication method with improvements in time and accuracy, so as to make it a real-time palmprint authentication system. Several edge detection methods, wavelet transforms, phase congruency, etc. are available to extract line features from the palmprint. In this paper, multi-scale Sobel Code operators of different orientations (0°, 45°, 90°, and 135°) are applied to the palmprint to extract Sobel-Palmprint features in different directions. The Sobel-Palmprint features extracted are stored in a Sobel-Palmprint feature vector and matched using a sliding window with the Hamming distance similarity measure. The sliding window method is accurate but time-consuming. In this paper, we have improved the sliding window method so that the matching time is reduced; a 39.36% improvement in matching time is observed. In addition, a Min Max Threshold Range (MMTR) method is proposed that helps increase overall system accuracy by reducing the False Acceptance Rate (FAR). Experimental results indicate that the MMTR method improves the False Acceptance Rate drastically and the improvement in the sliding window method reduces the comparison time. The accuracy and matching time improvements lead to the proposed real-time authentication system.
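A simplified stand-in for the orientation coding and Hamming matching can be sketched as follows; the directional kernels and per-pixel argmax coding follow common conventions and are not necessarily the paper's exact operators:

```python
import numpy as np
from scipy.ndimage import convolve

# Directional Sobel kernels (0, 45, 90, 135 degrees)
K = {
    0:   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    45:  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
    90:  np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
    135: np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
}

def sobel_code(img):
    """Per-pixel orientation code: index of the direction with the
    strongest absolute Sobel response (a simplified stand-in for the
    Sobel-Palmprint feature vector)."""
    responses = np.stack(
        [np.abs(convolve(img.astype(float), k)) for k in K.values()])
    return responses.argmax(axis=0)

def hamming_distance(code_a, code_b):
    """Fraction of positions where two code maps disagree."""
    return float(np.mean(code_a != code_b))

# Toy check: a vertical edge responds most strongly to the 0-degree kernel
img = np.zeros((8, 8))
img[:, 4:] = 1.0
codes = sobel_code(img)
```

A sliding-window matcher would shift one code map over a small range of offsets and keep the minimum Hamming distance, which is exactly the step the paper accelerates.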
Kim, Kang Soo; Lee, Ho Jin; Woo, Wan Chuck; Seong, Baek Seok; Byeon, Jin Gwi; Park, Kwang Soo; Jung, In Chul
2010-01-01
Much research has been done to estimate the residual stress in dissimilar metal welds. Among the many methods of estimating weld residual stress, FEM (Finite Element Method) is generally used because it readily supports parametric studies; X-ray diffraction and the hole-drilling technique are commonly used experimental methods. The aim of this paper is to develop an appropriate FEM model to estimate the residual stresses of a dissimilar overlay weld pipe. For this, firstly, a specimen of the dissimilar overlay weld pipe was manufactured. The SA 508 Gr3 nozzle, the SA 182 safe end and the SA 376 pipe were welded with Alloy 182, and an overlay weld with Alloy 52M was performed. The residual stress of this specimen was measured using the neutron diffraction device in the HANARO (High-flux Advanced Neutron Application ReactOr) research reactor at KAERI (Korea Atomic Energy Research Institute). Secondly, an FEM model of the dissimilar overlay weld pipe was built and analyzed with the ABAQUS code (ABAQUS, 2004). Thermal and stress analyses were performed, and the residual stress was calculated. Thirdly, the results of the FEM analysis were compared with those of the experimental methods
Efendiev, Yalchin R.; Iliev, Oleg; Kronsbein, C.
2013-01-01
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed
Carrington, David Bradley [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Monayem, A. K. M. [Univ. of New Mexico, Albuquerque, NM (United States); Mazumder, H. [Univ. of New Mexico, Albuquerque, NM (United States); Heinrich, Juan C. [Univ. of New Mexico, Albuquerque, NM (United States)
2015-03-05
A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of arbitrary Lagrangian-Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces move past the elements. The moving interfaces are defined by separate sets of marker points, so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of calculating on domains of complex geometry with moving boundaries or devices that can also have a complex geometry, without danger of the mesh becoming unsuitable due to continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique for performing simulations involving moving boundaries in a three-dimensional domain.
A Method for Improving the Progressive Image Coding Algorithms
Ovidiu COSMA
2014-12-01
This article presents a method for increasing the performance of progressive coding algorithms for image subbands, by representing the coefficients with a code that reduces the truncation error.
A novel method of generating and remembering International Morse Codes
Charyulu, R.J.K.
Although untethered communications have advanced, the S.O.S. International Morse Code remains a rescue tool in emergencies, when all other modes fail. The details of the method and the actual codes have been enumerated....
Bosselut, D.; Soulier, B.
1997-03-01
Finite element models have been developed at EDF to simulate the vibrations of rod clusters and to analyse the wear phenomenon of rods using parametric studies. In the first part, one of the finite element models is presented. The location of the excitation sources is described. The calculated values are: rod displacement in the guide cards, shock forces on the guide cards and wear power produced. In the second part, a parametric study is presented for a given computer experiment domain using an experimental design method. The building of the computer experiment design is described. The polynomial model used has all linear, quadratic and interaction terms for each of the 6 parameters (26 coefficients); 34 polynomials have been built to approximate the effective shock forces and the mean wear power at each of the 17 guiding points. In the last part, the influence of the parameters on the calculated mean wear power along the rods is shown and some response surfaces are visualized. The systematic and comprehensive nature of the experimental design technique is underlined. Easy simulation over the whole response domain with the polynomial approach allows comparison with experimental feedback. (author)
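A full quadratic response-surface model of the kind described can be fitted by ordinary least squares; note that a full model for 6 factors has 28 terms including the constant, slightly more than the 26 coefficients quoted, so this is a generic sketch rather than the paper's exact model:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Design matrix with constant, linear, pure-quadratic and
    pairwise-interaction terms for each factor
    (for 6 factors: 1 + 6 + 6 + 15 = 28 columns)."""
    n, p = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(p)]
    cols += [X[:, i] ** 2 for i in range(p)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(p), 2)]
    return np.column_stack(cols)

# Fit one response-surface polynomial (e.g. mean wear power at one
# guiding point) from a computer experiment with 6 parameters
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 6))            # 60 runs, coded units
y = 2 + X[:, 0] - 3 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3]
D = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
```

Once the 34 polynomials are fitted, any point of the response domain can be evaluated at negligible cost, which is what makes the comparison with experimental feedback easy.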
Environmental equipment for usages of FEM software. ADVENTURE system user's guide
Yamasaki, Ichirou; Yoshimura, Shinobu
2003-05-01
Community software packages, databases, and various other tools have been installed in the ITBL environment by the Office of ITBL Promotion as common utility property for each research field. Among these, the finite element method (FEM) code ADVENTURE (originally developed by Prof. Yoshimura of the University of Tokyo) is provided as one of the structural analysis programs for ITBL users. The code is well known for its high parallel-processing performance, especially in massively parallel environments. In this report, the procedures for using the system, as well as the method of installing it on a PC cluster, are described. (author)
A method for scientific code coupling in a distributed environment
Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.
1994-12-01
This guide book deals with the coupling of big scientific codes. First, the context is introduced: big scientific codes devoted to a specific discipline are coming to maturity, while there are more and more needs in terms of multi-discipline studies. We then describe different kinds of code coupling and an example: the 3D thermal-hydraulic code THYC and the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of the coupling terms. This leads to two kinds of coupling: with weak coupling we can use explicit methods, and with strong coupling we need to use implicit methods. In both cases, we analyze the link with the way the codes are parallelized. For the translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general data structure. This general structure constitutes an intermediary between the codes, thus allowing a relative independence of the codes from a specific coupling. The proposed method for the implementation of a coupling leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes using the PVM (Parallel Virtual Machine) product, and indirect communication with a coupling tool. This second way, with a general code coupling tool based on a coupling method, is the one we strongly recommend. The method rests on two principles: re-usability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
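The distinction between weak (explicit) and strong (implicit) coupling can be sketched with two toy "codes" exchanging a scalar; real couplings exchange whole fields through PVM or the coupling tool, and the functions below are invented stand-ins:

```python
def code_a(y):          # stand-in for e.g. the thermal-hydraulic code
    return 0.5 * y + 1.0

def code_b(x):          # stand-in for e.g. the neutronics code
    return 0.3 * x + 2.0

def weak_coupling(steps):
    """Explicit (weak) coupling: each code advances using the other's
    previous-step data, one exchange per step."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        x, y = code_a(y), code_b(x)     # simultaneous, lagged exchange
    return x, y

def strong_coupling(tol=1e-12):
    """Implicit (strong) coupling: the exchange is iterated to
    convergence before the step is accepted."""
    x, y = 0.0, 0.0
    while True:
        x_new = code_a(y)
        y_new = code_b(x_new)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new

x_w, y_w = weak_coupling(200)
x_s, y_s = strong_coupling()
```

Here both strategies reach the same fixed point because the toy exchange is contractive; for stiff coupling terms only the iterated (implicit) variant remains stable, which is why the guide prescribes implicit methods for strong coupling.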
MacGinnis, Matt; Chu, Howard; Youssef, George; Wu, Kimberley W; Machado, Andre Wilson; Moon, Won
2014-08-29
Orthodontic palatal expansion appliances have been widely used with satisfactory and, most often, predictable clinical results. Recently, clinicians have successfully utilized micro-implants with palatal expander designs to work as anchors to the palate to achieve more efficient skeletal expansion and to decrease undesired dental effects. The purpose of the study was to use finite element method (FEM) to determine the stress distribution and displacement within the craniofacial complex when simulated conventional and micro-implant-assisted rapid palatal expansion (MARPE) expansion forces are applied to the maxilla. The simulated stress distribution produced within the palate and maxillary buttresses in addition to the displacement and rotation of the maxilla could then be analyzed to determine if micro-implants aid in skeletal expansion. A three-dimensional (3D) mesh model of the cranium with associated maxillary sutures was developed using computed tomography (CT) images and Mimics modeling software. To compare transverse expansion stresses in rapid palatal expansion (RPE) and MARPE, expansion forces were distributed to differing points on the maxilla and evaluated with ANSYS simulation software. The stresses distributed from forces applied to the maxillary teeth are distributed mainly along the trajectories of the three maxillary buttresses. In comparison, the MARPE showed tension and compression directed to the palate, while showing less rotation, and tipping of the maxillary complex. In addition, the conventional hyrax displayed a rotation of the maxilla around the teeth as opposed to the midpalatal suture of the MARPE. This data suggests that the MARPE causes the maxilla to bend laterally, while preventing unwanted rotation of the complex. In conclusion, the MARPE may be beneficial for hyperdivergent patients, or those that have already experienced closure of the midpalatal suture, who require palatal expansion and would worsen from buccal tipping of the teeth
Zhang, Y.; Lu, T., E-mail: likesurge@sina.com
2016-12-01
Highlights: • Two characteristic parameters of the temperature fluctuations are used for qualitative analysis. • A quantitative assessment method for high-cycle thermal fatigue of a T-pipe is proposed. • The time-dependent curves for the temperature and thermal stress are not always "in-phase". • A large magnitude of thermal stresses may not mean a large number of fatigue cycles. • The normalized fatigue damage rate and normalized RMS temperature are positively related. - Abstract: With the development of nuclear power and nuclear power safety, high-cycle thermal fatigue of pipe structures induced by the flow and heat transfer of the fluid in pipes has aroused more and more attention. Turbulent mixing of hot and cold flows in a T-pipe is a well-recognized source of thermal fatigue in piping systems, and thermal fatigue is a significant long-term degradation mechanism. It is not easy to evaluate the thermal fatigue of a T-pipe under turbulent flow mixing because the thermal loads acting at the fluid-structure interface of the pipe are complex and changeable. In this paper, a one-way Computational Fluid Dynamics-Finite Element Method (CFD-FEM) coupling based on the ANSYS Workbench 15.0 software has been developed to calculate transient thermal stresses from the temperature fields of turbulent flow mixing, and thermal fatigue assessment has been carried out with the obtained fluctuating thermal stresses by programming in Matlab based on the rainflow counting method. In the thermal analysis, the normalized mean temperatures and the normalized root mean square (RMS) temperatures are obtained and compared with the experiment of the test case from the Vattenfall benchmark facility, to verify the accuracy of the CFD calculation and to determine the position at which thermal fatigue is most likely to occur in the T-junction. Besides, more insights have been obtained in the coupled CFD-FEM analysis and the thermal fatigue
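The normalized temperature statistics used in the CFD verification step can be computed as below; the normalization to the cold and hot inlet temperatures is the usual convention in T-junction benchmark post-processing and is assumed here:

```python
import numpy as np

def normalized_temperature_stats(T, T_cold, T_hot):
    """Normalized mean and RMS of a temperature time series:
    T* = (T - T_cold) / (T_hot - T_cold), with the RMS taken about
    the mean of T* (the fluctuation intensity)."""
    T_star = (np.asarray(T, float) - T_cold) / (T_hot - T_cold)
    mean = float(T_star.mean())
    rms = float(np.sqrt(np.mean((T_star - mean) ** 2)))
    return mean, rms

# Illustrative series oscillating between cold- and mid-range values
mean, rms = normalized_temperature_stats([20.0, 30.0, 20.0, 30.0],
                                         T_cold=20.0, T_hot=40.0)
```

Mapping the RMS values along the pipe wall identifies the mixing-zone location where the fluctuation intensity, and hence the fatigue risk, peaks.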
CFD-FEM coupling for accurate prediction of thermal fatigue
Hannink, M.H.C.; Kuczaj, A.K.; Blom, F.J.; Church, J.M.; Komen, E.M.J.
2009-01-01
Thermal fatigue is a safety-related issue in primary pipework systems of nuclear power plants. Life extension of current reactors and the design of a next generation of new reactors lead to the growing importance of research in this direction. The thermal fatigue degradation mechanism is induced by temperature fluctuations in a fluid, which arise from mixing of hot and cold flows. Accompanying physical phenomena include thermal stratification, thermal striping, and turbulence [1]. Current plant instrumentation systems allow monitoring of possible causes such as stratification and temperature gradients at fatigue-susceptible locations [1]. However, high-cycle temperature fluctuations associated with turbulent mixing cannot be adequately detected by common thermocouple instrumentation. For a proper evaluation of thermal fatigue, therefore, numerical simulations are necessary that couple instantaneous fluid and solid interactions. In this work, a strategy for the numerical prediction of thermal fatigue is presented. The approach couples Computational Fluid Dynamics (CFD) and the Finite Element Method (FEM). For the development of the computational approach, a classical test case for the investigation of thermal fatigue problems is studied, i.e. mixing in a T-junction. Due to turbulent mixing of hot and cold fluids in two perpendicularly connected pipes, temperature fluctuations arise in the mixing zone downstream in the flow. Subsequently, these temperature fluctuations are also induced in the pipes. The stresses that arise due to the fluctuations may eventually lead to thermal fatigue. In the first step of the applied procedure, the temperature fluctuations in both fluid and structure are calculated using the CFD method. Subsequently, the temperature fluctuations in the structure are imposed as thermal loads in a FEM model of the pipes. A mechanical analysis is then performed to determine the thermal stresses, which are used to predict the fatigue lifetime of the structure
Calibration Methods for Reliability-Based Design Codes
Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard
2004-01-01
The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...
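A minimal sketch of the calibration idea for a single normal resistance-load pair follows; real code calibration uses FORM and several limit states simultaneously, and the coefficients of variation and target level below are invented for illustration:

```python
import math
from scipy.optimize import brentq

def beta(gamma, v_R=0.10, v_S=0.20):
    """Reliability index for the linear limit state g = R - S with
    normal R and S, mean resistance mu_R = gamma * mu_S (central
    safety factor gamma) and coefficients of variation v_R, v_S:
    beta = (gamma - 1) / sqrt((gamma*v_R)^2 + v_S^2)."""
    return (gamma - 1.0) / math.sqrt((gamma * v_R) ** 2 + v_S ** 2)

# Calibrate the safety factor so the design meets a target
# reliability index (optimization reduces to root finding here)
beta_target = 3.8
gamma_opt = brentq(lambda g: beta(g) - beta_target, 1.0, 10.0)
```

With several limit states, the scalar root-find becomes an optimization over the partial factors that minimizes the deviation of the achieved reliability indices from the target level, which is the calibration process the abstract refers to.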
The program FEM3D users manual
Misfeldt, I.
1977-11-01
A short description is given of the program FEM3D, which solves the three-dimensional, multigroup neutron diffusion equation by the finite element method. The elements are box-shaped Lagrange-type elements of order 1, 2, or 3. The program gives very reliable results within reasonable calculation times, but it is not a fast program and should therefore mostly be used where high precision is needed. (author/BP)
Lattice Boltzmann method fundamentals and engineering applications with computer codes
Mohamad, A A
2014-01-01
Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
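In the spirit of the book's introductory examples, a D1Q2 lattice Boltzmann (BGK) solver for the 1D diffusion equation fits in a few lines; the relaxation parameter and step count below are illustrative choices, not values from the book:

```python
import numpy as np

def lbm_diffusion_1d(phi0, omega=1.2, steps=200):
    """D1Q2 lattice Boltzmann BGK solver for 1D diffusion with
    periodic boundaries. Two populations stream right and left;
    the equilibrium splits the scalar phi equally between them."""
    f1 = 0.5 * phi0.copy()            # population moving right
    f2 = 0.5 * phi0.copy()            # population moving left
    for _ in range(steps):
        phi = f1 + f2
        feq = 0.5 * phi
        f1 += omega * (feq - f1)      # BGK collision (conserves phi)
        f2 += omega * (feq - f2)
        f1 = np.roll(f1, 1)           # streaming step
        f2 = np.roll(f2, -1)
    return f1 + f2

phi0 = np.zeros(64)
phi0[32] = 1.0                        # initial unit pulse
phi = lbm_diffusion_1d(phi0)
```

The pulse spreads diffusively while the total amount of the scalar is conserved exactly by both collision and streaming, which is the defining structure of the method.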
Higuchi, M; Ando, K; Shimada, S; Umetani, H [Toyota Motor Corp., Aichi (Japan)
1990-04-25
In order to apply FEM (finite element method) flow analysis to environmental engineering, a two-dimensional analysis system was developed and applied to the design of plant ventilation and the improvement of environmental control equipment. A three-dimensional analysis system was then developed to extend its application. For large-scale models, no enhancement of processing capacity was expected even on a supercomputer, because of long I/O times between internal and external memories caused by the small internal memory space; a parallel processing system with multiple external memories was therefore introduced to analyze such models. To achieve more efficient processing, multiple series of renumbering codes were also prepared to optimize the processing order of solvers, elements and nodes. As examples, the improvement of a thermal oxidizer and the ventilation design for a forging plant are presented, together with the flow analysis around a plant as an example of a three-dimensional large-scale model. 8 refs., 17 figs., 1 tab.
Uncertainty assessment of a dike with an anchored sheet pile wall using FEM
Rippi, Aikaterini; Nuttall, Jonathan; Teixeira, Ana; Schweckendiek, T.; Lang, M.; Klijn, F.; Samuels, P.
2016-01-01
The Dutch design codes for dikes with retaining walls rely on Finite Element Analysis (FEM) in combination with partial safety factors. However, this can lead to conservative designs. For this reason, in this study, a reliability analysis is carried out with FEM calculations, aiming to demonstrate the feasibility of reliability analysis for a dike with an anchored sheet pile wall modelled in the 2D FEM code Plaxis. Sensitivity and reliability analyses were enabled by coupling the uncertainty package OpenTURNS with Plaxis. The most relevant (ultimate) limit states concern the anchor, the sheet pile wall, soil body failure (global instability) and finally the system. The case was used to investigate the applicability of the First Order Reliability Method (FORM) and Directional Sampling (DS) to analysing these limit states, with the final goal of estimating the probability of failure and identifying the most important soil properties that affect the behaviour of each component and the system as a whole.
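A reliability analysis of this kind ultimately estimates a probability of failure from a limit state. As a minimal, hedged sketch (the limit state, distributions and all numbers below are invented stand-ins for the expensive FEM evaluations used in such studies), crude Monte Carlo on an algebraic limit state looks like:

```python
import random
from statistics import NormalDist

random.seed(0)

def limit_state(r, s):
    # Hypothetical limit state g = R - S: failure when load S exceeds resistance R.
    return r - s

n = 200_000
fails = 0
for _ in range(n):
    r = random.gauss(10.0, 1.5)   # resistance (assumed normal, invented parameters)
    s = random.gauss(5.0, 2.0)    # load (assumed normal, invented parameters)
    if limit_state(r, s) < 0.0:
        fails += 1
pf = fails / n

# Analytic check for independent normals:
# beta = (muR - muS) / sqrt(sR^2 + sS^2)  and  pf = Phi(-beta)
beta = (10.0 - 5.0) / (1.5**2 + 2.0**2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
```

Methods such as FORM replace this brute-force sampling with a search for the design point at reliability index beta; directional sampling reduces the number of limit-state evaluations, which matters when each evaluation is a full FEM run.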
Escolano-Sánchez, F.
2015-03-01
Direct foundations with continuous elements, such as slabs, provide more advantages than direct foundations with isolated elements, such as footings, or deep foundations, such as piles, in soils with natural or man-made cavities. Slabs are usually designed with two-dimensional models that represent their shape in plan, resting on a linear elastic support represented by a modulus of soil reaction. Regarding settlement estimation, the following article compares the Finite Element Method (FEM) with the Classical Method (CM) for selecting the modulus of soil reaction used to design foundation slabs in sensitive soils and sites with possible cavities or collapses. The analysis includes one of these cavities in the design in order to evaluate the risk of failure.
Calculation of marine propeller static strength based on coupled BEM/FEM
YE Liyu
2017-10-01
[Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low-order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transmitted to the finite element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results for different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distributions are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into theoretical and optimized propeller design codes, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.
Statistical methods for accurately determining criticality code bias
Trumble, E.F.; Kimball, K.D.
1997-01-01
A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs
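The lower tolerance limit used in such validation work is typically mean − k·s, where k is a one-sided tolerance factor depending on sample size, coverage and confidence. As a hedged sketch (the k_eff numbers are invented, and this uses a Natrella/Howe-type closed-form approximation rather than the exact noncentral-t factor):

```python
from math import sqrt
from statistics import NormalDist

def one_sided_tolerance_factor(n, coverage=0.95, confidence=0.95):
    """Approximate one-sided tolerance factor k (closed-form approximation).

    The lower tolerance limit is LTL = mean - k * stddev: with the chosen
    confidence, at least `coverage` of the population lies above the LTL.
    """
    z_p = NormalDist().inv_cdf(coverage)     # coverage quantile
    z_a = NormalDist().inv_cdf(confidence)   # confidence quantile
    a = 1.0 - z_a**2 / (2.0 * (n - 1))
    b = z_p**2 - z_a**2 / n
    return (z_p + sqrt(z_p**2 - a * b)) / a

# Hypothetical example: 10 validation calculations with mean k_eff bias 0.995
# and sample standard deviation 0.004.
k = one_sided_tolerance_factor(10)
ltl = 0.995 - k * 0.004
```

For n = 10 at 95%/95% this approximation gives k ≈ 2.87 (the exact noncentral-t value is about 2.911); the factor shrinks toward the plain normal quantile as n grows, which is why more validation cases give a less penalizing limit.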
Control rod computer code IAMCOS: general theory and numerical methods
West, G.
1982-11-01
IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods [fr
Method and device for decoding coded digital video signals
2000-01-01
The invention relates to a video coding method and system including a quantization and coding sub-assembly (38) in which a quantization parameter is controlled by another parameter defined as being in direct relation with the dynamic range value of the data contained in given blocks of pixels.
Parallel processing of structural integrity analysis codes
Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.
1996-01-01
Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising high speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solvers and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab
MARS code manual volume I: code structure, system models, and solution methods
Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl
2010-02-01
The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides a complete overview of the code structure and major functions of MARS, including code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to that of RELAP. This similarity to RELAP5 input is intentional, as the input scheme allows minimum modification between RELAP5 and MARS3.1 inputs. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible
Advanced codes and methods supporting improved fuel cycle economics - 5493
Curca-Tivig, F.; Maupin, K.; Thareau, S.
2015-01-01
AREVA's code development program was practically completed in 2014. The basic codes supporting a new generation of advanced methods are the following. GALILEO is a state-of-the-art fuel rod performance code for PWR and BWR applications. Development is completed, and implementation has started in France and the U.S.A. ARCADIA-1 is a state-of-the-art neutronics/thermal-hydraulics/thermal-mechanics code system for PWR applications. Development is completed, and implementation has started in Europe and in the U.S.A. The system thermal-hydraulic codes S-RELAP5 and CATHARE-2 are not really new but are still state-of-the-art in the domain. S-RELAP5 was completely restructured and re-coded, extending its life cycle by further decades. CATHARE-2 will be replaced in the future by the new CATHARE-3. The new AREVA codes and methods are largely based on first-principles modeling with an extremely broad international verification and validation database. This enables AREVA and its customers to access more predictable licensing processes in a fast evolving regulatory environment (new safety criteria, requests for enlarged qualification databases, statistical applications, uncertainty propagation...). In this context, the advanced codes and methods and the associated verification and validation represent the key to avoiding penalties on products, on operational limits, or on methodologies themselves
The variational cellular method - the code implementation
Rosato, A.; Lima, M.A.P.
1980-12-01
The process to determine the potential energy curve for diatomic molecules by the Variational Cellular Method is discussed. An analysis of the determination of the electronic eigenenergies and the electrostatic energy of these molecules is made. An explanation of the input data and their meaning is also presented. (Author) [pt
Fujimura, Toichiro
1996-01-01
A three-dimensional neutron transport code, DFEM, has been developed using the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which both space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of the neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. Eigenvalues and flux distributions for real-scale fast reactors and the NEA benchmark problems are presented and discussed in comparison with the results of other transport codes. (author)
New decoding methods of interleaved burst error-correcting codes
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst error correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Spacetime Discontinuous Galerkin FEM: Spectral Response
Abedi, R; Omidi, O; Clarke, P L
2014-01-01
Materials in nature demonstrate certain spectral shapes in terms of their material properties. Since successful experimental demonstrations in 2000, metamaterials have provided a means to engineer materials with desired spectral shapes for their material properties. Computational tools are employed in two different aspects of metamaterial modeling: 1. microscale unit cell analysis to derive and possibly optimize the material's spectral response; 2. macroscale analysis of their interaction with conventional materials. We compare the Time-Domain (TD) and Frequency-Domain (FD) approaches for metamaterial applications. Finally, we discuss the advantages of the TD Spacetime Discontinuous Galerkin finite element method (FEM) for spectral analysis of metamaterials
A code for obtaining temperature distribution by finite element method
Bloch, M.
1984-01-01
The ELEFIB Fortran computer code, which uses the finite element method to calculate temperature distributions for linear and two-dimensional problems, in the steady-state regime or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the code gives good results. (M.C.K.) [pt
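As an illustration of the Galerkin finite element approach such a code implements (not the ELEFIB code itself), here is a minimal 1D steady-state sketch: linear elements on a uniform mesh for −k·u″ = q with fixed end temperatures, assembled into a tridiagonal system and solved directly:

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (lists of equal length)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Galerkin FEM with linear elements: -k u'' = q on [0, L], u(0) = u(L) = 0.
k, q, L, n_el = 1.0, 1.0, 1.0, 8
h = L / n_el
n = n_el - 1                      # interior nodes (Dirichlet ends eliminated)
diag = [2.0 * k / h] * n          # assembled element stiffness, interior rows
off = [-k / h] * n
rhs = [q * h] * n                 # consistent load vector for a constant source
u = [0.0] + solve_tridiag(off, diag, off, rhs) + [0.0]
```

For this particular 1D problem, linear elements with exact load integration are nodally exact, so the computed nodal values match u(x) = q·x·(L − x)/(2k); real codes like the one described above generalize the same assembly-and-solve pattern to 2D meshes and transient terms.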
Development and application of methods to characterize code uncertainty
Wilson, G.E.; Burtt, J.D.; Case, G.S.; Einerson, J.J.; Hanson, R.G.
1985-01-01
The United States Nuclear Regulatory Commission sponsors both international and domestic studies to assess its safety analysis codes. The Commission staff intends to use the results of these studies to quantify the uncertainty of the codes with a statistically based analysis method. Development of the methodology is underway. The Idaho National Engineering Laboratory contributions to the early development effort, and testing of two candidate methods are the subjects of this paper
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
Vaurio, J.K.
1985-01-01
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds of) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has already been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables
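The screening-then-response-surface idea can be sketched in a few lines. This is a hedged toy (the four-variable "code" and its sensitivities are invented; the real codes work on tens or hundreds of inputs): fit a linear response surface to a common set of sampled runs, rank inputs by standardized coefficient, and reuse the same cheap surrogate for propagation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an expensive safety-analysis code: the output depends
# strongly on inputs 0 and 2, weakly on 1 and 3 (hypothetical sensitivities).
def code(x):
    return 4.0 * x[0] + 0.1 * x[1] + 2.5 * x[2] + 0.01 * x[3]

m = 200                                    # number of code runs
X = rng.normal(0.0, 1.0, size=(m, 4))      # sampled uncertain input variables
y = np.array([code(row) for row in X])

# Screening: fit a linear response surface, then rank inputs by
# |coefficient| times the input's sampled standard deviation.
A = np.column_stack([np.ones(m), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
importance = np.abs(coef[1:]) * X.std(axis=0)
ranking = np.argsort(importance)[::-1]     # most important input first

# Propagation: reuse the same runs via the surrogate instead of new code runs.
y_surrogate = A @ coef
```

The output distribution can then be characterized from `y_surrogate` (or from fresh samples pushed through the fitted surface) at negligible cost compared with re-running the safety analysis code itself.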
Kanatani, Mamoru; Tochigi, Hitoshi; Kawai, Tadashi
1999-01-01
In the development of man-made island siting technology for nuclear power plants, assessing the stability of the seawall against large ocean waves and earthquakes is indispensable. Concerning the seismic stability of the seawall, prediction of deformations such as sliding and settlement of the seawall during an earthquake, including the armour units in front of the caisson, becomes an important factor. For this purpose, the authors have developed a two-dimensional DEM-FEM coupled analysis method (SEAWALL-2D) to predict the deformation of a seawall covered with armour units during an earthquake. In this method, the movements of the armour units are calculated in the DEM analysis part, and the deformation of the caisson, rubble mound, sand seabed and backfill are calculated in the FEM analysis part, taking into account the nonlinearity of the soil materials based on effective stress. Numerical simulations of dynamic centrifuge model tests of the seawall were conducted to verify the applicability of this method. The simulation analyses successfully reproduced the movements of the armour units and the residual deformation of the caisson, sand seabed and backfill when compared with the test results. (author)
Parallelization methods study of thermal-hydraulics codes
Gaudart, Catherine
2000-01-01
The variety of parallelization methods and machines leads to a wide range of choices for programmers. In this study we suggest, in an industrial context, some solutions drawn from the experience acquired through different parallelization methods. The study concerns several scientific codes which simulate a large variety of thermal-hydraulic phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes stood out as the natural candidate, and several parallelization methods were applied to this particular part. From these developments one can estimate the work required for a non-initiate programmer to parallelize his application, and the impact of the development constraints. The different parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing the modifications to the codes, a tool called SPS (Server of Parallel Solvers) was developed. We describe the different constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and finally compare the results against the imposed criteria. (author) [fr
A GPU code for analytic continuation through a sampling method
Johan Nordström
2016-01-01
We here present a code for performing analytic continuation of fermionic Green's functions and self-energies, as well as bosonic susceptibilities, on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVIDIA. Detailed scaling tests are presented for two different GPUs in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
2D arc-PIC code description: methods and documentation
Timko, Helga
2011-01-01
Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact Linear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
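The core trick, an iteration whose discrete subproblem has an analytical sign-projection solution, can be shown on a deliberately simplified objective. This is a toy quadratic loss, not the paper's supervised hashing objective, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete problem: minimize f(B) = ||B - Y||_F^2 over B in {-1, +1}^(n x r).
Y = rng.normal(size=(6, 8))                              # real-valued targets
B = np.where(rng.normal(size=Y.shape) >= 0, 1.0, -1.0)   # random initial codes

step = 0.5
for _ in range(10):
    grad = 2.0 * (B - Y)                              # gradient of the smooth term
    # Proximal-linearized step: gradient move, then analytical projection
    # onto the discrete set {-1, +1}, which is just the sign.
    B = np.where(B - step * grad >= 0, 1.0, -1.0)
```

For this toy loss the iteration reaches the global discrete minimizer, the elementwise sign of Y; the paper's contribution is making the same sign-projection step work for much richer (supervised, constrained) losses.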
Automated uncertainty analysis methods in the FRAP computer codes
Peck, S.O.
1980-01-01
A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts
Deep Learning Methods for Improved Decoding of Linear Codes
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
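For reference, the plain (non-neural) min-sum baseline that such learned decoders start from is compact enough to sketch. The parity-check matrix below is the standard (7,4) Hamming code; the noisy-LLR example (one weak, wrong-sign channel value) is contrived for illustration:

```python
# Parity-check matrix of the (7,4) Hamming code.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def min_sum_decode(llr, H, iters=10):
    """Min-sum decoding. llr[j] > 0 means bit j is more likely 0."""
    m, n = len(H), len(H[0])
    edges = [(i, j) for i in range(m) for j in range(n) if H[i][j]]
    v2c = {e: llr[e[1]] for e in edges}            # variable-to-check messages
    for _ in range(iters):
        c2v = {}
        for (i, j) in edges:                       # check-node update
            others = [v2c[(i, k)] for k in range(n) if H[i][k] and k != j]
            sign = 1.0
            for x in others:
                sign *= 1.0 if x >= 0 else -1.0
            c2v[(i, j)] = sign * min(abs(x) for x in others)
        for (i, j) in edges:                       # variable-node update
            v2c[(i, j)] = llr[j] + sum(c2v[(k, j)] for k in range(m)
                                       if H[k][j] and k != i)
        beliefs = [llr[j] + sum(c2v[(i, j)] for i in range(m) if H[i][j])
                   for j in range(n)]
        bits = [0 if b >= 0 else 1 for b in beliefs]
        if all(sum(H[i][j] * bits[j] for j in range(n)) % 2 == 0
               for i in range(m)):                 # stop once syndrome is zero
            return bits
    return bits

# All-zero codeword sent; bit 0 received noisily (weak, wrong-sign LLR).
llr = [2.0] * 7
llr[0] = -0.5
print(min_sum_decode(llr, H))  # -> [0, 0, 0, 0, 0, 0, 0]
```

The neural variants discussed above keep exactly this message-passing structure but attach learned multiplicative weights to the messages, which is why they can only improve on, never fall below, this baseline when trained sensibly.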
Research on coding and decoding method for digital levels
Tu Lifen; Zhong Sidong
2011-01-20
A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1mm when the measuring range is between 2m and 100m, which can meet practical needs.
Optimized iterative decoding method for TPC coded CPM
Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei
2018-05-01
Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show our method is effective in improving convergence performance.
Rapid Structural Design Change Evaluation with AN Experiment Based FEM
Chu, C.-H.; Trethewey, M. W.
1998-04-01
The work in this paper proposes a dynamic structural design model that can be developed in a rapid fashion. The approach endeavours to produce a simplified FEM developed in conjunction with an experimental modal database. The FEM is formulated directly from the geometry and connectivity used in an experimental modal test, using beam/frame elements. The model sacrifices fine detail for a rapid development time. The FEM is updated at the element level so the dynamic response closely replicates the experimental results. The physical attributes of the model are retained, making it well suited to evaluating the effect of potential design changes. The capabilities are evaluated in a series of computational and laboratory tests. First, a study is performed with a simulated cantilever beam with a variable mass and stiffness distribution. The modal characteristics serve as the updating target, with random noise added to simulate experimental uncertainty. A uniformly distributed FEM is developed and updated. The results are excellent: all natural frequencies are within 0.001%, with MAC values above 0.99. Next, the method is applied to predict the dynamic changes of a hardware portal frame structure for a radical design change. Natural frequency predictions from the original FEM differ by as much as almost 18%, with reasonable MAC values. The results predicted from the updated model agree very well with the actual hardware changes: the first five modal natural frequencies differ by around 5%, and the corresponding mode shapes produce MAC values above 0.98.
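The MAC values quoted above compare mode shapes between test and model, and the criterion itself is essentially one line: the normalized squared inner product of two shape vectors. A small sketch with invented cantilever-like shapes:

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors (0..1)."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

# Hypothetical first-bending shapes sampled at five points along a cantilever:
mode_test = [0.0, 0.38, 0.71, 0.92, 1.0]   # "measured" shape (invented)
mode_fem  = [0.0, 0.40, 0.70, 0.93, 1.0]   # "updated FEM" shape (invented)
correlation = mac(mode_test, mode_fem)     # near 1.0 -> shapes correlate well
```

A MAC of 1 means perfectly correlated shapes and 0 means orthogonal ones, which is why values above 0.98-0.99, as reported above, indicate the updated model reproduces the measured mode shapes.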
Unconventional material part FEM analysis
Michal TROPP
2016-12-01
The article covers the usability of alternative materials in vehicle construction. The paper elaborates upon the setup and the analysis of the results of an FEM model of a carbon composite component. The 3D model used for the examination is a part of an axle from an alternative small electric vehicle. The analysis was conducted with the help of MSC Adams and Ansys Workbench software. Color maps of von Mises stress in the material and total deformations of the component are the results of the calculation.
Method of laser beam coding for control systems
Pałys, Tomasz; Arciuch, Artur; Walczak, Andrzej; Murawski, Krzysztof
2017-08-01
The article presents a method of encoding a laser beam for control systems. The experiments were performed using a red laser source emitting at a wavelength of λ = 650 nm with a power of P ≈ 3 mW. The aim of the study was to develop methods of modulation and demodulation of the laser beam. Results of research in which we determined the effect of selected camera parameters, such as image resolution and number of frames per second, on the result of demodulation of the optical signal are also shown in the paper. The experiments showed that the adopted coding method provides sufficient information encoded in a single laser beam (36 codes, with a decoding effectiveness of 99.9%).
Monte Carlo burnup codes acceleration using the correlated sampling method
Dieudonne, C.
2013-01-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and avoids the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and replace them with perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and understand its effects on depletion calculations. Third, we discuss the implementation of this method in the TRIPOLI-4 code, as well as the precise calculation scheme used to obtain an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of the depletion of a REP-like fuel cell. This technique is then used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
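The essence of correlated sampling, reusing one set of samples drawn under an unperturbed density to estimate quantities under a perturbed one via likelihood-ratio weights, can be shown on a scalar toy problem (exponential densities standing in here for the transport problem; the rates are invented):

```python
import math
import random

random.seed(7)

# One set of samples from the UNPERTURBED density p = Exp(lam0); the
# perturbed case p' = Exp(lam1) is handled with the SAME samples, reweighted.
lam0, lam1 = 1.0, 1.1
n = 100_000
xs = [random.expovariate(lam0) for _ in range(n)]

def weight(x):
    # Likelihood ratio p'(x)/p(x) for exponential densities lam * exp(-lam * x).
    return (lam1 / lam0) * math.exp(-(lam1 - lam0) * x)

est_unperturbed = sum(xs) / n                        # E_p[x], exact value 1/lam0
est_perturbed = sum(x * weight(x) for x in xs) / n   # E_p'[x], exact value 1/lam1
```

Because both estimates share the same random samples, their difference (the perturbation effect) has much lower variance than two independent runs would give, which is exactly what makes small burnup-step perturbations cheap to evaluate.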
CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD
Yakup TURGUT
2004-03-01
In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. Using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program makes the CNC part program generation process easy. The program was developed in the BASIC 6.0 programming language, while the material and cutting tool databases were supported with ACCESS 7.0.
Nonlinear observer design for a nonlinear string/cable FEM model using contraction theory
Turkyilmaz, Yilmaz; Jouffroy, Jerome; Egeland, Olav
model is presented in the form of partial differential equations (PDE). Galerkin's method is then applied to obtain a set of ordinary differential equations such that the cable model is approximated by a FEM model. Based on the FEM model, a nonlinear observer is designed to estimate the cable...
New computational methods used in the lattice code DRAGON
Marleau, G.; Hebert, A.; Roy, R.
1992-01-01
The lattice code DRAGON is used to perform transport calculations inside cells and assemblies in multidimensional geometry using the collision probability method, including the interface current and J± techniques. Typical geometries that can be treated with this code include CANDU 2-dimensional clusters, CANDU 3-dimensional assemblies, and pressurized water reactor (PWR) rectangular and hexagonal assemblies. It contains a self-shielding module for the treatment of microscopic cross-section libraries and a depletion module for burnup calculations. DRAGON was written in a modular form so as to accept new collision probability options easily and make them readily available to all the modules that require collision probability matrices, such as the self-shielding module, the flux solution module and the homogenization module. In this paper the authors present an overview of DRAGON and discuss some of the methods that were implemented in it to improve its performance.
Computer codes and methods for simulating accelerator driven systems
Sartori, E.; Byung Chan Na
2003-01-01
A large set of computer codes and associated data libraries has been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different information centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact of nuclear activities on the environment, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them were not designed for accelerator driven systems (ADS), but with competent use they can be applied to studying such systems or can form the basis for adapting existing methods to the specific needs of ADS. The present paper describes the types of methods, codes and associated data available and their role in the applications, and provides Web addresses to facilitate searches for such tools. Some indications are given of the effect of inappropriate or 'blind' use of existing tools on ADS studies. Reference is made to available experimental data that can be used to validate the methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)
Code-B-1 for stress/strain calculation for TRISO fuel particle (Contract research)
Aihara, Jun; Ueta, Shohei; Shibata, Taiju; Sawa, Kazuhiro
2011-12-01
We have developed Code-B-1, by modifying an existing code, for predicting the failure probabilities of the coated fuel particles of high temperature gas-cooled reactors (HTGRs) under operation. A finite element method (FEM) is employed for the stress calculation part, and Code-B-1 can treat the plastic deformation of the coating layers of the coated fuel particles, which the existing code cannot. (author)
1963 Vajont rock slide: a comparison between 3D DEM and 3D FEM
Crosta, Giovanni; Utili, Stefano; Castellanza, Riccardo; Agliardi, Federico; Bistacchi, Andrea; Weng Boon, Chia
2013-04-01
Data on the exact location of the failure surface of the landslide were used as the starting point for the modelling. Three-dimensional numerical analyses were run employing both a discrete element method (DEM) and a finite element method (FEM) code. This work focuses on predicting the movement of the landslide during its initial phase of detachment from Mount Toc. The results obtained by the two methods are compared, and conjectures are formulated on the observed discrepancies between their predictions. In the DEM simulations, the internal interaction of the sliding blocks and the expansion of the debris result from the kinematic interaction among the rock blocks produced by the jointing of the rock mass involved in the slide. In the FEM analyses, the c-phi reduction technique was employed along the predefined failure surface until the onset of the landslide occurred. In particular, two major blocks of the landslide were identified, and the stress, strain and displacement fields at the interface between the two blocks were analysed in detail.
CFD code verification and the method of manufactured solutions
Pelletier, D.; Roache, P.J.
2002-01-01
This paper presents the Method of Manufactured Solutions (MMS) for CFD code verification. The MMS provides benchmark solutions for direct evaluation of the solution error. The best benchmarks are exact analytical solutions with sufficiently complex solution structure to ensure that all terms of the differential equations are exercised in the simulation. The MMS provides a straightforward and general procedure for generating such solutions. When used with systematic grid refinement studies, which are remarkably sensitive, the MMS provides strong code verification with a theorem-like quality. The MMS is first presented on simple 1-D examples. Manufactured solutions for more complex problems are then presented, with sample results from grid convergence studies. (author)
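The MMS recipe, choose an exact solution, derive the source term it forces, then confirm the observed order of accuracy under grid refinement, can be sketched in a few lines. This toy 1-D example (a sketch, assuming a simple Poisson problem rather than the paper's CFD equations) manufactures u(x) = sin(πx) for -u'' = f and checks that a second-order scheme actually converges at order 2:

```python
import math

def solve_poisson(n, f):
    """Second-order central differences for -u'' = f on (0,1), u(0)=u(1)=0,
    solved with the Thomas algorithm for the tridiagonal system."""
    h = 1.0 / n
    a = [-1.0] * (n - 1)   # sub-diagonal
    b = [2.0] * (n - 1)    # diagonal
    c = [-1.0] * (n - 1)   # super-diagonal
    d = [h * h * f((i + 1) * h) for i in range(n - 1)]
    for i in range(1, n - 1):            # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):       # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

def mms_error(n):
    exact = lambda x: math.sin(math.pi * x)             # manufactured solution
    f = lambda x: math.pi ** 2 * math.sin(math.pi * x)  # source it forces
    u = solve_poisson(n, f)
    return max(abs(u[i] - exact((i + 1) / n)) for i in range(n - 1))

e1, e2 = mms_error(32), mms_error(64)
order = math.log(e1 / e2, 2)   # observed order; should approach 2
```

Any coding error that pollutes a term of the discretization shows up as a drop in the observed order, which is the "theorem-like" property the abstract refers to.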
Verification of the DEFENS Code through the CANDU Problems with Rectangular Geometry
Ryu, Eun Hyun; Song, Yong Mann [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
Because a finite element method (FEM) based code can explicitly describe the core geometry, it has an advantage in analyses of cores such as the CANDU core. For the reactor physics calculation in the CANDU core, the RFSP-IST code is used for the core analysis; the RFSP-IST code is based on the finite difference method (FDM), so its convergence with the mesh size and the geometry shape is not consistent. In this research, the convergence of the RFSP code with the mesh size is investigated, and a comparison between the FEM and FDM is made on the same rectangular geometry to assess the usefulness of an FEM-based code. The target problems are an imaginary core and the initial core with uniform parameters, produced by the WIMS-IST code based on the parameters of Wolsong unit 1. The reference solution is generated by running the multi-group calculation of the McCARD code. The DEFENS code is compared with the RFSP code for the imaginary and initial cores, and the accuracy of the DEFENS code and the disadvantage of the RFSP code are verified.
Bolt-Grout Interactions in Elastoplastic Rock Mass Using Coupled FEM-FDM Techniques
Debasis Deb
2010-01-01
A numerical procedure based on the finite element method (FEM) and the finite difference method (FDM) for the analysis of bolt-grout interactions is introduced in this paper. The finite element procedure incorporates elasto-plastic concepts with the Hoek-Brown yield criterion and has been applied to the rock mass. Bolt-grout interactions are evaluated with the finite difference method and are embedded in the elasto-plastic FEM procedure. Experimental validation of the proposed FEM-FDM procedure and numerical examples of a bolted tunnel are provided to demonstrate the efficacy of the proposed method for practical applications.
Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C
2018-07-01
Coding of diagnoses is important for patient care, hospital management and research. However, coding accuracy is often poor and may reflect the method of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between lists of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) case notes and discharge summary, and (iii) discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary and secondary diagnoses, the Healthcare Resource Group (HRG) and estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when the coder used the discharge summary alone, and decreased significantly both when the coder used the case notes (70% vs 58%, respectively) and when coding from the discharge summary with medical support (70% vs 60%, respectively); HRG inaccuracy was 35% for coding with medical support. The three coding methods resulted in an estimated annual loss of hospital remuneration of between £1.8M and £16.5M. The accuracy of diagnosis codes and the percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries.
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when those codes do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm applies this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
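A toy version of the rescoring stage might look as follows. This is a sketch of the general idea only; the function name, blending rule, and all codes and probabilities are invented for illustration and are not the paper's trained model:

```python
def rescore(base, cooc, iters=5, alpha=0.5, thresh=0.5):
    """Blend each code's base auto-coder confidence with the average
    conditional probability of seeing it given the codes currently above
    threshold, iterating until the active set stabilizes."""
    scores = dict(base)
    for _ in range(iters):
        active = [c for c, s in scores.items() if s >= thresh]
        new = {}
        for c, s0 in base.items():
            others = [b for b in active if b != c]
            if others:
                # mean P(c | b) over currently confident co-assigned codes b
                support = sum(cooc.get((c, b), 0.0) for b in others) / len(others)
                new[c] = (1 - alpha) * s0 + alpha * support
            else:
                new[c] = s0
        scores = new
    return scores

# hypothetical codes: "A12" and "B34" co-occur often, "C56" rarely with either
base = {"A12": 0.70, "B34": 0.55, "C56": 0.52}
cooc = {("A12", "B34"): 0.9, ("B34", "A12"): 0.8,
        ("C56", "A12"): 0.1, ("C56", "B34"): 0.1}
out = rescore(base, cooc)
```

In this toy run the borderline but well-supported code "B34" is boosted, while "C56", which rarely co-occurs with the others, is suppressed below threshold, which is the qualitative behavior the abstract describes.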
Local coding based matching kernel method for image classification.
Yan Song
This paper focuses on how to effectively and efficiently measure visual similarity for local feature based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments on widely used benchmark datasets, including 15-Scenes, Caltech101/256, and the PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Zapoměl, Jaroslav; Stachiv, Ivo; Ferfecki, P.
66-67, January (2016), s. 223-231 ISSN 0888-3270 R&D Projects: GA ČR GAP107/12/0800 Institutional support: RVO:61388998 ; RVO:68378271 Keywords : film measurement * finite element method * Monte Carlo probabilistic method * resonance frequency Subject RIV: BI - Acoustics; BI - Acoustics (FZU-D) Impact factor: 4.116, year: 2016
3D FEM Simulation of Flank Wear in Turning
Attanasio, Aldo; Ceretti, Elisabetta; Giardini, Claudio
2011-05-01
This work deals with tool wear simulation. Studying the influence of tool wear on tool life, tool substitution policy, final part quality, surface integrity, cutting forces and power consumption is important to reduce the global process costs. Adhesion, abrasion, erosion, diffusion, corrosion and fracture are some of the phenomena responsible for tool wear, depending on the selected cutting parameters: cutting velocity, feed rate, depth of cut, etc. In some cases these wear mechanisms are described by analytical models as a function of process variables (temperature, pressure and sliding velocity along the cutting surface). These analytical models are suitable for implementation in FEM codes and can be used to simulate tool wear. In the present paper a commercial 3D FEM software package has been customized to simulate tool wear during turning operations when cutting AISI 1045 carbon steel with an uncoated tungsten carbide tip. The FEM software was extended with a subroutine able to modify the tool geometry on the basis of the estimated tool wear as the simulation goes on. Since, for the considered tool-workpiece material pair, the main phenomena generating wear are abrasion and diffusion, the tool wear model implemented in the subroutine was obtained as a combination of Usui's model and Takeyama and Murata's model. A comparison between experimental and simulated flank tool wear curves is reported, demonstrating that it is possible to simulate tool wear development.
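The combined wear-rate idea can be sketched as a sum of an abrasive/adhesive Usui-type term and a diffusive Takeyama-Murata-type term, integrated over cutting time. All constants below are placeholders chosen for illustration, not the calibrated values used in the paper:

```python
import math

def wear_rate(sigma_n, v_s, T, A=1.0e-8, B=8500.0, D=5.0e-3, E=75.0e3, R=8.314):
    """Combined wear-rate sketch: a Usui-type term driven by normal stress
    sigma_n (MPa), sliding velocity v_s (m/min) and temperature T (K), plus
    a diffusive Arrhenius-type Takeyama-Murata term. Constants are
    placeholders, not fitted values."""
    usui = A * sigma_n * v_s * math.exp(-B / T)
    diffusive = D * math.exp(-E / (R * T))
    return usui + diffusive

def flank_wear(minutes, dt=0.1, **cond):
    """Explicit time integration of the wear rate, mimicking the subroutine
    that grows the worn tool geometry as the FEM simulation proceeds."""
    w, t = 0.0, 0.0
    while t < minutes:
        w += wear_rate(**cond) * dt
        t += dt
    return w

# illustrative cutting condition
VB = flank_wear(10.0, sigma_n=800.0, v_s=150.0, T=1100.0)
```

In the paper's workflow the local process variables would come from the FEM solution at each tool surface node rather than being fixed as here.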
Resonance interference method in lattice physics code stream
Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung
2015-01-01
A newly developed resonance interference model is implemented in the lattice physics code STREAM, and the model shows a significant improvement in computing accurate eigenvalues. Equivalence theory is widely used in production calculations to generate effective multigroup (MG) cross-sections (XS) for commercial reactors. Although many methods have been developed to enhance the accuracy of effective XS computations, current resonance treatment methods still lack a clear resonance interference model. The conventional resonance interference model simply adds the absorption XSs of other resonance isotopes to the background XS. However, the conventional models show non-negligible errors in computing effective XSs and eigenvalues. In this paper, a resonance interference factor (RIF) library method is proposed. This method interpolates the RIFs in a pre-generated RIF library and corrects the effective XS, rather than solving the time-consuming slowing-down calculation. The RIF library method is verified on homogeneous and heterogeneous problems. The verification results show that the proposed method significantly improves the accuracy of treating the interference effect. (author)
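The runtime step of the RIF library method, interpolating a pre-tabulated factor instead of redoing the slowing-down calculation, can be sketched as follows. The table entries and cross-section values are purely illustrative, not data from the paper:

```python
from bisect import bisect_left

def interp_rif(table, sigma0):
    """Linear interpolation of a pre-tabulated resonance interference factor
    (RIF) against background cross-section sigma0 (barns). The table is a
    sorted list of (sigma0, rif) pairs; values outside the grid are clamped."""
    xs = [x for x, _ in table]
    if sigma0 <= xs[0]:
        return table[0][1]
    if sigma0 >= xs[-1]:
        return table[-1][1]
    i = bisect_left(xs, sigma0)
    x0, r0 = table[i - 1]
    x1, r1 = table[i]
    t = (sigma0 - x0) / (x1 - x0)
    return r0 + t * (r1 - r0)

# hypothetical RIF table for one resonance group vs background XS (barns)
rif_table = [(10.0, 0.95), (50.0, 0.97), (100.0, 0.98), (1000.0, 1.00)]

sigma_eff_isolated = 22.5   # barns, from a single-isotope calculation
sigma_eff_corrected = sigma_eff_isolated * interp_rif(rif_table, 60.0)
```

Multiplying the single-isotope effective XS by the interpolated factor replaces the expensive per-case slowing-down solve with a table lookup, which is the speed-accuracy trade the method exploits.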
D. S. Vakhlyarskiy
2016-01-01
This paper proposes a method to calculate the splitting of the natural frequencies of the shell of a hemispherical resonator gyro (HRG). The paper considers splitting that arises from a small defect of the middle surface, which makes the resonator differ from a shell of revolution. The presented method is a combination of the perturbation method and the finite element method, and it finds the frequency splitting caused by shape defects arbitrarily distributed in the circumferential direction. This is achieved by calculating perturbations of multiple natural frequencies of the second and higher orders. The proposed method allows the splitting of multiple frequencies to be calculated for a shell with a meridian of arbitrary shape. The developed finite element is an annular shell element with two nodes. Projections of the displacements on the axes of the global cylindrical coordinate system are used as the unknowns, and second-degree polynomials are used to approximate the displacements. Within the finite element, the geometric characteristics are expanded in a series in the small parameter of the perturbations of the middle-surface geometry. Displacements on the finite element are expanded in a series in the small parameter and in a series in the circumferential angle. In the computer implementation of the method, three-dimensional arrays are used to store the perturbed quantities. This allows regular expressions to be used for the mass and stiffness matrices when building the finite element, instead of analytic dependencies for each perturbation of these matrices of the required order, with the necessary mathematical operations redefined in accordance with the perturbation method. As a test task, the frequency splitting of a non-circular cylindrical resonator with Navier boundary conditions is calculated. The discrepancy between the results and the semi-analytic solution to this problem is less than 1%.
A gridding method for object-oriented PIC codes
Gisler, G.; Peter, W.; Nash, H.; Acquah, J.; Lin, C.; Rine, D.
1993-01-01
A simple, rule-based gridding method for object-oriented PIC codes is described which is not only capable of dealing with complicated structures such as multiply-connected regions, but is also computationally faster than classical gridding techniques. Using these smart grids, vacant cells (e.g., cells enclosed by conductors) never have to be stored or calculated, thus avoiding the usual situation of having to zero electromagnetic fields within conductors after valuable CPU time has been spent calculating the fields within these cells in the first place. This object-oriented gridding technique encapsulates the characteristics of actual physical objects (particles, fields, grids, etc.) in C++ classes and supports software reuse of these entities through C++ class inheritance relations. It has been implemented in the form of a simple two-dimensional plasma particle-in-cell code, and forms the initial effort of an AFOSR research project to develop a flexible software simulation environment for particle-in-cell algorithms based on object-oriented technology.
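A minimal sketch of the "smart grid" idea: cells flagged as interior to a conductor are simply never created, so no field storage or zeroing is ever needed for them. The class and geometry below are hypothetical illustrations, not the paper's C++ implementation:

```python
class SmartGrid:
    """Sparse 'smart grid': only cells outside conductors exist, so field
    memory is never allocated (or zeroed) for vacant cells."""

    def __init__(self, nx, ny, inside_conductor):
        # a rule (predicate) decides at construction which cells exist
        self.cells = {(i, j): 0.0
                      for i in range(nx) for j in range(ny)
                      if not inside_conductor(i, j)}

    def deposit(self, i, j, q):
        """Charge deposited into a nonexistent (conductor) cell is ignored,
        replacing the classical post-hoc zeroing of fields in conductors."""
        if (i, j) in self.cells:
            self.cells[(i, j)] += q

# toy geometry: a square conductor occupying cells 4..7 in both directions
conductor = lambda i, j: 4 <= i <= 7 and 4 <= j <= 7
grid = SmartGrid(12, 12, conductor)
```

A dictionary keyed by active cells stands in here for the rule-based classification; an inheritance hierarchy of grid classes, as the abstract describes, would let different geometries supply different rules.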
Present status of transport code development based on Monte Carlo method
Nakagawa, Masayuki
1985-01-01
The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)
Barnes, Ronald; Roth, Caleb C.; Shadaram, Mehdi; Beier, Hope; Ibey, Bennett L.
2015-03-01
The underlying mechanism(s) responsible for nanoporation of phospholipid membranes by nanosecond pulsed electric fields (nsEP) remain unknown. The passage of a high electric field through a conductive medium creates two primary contributing factors that may induce poration: the electric field interaction at the membrane, and the shockwave produced by electrostriction of a polar submersion medium exposed to the field. Previous work has focused on the electric field interaction at the cell membrane, through models such as the transport lattice method. Our objective is to model the shockwave-cell membrane interaction induced by the density perturbation formed at the rising edge of a high voltage pulse in a polar liquid, which results in a shock wave propagating away from the electrode toward the cell membrane. Utilizing previously reported cell membrane mechanical parameters and nsEP-generated shockwave parameters, an acoustic shock wave model based on the Helmholtz equation for sound pressure was developed and coupled to a cell membrane model with finite-element modeling in COMSOL. The acoustic-structure interaction model was developed to illustrate the harmonic membrane displacements and stresses resulting from the shockwave-membrane interaction, based on Hooke's law. Poration is predicted by utilizing membrane mechanical breakdown parameters, including cortical stress limits and hydrostatic pressure gradients.
FEM Modelling of Lateral-Torsional Buckling Using Shell and Solid Elements
Valeš, Jan; Stan, Tudor-Cristian
2017-01-01
The paper describes two methods of FEM modelling of I-section beams loaded by bending moments. Series of random realizations with initial imperfections following the first eigenmode of lateral-torsional buckling were created. Two independent FEM software products were used for the analyses of resistance. Finally, the differences and correlation between the results, as well as the advantages and disadvantages of both methods, are discussed.
A new 2D FEM analysis of a disc machine with offset rotor
Gair, S.; Canova, A. [Napier Univ., Edinburgh (United Kingdom). Dept. of Electrical, Electronic and Computer Engineering; Eastham, J.F.; Betzer, T. [Univ. of Bath (United Kingdom). School of Electronic and Electrical Engineering
1995-12-31
The paper presents a new 2-Dimensional Finite Element Method (2D FEM) analysis of a double sided axial field, permanent magnet excited brushless DC motor. The rotor of the machine is free to move in a direction perpendicular to the axis of the shaft. Computed 2D results are compared with 3D FEM analysis and the new analysis method is shown to give close agreement.
Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong
2012-01-01
Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
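The alignment-free flavor of this kind of classification can be illustrated with k-mer frequency vectors and an RBF kernel. This is a crude nearest-neighbour stand-in for the paper's DV-RBF/FJ-RBF classifiers, with toy sequences and parameters; it is not their actual algorithm or data:

```python
import math
from collections import Counter

def kmer_vec(seq, k=2):
    """Alignment-free k-mer frequency vector; sidesteps the alignment problem
    the abstract notes for non-coding ITS sequences."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {km: c / total for km, c in counts.items()}

def rbf(u, v, gamma=50.0):
    """Gaussian (RBF) kernel between two sparse frequency vectors."""
    keys = set(u) | set(v)
    d2 = sum((u.get(x, 0.0) - v.get(x, 0.0)) ** 2 for x in keys)
    return math.exp(-gamma * d2)

def assign(query, reference):
    """Assign the query to the species whose reference barcode maximizes the
    RBF kernel similarity."""
    q = kmer_vec(query)
    return max(reference,
               key=lambda sp: max(rbf(q, kmer_vec(s)) for s in reference[sp]))

reference = {                      # toy barcode library, two species
    "sp_A": ["ACGTACGTACGTACGT", "ACGTACGAACGTACGT"],
    "sp_B": ["TTGGCCTTGGCCTTGG", "TTGGCCATGGCCTTGG"],
}
label = assign("ACGTACGTACGAACGT", reference)
```

Real barcodes are hundreds of bases long and would use larger k; the point of the sketch is only that coding and non-coding sequences can be compared in the same feature space without alignment.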
A Semantic Analysis Method for Scientific and Engineering Code
Stewart, Mark E. M.
1998-01-01
This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
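One concrete instance of the kind of static semantic check described here is dimensional-consistency analysis over declared primitive variables. The sketch below (the variable names, dimension encoding, and check are hypothetical illustrations, not the paper's parsers) flags a sum whose terms carry different physical dimensions:

```python
# User-supplied semantic declarations: dimensions as (length, time, mass)
# exponents. Names and encoding are hypothetical.
DECLS = {"x": (1, 0, 0), "v": (1, -1, 0), "t": (0, 1, 0), "m": (0, 0, 1)}

def mul(a, b):
    """Multiplying two quantities adds their dimension exponents."""
    return tuple(i + j for i, j in zip(a, b))

def consistent_sum(terms):
    """A sum is semantically valid only if every term has the same dimension;
    a mismatch is the kind of static semantic error such analysis can flag."""
    return all(d == terms[0] for d in terms)

ok = consistent_sum([DECLS["x"], mul(DECLS["v"], DECLS["t"])])   # x + v*t
bad = consistent_sum([DECLS["x"], DECLS["t"]])                   # x + t
```

A full system, as the abstract describes, would obtain the terms by parsing the user's expressions rather than listing them by hand.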
Effect of Cervical Lesions on the Tooth: FEM Study
Gabriela Bereşescu; Ligia Brezeanu; Claudia Şoaita
2010-01-01
The approach used until recently concerning the phenomenon of dental abfraction points to the conclusion that the cervical area of the tooth, where this type of lesion usually occurs, concentrates the stress resulting from forces applied on various areas of the crown. Moreover, any lesion in the cervical area facilitates its advance into the tooth, ultimately fracturing it. Our paper presents a FEM (finite element method) study on the results of a mechanical a...
Step by step parallel programming method for molecular dynamics code
Orii, Shigeo; Ohta, Toshio
1996-07-01
Parallel programming of a molecular dynamics numerical simulation program is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which decomposes the calculation according to the indices of do-loops onto each processor, on the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. After that, the time-consuming parts of the program are concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)
Recent advances in neutral particle transport methods and codes
Azmy, Y.Y.
1996-01-01
An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments of TORT and its companion codes to enhance its present capabilities and expand its range of applications are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered.
Utilization of FEM model for steel microstructure determination
Kešner, A.; Chotěborský, R.; Linda, M.; Hromasová, M.
2018-02-01
Agricultural tools used in soil processing are worn by an abrasive wear mechanism caused by hard mineral particles in the soil. The wear rate is influenced by the mechanical characteristics of the tool material and by the mineral particle content of the soil. The mechanical properties of steel can be controlled by the heat treatment technology, which leads to different microstructures. Experimental determination is very expensive, whereas numerical methods such as FEM allow the microstructure to be estimated at low cost, although every numerical model must be verified. The aim of this work is to show a procedure for predicting the microstructure of steel for agricultural tools. Material characterizations of 51CrV4 grade steel, such as the TTT diagram, heat capacity, heat conduction and other physical properties, were used for the numerical simulation. The relationship between the microstructure predicted by FEM and the real microstructure after heat treatment shows a good correlation.
An Integrated NDE and FEM Characterization of Composite Rotors
Abdul-Aziz, Ali; Baaklini, George Y.; Trudell, Jeffrey J.
2000-01-01
A structural assessment by integrating finite-element methods (FEM) and nondestructive evaluation (NDE) of two flywheel rotor assemblies is presented. Composite rotor A is pancake-like with a solid hub design, and composite rotor B is cylindrical with a hollow hub design. Detailed analyses under combined centrifugal and interference-fit loading are performed. Two- and three-dimensional stress analyses and two-dimensional fracture mechanics analyses are conducted. A comparison of the structural analysis results with the NDE findings is reported. Contact effects due to press-fit conditions are evaluated. Stress results generated from the finite-element analyses were corroborated with the analytical solution. Cracks due to rotational loading up to 49 000 rpm for rotor A and 34 000 rpm for rotor B were successfully imaged with NDE and predicted with FEM and fracture mechanics analyses. A procedure that extends the current structural analysis to a life prediction tool is also defined.
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
Optimized Method for Generating and Acquiring GPS Gold Codes
Khaled Rouabah
2015-01-01
We propose a simpler and faster Gold code generator, which can be efficiently initialized to any desired code with minimum delay. Its principle consists of generating only one sequence (code number 1), from which all the other signal codes can be produced by simply shifting this sequence by different delays that are judiciously determined using the bicorrelation function characteristics. This is in contrast to the classical Linear Feedback Shift Register (LFSR) based Gold code generator, which requires, in addition to the shift process, a significant number of XOR gates and a phase selector to change the code. The presence of all these XOR gates in the classical LFSR-based generator consumes additional time in the generation and acquisition processes. Besides its simplicity and speed, the proposed architecture, owing to the total absence of XOR gates, uses fewer resources than the conventional Gold generator and can thus be produced at lower cost. Digital Signal Processing (DSP) implementations have shown that the proposed architecture acquires Global Positioning System (GPS) satellite signals optimally and in a parallel way.
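For reference, the classical two-register LFSR architecture that the abstract contrasts against can be sketched as follows. This is a minimal illustration of the conventional GPS C/A (Gold) code generator, with the G1/G2 feedback taps and the PRN 1 phase-selector stages taken from the public GPS interface specification; it is not the authors' optimized design.

```python
def ca_code(phase_taps, length=1023):
    """Classical LFSR-based GPS C/A (Gold) code generator.

    G1: x^10 + x^3 + 1;  G2: x^10 + x^9 + x^8 + x^6 + x^3 + x^2 + 1.
    The PRN is selected by XOR-ing two G2 stages (the 'phase selector'
    mentioned in the abstract), then XOR-ing with the G1 output.
    """
    g1 = [1] * 10          # both registers start all-ones
    g2 = [1] * 10
    t1, t2 = phase_taps
    chips = []
    for _ in range(length):
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        fb1 = g1[2] ^ g1[9]                                   # G1 taps 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # G2 taps 2,3,6,8,9,10
        g1 = [fb1] + g1[:-1]
        g2 = [fb2] + g2[:-1]
    return chips

prn1 = ca_code((2, 6))   # PRN 1 uses G2 stages 2 and 6
```

The per-chip XOR cascade in `fb2` is exactly the gate cost the proposed shift-only architecture eliminates.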
A novel construction method of QC-LDPC codes based on CRT for optical communications
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10⁻⁷, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB higher than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the method based on the Galois field (GF(q)) multiplicative group, respectively. Furthermore, all five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed method has excellent error-correction performance and is well suited to optical transmission systems.
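The CRT ingredient of such constructions can be illustrated in isolation: residues modulo two coprime circulant sizes are combined into a single shift exponent modulo their product. The sketch below shows only this combination step, under that assumption, not the authors' full code construction.

```python
def crt_combine(e1, m1, e2, m2):
    """Combine shift exponents e1 (mod m1) and e2 (mod m2), with m1 and m2
    coprime, into the unique exponent x (mod m1*m2) satisfying
    x % m1 == e1 and x % m2 == e2 (Chinese remainder theorem)."""
    inv = pow(m1, -1, m2)                    # modular inverse (Python 3.8+)
    return (e1 + m1 * (((e2 - e1) * inv) % m2)) % (m1 * m2)

x = crt_combine(2, 3, 3, 5)                  # small worked example
```

Combining the exponent matrices of two component codes elementwise in this fashion enlarges the circulant size, and hence the code length, which is why the girth of the component Tanner graphs can be preserved.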
Improved Intra-coding Methods for H.264/AVC
Li Song
2009-01-01
The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can be increased by 0.27 dB on average compared with the conventional H.264 coding model.
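The spatial prediction model that the paper builds on can be made concrete with the three simplest 4x4 luma modes. This is a generic sketch of H.264-style intra prediction and SAD-based mode selection, not the recursive or BMA schemes proposed in the paper.

```python
import numpy as np

def intra4x4_predict(top, left):
    """Three basic H.264-style 4x4 intra predictors.

    top:  the 4 reconstructed pixels above the block
    left: the 4 reconstructed pixels to its left
    """
    vert = np.tile(top, (4, 1))                       # mode 0: vertical
    horiz = np.tile(left.reshape(4, 1), (1, 4))       # mode 1: horizontal
    dc = np.full((4, 4), (top.sum() + left.sum() + 4) // 8)  # mode 2: DC
    return {"V": vert, "H": horiz, "DC": dc}

def best_mode(block, top, left):
    """Pick the predictor with the smallest sum of absolute differences."""
    preds = intra4x4_predict(top, left)
    return min(preds, key=lambda m: np.abs(block - preds[m]).sum())
```

A real encoder would follow the mode decision with residual transform and entropy coding; here only the prediction step is shown.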
Numerical analysis of laminated elastomer by FEM
Mazda, T.; Shiojiri, H.
1993-01-01
A computer code based on the mixed finite element method was developed for three-dimensional large-strain analyses of laminated elastomers, including nonlinear bulk stress vs. bulk strain relationships. The adopted element is a variable-node element with a maximum of 27 nodes for displacements and 4 for pressures. First, displacements and pressures were calculated by the code for a single element under various loading conditions; the results coincided exactly with theoretical solutions. Next, analyses of laminated elastomers subjected to axial loadings were conducted using both the new code and the ABAQUS code, and the results were compared with test results. The agreement of the present code was better than that of ABAQUS, mainly due to its capability of handling a wider range of material properties. Lastly, shearing tests of laminated elastomers were simulated by the new code; the results were in good agreement with the test results. (author)
An Efficient Method for Verifying Gyrokinetic Microstability Codes
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process, a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii, will be presented.
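The translation step (ii) amounts to a keyword mapping with bookkeeping for parameters that have no counterpart yet. The sketch below is a toy version of that idea; the keyword names on both sides are hypothetical placeholders, not the actual GYRO or GS2 input keywords.

```python
# Hypothetical GYRO-side key -> hypothetical GS2-side key.
# The real keyword names of the two codes differ and are not reproduced here.
NAME_MAP = {
    "SAFETY_FACTOR": "q_input",
    "SHEAR":         "s_hat_input",
    "RADIUS":        "rhoc_input",
}

def translate_inputs(gyro_params):
    """Map a dict of (hypothetical) GYRO parameters to (hypothetical) GS2
    names, flagging anything that has no translation yet so the run can be
    checked by hand instead of silently dropping physics."""
    gs2, unmapped = {}, []
    for key, value in gyro_params.items():
        if key in NAME_MAP:
            gs2[NAME_MAP[key]] = value
        else:
            unmapped.append(key)
    return gs2, unmapped
```

Flagging unmapped keys, rather than ignoring them, is the property that keeps the comparison "apples-to-apples".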
RF Wave Simulation Using the MFEM Open Source FEM Package
Stillerman, J.; Shiraiwa, S.; Bonoli, P. T.; Wright, J. C.; Green, D. L.; Kolev, T.
2016-10-01
A new plasma wave simulation environment based on the finite element method is presented. MFEM, a scalable open-source FEM library, is used as the basis for this capability. MFEM allows for assembling an FEM matrix of arbitrarily high order in a parallel computing environment. A 3D frequency domain RF physics layer was implemented using a python wrapper for MFEM and a cold collisional plasma model was ported. This physics layer allows for defining the plasma RF wave simulation model without user knowledge of the FEM weak-form formulation. A graphical user interface is built on πScope, a python-based scientific workbench, such that a user can build a model definition file interactively. Benchmark cases have been ported to this new environment, with results being consistent with those obtained using COMSOL multiphysics, GENRAY, and TORIC/TORLH spectral solvers. This work is a first step in bringing to bear the sophisticated computational tool suite that MFEM provides (e.g., adaptive mesh refinement, solver suite, element types) to the linear plasma-wave interaction problem, and within more complicated integrated workflows, such as coupling with core spectral solver, or incorporating additional physics such as an RF sheath potential model or kinetic effects. USDoE Awards DE-FC02-99ER54512, DE-FC02-01ER54648.
ATHENA code manual. Volume 1. Code structure, system models, and solution methods
Carlson, K.E.; Roth, P.A.; Ransom, V.H.
1986-09-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation
Simulation of ultrasonic and EMAT arrays using FEM and FDTD.
Xie, Yuedong; Yin, Wuliang; Liu, Zenghua; Peyton, Anthony
2016-03-01
This paper presents a method which combines electromagnetic simulation and ultrasonic simulation to build EMAT array models. For a specific sensor configuration, Lorentz forces are calculated using the finite element method (FEM), which can then feed through to ultrasonic simulations. The propagation of ultrasound waves is numerically simulated using the finite-difference time-domain (FDTD) method to describe their propagation within a homogeneous medium and their scattering by cracks. A radiation pattern, obtained by applying the Hilbert transform to the time-domain waveforms, is proposed to characterise the sensor in terms of its beam directivity and field distribution along the steering angle. Copyright © 2015 Elsevier B.V. All rights reserved.
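The Hilbert-transform step used for the radiation pattern can be sketched as an FFT-based analytic-signal envelope; this is the generic textbook construction, assuming uniformly sampled time-domain waveforms, not the authors' specific post-processing chain.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope of a real waveform via the analytic signal
    (FFT-based Hilbert transform): zero the negative frequencies,
    double the positive ones, and take the magnitude."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))
```

Evaluating such envelopes at the arrival time of the main pulse, for receivers placed around the transmitter, yields a beam-directivity plot of the kind described above.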
Fluid dynamics and heat transfer methods for the TRAC code
Reed, W.H.; Kirchner, W.L.
1977-01-01
A computer code called TRAC is being developed for analysis of loss-of-coolant accidents and other transients in light water reactors. This code involves a detailed, multidimensional description of two-phase flow coupled implicitly through appropriate heat transfer coefficients with a simulation of the temperature field in fuel and structural material. Because TRAC utilizes about 1000 fluid mesh cells to describe an LWR system, whereas existing lumped parameter codes typically involve fewer than 100 fluid cells, we have developed new highly implicit difference techniques that yield acceptable computing times on modern computers. Several test problems for which experimental data are available, including blowdown of single pipe and loop configurations with and without heated walls, have been computed with TRAC. Excellent agreement with experimental results has been obtained. (author)
A robust fusion method for multiview distributed video coding
Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina
2014-01-01
Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....
FEM growth and yield data monocultures - Poplar
Mohren, G.M.J.; Goudzwaard, L.; Jansen, J.J.; Oosterbaan, A.; Oldenburger, J.F.; Ouden, den J.
2016-01-01
The current database is part of the FEM growth and yield database, a collection of growth and yield data from even-aged monocultures (Douglas fir, common oak, poplar, Japanese larch, Norway spruce, Scots pine, Corsican pine, Austrian pine, red oak and several other species, with only a few plots,
Mapping Saldana's Coding Methods onto the Literature Review Process
Onwuegbuzie, Anthony J.; Frels, Rebecca K.; Hwang, Eunjin
2016-01-01
Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature…
Method for quantitative assessment of nuclear safety computer codes
Dearien, J.A.; Davis, C.B.; Matthews, L.J.
1979-01-01
A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison
Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)
2008-10-15
The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including both non-LOCA events and LOCA (loss of coolant accident). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described. First, the conservation equations, the discretization process for numerical analysis, and the search method for the state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.
Methods for the development of large computer codes under LTSS
Sicilian, J.M.
1977-06-01
TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C
2015-02-01
The current paper presents novel methods for collecting MISC data and accurately assessing the reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher than reliability computed from utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provide rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
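A common reliability estimate for parallel utterance-level code sequences is Cohen's kappa, which corrects raw agreement for chance. The paper compares several estimators; the sketch below shows only the generic kappa computation, not the authors' specific methods.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' utterance-level code sequences.

    po = observed agreement; pe = chance agreement from the two raters'
    marginal code frequencies; kappa = (po - pe) / (1 - pe).
    """
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    ca, cb = Counter(codes_a), Counter(codes_b)
    pe = sum(ca[c] * cb.get(c, 0) for c in ca) / n ** 2
    return (po - pe) / (1 - pe)
```

Because kappa is computed per utterance rather than from session tallies, disagreements about where a behavior occurred are penalized, which session-level tallies can mask.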
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Nakagawa, Takahiro; Ochiai, Katsuharu [Plant and System Planning Department, Toshiba Corporation, Yokohama, Kanagawa (Japan); Uematsu, Mikio; Hayashida, Yoshihisa [Department of Nuclear Engineering, Toshiba Engineering Corporation, Yokohama, Kanagawa (Japan)
2000-03-01
A boiling water reactor (BWR) plant has a single-loop coolant system, in which the main steam generated in the reactor core proceeds directly into the turbines. Consequently, radioactive ¹⁶N (a 6.2 MeV photon emitter) contained in the steam contributes to the gamma-ray skyshine dose in the vicinity of the BWR plant. The skyshine dose analysis is generally performed with the line-beam method code SKYSHINE, in which the calculational geometry consists of a rectangular turbine building and a set of isotropic point sources corresponding to the actual distribution of ¹⁶N sources. To upgrade the calculational accuracy, the SKYSHINE-CG code has been developed by incorporating the combinatorial geometry (CG) routine into the SKYSHINE code, so that the shielding effect of in-building equipment can be properly considered using a three-dimensional model composed of boxes, cylinders, spheres, etc. The skyshine dose rate around a 500 MWe BWR plant was calculated with both the SKYSHINE and SKYSHINE-CG codes, and the calculated results were compared with measured data obtained with a NaI(Tl) scintillation detector. The C/E values for the SKYSHINE-CG calculation were scattered around 4.0, whereas those for the SKYSHINE calculation were as large as 6.0. The calculational error was found to be reduced by adopting the three-dimensional model based on the combinatorial geometry method. (author)
Effect of Cervical Lesions on the Tooth: FEM Study
Gabriela Bereşescu
2010-12-01
The approach used until recently concerning the phenomenon of dental abfraction points to the conclusion that the cervical area of the tooth, where this type of lesion usually occurs, concentrates the stress resulting from forces applied to various areas of the crown. Moreover, any lesion in the cervical area facilitates its advance into the tooth, ultimately fracturing it. Our paper presents a finite element method (FEM) study of the results of a mechanical analysis of a tooth damaged by cervical lesions.
Methods of evaluating the effects of coding on SAR data
Dutkiewicz, Melanie; Cumming, Ian
1993-01-01
It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a subjective judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to ensure that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion and shows which of the suggested additional criteria are most important.
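The inadequacy of MSE alone is easy to demonstrate on complex SAR samples: a pure phase rotation leaves every magnitude intact yet produces a large complex-domain MSE, so magnitude and phase fidelity need separate metrics. The metric choices below are illustrative, not the paper's exact criteria.

```python
import numpy as np

def sar_metrics(orig, recon):
    """Complex-domain MSE plus two SAR-oriented criteria:
    magnitude-only MSE and mean absolute phase error."""
    err = orig - recon
    mse = np.mean(np.abs(err) ** 2)
    mag_mse = np.mean((np.abs(orig) - np.abs(recon)) ** 2)
    phase_err = np.mean(np.abs(np.angle(orig * np.conj(recon))))
    return mse, mag_mse, phase_err
```

A coder that preserved magnitudes perfectly but scrambled phase would score well on `mag_mse` and badly on `phase_err`, a distinction invisible to a single MSE figure.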
FEM Simulation of Incremental Shear
Rosochowski, Andrzej; Olejnik, Lech
2007-01-01
A popular way of producing ultrafine-grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process, incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process, the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30°. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.
Isaac Caicedo-Castro
2014-01-01
This paper presents CodeRAnts, a new recommendation method based on a collaborative search technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, where prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments of these systems show low precision and recall, and in some of them these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.
Application of an enriched FEM technique in thermo-mechanical contact problems
Khoei, A. R.; Bahmani, B.
2018-02-01
In this paper, an enriched FEM technique is employed for thermo-mechanical contact problems based on the extended finite element method (X-FEM). A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account deformable continuum mechanics and transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time-splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.
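The Newmark time-stepping ingredient can be illustrated on a single degree of freedom. This is the textbook average-acceleration Newmark scheme for a linear oscillator, a far simpler setting than the coupled X-FEM formulation above.

```python
import math

def newmark_sdof(m, c, k, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark integration of m*u'' + c*u' + k*u = 0.

    beta = 1/4, gamma = 1/2 is the unconditionally stable
    average-acceleration (trapezoidal) variant."""
    u, v = u0, v0
    a = (-c * v - k * u) / m
    for _ in range(steps):
        u_star = u + dt * v + dt ** 2 * (0.5 - beta) * a   # displacement predictor
        v_star = v + dt * (1.0 - gamma) * a                # velocity predictor
        # solve the (linear) balance at t_{n+1} for the new acceleration
        a = -(c * v_star + k * u_star) / (m + gamma * dt * c + beta * dt ** 2 * k)
        u = u_star + beta * dt ** 2 * a
        v = v_star + gamma * dt * a
    return u, v
```

For an undamped unit-mass oscillator with natural period 1, stepping over exactly one period should return the state close to its initial value, which makes a convenient self-check.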
A method for generating subgroup parameters from resonance tables and the SPART code
Devan, K.; Mohanakrishnan, P.
1995-01-01
A method for generating subgroup or band parameters from resonance tables is described. A computer code, SPART, was written using this method. This code generates the subgroup parameters for any number of bands within the specified broad groups at different temperatures, reading the required input data from the binary cross-section library in the Cadarache format. The results obtained with the SPART code for two bands were compared with those obtained from the GROUPIE code, and good agreement was obtained. Results of the generation of subgroup parameters in four bands for a sample case of ²³⁹Pu from the resonance tables of the Cadarache Ver.2 library are also presented. 6 refs, 2 tabs
Simulation of Biosensor using FEM
Sheeparamatti, B G; Hebbal, M S; Sheeparamatti, R B; Math, V B; Kadadevaramath, J S
2006-01-01
Bio-Micro Electro Mechanical Systems/Nano Electro Mechanical Systems include a wide variety of sensors, actuators, and complex micro/nano devices for biomedical applications. Recent advances in biosensors have shown that sensors based on the bending of microfabricated cantilevers have potential advantages over earlier detection methods. Thus, a simple cantilever beam can be used as a sensor for biomedical, chemical and environmental applications. Here, a microfabricated multilayered cantilever beam is exposed to the sensing environment: the lower layer is pure structural silicon or polymer, and the upper layer is a polymer with antigen/antibody immobilized in it, which obviously has an affinity for its counterpart, i.e. antibody/antigen. If counter elements exist in the sensing environment, they are captured by the sensing head and the cantilever beam deflects. This deflection can be sensed, and the presence of counter elements in the environment can be inferred. In this work, a finite element model of a biosensor sensing the antibody/antigen reaction is developed and simulated using ANSYS/Multiphysics. The optimal dimensions of the microcantilever beam are selected based on the permissible deflection range with the aid of MATLAB. In the model analysis, both weight and surface-stress effects on the cantilever are considered. Approximate weights of the counter elements are taken into account, considering their molecular weight and the possible number of elements required for sensing. The results, obtained in terms of lateral deflection, are presented.
Caremoli, C; Beaucourt, D; Chen, O; Nicolas, G; Peniguel, C; Rascle, P; Richard, N; Thai Van, D; Yessayan, A
1994-12-01
This guidebook deals with the coupling of large scientific codes. First, the context is introduced: large scientific codes devoted to a specific discipline are coming to maturity, while the need for multidisciplinary studies keeps growing. We then describe different kinds of code coupling, together with an example: the 3D thermal-hydraulic code THYC coupled to the 3D neutronics code COCCINELLE. This example serves to identify the problems that must be solved to realize a coupling. We present the different numerical methods usable for the resolution of the coupling terms. This leads to two kinds of coupling: with weak coupling, explicit methods can be used; with strong coupling, implicit methods are needed. In both cases, we analyze the link with the way the codes are parallelized. For translating data from one code to another, we define the notion of a Standard Coupling Interface based on a general data structure. This structure constitutes an intermediary between the codes, allowing them to remain relatively independent of any specific coupling. The proposed implementation method leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication by message exchange are proposed: direct communication between codes using the PVM (Parallel Virtual Machine) product, and indirect communication through a coupling tool. The second way, based on a general code-coupling tool, is the one we strongly recommend. It rests on two principles: re-usability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a coupleable code from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs.
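The weak/strong distinction can be made concrete with a toy pair of "codes" exchanging a scalar interface value: weak (explicit) coupling performs one data exchange per step, while strong (implicit) coupling iterates the exchange to a fixed point within the step. The two linear "solvers" are invented stand-ins, not THYC or COCCINELLE.

```python
def code_a(y):
    """Stand-in for one discipline's solve (e.g. thermal-hydraulics)."""
    return 1.0 + 0.5 * y

def code_b(x):
    """Stand-in for the other discipline's solve (e.g. neutronics)."""
    return 2.0 - 0.3 * x

def weak_coupling(y):
    """Explicit coupling: a single exchange, no sub-iteration."""
    x = code_a(y)
    return x, code_b(x)

def strong_coupling(y=0.0, tol=1e-12, max_iter=100):
    """Implicit coupling: iterate the exchange until the interface
    value is self-consistent (a fixed point of the composed solves)."""
    for _ in range(max_iter):
        x, y_new = weak_coupling(y)
        if abs(y_new - y) < tol:
            return x, y_new
        y = y_new
    raise RuntimeError("coupling iteration did not converge")
```

The fixed-point iteration converges here because the composed map contracts; when it does not, the quasi-Newton accelerations discussed elsewhere in this collection are the usual remedy.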
Reliability techniques and coupled BEM/FEM for pile-soil interaction
Ahmed SAHLI
2017-06-01
This paper deals with the development of a computational code for the modelling and verification of safety, in relation to limit states, of piles found in the foundations of common structures. To this end, it makes use of reliability techniques for the probabilistic analysis of piles modelled with the finite element method (FEM) coupled to the boundary element method (BEM). The soil is modelled with the BEM employing Mindlin's fundamental solutions, suitable for the consideration of a three-dimensional infinite half-space. The piles are modelled as bar elements with the FEM, each of which is represented in the BEM as a loading line. The bar finite element employed has four nodes and fourteen nodal parameters: three displacements per node plus two rotations for the top node. The slipping of the piles relative to the soil mass is handled using adhesion models to define the evolution of the shaft stresses during the transfer of load to the soil. The reliability analysis is based on three methods: the first-order second-moment (FOSM) method, the first-order reliability method, and the Monte Carlo method.
Vectorization of nuclear codes on FACOM 230-75 APU computer
Harada, Hiroo; Higuchi, Kenji; Ishiguro, Misako; Tsutsui, Tsuneo; Fujii, Minoru
1983-02-01
To prepare for the future use of supercomputers, we have investigated the vector processing efficiency of nuclear codes in use at JAERI. The investigation was performed on the FACOM 230-75 APU computer. The codes are CITATION (3D neutron diffusion), SAP5 (structural analysis), CASCMARL (irradiation damage simulation), FEM-BABEL (3D neutron diffusion by FEM), GMSCOPE (microscope simulation), and DWBA (cross-section calculation for molecular collisions). A new type of cell density calculation for the particle-in-cell method is also investigated. For each code we obtained a significant speedup, ranging from 1.8 (CASCMARL) to 7.5 (GMSCOPE). This report describes the running-time dynamic profile analysis of the codes, the numerical algorithms used, the program restructuring for vectorization, numerical experiments on the iterative processes, vectorized ratios, speedup ratios on the FACOM 230-75 APU computer, and some views on vectorization. (author)
Coupling of partitioned physics codes with quasi-Newton methods
Haelterman, R
2017-03-01
Full Text Available (excerpt from the reference list): ..., A class of methods for solving nonlinear simultaneous equations. Math. Comp. 19, pp. 577–593 (1965); [3] C.G. Broyden, Quasi-Newton methods and their applications to function minimization. Math. Comp. 21, pp. 368–381 (1967); [4] J.E. Dennis, J.J. Moré, Quasi-Newton methods: motivation and theory. SIAM Rev. 19, pp. 46–89 (1977); [5] J.E. Dennis, R.B. Schnabel, Least Change Secant Updates for quasi-Newton methods. SIAM Rev. 21, pp. 443–459 (1979); [6] G. Dhondt, CalculiX CrunchiX USER'S MANUAL Version 2...
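The quasi-Newton scheme cited above (Broyden, 1965) can be sketched compactly. The example below is a generic rank-one secant update applied to a small linear system chosen so the answer is known exactly; it is not code from the partitioned-physics coupling itself:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' quasi-Newton method for F(x) = 0: the Jacobian
    approximation B is corrected by a rank-one secant update each step
    instead of being recomputed."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                      # initial Jacobian guess
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)
        x_new = x + dx
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        dF = F_new - Fx
        # Least-change secant update: enforce B @ dx == dF after the update.
        B += np.outer(dF - B @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
    return x

# Small linear test problem with known solution x = (1, 2).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
root = broyden(lambda x: A @ x - b, [0.0, 0.0])
```

For linear systems Broyden's method terminates in finitely many steps; for the nonlinear interface residuals arising in partitioned coupling, the same update serves as a cheap Jacobian-free alternative to full Newton iteration.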
A nodal Green's function method of reactor core fuel management code, NGCFM2D
Li Dongsheng; Yao Dong.
1987-01-01
This paper presents the mathematical model and program structure of NGCFM2D, a reactor core fuel management code based on the nodal Green's function method. Computing results for some reactor cores obtained with NGCFM2D are analysed and compared with those of other codes.
WKB: an interactive code for solving differential equations using phase integral methods
White, R.B.
1978-01-01
A small code for the interactive analysis of ordinary differential equations through the use of Phase Integral Methods (WKB) has been written for use on the DEC 10. This note is a descriptive manual for those interested in using the code.
Development of three-dimensional transport code by the double finite element method
Fujimura, Toichiro
1985-01-01
Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an S_n code. (author)
Essentials of the finite element method for mechanical and structural engineers
Pavlou, Dimitrios G
2015-01-01
Fundamental coverage, analytic mathematics, and up-to-date software applications are hard to find in a single text on the finite element method (FEM). Dimitrios Pavlou's Essentials of the Finite Element Method: For Structural and Mechanical Engineers makes the search easier by providing a comprehensive but concise text for those new to FEM, or just in need of a refresher on the essentials. Essentials of the Finite Element Method explains the basics of FEM, then relates these basics to a number of practical engineering applications. Specific topics covered include linear spring elements, bar elements, trusses, beams and frames, heat transfer, and structural dynamics. Throughout the text, readers are shown step-by-step detailed analyses for finite element equations development. The text also demonstrates how FEM is programmed, with examples in MATLAB, CALFEM, and ANSYS allowing readers to learn how to develop their own computer code. Suitable for everyone from first-time BSc/MSc students to practicing mechanic...
Cecka, Cris
2012-01-01
This chapter discusses multiple strategies to perform general computations on unstructured grids, with specific application to the assembly of matrices in finite element methods (FEMs). It reviews and applies two methods for FEM assembly to produce and accelerate a FEM model for a nonlinear hyperelastic solid where the assembly, solution, update, and visualization stages are performed solely on the GPU, benefiting from speed-ups in each stage and avoiding costly GPU-CPU transfers of data. For each method, the chapter discusses the NVIDIA GPU hardware's limiting resources, optimizations, key data structures, and the dependence of the performance on problem size, element size, and GPU hardware generation. Furthermore, this chapter informs potential users of the benefits of GPU technology, provides guidelines to help them implement their own FEM solutions, gives potential speed-ups that can be expected, and provides source code for reference. © 2012 Elsevier Inc. All rights reserved.
A Hybrid FEM-ANN Approach for Slope Instability Prediction
Verma, A. K.; Singh, T. N.; Chauhan, Nikhil Kumar; Sarkar, K.
2016-09-01
Assessment of slope stability is one of the most critical aspects of the life of a slope. In any slope vulnerability appraisal, the Factor Of Safety (FOS) is the widely accepted index for understanding how close to or far from failure a slope is. In this work, an attempt has been made to simulate a road cut slope in a landslide-prone area in Rudraprayag, Uttarakhand, India, which lies near the Himalayan geodynamic mountain belt. A combination of the Finite Element Method (FEM) and an Artificial Neural Network (ANN) has been adopted to predict the FOS of the slope. For the ANN, a three-layer, feed-forward back-propagation neural network with one input layer, one hidden layer with three neurons, and one output layer has been considered, trained using datasets generated from numerical analysis of the slope, and validated with a new set of field slope data. The mean absolute percentage error was estimated as 1.04, with a coefficient of correlation between the FOS of FEM and ANN of 0.973, which indicates that the system is very robust and fast in predicting the FOS for any slope.
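The described topology (three inputs, one three-neuron tanh hidden layer, one linear output, trained by batch back-propagation) can be sketched in plain NumPy. The training data below are a synthetic stand-in for the paper's FEM-generated datasets; the input names and target function are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for FEM-generated data: three slope parameters in
# (e.g. normalized cohesion, friction angle, unit weight), FOS-like value out.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.8 + 0.9 * X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 2]).reshape(-1, 1)

# 3-input / 3-hidden (tanh) / 1-output network trained by batch backprop.
W1 = rng.normal(0.0, 0.5, (3, 3)); b1 = np.zeros((1, 3))
W2 = rng.normal(0.0, 0.5, (3, 1)); b2 = np.zeros((1, 1))
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    y_hat = H @ W2 + b2                   # linear output neuron
    err = y_hat - y
    # Backpropagate the mean-squared-error gradient layer by layer.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0, keepdims=True)
    dH = (err @ W2.T) * (1.0 - H**2)      # through the tanh nonlinearity
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

In the paper, the inputs and FOS targets come from FEM runs of the slope model and validation uses field data; here the fit quality is simply checked against the synthetic target.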
Double folding model of nucleus-nucleus potential: formulae, iteration method and computer code
Luk'yanov, K.V.
2008-01-01
The method of construction of the nucleus-nucleus double folding potential is described, and an iteration procedure for the corresponding integral equation is presented. The computer code and numerical results are also presented.
Sexing Code: subversion, theory and representation
Herbst, Claudia
2008-01-01
Critically investigating the gender of programming in popular culture, Sexing Code proposes that the de facto representation of technical ability serves to perpetuate the age-old association of the male with intellect and reason, while identifying the fem
Compatibility of global environmental assessment methods of buildings with an Egyptian energy code
Amal Kamal Mohamed Shamseldin
2017-04-01
Full Text Available Several environmental assessment methods for buildings have emerged around the world to set environmental classifications for buildings, such as the American method "Leadership in Energy and Environmental Design" (LEED), the most widespread one. Several countries have decided to develop their own assessment methods to catch up with this orientation, including Egypt. The main goal of the Egyptian method was to impose the voluntary local energy efficiency codes. A local survey clearly showed that many construction practitioners in Egypt do not even know the local method, and that those interested in the environmental assessment of buildings seek to apply LEED rather than anything else. Therefore, several questions arise about the American method's compatibility with the Egyptian energy codes, which contain the most exact characteristics and requirements and give the most credible energy efficiency results for buildings in Egypt, and about the possibility of finding another global method whose results are closer to those of the Egyptian codes, especially given the great variety of energy efficiency measurement approaches used among the different assessment methods. The researcher therefore tries to determine the compatibility of non-local assessment methods with the local energy efficiency codes. If the results are not compatible, the Egyptian government should take several steps to increase the local building sector's awareness of the Egyptian method to benefit these codes, and it should begin to enforce it within building permits after proper guidance and feedback.
How recalibration method, pricing, and coding affect DRG weights
Carter, Grace M.; Rogowski, Jeannette A.
1992-01-01
We compared diagnosis-related group (DRG) weights calculated using the hospital-specific relative-value (HSRV) methodology with those calculated using the standard methodology for each year from 1985 through 1989 and analyzed differences between the two methods in detail for 1989. We provide evidence suggesting that classification error and subsidies of higher-weighted cases by lower-weighted cases caused compression in the weights used for payment as late as the fifth year of the prospective payment system. However, later weights calculated by the standard method are not compressed, because a statistical correlation between high markups and high case-mix indexes offsets the cross-subsidization. HSRV weights from the same files are compressed because this methodology is more sensitive to cross-subsidies. However, both sets of weights produce equally good estimates of hospital-level costs net of those expenses that are paid by outlier payments. The greater compression of the HSRV weights is counterbalanced by the fact that more high-weight cases qualify as outliers. PMID:10127456
An adaptive singular ES-FEM for mechanics problems with singular field of arbitrary order
Nguyen-Xuan, H.; Liu, G. R.; Bordas, Stéphane; Natarajan, S.; Rabczuk, T.
2013-01-01
This paper presents a singular edge-based smoothed finite element method (sES-FEM) for mechanics problems with singular stress fields of arbitrary order. The sES-FEM uses a basic mesh of three-noded linear triangular (T3) elements and a special layer of five-noded singular triangular elements (sT5) connected to the singular-point of the stress field. The sT5 element has an additional node on each of the two edges connected to the singular-point. It allows us to represent simple and efficient ...
Development of burnup methods and capabilities in Monte Carlo code RMC
She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord
2013-01-01
Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient in handling massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between an MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
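The depletion step at the heart of such a module solves dN/dt = A N over a burnup interval via the matrix exponential N(t) = exp(A t) N(0). As a hedged illustration (a toy Taylor scaling-and-squaring evaluator and a hypothetical two-nuclide decay chain, not RMC's DEPTH module or its rational approximations):

```python
import numpy as np

def expm_taylor(A, t, terms=30, squarings=10):
    """Matrix exponential exp(A t) via scaling-and-squaring with a Taylor
    series; a toy stand-in for the rational/polynomial approximations used
    in production depletion solvers."""
    B = A * (t / 2.0**squarings)          # scale so the series converges fast
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        T = T @ B / k                     # next Taylor term B^k / k!
        E = E + T
    for _ in range(squarings):            # undo the scaling by squaring
        E = E @ E
    return E

# Toy two-nuclide chain N1 -> N2 -> (removed): dN/dt = A N.
l1, l2 = 0.5, 0.1                         # decay constants [1/s], illustrative
A = np.array([[-l1, 0.0],
              [ l1, -l2]])
N0 = np.array([1.0, 0.0])
N = expm_taylor(A, t=2.0) @ N0
```

For this chain the Bateman solution is known in closed form, so the matrix exponential result can be checked term by term; production solvers face the same equation with thousands of nuclides and severely stiff A.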
Status of SFR Codes and Methods QA Implementation
Brunett, Acacia J. [Argonne National Lab. (ANL), Argonne, IL (United States); Briggs, Laural L. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, Thomas H. [Argonne National Lab. (ANL), Argonne, IL (United States)
2017-01-31
This report details the development of the SAS4A/SASSYS-1 SQA Program and describes the initial stages of Program implementation planning. The provisional Program structure, which is largely focused on the establishment of compliant SQA documentation, is outlined in detail, and Program compliance with the appropriate SQA requirements is highlighted. Additional program activities, such as improvements to testing methods and Program surveillance, are also described in this report. Given that the programmatic resources currently granted to development of the SAS4A/SASSYS-1 SQA Program framework are not sufficient to adequately address all SQA requirements (e.g. NQA-1, NUREG/BR-0167, etc.), this report also provides an overview of the gaps that remain in the SQA program and highlights recommendations on a path forward to resolution of these issues. One key finding of this effort is the identification of the need for an SQA program sustainable over multiple years within DOE annual R&D funding constraints.
Structural dynamics in LMFBR containment analysis: a brief survey of computational methods and codes
Chang, Y.W.; Gvildys, J.
1977-01-01
In recent years, the use of computer codes to study the response of primary containment of large liquid-metal fast breeder reactors (LMFBRs) under postulated accident conditions has been adopted by most fast reactor projects. Since the first introduction of the REXCO-H containment code in 1969, a number of containment codes have evolved and been reported in the literature. The paper briefly summarizes the various numerical methods commonly used in containment analysis computer programs. They are compared on the basis of the truncation errors resulting from the numerical approximation, the method of integration, the resolution of the computed results, and the ease of programming in computer codes. The aim of the paper is to provide enough information to an analyst so that he can suitably define his choice of method, and hence his choice of programs
An Efficient Integer Coding and Computing Method for Multiscale Time Segment
TONG Xiaochong
2016-12-01
Full Text Available This article focuses on the existing problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). This approach utilizes the tree structure and size ordering formed among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), and finally achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for calculating the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicated that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it has very high efficiency in query and calculation.
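One simple way to realize such an integer coding of multi-scale time segments (an illustrative reconstruction, not necessarily the paper's MTSIC scheme) is heap-style numbering of a dyadic subdivision tree, under which containment between segments at different scales reduces to integer shift comparisons:

```python
def encode(level, index):
    """Number the segments of a dyadic multi-scale subdivision of a time
    axis as a complete binary tree: level 0 is the whole span, level L
    splits it into 2**L equal segments. Heap-style integer codes: 1, 2-3,
    4-7, 8-15, ..."""
    return (1 << level) + index

def decode(code):
    """Recover (level, index) from an integer code."""
    level = code.bit_length() - 1
    return level, code - (1 << level)

def contains(a, b):
    """True if segment a contains (or equals) segment b: b's ancestor at
    a's level, found by right-shifting, must be a itself."""
    la, _ = decode(a)
    lb, _ = decode(b)
    return lb >= la and (b >> (lb - la)) == a

# Example: the first half of the span (level 1, index 0) contains the
# second quarter (level 2, index 1) but not the third quarter.
half = encode(1, 0)      # code 2
quarter = encode(2, 1)   # code 5
```

Order between same-level segments is plain integer order of the codes, which is what makes range queries over such codes efficient.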
Refuelling design and core calculations at NPP Paks: codes and methods
Pos, I.; Nemes, I.; Javor, E.; Korpas, L.; Szecsenyi, Z.; Patai-Szabo, S.
2001-01-01
This article gives a brief review of the computer codes used in fuel management practice at NPP Paks. The code package consists of the HELIOS neutron and gamma transport code for preparation of the few-group cross section library, the CERBER code to determine optimal core loading patterns, and the C-PORCA code for detailed reactor physics analysis of different reactor states. The last two programs have been developed at NPP Paks. HELIOS gives a sturdy basis for our neutron physics calculations; CERBER and C-PORCA have been enhanced to a great extent in recent years. Methods and models have become more detailed and accurate as regards the calculated parameters and space resolution. With the introduction of a more advanced data handling algorithm, arbitrary moves of fuel assemblies can be followed either in the reactor core or in the storage pool. The new interactive Windows applications allow easier and more reliable use of the codes. All these computer code developments have made it possible to handle and calculate new kinds of fuel, such as profiled Russian and BNFL fuel with burnable poison, and to support the reliable reuse of fuel assemblies stored in the storage pool. To extend the thermo-hydraulic capability, the COBRA code will also be coupled to the system with KFKI contribution. (Authors)
Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik
2014-01-01
A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation
Veltri, M.
2016-09-01
This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution may be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue-critical areas can drive a simplification of the problem size, leading to appreciable improvements in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.
Licensing in BE system code calculations. Applications and uncertainty evaluation by CIAU method
Petruzzi, Alessandro; D'Auria, Francesco
2007-01-01
The evaluation of uncertainty constitutes the necessary supplement to Best Estimate (BE) calculations performed to understand accident scenarios in water cooled nuclear reactors. The need comes from the imperfection of computational tools on the one side and from the interest in using such tools to get more precise evaluations of safety margins on the other. In the present paper the approaches to uncertainty are outlined, and the CIAU (Code with capability of Internal Assessment of Uncertainty) method proposed by the University of Pisa is described, including the ideas at its basis and results from applications. Two approaches are distinguished, characterized as 'propagation of code input uncertainty' and 'propagation of code output errors'. For both methods, the thermal-hydraulic code is at the centre of the process of uncertainty evaluation: in the former case the code itself is adopted to compute the error bands and to propagate the input errors; in the latter case the errors in code application to relevant measurements are used to derive the error bands. The CIAU method exploits the idea of the 'status approach' for identifying the thermal-hydraulic conditions of an accident in any Nuclear Power Plant (NPP). Errors in predicting such a status are derived from the comparison between predicted and measured quantities and, in the application stage of the method, are used to compute the uncertainty. (author)
Barkmann, R; Dencks, S; Laugier, P
2010-01-01
A quantitative ultrasound (QUS) device for measurements at the proximal femur was developed and tested in vivo (Femur Ultrasound Scanner, FemUS). Hip fracture discrimination was as good as for DXA, and a high correlation with hip BMD was achieved. Our results show promise for enhanced QUS. METHODS: Using the FemUS device, we obtained femoral QUS and DXA on 32 women with recent hip fractures and 30 controls. Fracture discrimination and the correlation with femur bone mineral density (BMD) were assessed. RESULTS: Hip fracture discrimination using the FemUS device was at least as good as with hip DXA and calcaneal QUS. Significant correlations with total hip bone mineral density were found, with a correlation coefficient R(2) up to 0...
BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.
2002-01-01
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
Subject to code matrices that follow the structure given by (113), the received signal decomposes into real and imaginary parts as [y_R; y_I] = sqrt(E_s/(2L)) [G_R1, -G_I1; G_I2, G_R2] [Q_R, -Q_I; Q_I, Q_R] [b_R; b_I] + [n_R; n_I] ... [b_+; b_-] + [n_+; n_-] (115). The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and... AVERAGE LIKELIHOOD METHODS OF CLASSIFICATION OF CODE DIVISION MULTIPLE ACCESS (CDMA). MAY 2016. FINAL TECHNICAL REPORT. APPROVED FOR PUBLIC RELEASE
An improved method for storing and retrieving tabulated data in a scalar Monte Carlo code
Hollenbach, D.F.; Reynolds, K.H.; Dodds, H.L.; Landers, N.F.; Petrie, L.M.
1990-01-01
The KENO-Va code is a production-level criticality safety code used to calculate the k_eff of a system. The code is stochastic in nature, using a Monte Carlo algorithm to track individual particles one at a time through the system. The advent of computers with vector processors has generated interest in improving KENO-Va to take advantage of the potential speedup associated with these new processors. Unfortunately, the original Monte Carlo algorithm and method of storing and retrieving cross-section data are not adaptable to vector processing. This paper discusses an alternate method for storing and retrieving data that not only is readily vectorizable but also improves the efficiency of the current scalar code.
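The storage idea, flattening per-nuclide tables into contiguous arrays addressed by offsets so that lookups become array operations rather than pointer chasing, can be sketched as follows. The tables and values are invented for illustration; this is not KENO-Va's actual data layout:

```python
import numpy as np

# Two toy cross-section tables flattened into single energy/value arrays.
# Nuclide i occupies the slice [offset[i], offset[i+1]) of each array.
energies = np.array([1.0, 10.0, 100.0,   1.0, 5.0, 50.0, 500.0])  # eV
xs       = np.array([9.0,  4.0,   1.0,  20.0, 8.0,  2.0,   0.5])  # barns
offset   = np.array([0, 3, 7])

def lookup(nuclide, E):
    """Linearly interpolate sigma(E) in one nuclide's table; E may be an
    array, so many particle lookups vectorize into one searchsorted call."""
    lo, hi = offset[nuclide], offset[nuclide + 1]
    e, s = energies[lo:hi], xs[lo:hi]
    j = np.clip(np.searchsorted(e, E) - 1, 0, len(e) - 2)
    w = (E - e[j]) / (e[j + 1] - e[j])
    return s[j] + w * (s[j + 1] - s[j])

sigma = lookup(0, np.array([5.5, 55.0]))
```

Because the data are contiguous and the search is a single vectorized call, a batch of particle energies can be serviced at once, which is exactly the access pattern a vector processor rewards.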
Nofriansyah, Dicky; Defit, Sarjon; Nurcahyo, Gunadi W.; Ganefri, G.; Ridwan, R.; Saleh Ahmar, Ansari; Rahim, Robbi
2018-01-01
Cybercrime is one of the most serious threats. One effort made to reduce cybercrime is to find new techniques for securing data, such as a combination of cryptography, steganography, and watermarking. Cryptography and steganography are growing data security sciences, and their combination is one effort to improve data integrity. New techniques are created by combining several algorithms, one of which is the incorporation of the Hill cipher method and Morse code. Morse code is one of the communication codes used in the Scouting field; it consists of dots and dashes. This is a new concept combining modern and classic methods to maintain data integrity. The combination of these three methods is expected to generate new algorithms to improve the security of data, especially images.
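A minimal sketch of the Hill-cipher-plus-Morse step on text follows (the paper targets images; the 2x2 key, padding, and plaintext here are standard textbook choices, not the authors' parameters):

```python
import numpy as np

KEY = np.array([[3, 3], [2, 5]])         # det = 9, coprime with 26
KEY_INV = np.array([[15, 17], [20, 9]])  # inverse of KEY mod 26

def hill(text, key):
    """Hill cipher over A-Z: multiply letter pairs by the key matrix mod 26.
    Encrypt with KEY, decrypt with KEY_INV."""
    nums = [ord(c) - 65 for c in text.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(23)                  # pad odd-length text with 'X'
    out = []
    for i in range(0, len(nums), 2):
        pair = key @ np.array(nums[i:i + 2]) % 26
        out.extend(chr(int(n) + 65) for n in pair)
    return ''.join(out)

MORSE = {c: m for c, m in zip(
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
    ['.-', '-...', '-.-.', '-..', '.', '..-.', '--.', '....', '..',
     '.---', '-.-', '.-..', '--', '-.', '---', '.--.', '--.-', '.-.',
     '...', '-', '..-', '...-', '.--', '-..-', '-.--', '--..'])}

cipher = hill('HELP', KEY)               # Hill encryption
signal = ' '.join(MORSE[c] for c in cipher)  # Morse form of the ciphertext
plain = hill(cipher, KEY_INV)            # decryption recovers the text
```

Decryption simply multiplies by the key's inverse mod 26; the Morse string is the transmitted or embedded form of the ciphertext, onto which a steganographic carrier could then be applied.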
Mixed FEM for Second Order Elliptic Problems on Polygonal Meshes with BEM-Based Spaces
Efendiev, Yalchin; Galvis, Juan; Lazarov, Raytcho; Weiß er, Steffen
2014-01-01
We present a Boundary Element Method (BEM)-based FEM for mixed formulations of second order elliptic problems in two dimensions. The challenge we would like to address is a proper construction of H(div)-conforming vector-valued trial functions
Financial security for women -- Fem Consult congress.
1996-01-01
The nongovernmental organization "Fem Consult," which seeks to strengthen the socioeconomic position of women by applying a gender perspective to programs and projects in developing countries, celebrated its 10th anniversary in 1996 by holding a conference in the Netherlands on financial security for women in the developing world. During the conference, the President of the WWF (Working Women's Forum) described her agency's 17 years of experience in lending to impoverished rural and urban women in India. By extending microcredit assistance through a network of cooperatives, the WWF has been the catalyst for lasting improvements in the economic and social status of impoverished women. Representatives of the Grameen Bank, Women's World Banking, the Ecumenical Development Cooperative Society, and other organizations also addressed the conference.
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
2016-02-15
When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 in a large-scale simulation. To improve the speed and ensure the precision of simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between the RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable in other simulators, for example, a simulator employing ATHLET instead of RELAP5, or other logic code instead of SIMULINK. It is believed the coupling method is generally applicable for NPP simulators regardless of the specific codes chosen in this paper.
Method for calculating internal radiation and ventilation with the ADINAT heat-flow code
Butkovich, T.R.; Montan, D.N.
1980-01-01
One objective of the spent fuel test in Climax Stock granite (SFTC) is to correctly model the thermal transport and the changes in the stress field and accompanying displacements resulting from the application of the thermal loads. We have chosen the ADINA and ADINAT finite element codes to do these calculations. ADINAT is a heat transfer code compatible with the ADINA displacement and stress analysis code. The heat flow problem encountered at SFTC requires a code with conduction, radiation, and ventilation capabilities, which the present version of ADINAT does not have. We have devised a method for calculating internal radiation and ventilation with the ADINAT code. This method effectively reproduces the results of the TRUMP multi-dimensional finite difference code, which correctly models radiative heat transport between drift surfaces, conductive and convective thermal transport to and through air in the drifts, and mass flow of air in the drifts. The temperature histories for each node in the finite element mesh calculated with ADINAT using this method can be used directly in the ADINA thermal-mechanical calculation.
Stress And Strain Analysis of The Hip Joint Using FEM
Vaverka, M.; Návrat, Tomáš; Vrbka, M.; Florian, Z.; Fuis, Vladimír
2006-01-01
Vol. 14, No. 4-5 (2006), pp. 271-279. ISSN 0928-7329. R&D Projects: GA ČR GA101/05/0136. Institutional research plan: CEZ:AV0Z20760514. Keywords: hip; FEM; surface replacement; pathological; contact pressure; stress. Subject RIV: BO - Biophysics
Advanced HVAC modeling with FemLab/Simulink/MatLab
Schijndel, van A.W.M.
2003-01-01
The combined MatLab toolboxes FemLab and Simulink are evaluated as solvers for HVAC problems based on partial differential equations (PDEs). The FemLab software is designed to simulate systems of coupled PDEs that may be 1-D, 2-D, or 3-D, nonlinear, and time dependent. In order to show how the program works, a
FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization
Nguyen Trung Kien
2017-01-01
The paper presents a multi-scale modeling of Boundary Value Problems (BVP) involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases strain localization is known to occur at the macroscopic level, but since the FEM suffers from severe mesh dependency as soon as a shear band starts to develop, the second-gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.
3D FEM Geometry and Material Flow Optimization of Porthole-Die Extrusion
Ceretti, Elisabetta; Mazzoni, Luca; Giardini, Claudio
2007-01-01
The aim of this work is to design and improve the geometry of a porthole die for the production of aluminum components by means of 3D FEM simulations. The use of finite element models allows the effects of the die geometry (webs, extrusion cavity) on the material flow and on the stresses acting on the die to be investigated, so as to reduce die wear and improve tool life. The software used to perform the simulations was a commercial FEM code, Deform 3D. The technological data introduced in the FE model were furnished by the METRA S.p.A. Company, a partner in this research. The results obtained have been considered valid and helpful by the company for building a new optimized extrusion porthole die.
Analysis of the deep rolling process on turbine blades using the FEM/BEM-coupling
Baecker, V; Klocke, F; Wegner, H; Timmer, A; Grzhibovskis, R; Rjasanow, S
2010-01-01
Highly stressed components of aircraft engines, like turbine blades, have to satisfy stringent requirements regarding durability and reliability. The induction of compressive stresses and strain hardening in their surface layer has proven to be a promising method to significantly increase their fatigue resistance. The required surface layer properties can be achieved by deep rolling. The determination of optimal process parameters still requires an elaborate experimental set-up and subsequent time- and cost-intensive measurements. In previous works the application of the Finite Element Method (FEM) was proposed as an effective and cost-reducing alternative to predict the surface layer state for given process parameters. However, the FEM requires a very fine mesh in the surface layer to resolve the high stress gradients with sufficient accuracy. The resulting high time and memory requirements make an efficient simulation of complete turbine components impossible. In this article a solution is offered by coupling the FEM with the Boundary Element Method (BEM). It enables the computation of large-scale models at low computational cost and high result accuracy. Different approaches to the FEM/BEM coupling for the simulation of deep rolling are examined with regard to their stability and required computing time.
The adjoint sensitivity method, a contribution to the code uncertainty evaluation
Ounsy, A.; Brun, B.; De Crecy, F.
1994-01-01
This paper deals with the application of the adjoint sensitivity method (ASM) to thermal hydraulic codes. The advantage of the method is that it uses little central processing unit time in comparison with the usual approach, which requires one complete code run per sensitivity determination. In the first part the mathematical aspects of the problem are treated, and the applicability of the method to the functional-type response of a thermal hydraulic model is demonstrated. The problem has been analysed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM. The equivalence of both methods is demonstrated; nevertheless, only the discrete ASM constitutes a practical solution for thermal hydraulic codes. The application of the discrete ASM to the thermal hydraulic safety code CATHARE is then presented for two examples. They demonstrate that the discrete ASM constitutes an efficient tool for the analysis of code sensitivity.
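The discrete-ASM idea (one forward solve plus one backward adjoint solve per response, instead of one full rerun per parameter) can be illustrated on a much simpler model than the Burgers equation. The sketch below applies the discrete adjoint to forward-Euler integration of u' = -k·u with response J = u(T); everything here is an invented toy, not CATHARE.

```python
def adjoint_sensitivity(k, u0=1.0, dt=0.01, n_steps=100):
    """Discrete adjoint sensitivity for u' = -k*u (forward Euler).

    The discrete scheme is u_{n+1} = (1 - k*dt) * u_n and the
    response is J = u_N. One forward solve (storing the trajectory)
    plus one backward adjoint solve yields dJ/dk exactly for the
    discretized problem, which is the point of the discrete ASM.
    Returns (J, dJ/dk).
    """
    # forward solve, storing the trajectory
    u = [u0]
    for _ in range(n_steps):
        u.append((1.0 - k * dt) * u[-1])
    # backward adjoint solve: lam holds dJ/du_{n+1} at each step
    lam = 1.0                       # dJ/du_N, since J = u_N
    dJdk = 0.0
    for n in reversed(range(n_steps)):
        # step n maps u_n -> u_{n+1}; d(u_{n+1})/dk = -dt * u_n
        dJdk += lam * (-dt) * u[n]
        lam *= (1.0 - k * dt)       # propagate adjoint backward
    return u[-1], dJdk
```

The backward pass costs about the same as one forward run, whereas the brute-force approach would need one extra forward run per parameter, which is exactly the CPU-time advantage the abstract describes.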
Fujimura, Toichiro; Okumura, Keisuke
2002-11-01
A prototype version of a diffusion code has been developed to analyze hexagonal cores, such as that of a reduced-moderation reactor, and the applicability of some acceleration methods has been investigated to accelerate the convergence of the iterative solution method. In the three-dimensional code MOSRA-Prism the hexagonal core is divided into regular triangular prisms, and a polynomial expansion nodal method is applied to approximate the neutron flux distribution by a cubic polynomial. The multi-group diffusion equation is solved iteratively with ordinary inner and outer iterations, and the effectiveness of acceleration is ascertained by applying an adaptive acceleration method and a neutron source extrapolation method, respectively. The formulation of the polynomial expansion nodal method is outlined in the report, and the local and global effectiveness of the acceleration methods is discussed with various sample calculations. A new general expression of the vacuum boundary condition, derived in the formulation, is also described. (author)
Hybrid Micro-Depletion method in the DYN3D code
Bilodid, Yurii [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety
2016-07-01
A new method for accounting for spectral history effects was developed and implemented in the reactor dynamics code DYN3D. The detailed nuclide content is calculated for each region of the reactor core and used to correct the fuel properties. The new method demonstrates excellent results in test cases.
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
Application of Wielandt method in continuous-energy nuclear data sensitivity analysis with RMC code
Qiu Yishu; Wang Kan; She Ding
2015-01-01
The Iterated Fission Probability (IFP) method, an accurate method for estimating adjoint-weighted quantities in continuous-energy Monte Carlo criticality calculations, has been widely used for calculating kinetic parameters and nuclear data sensitivity coefficients. However, because the tallies of original contributions must be stored while waiting over subsequent cycles, this method faces the challenge of high memory usage, the storage size being proportional to the number of particle histories in each cycle. Recently, the Wielandt method, applied by the Monte Carlo code McCARD to calculate kinetic parameters, was shown to estimate adjoint fluxes within a single particle history and thus to save memory. In this work, the Wielandt method has been applied in the Reactor Monte Carlo code RMC for nuclear data sensitivity analysis. The methodology and algorithm of applying the Wielandt method to the estimation of adjoint-based sensitivity coefficients are discussed. Verification is performed by comparing the sensitivity coefficients calculated by the Wielandt method with analytical solutions, with those computed by the IFP method, which is also implemented in RMC for sensitivity analysis, and with those from the multi-group TSUNAMI-3D module in the SCALE code package. (author)
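A deterministic toy version of the Wielandt idea can make the acceleration concrete: shifting the spectrum before inverting shrinks the effective dominance ratio, so the iteration converges in far fewer steps than plain power iteration. This matrix sketch is only an analogue; the Monte Carlo implementations in McCARD and RMC operate on particle histories, not explicit matrices, and the matrix and shift below are invented for illustration.

```python
import numpy as np

def wielandt_power_iteration(A, shift, tol=1e-10, max_iter=500):
    """Power iteration with a Wielandt (spectral) shift.

    Iterating with (A - shift*I)^-1 instead of A makes the mode
    nearest `shift` strongly dominant, so convergence is fast when
    the shift lies close to (but not exactly at) the eigenvalue
    sought. Returns (eigenvalue of A, eigenvector).
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = np.linalg.inv(A - shift * np.eye(n))   # shifted-inverse operator
    lam = 0.0
    for _ in range(max_iter):
        y = M @ x
        y /= np.linalg.norm(y)
        lam_new = y @ A @ y        # Rayleigh quotient: eigenvalue of A
        if abs(lam_new - lam) < tol:
            return lam_new, y
        x, lam = y, lam_new
    return lam, x
```

With a shift of 4.5 applied to a matrix whose largest eigenvalue is 5, the shifted iteration locks onto that mode almost immediately, whereas plain power iteration on the same matrix would converge only at the rate of the ratio of its two largest eigenvalues.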
Application of Finite Layer Method in Pavement Structural Analysis
Pengfei Liu
2017-06-01
The finite element (FE) method has been widely used in predicting the structural responses of asphalt pavements. However, three-dimensional (3D) modeling in general-purpose FE software systems such as ABAQUS requires extensive computation and is relatively time-consuming. To address this issue, a specific computational code, EasyFEM, was developed based on the finite layer method (FLM) for analyzing the structural responses of asphalt pavements under a static load. Basically, it is a 3D FE code that requires only a one-dimensional (1D) mesh: analytical methods and Fourier series are used in the other two dimensions, which can significantly reduce the computational time and required resources thanks to the easy implementation of parallel computing technology. Moreover, a newly developed Element Energy Projection (EEP) method for super-convergent calculations was implemented in EasyFEM to improve the accuracy of the solutions for strains and stresses over the whole pavement model. The accuracy of the program is verified by comparing its results with those from BISAR and ABAQUS for a typical asphalt pavement structure. The results show that the predicted responses from ABAQUS and EasyFEM are in good agreement with each other. EasyFEM with the EEP post-processing technique converges faster than ordinary EasyFEM applications, which proves that the EEP technique can improve the accuracy of strains and stresses from EasyFEM. In summary, EasyFEM has the potential to provide a flexible and robust platform for the numerical simulation of asphalt pavements and can easily be post-processed with the EEP technique to enhance its advantages.
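The core finite-layer trick (a 1-D mesh plus series expansions in the remaining directions) can be shown on a toy Poisson problem: each sine mode in x decouples into an independent 1-D tridiagonal system in z, so a 2-D field is recovered from purely 1-D solves. This sketch is not EasyFEM; the geometry, load, and discretization are invented for illustration.

```python
import numpy as np

def finite_layer_poisson(nz=200, modes=8):
    """Finite-layer-style solve of -lap(u) = f on the unit square,
    u = 0 on the boundary, with f = sin(pi x) sin(pi z).

    The x-direction is handled analytically with a sine series, so
    only a 1-D mesh (in z) is needed; each Fourier mode m reduces
    to an independent tridiagonal system -u'' + (m pi)^2 u = f_m.
    Returns a callable u(x, z).
    """
    z = np.linspace(0.0, 1.0, nz + 1)
    h = z[1] - z[0]
    u = np.zeros((modes + 1, nz + 1))       # mode coefficients u_m(z)
    for m in range(1, modes + 1):
        k2 = (m * np.pi) ** 2
        # sine coefficients of f in x: only m = 1 is nonzero here
        f_m = np.sin(np.pi * z) if m == 1 else np.zeros(nz + 1)
        # 1-D finite differences for -u'' + k2*u = f_m, u(0)=u(1)=0
        n = nz - 1
        main = (2.0 / h**2 + k2) * np.ones(n)
        off = (-1.0 / h**2) * np.ones(n - 1)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        u[m, 1:-1] = np.linalg.solve(A, f_m[1:-1])

    def eval_u(x, zi):
        iz = int(round(zi / h))             # nearest 1-D mesh node
        return sum(u[m, iz] * np.sin(m * np.pi * x)
                   for m in range(1, modes + 1))
    return eval_u
```

For this load the exact solution is sin(πx) sin(πz) / (2π²), so the mid-point value can be checked directly; note also that the per-mode systems are independent, which is why this decomposition parallelizes so easily.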
The OpenMOC method of characteristics neutral particle transport code
Boyd, William; Shaner, Samuel; Li, Lulu; Forget, Benoit; Smith, Kord
2014-01-01
Highlights: • An open source method of characteristics neutron transport code has been developed. • OpenMOC shows nearly perfect scaling on CPUs and 30× speedup on GPUs. • Nonlinear acceleration techniques demonstrate a 40× reduction in source iterations. • OpenMOC uses modern software design principles within a C++ and Python framework. • Validation with respect to the C5G7 and LRA benchmarks is presented. - Abstract: The method of characteristics (MOC) is a numerical integration technique for partial differential equations, and has seen widespread use for reactor physics lattice calculations. The exponential growth in computing power has brought high-fidelity full-core MOC calculations within reach. The OpenMOC code is being developed at the Massachusetts Institute of Technology to investigate algorithmic acceleration techniques and parallel algorithms for MOC. OpenMOC is a free, open source code written using modern software languages such as C/C++ and CUDA, with an emphasis on extensible design principles for code developers and an easy-to-use Python interface for code users. The present work describes the OpenMOC code and illustrates its ability to model large problems accurately and efficiently.
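The per-segment update at the heart of any MOC sweep can be written in a few lines. The function below is a hedged sketch of that kernel only (segment lengths, cross sections, and sources are illustrative); it is not the OpenMOC API.

```python
import math

def moc_track_sweep(psi_in, segments):
    """Characteristic sweep along one track, the kernel update used
    by MOC codes. Each flat-source segment is a tuple
    (length t, total cross section sig, isotropic source q); the
    exact solution of the transport equation along the ray gives
        psi_out = psi_in * exp(-sig*t) + (q/sig) * (1 - exp(-sig*t)).
    Returns the outgoing angular flux and the per-segment average
    angular fluxes (what a real code tallies into the scalar flux).
    """
    psi = psi_in
    averages = []
    for t, sig, q in segments:
        att = math.exp(-sig * t)
        psi_out = psi * att + (q / sig) * (1.0 - att)
        # segment-average angular flux: q/sig + (psi_in - psi_out)/(sig*t)
        averages.append(q / sig + (psi - psi_out) / (sig * t))
        psi = psi_out
    return psi, averages
```

Two sanity checks follow from the formula: in a pure absorber (q = 0) the flux attenuates exponentially, and a flux already at the infinite-medium value q/sig passes through unchanged.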
Modeling radiation belt dynamics using a 3-D layer method code
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
Avci, H.I.; Raghuram, S.; Baybutt, P.
1985-04-01
A new computer code called MATADOR (Methods for the Analysis of Transport And Deposition Of Radionuclides) has been developed to replace the CORRAL-2 computer code which was written for the Reactor Safety Study (WASH-1400). This report is a User's Manual for MATADOR. MATADOR is intended for use in system risk studies to analyze radionuclide transport and deposition in reactor containments. The principal output of the code is information on the timing and magnitude of radionuclide releases to the environment as a result of severely degraded core accidents. MATADOR considers the transport of radionuclides through the containment and their removal by natural deposition and by engineered safety systems such as sprays. It is capable of analyzing the behavior of radionuclides existing either as vapors or aerosols in the containment. The code requires input data on the source terms into the containment, the geometry of the containment, and thermal-hydraulic conditions in the containment
Jagannathan, V.
1985-01-01
For solving the multigroup diffusion theory equations in 3-D problems in which the material properties are uniform over large segments of the axial direction, the synthesis method is known to give fairly accurate results at very low computational cost. The single-channel continuous flux synthesis option has been incorporated in the code system FEMSYN. One can generate the radial trial functions by either the finite difference method (FDM) or the finite element method (FEM). The axial mixing functions can also be found by either FDM or FEM. Use of FEM in both radial and axial directions is found to reduce the calculation time considerably. One can determine the eigenvalue and the 3-D flux and power distributions with FEMSYN. In this report, a detailed description of the synthesis module SYNTHD is given. (author)
WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection
Deqiang Fu
2017-01-01
In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) from the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
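A hedged sketch of the TF-IDF weighting idea, using Python's ast module: node types that occur in every corpus program (e.g. instructor-supplied boilerplate) are down-weighted before comparing two submissions. WASTK proper computes a tree kernel over weighted subtrees; the flat cosine similarity below illustrates only the weighting step.

```python
import ast
import math
from collections import Counter

def node_type_counts(source):
    """Count AST node types in a piece of Python source."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def tfidf_similarity(a, b, corpus):
    """Cosine similarity of two programs' AST node-type profiles,
    with each node type down-weighted by an IDF term computed over
    `corpus` (a list of source strings). This is an illustration of
    TF-IDF weighting on ASTs, not the WASTK tree kernel itself.
    """
    docs = [node_type_counts(s) for s in corpus]

    def idf(t):
        df = sum(1 for d in docs if t in d)   # document frequency
        return math.log((1 + len(docs)) / (1 + df)) + 1.0

    ca, cb = node_type_counts(a), node_type_counts(b)
    types = set(ca) | set(cb)
    va = {t: ca.get(t, 0) * idf(t) for t in types}
    vb = {t: cb.get(t, 0) * idf(t) for t in types}
    dot = sum(va[t] * vb[t] for t in types)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Identical programs score 1.0, while structurally different programs score lower; node types shared by every corpus program contribute less to either outcome, which is the intended anti-boilerplate effect.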
Development of a CAD-based neutron transport code with the method of characteristics
Chen Zhenping; Wang Dianxi; He Tao; Wang Guozhong; Zheng Huaqing
2012-01-01
The main problem in determining whether the method of characteristics (MOC) can be used in complicated and highly heterogeneous geometry is how to combine an effective geometry processing method with MOC. In this study, a new approach was proposed to solve the geometry problem mentioned above: MCAM, a Multi-Calculation Automatic Modeling program for Neutronics and Radiation Transport developed by the FDS Team, is used for the geometry description and the ray tracing of particle transport. Based on this theory and approach, a two-dimensional neutron transport code was developed and integrated into VisualBUS, also developed by the FDS Team. Several benchmarks were used to verify the validity of the code, and the numerical results agreed very well with the reference values, which indicates the accuracy and feasibility of the method and of the MOC code. (authors)
Park, Sang-Jin; Kim, Hoe-Woong; Joo, Young-Sang; Kim, Sung-Kyun; Kim, Jong-Bum [KAERI, Daejeon (Korea, Republic of)
2016-05-15
This paper introduces a 2-D FEM simulation of the propagation and radiation of the leaky Lamb wave in and from a plate-type ultrasonic waveguide sensor, conducted for radiation beam profile analysis. The FEM simulations are performed with three different excitation frequencies, and the radiation beam profiles obtained from the simulations show good agreement with those obtained from corresponding experiments. This result will be utilized to improve the performance of the developed waveguide sensor. The quality of the visualized image is mainly affected by the beam profile characteristics of the leaky wave radiated from the waveguide sensor. However, the relationships between the radiation beam profile and the many parameters of the waveguide sensor are not fully revealed yet. Therefore, further parametric studies are necessary to improve the performance of the sensor, and the finite element method (FEM) is one of the most effective tools for such studies.
Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.
Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai
2017-03-31
As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches used to categorically code tobacco-related Twitter data and to make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed so that tobacco-related Twitter results can be compared and replicated more easily. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and Black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations.
Fluxball magnetic field analysis using a hybrid analytical/FEM/BEM with equivalent currents
Fernandes, João F.P.; Camilo, Fernando M.; Machado, V. Maló
2016-01-01
In this paper, a fluxball electric machine is analyzed with respect to the magnetic flux, force, and torque. A novel method is proposed based on a special hybrid FEM/BEM (Finite Element Method/Boundary Element Method) with equivalent currents, using an analytical treatment for the source field determination. The method can be applied to evaluate the magnetic field in axisymmetric problems in the presence of several magnetic materials. Results obtained with a commercial finite element analysis tool are also presented to validate the proposed method. - Highlights: • The fluxball machine magnetic field is analyzed by a new FEM/BEM/analytical method. • The method is adequate for axisymmetric, non-homogeneous magnetic field problems. • The source magnetic field is evaluated by considering a non-magnetic equivalent problem. • Material magnetization vectors are accounted for by using equivalent currents. • A strong reduction of the finite element domain is achieved.
Introduction into scientific work methods-a necessity when performance-based codes are introduced
Dederichs, Anne; Sørensen, Lars Schiøtt
The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods ... and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering of the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new...
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane-segmentation-based prediction. The proposed depth intra skip prediction uses the estimated direction at both the encoder and decoder and does not need to encode residual data. Our plane-segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/Advanced Video Coding intraprediction and can improve the subjective rendering quality.
Wilson, R.D.; Price, R.K.; Kosanke, K.L.
1983-03-01
As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, a short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings of the programs used to convert data bases between machine floating-point and EBCDIC formats.
Lee, Byung Hee; Lee, Kyung Sang; Kim, Woo Ho; Han, Joon Koo; Choi, Byung Ihn; Han, Man Chung [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)
1990-10-15
The authors developed a computer program for printing reports as well as for data storage and retrieval in the radiology department. The program runs on an IBM PC AT and was written in the dBASE III Plus language. The automatic coding method for the ACR code, developed by Kim et al., was applied in this program, and the framework of the program is the same as that developed for the surgical pathology department. The working sheet, which contains the name card for X-ray film identification and the results of previous radiologic studies, is printed during registration. A word processing function is used for issuing the formal report of a radiologic study, and data storage is carried out while the report is being typed. Two kinds of data files are stored on the hard disk: a temporary file containing the full information, and a permanent file containing the patient's identification data and ACR codes. Searching for a specific case can be performed by chart number, patient's name, date of study, or ACR code within a second. All cases are arranged by the ACR codes for procedure, anatomy, and pathology. All new data are copied to a diskette automatically after the daily work, from which the data can be restored in case of hard disk failure. The main advantage of this program in comparison with a larger computer system is its low price. Based on the experience in the Seoul District Armed Forces General Hospital, we believe that this program provides a solution to various problems in radiology departments where a large computer system with well-designed software is not available.
GPU-accelerated 3D neutron diffusion code based on finite difference method
Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)
2012-07-01
The finite difference method, a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than coarse-mesh nodal methods, has faced a bottleneck to wide application: the huge memory and long computation times it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as reference points to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was accelerated by the SOR method and the Chebyshev extrapolation technique. (authors)
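As a minimal illustration of the finite-difference discretization the abstract above builds on (a sketch, not the authors' 3DFD code), the snippet below solves a one-group, one-dimensional fixed-source diffusion problem; all material data are made-up values.

```python
import numpy as np

# One-group, 1-D fixed-source neutron diffusion: -D phi'' + Sig_a phi = S,
# zero-flux boundaries, central finite differences. Illustrative data only.
D, sig_a, S = 1.0, 0.1, 1.0       # diffusion coeff., absorption, source
L, n = 100.0, 201                 # slab width, number of interior nodes
h = L / (n + 1)

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2
phi = np.linalg.solve(A, np.full(n, S))

# The flux should be positive, symmetric about the midplane, and close to
# the asymptotic value S/Sig_a far from the boundaries.
print(phi.min() > 0, abs(phi[0] - phi[-1]) < 1e-8)
```

For a thick slab the centerline flux approaches S/Σa, which gives a quick sanity check on the discretization.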
Prediction of the properties of PVD/CVD coatings with the use of FEM analysis
Śliwa, Agata; Mikuła, Jarosław; Gołombek, Klaudiusz; Tański, Tomasz; Kwaśny, Waldemar; Bonek, Mirosław; Brytan, Zbigniew
2016-01-01
Highlights: • Prediction of the properties of PVD/CVD coatings with the use of (FEM) analysis. • Stress distribution in multilayer Ti/Ti(C,N)/CrN, Ti/Ti(C,N)/(Ti,Al)N coatings. • The experimental values of stresses were determined on X-ray diffraction patterns. • An FEM model was established for the purpose of building a computer simulation. - Abstract: The aim of this paper is to present the results of the prediction of the properties of PVD/CVD coatings with the use of finite element method (FEM) analysis. The possibility of employing the FEM in the evaluation of stress distribution in multilayer Ti/Ti(C,N)/CrN, Ti/Ti(C,N)/(Ti,Al)N, Ti/(Ti,Si)N/(Ti,Si)N, and Ti/DLC/DLC coatings, taking into account their deposition conditions on magnesium alloys, is discussed in the paper. The difference in internal stresses in the zone between the coating and the substrate is caused, first of all, by the difference between the mechanical and thermal properties of the substrate and the coating, and also by the structural changes that occur in these materials during the fabrication process, especially during the cooling process following PVD and CVD treatment. The experimental values of stresses were determined based on X-ray diffraction patterns that correspond to the modelled values, which in turn can be used to confirm the correctness of the accepted mathematical model for the problem. An FEM model was established for the purpose of building a computer simulation of the internal stresses in the coatings. The accuracy of the FEM model was verified by comparing the results of the computer simulation of the stresses with experimental results. A computer simulation of the stresses was carried out in the ANSYS environment using the FEM. Structure observations, chemical composition measurements, and mechanical property characterisations of the investigated materials have been carried out to give a background for the discussion of the results that were recorded during the modelling process.
Comparison of different methods used in integral codes to model coagulation of aerosols
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrier phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations, are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module of the ASTEC code are the most efficient ones for carrying out calculations.
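The coagulation models compared above all discretize the Smoluchowski equation over a particle-size spectrum. A minimal sketch, assuming a constant kernel and an explicit Euler time step (not the SOCRAT/ASTEC/MELCOR schemes; all numbers are illustrative):

```python
import numpy as np

# Discrete Smoluchowski coagulation with a constant kernel K.
# n[k] is the number concentration of particles made of k+1 monomers.
K, dt, steps = 1.0, 1e-3, 1000
kmax = 50
n = np.zeros(kmax)
n[0] = 1.0                      # start from monomers only

for _ in range(steps):
    gain = np.zeros(kmax)
    for k in range(1, kmax):    # sizes i+1 and j+1 merge into size k+1
        for i in range(k):
            gain[k] += 0.5 * K * n[i] * n[k - 1 - i]
    loss = K * n * n.sum()
    n += dt * (gain - loss)

# Total monomer mass sum((k+1)*n[k]) is conserved up to truncation at kmax,
# while the total particle number decreases as particles merge.
mass = ((np.arange(kmax) + 1) * n).sum()
print(round(mass, 3), n.sum() < 1.0)
```

Mass conservation under refinement of the size spectrum is exactly the property the comparison above probes: with a too-coarse spectrum (volume ratio ≥ 2 between fractions) sectional schemes start to distort the distribution.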
Sjenitzer, Bart L.; Hoogenboom, J. Eduard, E-mail: B.L.Sjenitzer@TUDelft.nl, E-mail: J.E.Hoogenboom@TUDelft.nl [Delft University of Technology (Netherlands)
2011-07-01
A new dynamic Monte Carlo method is implemented in the general-purpose Monte Carlo code Tripoli 4.6.1. With this new method incorporated, a general-purpose code can be used for safety transient analysis, such as the movement of a control rod or an accident scenario. To make the Tripoli code ready for calculations on dynamic systems, the Tripoli scheme had to be altered to incorporate time steps, to include the simulation of delayed neutron precursors, and to simulate prompt neutron chains. The modified Tripoli code is tested on two sample cases, a steady-state system and a subcritical system, and the resulting neutron fluxes behave just as expected. The steady-state calculation has a constant neutron flux over time, and this result shows the stability of the calculation. The neutron flux stays constant with acceptable variance. This also shows that the starting conditions are determined correctly. The subcritical case shows that the code can also handle dynamic systems with a varying neutron flux. (author)
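The subcritical behaviour described above can be caricatured as a branching process whose population decays with the multiplication factor. The toy below is not the Tripoli scheme: it ignores delayed precursors and all transport physics, and every number is an assumption.

```python
import random

# Toy time-stepped Monte Carlo for a subcritical system: each generation,
# every neutron induces one further fission with probability k < 1, so the
# expected population decays as k**t. Illustrative only.
random.seed(1)
k = 0.9                        # assumed subcritical multiplication factor
population = 10000
history = [population]
for step in range(20):
    births = sum(1 for _ in range(population) if random.random() < k)
    population = births        # one neutron per "fission", for simplicity
    history.append(population)

# Population should decay toward 10000 * k**20 (about 1200) on average.
print(history[0], history[-1])
```

A steady-state analogue (k = 1 with a compensating source) would instead show a flux that is constant up to statistical variance, as in the abstract's first test case.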
Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.
1996-03-01
High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), consisting of a microsphere of low-enriched UO2 with coating layers to prevent FP release. Many such spherical fuels are distributed randomly in the core. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an SN transport code, or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the necessary probability distributions, which are used in the application of the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. By using this code, various statistical quantities are obtained, namely the Radial Distribution Function (RDF), the Nearest Neighbor Distribution (NND), the 2-dimensional RDF, and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
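A toy version of the packing statistics described above, using random sequential addition rather than the report's method; the radius and sphere count are made-up values.

```python
import random, math

# Pack equal hard spheres in a unit cube by rejection sampling and compute
# the nearest-neighbor distance (NND) for each accepted sphere.
random.seed(0)
r = 0.05                       # sphere radius (illustrative)
centers = []
attempts = 0
while len(centers) < 100 and attempts < 100000:
    attempts += 1
    c = (random.random(), random.random(), random.random())
    if all(math.dist(c, p) >= 2 * r for p in centers):
        centers.append(c)

# Hard-sphere constraint: no two centers closer than one diameter.
nnd = [min(math.dist(p, q) for q in centers if q is not p) for p in centers]
print(len(centers), min(nnd) >= 2 * r)
```

The RDF mentioned in the abstract is obtained from the same center list by histogramming all pairwise distances and normalizing by the ideal-gas expectation.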
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and their link to another variable, called a co-variate, which could be, for instance, the output of the considered code. We have also developed an adaptation of a visualization tool for functional data, which makes it possible to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been
Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers
Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2013-09-01
This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.
The Effects of Single and Dual Coded Multimedia Instructional Methods on Chinese Character Learning
Wang, Ling
2013-01-01
Learning Chinese characters is a difficult task for adult English native speakers due to the significant differences between the Chinese and English writing system. The visuospatial properties of Chinese characters have inspired the development of instructional methods using both verbal and visual information based on the Dual Coding Theory. This…
Method and device for fast code acquisition in spread spectrum receivers
Coenen, A.J.R.M.
1993-01-01
Abstract of NL 9101155 (A) Method for code acquisition in a satellite receiver. The biphase-modulated high-frequency carrier transmitted by a satellite is converted via a fixed local oscillator frequency down to the baseband, whereafter the baseband signal is fed via a bandpass filter, which has an
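Serial-search code acquisition of the kind described above can be sketched by correlating the received chips against shifted replicas of the local code and picking the shift with the largest correlation. The random ±1 sequence below stands in for the receiver's actual PN code; the length and delay are arbitrary.

```python
import random

# Toy code acquisition: find the cyclic delay of a received spreading code
# by exhaustive correlation against the local replica.
random.seed(42)
N = 127
code = [random.choice((-1, 1)) for _ in range(N)]
true_delay = 37
received = code[-true_delay:] + code[:-true_delay]   # cyclically delayed copy

def corr(shift):
    # Correlation of the received chips with the replica shifted by `shift`.
    return sum(received[i] * code[(i - shift) % N] for i in range(N))

est = max(range(N), key=corr)
print(est == true_delay)
```

The peak correlation equals N at the correct shift; fast-acquisition schemes such as the one patented here reduce the number of shifts that must be examined, rather than changing this underlying correlation test.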
Implantation of a new calculation method of fuel depletion in the CITHAM code
Alvarenga, M.A.B.
1985-01-01
The accuracy of the linear approximation method used in the CITHAM code to solve the depletion equations is evaluated. Results are compared with a benchmark problem. The convenience of treating the depletion chains before the criticality calculations is analysed. The depletion calculation was modified using the technique of linear combination of linear chains. (M.C.K.) [pt]
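The linear-chain depletion treatment discussed above reduces to Bateman-type solutions of chains of first-order ODEs. A two-nuclide sketch comparing explicit Euler time stepping with the exact Bateman solution (the decay constants are assumed values, not CITHAM data):

```python
import math

# Two-nuclide linear chain A -> B -> (removed):
#   dNa/dt = -lam_a * Na,  dNb/dt = lam_a * Na - lam_b * Nb
lam_a, lam_b = 0.5, 0.2       # assumed decay/removal constants (1/s)
t, steps = 2.0, 20000
dt = t / steps
Na, Nb = 1.0, 0.0
for _ in range(steps):
    Na, Nb = Na - dt * lam_a * Na, Nb + dt * (lam_a * Na - lam_b * Nb)

# Exact Bateman solution for the same chain, starting from Na(0)=1, Nb(0)=0:
Na_ex = math.exp(-lam_a * t)
Nb_ex = lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
print(abs(Na - Na_ex) < 1e-4, abs(Nb - Nb_ex) < 1e-4)
```

A general linear chain is handled the same way: each nuclide's concentration is a linear combination of exponentials of the chain's decay constants, which is what makes the linear-combination-of-chains technique attractive.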
Fuel penetration of intersubassembly gaps in LMFBRs: a calculational method with the SIMMER-II code
DeVault, G.P.
1983-01-01
Early fuel removal from the active core of a liquid-metal-cooled fast breeder reactor (LMFBR) undergoing a core-disruptive accident may reduce the potential for large energetics resulting from recriticalities. A possible avenue for early fuel removal in heterogeneous core LMFBRs is the failure of duct walls in disrupted driver subassemblies followed by fuel penetration into the gaps between blanket subassemblies. The SIMMER-II code was modified to simulate flow between subassembly gaps. Calculations with the modified SIMMER-II code indicate the capabilities of the method and the potential for fuel mass reduction in the active core
Lee, A.G.; Wilkin, G.B.
1995-01-01
This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community
TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222
Shen, H.; Li, Z.; Wang, K.; Yu, G.
2010-01-01
A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behaviors of neutrons and of the precursors of delayed neutrons during the transient process. DSM dispenses with the various approximations that other methods require, so it is precise and flexible with respect to geometric configurations, material compositions, and energy spectra. In this paper, the theory of DSM is introduced first, and then the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)
Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC
She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin
2011-01-01
The probability-neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation and is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracing or the delta-tracking method, large amounts of time are spent in finding out which cell a particle is located in. The traditional way is to search the cells one by one in a sequence defined in advance. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM optimizes the searching sequence, i.e., the cells with larger probability are searched preferentially. The PNM is implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations for complicated systems; the method is especially effective in delta-tracking simulation. (author)
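The core idea of the PNM can be sketched in a few lines: order the candidate cells by how often particles were previously found in them, so that high-probability cells are tried first. The 1-D interval "geometry" and particle positions below are purely illustrative, not RMC's data structures.

```python
from collections import Counter

# Cells as half-open 1-D intervals [lo, hi); cost = number of cells examined.
bounds = [(0, 1), (1, 2), (2, 5), (5, 10)]

def locate(x, order):
    for tried, idx in enumerate(order, start=1):
        lo, hi = bounds[idx]
        if lo <= x < hi:
            return idx, tried           # found cell, cells examined

visits = Counter()                      # how often each cell was the answer
xs = [3.0, 4.0, 2.5, 0.5, 3.3, 4.9, 2.1]   # most particles land in cell 2
naive_cost = opt_cost = 0
for x in xs:
    idx, cost = locate(x, range(len(bounds)))     # fixed sequential search
    naive_cost += cost
    order = sorted(range(len(bounds)), key=lambda i: -visits[i])
    opt_cost += locate(x, order)[1]               # probability-ordered search
    visits[idx] += 1

print(naive_cost, opt_cost, opt_cost < naive_cost)
```

Because the frequently visited cell moves to the front of the search order after its first hit, the probability-ordered search examines fewer cells in total than the fixed order.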
Qiu, Yishu; She, Ding; Tang, Xiao; Wang, Kan; Liang, Jingang
2016-01-01
Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there has been a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most of the Monte Carlo codes having the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though it is theoretically accurate, the IFP method faces the challenge of huge memory consumption. It may therefore sometimes produce poor sensitivity coefficients, since the number of particles in each active cycle may not be sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely, the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce the memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates at every collision, the F-CLUTCH algorithm only stores the relevant reaction rates at every fission point. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified with a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate that the accuracy of the F-CLUTCH algorithm is the same as that of the C-CLUTCH algorithm.
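The fission matrix step mentioned above can be illustrated on a made-up 3×3 matrix: F[i, j] is the expected number of next-generation fission neutrons born in region i per fission neutron born in region j, the dominant right eigenvector of F approximates the fission source, and the dominant eigenvector of the transpose approximates the adjoint fission source. The numbers below are illustrative, not RMC output.

```python
import numpy as np

# Made-up, non-symmetric fission matrix for three spatial regions.
F = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.5]])

def fundamental(M, iters=1000):
    # Power iteration for the dominant eigenvector, normalized to sum 1.
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= v.sum()
    return v

source = fundamental(F)        # forward fission source distribution
adjoint = fundamental(F.T)     # adjoint fission source distribution

# Forward and adjoint problems share the same dominant eigenvalue k.
k_fwd = (F @ source).sum() / source.sum()
k_adj = (F.T @ adjoint).sum() / adjoint.sum()
print(abs(k_fwd - k_adj) < 1e-9)
```

Because F is not symmetric, the adjoint distribution differs from the forward one; it is this adjoint weighting that CLUTCH-type estimators need.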
Structural dynamics in LMFBR containment analysis. A brief survey of computational methods and codes
Chang, Y.W.
1977-01-01
This paper gives a brief survey of the computational methods and codes available for LMFBR containment analysis. The various numerical methods commonly used in the computer codes are compared. The survey provides reactor engineers with up-to-date information on the development of structural dynamics in LMFBR containment analysis. It can also be used as a basis for the selection of numerical methods in future code development. First, the commonly used finite-difference expressions in the Lagrangian codes will be compared. Sample calculations will be used as a basis for discussing and comparing the accuracy of the various finite-difference representations. The distortion of the meshes will also be compared; the techniques used for eliminating numerical instabilities will be discussed and compared using examples. Next, the numerical methods used in the Eulerian formulation will be compared, first among themselves and then with the Lagrangian formulations. Special emphasis is placed on the effect of mass diffusion in the Eulerian calculation on the propagation of discontinuities. Implicit and explicit numerical integrations will be discussed, and results obtained from these two techniques will be compared. Then, the finite-element methods are compared with the finite-difference methods. The advantages and disadvantages of the two methods will be discussed in detail, together with the versatility and ease of application of each method to containment analysis involving complex geometries. It will also be shown that the finite-element equations for a constant-pressure fluid element are identical to the finite-difference equations obtained using contour integrations. Finally, conclusions based on this study will be given.
Comparisons of coded aperture imaging using various apertures and decoding methods
Chang, L.T.; Macdonald, B.; Perez-Mendez, V.
1976-07-01
The utility of coded-aperture γ-camera imaging of radioisotope distributions in nuclear medicine lies in its ability to give depth information about a three-dimensional source. We have calculated imaging with Fresnel zone plate and multiple-pinhole apertures to produce coded shadows, and reconstructed these shadows using correlation, Fresnel diffraction, and Fourier-transform deconvolution. Comparisons of the coded apertures and decoding methods are made by evaluating their point response functions for both in-focus and out-of-focus image planes. Background averages and standard deviations were calculated. In some cases, background subtraction was made using combinations of two complementary apertures. Results using deconvolution reconstruction for finite numbers of events are also given.
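Correlation decoding of a coded shadow, reduced to one dimension as a sketch: the object is cyclically convolved with a pinhole mask to form the shadow, then decoded by correlating with a mean-subtracted copy of the mask. The random mask below is an assumption, not one of the paper's apertures.

```python
import numpy as np

# 1-D coded-aperture imaging toy: encode a point source through a random
# pinhole mask and decode by balanced correlation.
rng = np.random.default_rng(0)
aperture = rng.integers(0, 2, 63)            # random 0/1 pinhole mask
obj = np.zeros(63)
obj[31] = 1.0                                # a single point source

# Coded shadow = cyclic convolution of object and mask (via FFT).
shadow = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(aperture)))

# Balanced correlation: subtracting the mean suppresses the flat background.
decoding = aperture - aperture.mean()
image = np.real(np.fft.ifft(np.fft.fft(shadow) * np.conj(np.fft.fft(decoding))))

# The point response function peaks at the true source position.
print(int(np.argmax(image)))
```

The point response function evaluated this way is exactly the quantity the paper uses to compare apertures and decoders; out-of-focus planes correspond to decoding with a mask scaled for a different source depth.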
GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications
Glaeser, H.
2008-01-01
In recent years there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation model calculations by best-estimate calculations supplemented by uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best-estimate thermal-hydraulic code calculations; otherwise, single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large-break loss-of-coolant accident on a reference reactor as well as to an experiment simulating containment behaviour.
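A concrete piece of the GRS machinery is Wilks' formula for the number of code runs needed for a one-sided nonparametric tolerance limit: the smallest n such that 1 − q^n ≥ β bounds the q-quantile of the output with confidence β using the largest of n sampled runs.

```python
# Wilks' formula for a one-sided tolerance limit: smallest n with
# 1 - quantile**n >= confidence, i.e. the largest of n independent runs
# exceeds the given quantile of the output with the given confidence.
def wilks_one_sided(quantile=0.95, confidence=0.95):
    n = 1
    while 1.0 - quantile**n < confidence:
        n += 1
    return n

print(wilks_one_sided())   # the classic 95%/95% case
```

The classic 95%/95% case yields 59 runs, which is why GRS-type analyses commonly quote on the order of 59 to 100 code calculations.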
Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers
Cole, Pamala C.; Halverson, Mark A.
2013-09-01
The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between Building America innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September, 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America (BA) innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov
An efficient simulation method of a cyclotron sector-focusing magnet using 2D Poisson code
Gad Elmowla, Khaled Mohamed M; Chai, Jong Seo, E-mail: jschai@skku.edu; Yeon, Yeong H; Kim, Sangbum; Ghergherehchi, Mitra
2016-10-01
In this paper we discuss design simulations of a spiral magnet using the 2D Poisson code. The Independent Layers Method (ILM) is a new technique developed to enable the use of a two-dimensional simulation code to calculate a non-symmetric three-dimensional magnetic field. In ILM, the magnet pole is divided into successive independent layers, and the hill-and-valley shape around the azimuthal direction is implemented using a reference magnet. The normalization of the magnetic field in the reference magnet produces a profile that can be multiplied by the maximum magnetic field in the hill magnet, which is a dipole magnet made of the hills at the same radius. Both magnets are then calculated using the 2D Poisson SUPERFISH code. A fully three-dimensional magnetic field is then produced using TOSCA for the original spiral magnet, and comparison of the 2D and 3D results shows good agreement.
Development of improved methods for the LWR lattice physics code EPRI-CELL
Williams, M.L.; Wright, R.Q.; Barhen, J.
1982-07-01
A number of improvements have been made by ORNL to the lattice physics code EPRI-CELL (E-C), which is widely used by utilities for the analysis of power reactors. The code modifications were made mainly in the thermal and epithermal routines and resulted in improved reactor physics approximations and more efficient running times. The improvements in the thermal flux calculation included implementation of a group-dependent rebalance procedure to accelerate the iterative process and a more rigorous calculation of interval-to-interval collision probabilities. The epithermal resonance shielding methods used in the code have been extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology.
Gildersleeve, Sara; Singer, Jefferson A; Skerrett, Karen; Wein, Shelter
2017-05-01
"We-ness," a couple's mutual investment in their relationship and in each other, has been found to be a potent dimension of couple resilience. This study examined the development of a method to capture We-ness in psychotherapy through the coding of relationship narratives co-constructed by couples ("We-Stories"). It used a coding system to identify the core thematic elements that make up these narratives. Couples that self-identified as "happy" (N = 53) generated We-Stories and completed measures of relationship satisfaction and mutuality. These stories were then coded using the We-Stories coding manual. Findings indicated that security, an element that involves aspects of safety, support, and commitment, was most common, appearing in 58.5% of all narratives. This element was followed by the elements of pleasure (49.1%) and shared meaning/vision (37.7%). The number of "We-ness" elements was also correlated with and predictive of discrepancy scores on measures of relationship mutuality, indicating the validity of the We-Stories coding manual. Limitations and future directions are discussed.
Goto, Minoru; Takamatsu, Kuniyoshi
2007-03-01
The HTTR temperature coefficients required for core dynamics calculations had been obtained from diffusion-code core calculations corrected with results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have several issues to be improved. The calculation method was therefore improved so that the temperature coefficients could be obtained without corrections by the Monte Carlo code. Specifically, the lattice model used for the calculations of the temperature coefficients was revised from the point of view of the neutron spectrum calculated by lattice calculations. The HTTR core calculations were performed by the diffusion code with group constants generated by lattice calculations with the improved lattice model. The core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficients obtained from the core calculation results. In consequence, the core dynamics calculation result showed good agreement with the experimental data, and valid temperature coefficients could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)
3-D spherical harmonics code FFT3 by the finite Fourier transformation method
Kobayashi, K.
1997-01-01
In the odd-order spherical harmonics method, the rigorous boundary condition at material interfaces is that the even moments of the angular flux and the normal components of the even-order moments of the current vectors must be continuous. However, it is difficult to derive spatially discretized equations by the finite difference or finite element methods which satisfy this material interface condition. It is shown that, using the finite Fourier transformation method, space-discretized equations which satisfy this interface condition can be easily derived. Discrepancies in the flux distribution near void regions between spherical harmonics method codes may be due to differences in the application of the material interface condition. (author)
Baniassadi, Majid; Mortazavi, Behzad; Hamedani, Amani; Garmestani, Hamid; Ahzi, Said; Fathi-Torbaghan, Madjid; Ruch, David; Khaleel, Mohammad A.
2012-01-31
In this study, a previously developed reconstruction methodology is extended to three-dimensional reconstruction of a three-phase microstructure, based on two-point correlation functions and two-point cluster functions. The reconstruction process has been implemented with a hybrid stochastic methodology for simulating the virtual microstructure. While different phases of the heterogeneous medium are represented by different cells, growth of these cells is controlled by optimizing parameters such as rotation, shrinkage, translation, distribution and growth rates of the cells. Based on the reconstructed microstructure, the finite element method (FEM) was used to compute the effective elastic modulus and effective thermal conductivity. A statistical approach, based on two-point correlation functions, was also used to directly estimate the effective properties of the developed microstructures. Good agreement between the predicted results from FEM analysis and the statistical methods was found, confirming the efficiency of the statistical methods for predicting thermo-mechanical properties of three-phase composites.
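The two-point correlation function underlying both the reconstruction and the statistical property estimates can be sketched for a periodic binary image with an FFT autocorrelation; this is a generic textbook estimator, not the authors' code, and the toy microstructure is two-phase for brevity (a three-phase medium would use one indicator array per phase).

```python
import numpy as np

def two_point_correlation(phase):
    """Estimate the two-point correlation S2(r) of a binary (0/1)
    microstructure via FFT autocorrelation, assuming periodic
    boundaries. At zero lag, S2 equals the phase volume fraction."""
    f = np.fft.fftn(phase)
    # Circular autocorrelation of the indicator function, per cell
    return np.fft.ifftn(f * np.conj(f)).real / phase.size

rng = np.random.default_rng(0)
micro = (rng.random((64, 64)) < 0.3).astype(float)
s2 = two_point_correlation(micro)
vf = micro.mean()
print(abs(s2[0, 0] - vf) < 1e-12)  # S2(0) = volume fraction: True
```

The check at the end uses the standard sanity property of the estimator: the zero-lag value must equal the volume fraction of the phase.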
A perturbation-based substep method for coupled depletion Monte-Carlo codes
Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano
2017-01-01
Highlights: • The GPT method allows calculation of the sensitivity coefficients to any perturbation. • The full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • Time-dependent XS are obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time point. In reality, however, both quantities change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time-dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
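A toy depletion sketch of why updating cross sections on substeps reduces the time-discretization error; this is not the Serpent implementation, and the linear concentration feedback and all constants are invented for illustration (standing in for the sensitivity coefficients the GPT method would provide).

```python
# One nuclide depletes as dN/dt = -sigma(N) * phi * N, where sigma
# responds linearly to the concentration change (invented feedback).
def deplete(n0, t_end, steps):
    sigma0, phi, sens = 2.0, 1.0, 0.5   # illustrative constants
    n, dt = n0, t_end / steps
    for _ in range(steps):
        sigma = sigma0 * (1.0 + sens * (n - n0) / n0)  # XS updated per substep
        n -= sigma * phi * n * dt
    return n

ref = deplete(1.0, 0.2, 100000)   # fine-step reference
one_step = deplete(1.0, 0.2, 1)   # beginning-of-step XS only
substeps = deplete(1.0, 0.2, 10)  # 10 substeps with XS updates
print(abs(substeps - ref) < abs(one_step - ref))  # True
```

The substep solution tracks the reference far more closely than the single step that freezes the cross section at its beginning-of-step value, which is the effect the abstract reports.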
Reznik, L.
1994-01-01
Various computer codes employed at Israel Electricity Company for preliminary reactor design analysis and fuel cycle scoping calculations have often been subject to program source modifications. Although most changes were due to computer or operating system compatibility problems, a number of significant modifications were due to model improvements and enhancements of algorithm efficiency and accuracy. With growing acceptance of software quality assurance requirements and methods, a program of extensive testing of modified software has been adopted within the regular maintenance activities. In this work, a survey has been performed of various software testing methods, which belong mainly to the two major categories of implementation-based ('white box') and specification-based ('black box') testing. The results of this survey exhibit a clear preference for specification-based testing. In particular, the equivalence class partitioning method and the boundary value method have been selected as functional methods especially suitable for testing reactor analysis codes. A separate study of software quality assurance methods and techniques has been performed in this work with the objective of establishing appropriate pre-test software specification methods. Two methods of software analysis and specification have been selected as the most suitable for this purpose: the method of data flow diagrams has been shown to be particularly valuable for functional/procedural software specification, while entity-relationship diagrams have proved efficient for specifying the software data/information domain. The feasibility of these two methods has been analyzed in particular for software uncertainty analysis and overall code accuracy estimation. (author). 14 refs
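The two selected functional methods can be illustrated with a minimal Python sketch; the validation routine and its 0-100% band are invented for the example, not taken from the surveyed codes.

```python
def accept_setpoint(percent):
    """Hypothetical validation routine: accepts a power setpoint
    only within the permitted 0-100% band."""
    return 0 <= percent <= 100

# Equivalence class partitioning: one representative per input class.
assert accept_setpoint(50)        # valid class
assert not accept_setpoint(-10)   # invalid class: below range
assert not accept_setpoint(150)   # invalid class: above range

# Boundary value analysis: probe each edge and its neighbours.
for value, expected in [(-1, False), (0, True), (1, True),
                        (99, True), (100, True), (101, False)]:
    assert accept_setpoint(value) is expected

print("all specification-based tests passed")
```

Both techniques derive test cases from the specification alone, which is why they suit 'black box' testing of reactor analysis codes whose internals may change between versions.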
The sensitivity analysis by adjoint method for the uncertainty evaluation of the CATHARE-2 code
Barre, F.; de Crecy, A.; Perret, C. [French Atomic Energy Commission (CEA), Grenoble (France)
1995-09-01
This paper presents the application of the DASM (Discrete Adjoint Sensitivity Method) to the CATHARE 2 thermal-hydraulics code. In a first part, the basis of this method is presented. The mathematical model of the CATHARE 2 code is based on the two-fluid six-equation model. It is discretized using implicit time discretization, and it is relatively easy to implement this method in the code. The DASM is the ASM directly applied to the algebraic system of the discretized code equations, which has been demonstrated to be the only solution of the mathematical model. The ASM is an integral part of the new version 1.4 of CATHARE. It acts as a post-processing module. It has been qualified by comparison with the 'brute force' technique. In a second part, an application of the DASM in CATHARE 2 is presented. It deals with the determination of the uncertainties of the constitutive relationships, which is a compulsory step for calculating the final uncertainty of a given response. First, the general principles of the method are explained: the constitutive relationships are represented by several parameters, and the aim is to calculate the variance-covariance matrix of these parameters. The experimental results of the separate effect tests used to establish the correlation are considered. The variances of the corresponding results calculated by CATHARE are estimated by comparing experiment and calculation. A DASM calculation is carried out to provide the derivatives of the responses. The final covariance matrix is obtained by combination of the variances of the responses and those derivatives. Then, the application of this method to a simple case, the blowdown Canon experiment, is presented. This application has been successfully performed.
Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL
Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.
1982-01-01
The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine-group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections.
MAXWELL3, 3-D FEM Electromagnetism
Grant, J.B.
2001-01-01
1 - Description of program or function: MAXWELL3 is a linear, time domain, finite element code designed for simulation of electromagnetic fields interacting with three-dimensional objects. The simulation region is discretized into 6-sided, 8-noded elements which need not form a logically regular grid. Scatterers may be perfectly conducting or dielectric. Restart capability and a Mur-type radiating boundary are included. MAXWELL3 can be run in a two-dimensional mode or on infinitesimally thin geometries. The output of time histories on surfaces, or shells, in addition to volumes, is allowed. Two post-processors are included - HIST2XY, which splits the MAXWELL3 history file into simple xy data files, and FFTABS, which performs fast Fourier transformations on the xy data. 2 - Method of solution: The numerical method requires that the model be discretized with a mesh generator. MAXWELL3 then uses the mesh and computes the time domain electric and magnetic fields by integrating Maxwell's divergence-free curl equations over time. The output from MAXWELL3 can then be used with a post-processor to get the desired information in graphical form. The explicit time integration is done with a leap-frog technique that alternates evaluating the electric and magnetic fields at half time steps. This allows for centered time differencing accurate to second order. The algorithm is naturally robust and requires no parameters. 3 - Restrictions on the complexity of the problem: MAXWELL3 has no mesh generation capabilities. Anisotropic, nonlinear, and magnetic materials cannot be modeled. Material interfaces only account for dielectric changes and neglect any surface charges that would be present at the surface of a partially conducting material. The radiation boundary algorithm is only accurate for normally incident fields and becomes less accurate as the angle of incidence increases. Thus, only models using scattered fields should use the radiation boundary. This limits MAXWELL3
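The leap-frog time integration described above can be sketched in one dimension; this is a generic Yee-style scheme in normalized units (c = dx = 1), not MAXWELL3's actual three-dimensional finite element implementation.

```python
import numpy as np

# 1D vacuum leap-frog sketch: E and H live on staggered grids and are
# advanced at alternating half time steps, giving second-order centred
# time differencing. dt = 0.5 satisfies the CFL stability limit dt <= dx/c.
n, steps, dt = 200, 150, 0.5
ez = np.zeros(n)        # electric field at integer grid points
hy = np.zeros(n - 1)    # magnetic field at half grid points

for t in range(steps):
    hy += dt * (ez[1:] - ez[:-1])              # H update (half step)
    ez[1:-1] += dt * (hy[1:] - hy[:-1])        # E update (full step)
    ez[100] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source

print(np.isfinite(ez).all())  # scheme stays bounded/stable: True
```

Because each field update uses the other field evaluated half a step away in time, the differencing is centred without storing extra time levels, which is what makes the leap-frog technique both second-order accurate and memory-lean.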
A novel quantum LSB-based steganography method using the Gray code for colored quantum images
Heidari, Shahrokh; Farzadnia, Ehsan
2017-10-01
As one of the prevalent data-hiding techniques, steganography is defined as the act of imperceptibly concealing secret information in a cover multimedia encompassing text, image, video and audio, so that interaction between the sender and the receiver can take place without anybody except the receiver being able to recover the secret data. In this approach, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. The method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously, according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one found in the literature.
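A classical analogue of the Gray-code LSB idea can be sketched as follows; the embedding rule here (pick the 3-bit value whose Gray code ends in the secret bits, closest to the original LSBs) is an assumption for illustration, not the paper's quantum circuits or reference tables.

```python
def to_gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed(pixel, two_bits):
    """Hide two bits in a pixel's 3 LSBs: among the eight 3-bit values,
    choose the one whose Gray code ends in those bits, minimizing the
    change to the original LSBs."""
    lsb = pixel & 0b111
    candidates = [v for v in range(8) if to_gray(v) & 0b11 == two_bits]
    best = min(candidates, key=lambda v: abs(v - lsb))
    return (pixel & ~0b111) | best

def extract(pixel):
    return to_gray(pixel & 0b111) & 0b11

p = embed(200, 0b10)
print(extract(p))  # recovers the hidden bits: 2
```

Each two-bit pattern appears twice among the Gray codes of the eight 3-bit values, so a matching candidate always exists and the pixel value never moves by more than a few counts.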
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy can obtain optimal performance in AWGN channels, and BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
Ahnert, C.; Aragones, J.M.; Corella, M.R.; Esteban, A.; Martinez-Val, J.M.; Minguez, E.; Perlado, J.M.; Pena, J.; Matias, E. de; Llorente, A.; Navascues, J.; Serrano, J.
1976-01-01
Description of methods and computer codes for fuel management and nuclear design of reload cycles in PWRs, developed at JEN by adaptation of previous codes (LEOPARD, NUTRIX, CITATION, FUELCOST) and implementation of original codes (TEMP, SOTHIS, CICLON, NUDO, MELON, ROLLO, LIBRA, PENELOPE), and their application to the project of management and design of reload cycles of a 510 MWt PWR, including comparison with results of experimental operation and other calculations for validation of the methods. (author)
Navardi, Mohammad Javad; Babaghorbani, Behnaz; Ketabi, Abbas
2014-01-01
Highlights: • This paper proposes a new method to optimize a Switched Reluctance Motor (SRM). • A combination of SOA and FEM analysis is employed to solve the SRM design optimization, with GA used for comparison. • The results show that the optimized SRM obtains higher average torque and higher efficiency. - Abstract: In this paper, performance optimization of a Switched Reluctance Motor (SRM) was carried out using the Seeker Optimization Algorithm (SOA). The algorithm sought the maximum torque value at a minimum mass of the entire construction by changing the geometric parameters. The optimization process was carried out using a combination of the Seeker Optimization Algorithm and the Finite Element Method (FEM). The fitness value was calculated by FEM analysis using COMSOL 3.4, and the SOA was realized in MATLAB. The proposed method has been applied to a case study and compared with a Genetic Algorithm (GA). The results show that the motor optimized using SOA had a higher torque value and efficiency with lower mass and torque ripple, demonstrating the validity of this methodology for SRM design.
Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.
1988-07-01
This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs
How could the replica method improve accuracy of performance assessment of channel coding?
Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp
2009-12-01
We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
Methods and codes for assessing the off-site Consequences of nuclear accidents. Volume 2
Kelly, G.N.; Luykx, F.
1991-01-01
The Commission of the European Communities, within the framework of its 1980-84 radiation protection research programme, initiated a two-year project in 1983 entitled methods for assessing the radiological impact of accidents (Maria). This project was continued in a substantially enlarged form within the 1985-89 research programme. The main objectives of the project were, firstly, to develop a new probabilistic accident consequence code that was modular, incorporated the best features of those codes already in use, could be readily modified to take account of new data and model developments and would be broadly applicable within the EC; secondly, to acquire a better understanding of the limitations of current models and to develop more rigorous approaches where necessary; and, thirdly, to quantify the uncertainties associated with the model predictions. This research led to the development of the accident consequence code Cosyma (COde System from MAria), which will be made generally available later in 1990. The numerous and diverse studies that have been undertaken in support of this development are summarized in this paper, together with indications of where further effort might be most profitably directed. Consideration is also given to related research directed towards the development of real-time decision support systems for use in off-site emergency management
The application of NISA II FEM package in seismic qualification of small class IE electric motors
Fancev, T.; Saban, I.; Grgic, D.
1995-01-01
According to IEEE standards 323/1974 and 344/1975, seismic qualification of class IE equipment is an appropriate combination of test and analysis methods. Complex equipment and assemblies are usually qualified through seismic testing. Analysis is recommended for simple equipment that can be easily modeled to correctly predict its response. This article deals with the application of the NISA II FEM package to 3D FE modeling and mode shape calculations of small-power, low-voltage electric motors. (author)
Discrete conservation of nonnegativity for elliptic problems solved by the hp-FEM
Šolín, P.; Vejchodský, Tomáš; Araiza, R.
2007-01-01
Roč. 76, 1-3 (2007), s. 205-210 ISSN 0378-4754 R&D Projects: GA ČR GP201/04/P021 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete nonnegativity conservation * discrete Green's function * elliptic problems * hp-FEM * higher-order finite element methods * Poisson equation * numerical experiments Subject RIV: BA - General Mathematics Impact factor: 0.738, year: 2007
Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo
2016-09-01
Currently, most video resources online are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard, High Efficiency Video Coding (HEVC). To improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MV) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, intraprediction for a region in HEVC that is interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for a combined PU in HEVC is proposed according to the areas and distances between the center of each macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation in HEVC coding. The simulation results show that the proposed algorithm achieves significant coding time reduction with only a small rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.
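The area- and distance-weighted MV interpolation could look roughly like the sketch below; the weighting formula `area / (1 + distance)` is an assumption for illustration, since the abstract does not give the exact expression.

```python
import math

def interpolate_pu_mv(pu_center, macroblocks):
    """Hypothetical sketch: combine the H.264 macroblock motion vectors
    covering a merged HEVC PU, weighting each MV by its overlap area and
    (inversely) by the distance from its centre to the PU centre.
    `macroblocks` holds (mv, area, center) tuples."""
    wx = wy = wsum = 0.0
    for (mvx, mvy), area, center in macroblocks:
        dist = math.dist(pu_center, center)
        w = area / (1.0 + dist)   # assumed weighting form
        wx += w * mvx
        wy += w * mvy
        wsum += w
    return (wx / wsum, wy / wsum)

# Two 16x16 macroblocks merged into one 32x16 PU
mbs = [((4.0, 0.0), 256, (8, 8)), ((8.0, 0.0), 256, (24, 8))]
mv = interpolate_pu_mv((16, 8), mbs)
print(mv)  # symmetric layout, so roughly the plain average (6.0, 0.0)
```

The interpolated MV then seeds the HEVC motion search, narrowing its range, which is where the reported coding time reduction comes from.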
Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code
Albuquerque, M.A.G.; David, M.G.; Almeida, C.E. de; Magalhaes, L.A.G.; Braz, D.
2015-01-01
Breast cancer is the most common type of cancer among women. The main strategy to increase the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to the breast tissue. To evaluate this damage, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method (PENELOPE code). The average energies of the spectra at depth and the half-value layer of the mammography output spectrum were determined. (author)
Methods and codes for neutronic calculations of the MARIA research reactor
Andrzejewski, K.; Kulikowska, T.; Bretscher, M.M.; Hanan, N.A.; Matos, J.E.
1998-01-01
The core of the MARIA high flux multipurpose research reactor is highly heterogeneous. It consists of beryllium blocks arranged in a 6x8 matrix, tubular fuel assemblies, control rods and irradiation channels. The reflector is also heterogeneous and consists of graphite blocks clad with aluminium. Its structure is perturbed by the experimental beam tubes. This paper presents the methods and codes used to calculate the MARIA reactor neutronics characteristics and the experience gained thus far at IAE and ANL. At ANL, the methods for MARIA calculations were developed in connection with the RERTR program. At IAE, a package of programs was developed to help the operator optimize fuel utilization. (author)
Methods for Using Small Non-Coding RNAs to Improve Recombinant Protein Expression in Mammalian Cells
Sarah Inwood
2018-01-01
The ability to produce recombinant proteins by utilizing different "cell factories" revolutionized the biotherapeutic and pharmaceutical industry. Chinese hamster ovary (CHO) cells are the dominant industrial producer, especially for antibodies. Human embryonic kidney (HEK) cells, while not as widely used as CHO cells, are used where CHO cells are unable to meet the needs for expression, such as for growth factors. Therefore, improving recombinant protein expression from mammalian cells is a priority, and continuing effort is being devoted to this topic. Non-coding RNAs are RNA segments that are not translated into protein and often have a regulatory role. Since their discovery, major progress has been made towards understanding their functions. Non-coding RNA has been investigated extensively in relation to disease, especially cancer, and recently it has also been used as a method for engineering cells to improve their protein expression capability. In this review, we provide information about methods used to identify non-coding RNAs with the potential of improving recombinant protein expression in mammalian cell lines.
Bécares, V.; Pérez-Martín, S.; Vázquez-Antolín, M.; Villamarín, D.; Martín-Fuertes, F.; González-Romero, E.M.; Merino, I.
2014-01-01
Highlights: • Review of several Monte Carlo effective delayed neutron fraction calculation methods. • These methods have been implemented with the Monte Carlo code MCNPX. • They have been benchmarked against some critical and subcritical systems. • Several nuclear data libraries have been used. - Abstract: The calculation of the effective delayed neutron fraction, β_eff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for β_eff without the need of explicitly determining the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques, we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of β_eff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of nuclear data uncertainty on the calculated value of β_eff.
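One common variant of the k-eigenvalue technique estimates β_eff from two eigenvalue calculations, one transporting all neutrons and one transporting prompt neutrons only; the numbers below are purely illustrative, as real values come from paired MC runs.

```python
# beta_eff ≈ 1 - k_p / k, where k is the eigenvalue with all neutrons
# and k_p the eigenvalue with delayed neutron production switched off.
def beta_eff_prompt_ratio(k_total, k_prompt):
    return 1.0 - k_prompt / k_total

beta = beta_eff_prompt_ratio(k_total=1.00000, k_prompt=0.99325)
print(f"{beta:.5f}")  # 0.00675, i.e. about 675 pcm
```

The appeal of this variant, as the abstract notes for the family as a whole, is that no explicit adjoint flux is needed: both eigenvalues come from standard forward calculations.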
Application Of WIMS Code To Calculation Kartini Reactor Parameters By Pin-Cell And Cluster Method
Sumarsono, Bambang; Tjiptono, T.W.
1996-01-01
An analysis of UZrH fuel element parameters of the Kartini reactor with the WIMS code has been performed. The analysis is done by the pin-cell and cluster methods: the pin-cell method as a function of percent burn-up with an 8-group, 3-region model, and the cluster method with an 8-group, 12-region model. The calculations yielded k∞ = 1.3687 by the pin-cell method and k∞ = 1.3162 by the cluster method, a deviation of 3.83%. The pin-cell analysis as a function of percent burn-up shows that above 59.50% burn-up the multiplication factor falls below one (k∞ < 1), meaning that the fuel element reactivity is negative
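The quoted deviation between the two methods follows from simple arithmetic on the reported multiplication factors, taking the pin-cell value as the reference (an assumption, since the abstract does not state the normalization).

```python
# Relative deviation between the pin-cell and cluster multiplication factors
k_pin, k_cluster = 1.3687, 1.3162
deviation = (k_pin - k_cluster) / k_pin * 100
print(f"{deviation:.2f}%")  # reproduces the quoted ~3.83% to within rounding
```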
Charge-conserving FEM-PIC schemes on general grids
Campos Pinto, M.; Jund, S.; Salmon, S.; Sonnendruecker, E.
2014-01-01
Particle-In-Cell (PIC) solvers are a major tool for the understanding of the complex behavior of a plasma or a particle beam in many situations. An important issue for electromagnetic PIC solvers, where the fields are computed using Maxwell's equations, is the problem of discrete charge conservation. In this article, we aim at proposing a general mathematical formulation for charge-conserving finite-element Maxwell solvers coupled with particle schemes. In particular, we identify the finite-element continuity equations that must be satisfied by the discrete current sources for several classes of time-domain Vlasov-Maxwell simulations to preserve the Gauss law at each time step, and propose a generic algorithm for computing such consistent sources. Since our results cover a wide range of schemes (namely curl-conforming finite element methods of arbitrary degree, general meshes in two or three dimensions, several classes of time discretization schemes, particles with arbitrary shape factors and piecewise polynomial trajectories of arbitrary degree), we believe that they provide a useful roadmap in the design of high-order charge-conserving FEM-PIC numerical schemes. (authors)
Adaptive stochastic Galerkin FEM with hierarchical tensor representations
Eigel, Martin
2016-01-08
PDE with stochastic data usually lead to very high-dimensional algebraic problems which easily become unfeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive stochastic Galerkin FEM based on a residual a posteriori error estimator was presented and the convergence of the adaptive algorithm was shown. While this approach leads to a drastic reduction of the complexity of the problem due to the iterative discovery of the sparsity of the solution, the problem size and structure is still rather limited. To allow for larger and more general problems, we exploit the tensor structure of the parametric problem by representing operator and solution iterates in the tensor train (TT) format. The (successive) compression carried out with these representations can be seen as a generalisation of some other model reduction techniques, e.g. the reduced basis method. We show that this approach facilitates the efficient computation of different error indicators related to the computational mesh, the active polynomial chaos index set, and the TT rank. In particular, the curse of dimension is avoided.
Lee, A.G.; Wilkin, G.B.
1996-03-01
During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs
AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA
Björn Nutti
2014-04-01
The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces memory-access bottlenecks by grouping the necessary data per node pair, in contrast to the classical per-element layout. This strategy avoids memory access patterns that are ill-suited to the GPU memory architecture. Furthermore, the implementation takes advantage of the underlying sparse-block-matrix structure, and it is demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformation behavior under large local rotations, the objects are modeled with a simplified co-rotational FEM formulation.
Stress and fatigue analyses of primary circuit components of NPP using FEM
Gal, P.
2015-01-01
This poster is a short illustration of the numerical assessment of the VVER-440 reactor pressure vessel (RPV) main flange. The RPV main flange consists of a free flange, a pressure ring, flange bolts, nuts and a nickel gasket. Operating temperature transients, such as the heat-up regime, can lead to serious tension in the bolts, so the temperature fields have to be calculated. The fatigue assessment of the main flange bolt requires determination of the stress concentration coefficient in the bolt thread; stress concentrations can be computed by FEM or taken from norms (PNAEG). The largest fatigue usage factor occurs in the first thread connection between bolt and nut. The finite element method (FEM) is used to calculate the stress and temperature distributions in the reactor flange. The reassessment was performed according to the Czech normative documents NTD-A.S.I. and VERLIFE
FEM analysis of impact of external objects to pipelines
Gracie, Robert; Konuk, Ibrahim [Geological Survey of Canada, Ottawa, ON (Canada)]. E-mail: ikonuk@NRCan.gc.ca; Fredj, Abdelfettah [BMT Fleet Technology Limited, Ottawa, ON (Canada)
2003-07-01
One of the most common hazards to pipelines is the impact of external objects. Earth-moving machinery, farm equipment or bullets can dent or fail land pipelines; external objects such as anchors, fishing gear and ice can damage offshore pipelines. This paper develops an FEM model to simulate the impact process and uses it to investigate the influence of the geometry and velocity of the impacting object, as well as of the pipe diameter, wall thickness, concrete thickness and internal pressure. The FEM model is developed with the LS-DYNA explicit FEM software, using shell and solid elements, and allows damage and removal of the concrete and corrosion-coating elements during impact. Parametric studies are presented relating the dent size to pipe diameter, wall thickness, concrete thickness, internal pipe pressure, and impacting object geometry. The primary objective of this paper is to develop and present the FEM model, which can be applied to both offshore and land pipeline problems. Some examples illustrate how the model can be applied to real-life problems; a future paper will present more detailed parametric studies. (author)
A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance
Bell, E. V.; Henry, A.; Pivo, G.
2017-12-01
What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policy may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective, affiliated respondents and generating a belief distance matrix of coordinating network participants. Similarly, we generate a distance matrix of these actors based on the frequency of their
Investigation of load transfer efficiency in jointed plain concrete pavements (JPCP using FEM
Vahid Sadeghi
2018-05-01
Owing to heavy traffic loads, rigid pavements encounter various types of failures at transverse joints during their lifetime. The three-dimensional finite-element method (3D-FEM) was used to assess the structural response of jointed concrete pavement under moving tandem-axle loads. In this study, the 3D FEM model was verified against an existing numerical model and field measurements of a concrete slab traversed by a moving truck. The paper also investigated the effects of several parameters (material properties, slab geometry, load magnitude, and the frictional status of the slab and base layer) on the load transfer efficiency (LTE) of the transverse joints. A further study investigated slab performance without dowel bars, which occurs when parts of the pavement need to be repaired using precast slabs; the aggregate interlock between the new slab and the existing slab is simulated by a frictional interface. In the 3D FEM model, load transfer efficiency improved with increasing elastic moduli of the concrete slab and base layer, or with increasing slab thickness; this decreases joint deflections and reduces damage at pavement joints. Removing the dowel bars adversely affected load transfer. Keywords: Concrete pavement, Load transfer, Finite-element method, Dowel bar, Structural behavior
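The LTE quantity the abstract evaluates is commonly defined from joint deflections as the ratio of unloaded-slab to loaded-slab deflection; a minimal sketch of that definition (the deflection values are hypothetical, and the paper may use a variant definition):

```python
def load_transfer_efficiency(defl_unloaded_mm: float, defl_loaded_mm: float) -> float:
    """Deflection-based LTE (%) across a transverse joint: ratio of the
    unloaded-slab edge deflection to the loaded-slab edge deflection."""
    if defl_loaded_mm <= 0:
        raise ValueError("loaded-side deflection must be positive")
    return 100.0 * defl_unloaded_mm / defl_loaded_mm

# A doweled joint transferring load well: the two deflections are close.
print(load_transfer_efficiency(0.45, 0.50))  # 90% -> good transfer
# A deteriorated joint: the unloaded side barely moves.
print(load_transfer_efficiency(0.10, 0.50))  # 20% -> poor transfer
```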
Stenvall, A; Tarhasaari, T
2010-01-01
Many people these days employ only commercial finite element method (FEM) software when solving for the hysteresis losses of superconductors. Thus, a modeller's expertise lies in using the black boxes of the software efficiently. This has led to a relatively superficial examination of the different formulations, with the discussion staying mainly on the user interfaces of these programs. Also, if we remain at the mercy of commercial software producers, we end up with less and less knowledge of the details of the solvers; it then becomes more and more difficult to solve conceptually new kinds of problems. This may prevent us from finding new methods to solve old problems more efficiently, or from finding a solution to a problem earlier considered almost impossible. In our earlier research, we presented the background of a co-tree gauged T-ψ FEM solver for computing the hysteresis losses of superconductors. In this paper, we examine the feasibility of FEM and the eddy current vector potential formulation for the same problem.
Non-linear heat transfer computer code by finite element method
Nagato, Kotaro; Takikawa, Noboru
1977-01-01
The computer code THETA-2D, which calculates temperature distributions by the two-dimensional finite element method, was developed for the analysis of heat transfer in high-temperature structures. Numerical experiments were performed on the numerical integration of the differential equation of heat conduction: the Runge-Kutta method produced an unstable solution, while a stable solution was obtained with the β method with β = 0.35. In high-temperature structures the radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiation was derived first, and the radiative term was then added after discretization by the variational method. Five model calculations were carried out with the code. In the calculation of steady heat conduction with an estimated initial temperature of 1,000 degree C, a reasonable heat balance was obtained. In the steady-unsteady temperature calculation, the time integration by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution in a structure whose heat conductivity depends on temperature was calculated. A calculation was performed for a model with an internal void, and finally a model calculation for a complex system was carried out. (Kato, T.)
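The β method mentioned here is a weighted implicit/explicit time discretization (β = 0 explicit, β = 1 fully implicit). A minimal one-dimensional sketch of one such time step, assuming a uniform rod with fixed-temperature ends; this is illustrative only, not the THETA-2D implementation:

```python
import numpy as np

def beta_step(T, beta, dt, alpha, dx):
    """One step of the beta (theta) method for 1-D heat conduction
    T_t = alpha * T_xx with fixed-temperature (Dirichlet) ends.
    For beta < 1/2 the scheme is only conditionally stable: the mesh
    ratio r = alpha*dt/dx**2 must stay below 1 / (2 * (1 - 2*beta))."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))          # second-difference operator times r
    for i in range(1, n - 1):     # interior nodes only; ends stay fixed
        A[i, i - 1], A[i, i], A[i, i + 1] = r, -2.0 * r, r
    I = np.eye(n)
    # (I - beta*A) T_new = (I + (1 - beta)*A) T_old
    return np.linalg.solve(I - beta * A, (I + (1.0 - beta) * A) @ T)
```

With β = 0.35 the stability bound on r is 1/(2·0.3) ≈ 1.67, so a moderate time step damps an initial hot spot smoothly, consistent with the stable behavior the abstract reports for that β value.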
Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron
LIN Bingxian
2016-12-01
Discrete Global Grids (DGG) provide a fundamental environment for the organization and management of global-scale spatial data. A DGG's encoding scheme, which avoids coordinate transformation between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGG, the Diamond Discrete Global Grid (DDGG) based on the icosahedron benefits the integration and expression of spherical spatial data thanks to its better geometric properties. However, its structure is more complicated than that of the DDGG on the octahedron, because the edges of its initial diamonds do not align with meridians and parallels; this poses new challenges for constructing a hierarchical encoding system and a mapping relationship with geographic coordinates. On this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographical coordinates. The results indicate that this encoding system expresses scale and location information implicitly, carries the similarity between the DDGG and a planar grid into practice, and balances the efficiency and accuracy of conversion between codes and geographical coordinates, in order to support the modeling, integrated management and spatial analysis of global massive spatial data.
Thomine O.
2013-12-01
The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computing impact of crashes. These files require a very large memory space, particularly for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size; indeed, the transfer time from RAM to disk depends linearly on the file size. A non-synchronized write procedure is therefore crucial. A new GYSELA module has been developed: this asynchronous procedure allows frequent writing of the restart files while preventing a severe slow-down due to the limited writing bandwidth. The method has been extended to generate a checksum control of the restart files and to automatically rerun the code in case of a crash from any cause.
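The combination of asynchronous checkpoint writing and checksum control can be sketched as follows. GYSELA itself is an HPC code and this is only a minimal single-node illustration of the idea: a background thread writes the restart file and a checksum sidecar, and the checksum is verified before restarting. All names here are hypothetical:

```python
import hashlib
import os
import threading

def write_restart(path, payload: bytes):
    """Write a restart file plus a SHA-256 checksum sidecar. The write
    runs in a background thread, so the main loop (the 'simulation')
    is not blocked by the limited storage bandwidth."""
    def _worker():
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(payload)
        os.replace(tmp, path)  # atomic rename: no torn restart files
        digest = hashlib.sha256(payload).hexdigest()
        with open(path + ".sha256", "w") as f:
            f.write(digest)
    t = threading.Thread(target=_worker)
    t.start()
    return t  # join() before exiting or writing the next checkpoint

def verify_restart(path) -> bool:
    """Recompute the checksum before rerunning from a checkpoint."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    return hashlib.sha256(data).hexdigest() == expected
```

The write-to-temporary-then-rename step matters: a crash during the write leaves the previous valid checkpoint untouched, which is exactly what an automatic rerun needs.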
Kida, Takashi; Umeda, Miki; Sugikawa, Susumu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2003-03-01
MOX dissolution using the silver-mediated electrochemical method will be employed for the preparation of plutonium nitrate solution in the criticality safety experiments in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF). A simulation code for the MOX dissolution has been developed for operating support. The present report describes the outline of the simulation code, a comparison with experimental data and a parameter study on the MOX dissolution. The code is based on Zundelevich's model for PuO{sub 2} dissolution using Ag(II); the influence of nitrous acid on the material balance of Ag(II) is taken into consideration, and the surface area of the MOX powder is evaluated from the particle size distribution. The comparison with experimental data was carried out to confirm the validity of this model, and it was confirmed that the behavior of MOX dissolution could be adequately simulated using an appropriate MOX dissolution rate constant. The parameter studies showed that MOX particle size is the major factor governing the dissolution rate. (author)
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
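The core of the probability table method is simple: at an energy in the URR, the cross section is not a single smooth value but is sampled from a table of probability bands. A minimal sketch (the band data are hypothetical, and this is not RACER's data layout), together with the dilute-average value the tables replace:

```python
import bisect
import random

def sample_band(band_probs, band_xs, u=None):
    """Sample a cross-section value from one probability table:
    band_probs are the band probabilities (summing to 1), band_xs the
    representative cross section of each band."""
    if u is None:
        u = random.random()
    cum, total = [], 0.0
    for p in band_probs:
        total += p
        cum.append(total)
    i = bisect.bisect_left(cum, u)  # first band whose cumulative prob >= u
    return band_xs[min(i, len(band_xs) - 1)]

def dilute_average(band_probs, band_xs):
    """The smooth representation the tables replace: the
    probability-weighted mean cross section."""
    return sum(p * x for p, x in zip(band_probs, band_xs))
```

Repeated sampling reproduces the self-shielding statistics of the unresolved resonances, which the single dilute-average value cannot.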
CHF predictor derived from a 3D thermal-hydraulic code and an advanced statistical method
Banner, D.; Aubry, S.
2004-01-01
A rod bundle CHF predictor has been determined by using a 3D code (THYC) to compute local thermal-hydraulic conditions at the boiling crisis location. These local parameters have been correlated to the critical heat flux by using an advanced statistical method based on spline functions. The main characteristics of the predictor are presented in conjunction with a detailed analysis of predictions (P/M ratio) in order to prove that the usual safety methodology can be applied with such a predictor. A thermal-hydraulic design criterion is obtained (1.13) and the predictor is compared with the WRB-1 correlation. (author)
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
Suzuki, Tadakazu
1979-11-01
Thirty-two programs for linear and nonlinear optimization problems, with or without constraints, have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate of the built-in optimization methods. (author)
Crack growth analysis in a weld-heat-affected zone using S-version FEM
Kikuchi, Masanori; Wada, Yoshitaka; Shimizu, Yuto; Li, Yulong
2012-01-01
The objective of this study is the prediction of crack propagation under thermal and residual stress fields using the S-version FEM (S-FEM). With the S-FEM technique only the local mesh needs to be re-meshed, which makes it easy to simulate crack growth. Combined with an auto-meshing technique, the local mesh is re-meshed automatically and a curved crack path is modeled easily. The virtual crack closure integral method (VCCM) is used to evaluate the energy release rate at the crack tip. For crack growth analyses, the growth rate and growth direction are determined using criteria for mixed-mode loading. To confirm the validity of the analysis, comparisons with previously reported analyses were made and good agreement was obtained. In this study, residual stress data were provided by the Japan Atomic Energy Agency (JAEA), based on their numerical simulation. Stress corrosion cracking (SCC) growth analyses in a pipe are conducted in two-dimensional and three-dimensional fields. Two cases are assumed: an axisymmetric distribution of residual stress in the pipe wall and a non-axisymmetric one. The effects of the residual stress distribution patterns on SCC growth are evaluated and discussed.
Liu, Ying; Song, Huadong; Zhu, Panpan; Lu, Hao; Tang, Qi
2017-08-01
The elasticity of erythrocytes is an important criterion for evaluating the quality of blood. This paper presents novel research on erythrocyte elasticity during blood storage, using optical tweezers and the finite element method (FEM). Erythrocytes with different in vitro times were stretched linearly by the trapping force of optical tweezers, and the time-dependent elasticity of the cells was investigated. The experimental results indicate that the membrane shear moduli of the erythrocytes increased with in vitro time, i.e. their elasticity decreased. Simultaneously, an erythrocyte shell model with two parameters (membrane thickness h and membrane shear modulus H) was built to simulate the linear stretching of erythrocytes by the FEM, and the simulations conform to the experimental results. The simulations also showed that the erythrocyte membrane thickness decreases over time. The analysis suggests that some membrane proteins and part of the lipid bilayer decompose during in vitro preservation of blood, resulting in a thinner membrane, weaker bending resistance, and loss of membrane elasticity. This study implies that the FEM can be employed to investigate changes in the internal mechanical properties of erythrocytes in different environments, and can serve as a guideline for studying the mechanical state of erythrocytes affected by different diseases.
A method of non-contact reading code based on computer vision
Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan
2018-03-01
To guarantee the security of computer information exchange between internal and external networks (trusted and untrusted), a non-contact code-reading method based on machine vision is proposed, different from existing physical network-isolation methods. Using a computer monitor, a camera and other equipment, the information to be exchanged is processed as follows: encode it as an image, generate the standard image, display it and capture the actual image, calculate the homography matrix, correct the image distortion using the calibration, and decode. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data, with a data transfer speed of 24 kb/s. The experiments show that the algorithm offers high security, high speed and little information loss, can meet the daily needs of confidentiality departments to update data effectively and reliably, and solves the difficulty of exchanging computer information between secret and non-secret networks, with distinctive originality, practicality, and research value.
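The homography step in the pipeline above maps the displayed calibration image to the captured camera image. A minimal Direct Linear Transform (DLT) sketch of that estimation (the point correspondences here are hypothetical; a real system would use many detected calibration points and a robust estimator):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H mapping
    src -> dst (in homogeneous coordinates) from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence gives two linear constraints on H's entries.
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)      # null-space vector, defined up to scale
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2-D point through H (projective division included)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Once H is known, warping the captured image through its inverse undoes the perspective distortion so the code image can be decoded.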
Yang, Hao; Xu, Xiangyang; Neumann, Ingo
2014-11-19
Terrestrial laser scanning technology (TLS) is a new technique for quickly getting three-dimensional information. In this paper we research the health assessment of concrete structures with a Finite Element Method (FEM) model based on TLS. The goal focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model which can be widely used for the detection and assessment of bridges, buildings, subways and other objects. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is determined to be acceptable with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of results predicted by the FEM model.
Review of solution approach, methods, and recent results of the TRAC-PF1 system code
Mahaffy, J.H.; Liles, D.R.; Knight, T.D.
1983-01-01
The current version of the Transient Reactor Analysis Code (TRAC-PF1) was created to improve on the capabilities of its predecessor (TRAC-PD2) for analyzing slow reactor transients such as small-break loss-of-coolant accidents. TRAC-PF1 continues to use a semi-implicit finite-difference method for modeling three-dimensional flows in the reactor vessel. However, it contains a new stability-enhancing two-step (SETS) finite-difference technique for one-dimensional flow calculations. This method is not restricted by a material Courant stability condition, allowing much larger time-step sizes during slow transients than a semi-implicit method would. These methods have been successfully applied to the analysis of a variety of experiments and hypothetical plant transients covering the full range of two-phase flow regimes
Vasile Cojocaru
2016-12-01
Several methods can be used in FEM studies to apply the loads on a plain bearing. The paper presents a comparative analysis of the maximum stress obtained for three loading scenarios: a resultant force applied to the shaft-bearing assembly, a variable pressure with sinusoidal distribution applied on the bearing surface, and a variable pressure with parabolic distribution applied on the bearing surface.
Lof, J.
2001-01-01
The use of the finite element method (FEM) is becoming increasingly important in the understanding of the processes that occur during aluminium extrusion. The bearing area is one of the most difficult areas to model in a numerical simulation. To investigate the phenomena that occur in the bearing,
An imaging method of wavefront coding system based on phase plate rotation
Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2018-01-01
Wavefront coding has great prospects for extending the depth of field of optical imaging systems and reducing optical aberrations, but image quality and noise performance are inevitably degraded. Based on a theoretical analysis of the wavefront coding system and the phase function of the cubic phase plate, this paper exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate is rotated around the z-axis, and proposes a method based on phase plate rotation and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained in the image plane remain unchanged, and its rotation angle and direction follow those of the phase plate. Then, the intermediate blurred image is filtered with the point spread function of the rotated configuration. Finally, the reconstructed images are fused by Laplacian pyramid image fusion and by Fourier transform spectrum fusion, and the results are evaluated subjectively and objectively. Matlab was used to simulate the images. With Laplacian pyramid fusion, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity by 11%-15%, and the average gradient by 4%-9%. With Fourier transform spectrum fusion, the signal-to-noise ratio is increased by 14%-23%, the clarity by 6%-11%, and the average gradient by 2%-6%. The experimental results show that processing by the above method improves the quality and clarity of the restored image while effectively preserving the image information.
Non-coding RNA detection methods combined to improve usability, reproducibility and precision
Kreikemeyer Bernd
2010-09-01
Background: Non-coding RNAs are gaining more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. Results: We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods, integrated by our framework, we determined four highly probable candidates, all of which we verified experimentally using RT-PCR. Conclusions: We have created an extensible framework for the practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL, version 3) at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.
Yamaguchi, Yasuhiro
1991-01-01
The present report describes a computer code, DEEP, which calculates the organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into DEEP to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user has a choice of phantom sex: male, female or unisex. The phantom can wear personal dosimeters, whose location and dimensions the user can specify. This document includes instructions and a sample problem for the code, as well as a general description of the dose calculation, the human phantom and the computer code. (author)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures a good distance property. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC (3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with the RS (255, 239) code in ITU-T G.975 and the LDPC (32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC (3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC (3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC (3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC (3780, 3540) code can be well applied in optical communication systems.
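The abstract's exact construction is not reproduced here, but the general QC-LDPC recipe it builds on can be sketched: an exponent (base) matrix derived from the field's multiplicative structure is expanded into circulant permutation blocks, and the no-4-cycle (girth > 4) property is checked. The parameters below (lifting size m = 7, exponent rule i·j mod m) are a toy example chosen so the girth condition provably holds, not the paper's (3780, 3540) code:

```python
import numpy as np

def circulant_perm(m, shift):
    """m x m circulant permutation matrix: the identity shifted by 'shift'."""
    return np.roll(np.eye(m, dtype=int), shift, axis=1)

def expand(exponents, m):
    """Expand an exponent (base) matrix into a binary QC-LDPC
    parity-check matrix made of circulant permutation blocks."""
    rows = [np.hstack([circulant_perm(m, e) for e in row]) for row in exponents]
    return np.vstack(rows)

def girth_exceeds_four(H):
    """No 4-cycles in the Tanner graph iff any two columns of H
    share at most one row with a common 1."""
    overlap = H.T @ H
    np.fill_diagonal(overlap, 0)
    return int(overlap.max()) <= 1

m = 7  # prime lifting size, so (i*j) mod m below avoids 4-cycles
exponents = [[(i * j) % m for j in (1, 2, 3)] for i in (1, 2, 3)]
H = expand(exponents, m)
print(H.shape, girth_exceeds_four(H))
```

With m prime, the 4-cycle condition on exponent differences reduces to (i1-i2)(j1-j2) ≢ 0 mod m, which cannot hold for distinct small indices; that is why this toy exponent rule yields girth > 4.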
An Effective Transform Unit Size Decision Method for High Efficiency Video Coding
Chou-Chen Wang
2014-01-01
High efficiency video coding (HEVC) is the latest video coding standard. HEVC achieves higher compression performance than previous standards such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) based on the number of nonzero DCT coefficients (called NNZ-ETDA) is adopted to prune the residual quadtree (RQT) at an early stage and accelerate the encoding process. However, the NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, in order to further improve the performance of NNZ-ETDA, we propose an adaptive RQT-depth decision for NNZ-ETDA (called ARD-NNZ-ETDA) that exploits the high temporal-spatial correlation present in natural video sequences. Simulation results show that the proposed method can achieve a time improving ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1), with insignificant loss of image quality. Compared with the NNZ-ETDA, the proposed method can further achieve an average TIR of about 8.29%~17.92%.
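The NNZ-based early-termination idea described above can be sketched as a recursive RQT pruning rule: if a transform block has few nonzero DCT coefficients, keep it whole instead of evaluating a split. The threshold, depth limit, and cost model below are assumptions for illustration, not HM 8.1's actual mode decision.

```python
# Illustrative sketch of NNZ-based early TU (RQT) pruning.
def nnz(block):
    """Number of nonzero DCT coefficients in a 2-D block."""
    return sum(1 for row in block for c in row if c != 0)

def decide_tu_split(dct_block, threshold=4, depth=0, max_depth=2):
    """Return an RQT as ('leaf', size) or ('split', [4 children])."""
    n = len(dct_block)
    # Early termination: few nonzero coefficients -> keep the TU whole.
    if depth == max_depth or nnz(dct_block) <= threshold:
        return ('leaf', n)
    half = n // 2
    quads = [[r[:half] for r in dct_block[:half]],   # top-left
             [r[half:] for r in dct_block[:half]],   # top-right
             [r[:half] for r in dct_block[half:]],   # bottom-left
             [r[half:] for r in dct_block[half:]]]   # bottom-right
    return ('split', [decide_tu_split(q, threshold, depth + 1, max_depth)
                      for q in quads])
```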
KIN SP: A boundary element method based code for single pile kinematic bending in layered soil
Stefano Stacul
2018-02-01
In high-seismicity areas, it is important to consider kinematic effects to properly design pile foundations. Kinematic effects are due to the interaction between pile and soil deformations induced by seismic waves. One such effect is the development of significant strains in weak soils that induce bending moments on piles. These moments can be significant in the presence of a high stiffness contrast in a soil deposit. The single-pile kinematic interaction problem is generally solved with beam-on-dynamic-Winkler-foundation (BDWF) approaches or using continuum models. In this work, a new boundary element method (BEM) based computer code (KIN SP) is presented, in which the kinematic analysis is preceded by a free-field response analysis. The results of this method, in terms of bending moments at the pile head and at the interface of a two-layered soil, are influenced by many factors including the soil-pile interface discretization. A parametric study is presented with the aim of suggesting the minimum number of boundary elements that guarantees the accuracy of a BEM solution, for typical pile-soil relative stiffness values, as a function of the pile diameter, the location of the interface of a two-layered soil, and the stiffness contrast. KIN SP results have been compared with simplified solutions from the literature and with those obtained using a quasi-three-dimensional (3D) finite element code.
Guo-Qiang Zeng
2014-01-01
As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. Experimental results on 10 benchmark test functions with dimension N=30 show that IRPEO is competitive with or even better than recently reported genetic algorithm (GA) variants with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by experimental results on some benchmark functions.
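The key operations enumerated above map directly onto a simple loop. The following sketch is a minimal interpretation of that description; the per-variable "badness" heuristic, tau, bounds, and population size are all assumptions, not the paper's exact settings.

```python
import random

# Minimal sketch of the IRPEO loop as described: power-law selection of a
# "bad" variable, uniform random mutation, unconditional acceptance.
def irpeo(f, dim, pop_size=20, iters=200, lo=-5.0, hi=5.0, tau=1.5):
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=f)
    weights = [(k + 1) ** -tau for k in range(dim)]   # P(rank k) ~ k^-tau
    mid = (lo + hi) / 2.0
    for _ in range(iters):
        new_pop = []
        for ind in pop:
            # Rank variables by "badness": fitness gained by resetting a
            # variable to the domain midpoint (a simple stand-in for the
            # per-variable fitness used in EO).
            def badness(i):
                trial = ind[:]
                trial[i] = mid
                return f(ind) - f(trial)
            order = sorted(range(dim), key=badness, reverse=True)
            k = random.choices(range(dim), weights=weights)[0]
            child = ind[:]
            child[order[k]] = random.uniform(lo, hi)  # uniform mutation
            new_pop.append(child)                     # accept unconditionally
        pop = new_pop
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best
```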
Methods tuned on the physical problem. A way to improve numerical codes
Ixaru, L.Gr.
2010-01-01
We consider the question of how numerical methods tuned to the physical problem can enhance the performance of codes. We illustrate this in two simple cases: the solution of the time-independent one-dimensional Schrödinger equation, and the computation of integrals with oscillatory integrands. In both cases, successive levels of tuning bring a massive gain in accuracy at negligible extra cost. These problems should be seen as only illustrations of how codes can be improved; we must also mention that in many cases tuned versions still have to be developed. As one suggestion, quadrature formulae exist which involve the integrand and a number of its successive derivatives, but no formula is available when some of these derivatives are missing, for example when we have y and y'' but not y'. A direct application would be the case when the integrand involves the solution of the Schrödinger equation by the method of Numerov. (author)
On a Weak Discrete Maximum Principle for hp-FEM
Šolín, Pavel; Vejchodský, Tomáš
-, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007
The concepts and functions of a FEM workstation
Brown, R.R.; Gloudeman, J.F.
1982-01-01
Recent advances in microprocessor-based computer hardware and associated software provide a basis for the development of a FEM workstation. The key requirements for such a workstation are reviewed and the recent hardware and software developments are discussed that make such a workstation both technically and economically feasible at this time. (orig.)
Initial experiments with the FOM-Fusion-FEM
Verhoeven, A.G.A.; Bongers, W.A.; Caplan, M.; Dijk, G. van; Elzendoorn, B.S.Q.
1995-01-01
A Free Electron Maser is being built for ECRH applications on future fusion research devices such as ITER. A unique feature of the Dutch FOM-Fusion-FEM is the possibility to tune the frequency over the entire range from 130 to 260 GHz, while the output power exceeds 1 MW.
FEM growth and yield data monocultures - White willow
Jansen, J.J.; Oosterbaan, A.; Goudzwaard, L.; Oldenburger, J.F.; Mohren, G.M.J.; Ouden, den J.
2016-01-01
The current database is part of the FEM growth and yield database, a collection of growth and yield data from even-aged monocultures (douglas fir, common oak, poplar, Japanese Larch, Norway spruce, Scots pine, Corsican pine, Austrian pine, red oak and several other species, with only a few plots,
FEM growth and yield data Monocultures - Poplar (revised version)
Mohren, G.M.J.; Goudzwaard, L.; Jansen, J.J.; Schmidt, P.; Oosterbaan, A.; Oldenburger, J.; Ouden, den J.
2017-01-01
The current database is part of the FEM growth and yield database, a collection of growth and yield data from even-aged monocultures (douglas fir, common oak, poplar, Japanese Larch, Norway spruce, Scots pine, Corsican pine, Austrian pine, red oak and several other species with only a few plots,
FEM simulation of multi step forming of thick sheet
Wisselink, H.H.; Huetink, Han
2004-01-01
A case study has been performed on the forming of an industrial product. This product, a bracket, is made of 5 mm thick sheet in multiple steps. The process consists of a bending step followed by a drawing and a flanging step. FEM simulations have been used to investigate this forming process. First,
FEM simulation of static loading test of the Omega beam
Bílý, Petr; Kohoutková, Alena; Jedlinský, Petr
2017-09-01
The paper deals with a FEM simulation of a static loading test of the Omega beam. The Omega beam is a precast prestressed high-performance concrete element with the shape of the Greek letter omega, designed as a self-supporting permanent formwork member for the construction of girder bridges. The FEM program ATENA Science was used to simulate the load-bearing test of the beam. The numerical model was calibrated using data from both the static loading test and tests of material properties. Load-displacement diagrams obtained from the experiment and the model were compared, as were crack development and crack patterns. Very good agreement between the experimental data and the FEM model was reached. The calibrated model can be used for the design of optimized Omega beams in the future without the need for expensive loading tests. The calibrated material model can also be exploited in other types of FEM analyses of bridges constructed with Omega beams, such as limit state analysis, optimization of shear connectors, prediction of long-term deflections, or prediction of crack development.
FEM growth and yield data monocultures - other species
Goudzwaard, L.; Jansen, J.J.; Oosterbaan, A.; Oldenburger, J.F.; Mohren, G.M.J.; Ouden, den J.
2016-01-01
The current database is part of the FEM growth and yield database, a collection of growth and yield data from even-aged monocultures (douglas fir, common oak, poplar, Japanese Larch, Norway spruce, Scots pine, Corsican pine, Austrian pine, red oak and several other species, with only a few plots,
SWAAM-LT: The long-term, sodium/water reaction analysis method computer code
Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.
1993-01-01
The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data
Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation
Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)]
1996-06-01
Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).
Stego Keys Performance on Feature Based Coding Method in Text Domain
Din Roshidi
2017-01-01
A critical factor in the embedding process of any text steganography method is the key used, known as the stego key. This factor influences the success of the embedding process in hiding a message from a third party or any adversary. One important aspect of the embedding process is the fitness performance of the stego key, for which three parameters have been identified: capacity ratio, embedded fitness ratio, and saving space ratio. The better these ratios for a given stego key, the more of a message can be hidden. The main objective of this paper is therefore to analyze three feature-based coding methods, namely CALP, VERT, and QUAD, with respect to the capacity ratio, embedded fitness ratio, and saving space ratio of their stego keys. It is found that the CALP method gives good performance compared to the VERT and QUAD methods.
Galerkin finite element methods for wave problems
basis functions (called G1FEM here) and quadratic basis functions (called G2FEM) … formulation of Brooks & Hughes (1982) that implicitly incorporates numerical … functions and (c) SUPG method in the (kh − ωt)-plane for explicit Euler.
Application of finite element method in mechanical design of automotive parts
Gu, Suohai
2017-09-01
As an effective numerical analysis method, the finite element method (FEM) has been widely used in mechanical design and other fields. In this paper, the development of FEM is introduced first, then the specific steps of FEM applications are illustrated and the difficulties of FEM are summarized in detail. Finally, applications of FEM to automobile components such as wheels, steel plate springs, body frames, shaft parts, and so on are summarized and compared with related experimental research.
Mixed FEM for Second Order Elliptic Problems on Polygonal Meshes with BEM-Based Spaces
Efendiev, Yalchin
2014-01-01
We present a Boundary Element Method (BEM)-based FEM for mixed formulations of second order elliptic problems in two dimensions. The challenge, we would like to address, is a proper construction of H(div)-conforming vector valued trial functions on arbitrary polygonal partitions of the domain. The proposed construction generates trial functions on polygonal elements which inherit some of the properties of the unknown solution. In the numerical realization, the relevant local problems are treated by means of boundary integral formulations. We test the accuracy of the method on two model problems. © 2014 Springer-Verlag.
The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy
Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F
2010-01-01
Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC application are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH), Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...
Quantum image pseudocolor coding based on the density-stratified method
Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na
2015-05-01
Pseudocolor processing is a branch of image enhancement. It dyes grayscale images into color images to make them more attractive or to highlight certain parts of an image. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and maps gray density values to colors in parallel according to that colormap. First, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help describe the scheme further. Finally, future work is discussed.
ORIGEN-2.2, Isotope Generation and Depletion Code Matrix Exponential Method
2002-01-01
1 - Description of problem or function: ORIGEN is a computer code system for calculating the buildup, decay, and processing of radioactive materials. ORIGEN2 is a revised version of ORIGEN and incorporates updates of the reactor models, cross sections, fission product yields, decay data, and decay photon data, as well as the source code. ORIGEN-2.1 replaces ORIGEN and includes additional libraries for standard and extended-burnup PWR and BWR calculations, which are documented in ORNL/TM-11018. ORIGEN2.1 was first released in August 1991 and was replaced with ORIGEN2 Version 2.2 in June 2002. Version 2.2 was the first update to ORIGEN2 in over 10 years and was stimulated by a user discovering a discrepancy in the mass of fission products calculated using ORIGEN2 V2.1. Code modifications, as well as reducing the irradiation time step to no more than 100 days/step, reduced the discrepancy from ∼10% to 0.16%. The bug does not noticeably affect the fission product mass in typical ORIGEN2 calculations involving reactor fuels because essentially all of the fissions come from actinides that have explicit fission product yield libraries. Thus, most previous ORIGEN2 calculations that were otherwise set up properly should not be affected. 2 - Method of solution: ORIGEN uses a matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients. ORIGEN2 has been variably dimensioned to allow the user to tailor the size of the executable module to the problem size and/or the available computer space. Dimensioned arrays have been set large enough to handle almost any size problem, using virtual memory capabilities available on most mainframe and 386/486-based PCs. The user is provided with much of the framework necessary to put some of the arrays to several different uses, call for the subroutines that perform the desired operations, and provide a mechanism to execute multiple ORIGEN2 problems with a single
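The matrix exponential method named above can be shown on a toy two-nuclide decay chain: the depletion system dN/dt = A N is advanced by a truncated-series matrix exponential and checked against the analytic Bateman solution. The chain and decay constants are illustrative, not ORIGEN data or ORIGEN's actual (scaled and partitioned) algorithm.

```python
import math

# Solve dN/dt = A N for parent -> daughter decay via expm(A*t).
def mat_exp(A, t, terms=40):
    """Truncated Taylor series for expm(A*t); adequate for small ||A*t||."""
    n = len(A)
    M = [[A[i][j] * t for j in range(n)] for i in range(n)]
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * M / k, i.e. M^k / k!
        term = [[sum(term[i][l] * M[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

lam1, lam2 = 0.3, 0.1            # decay constants (1/s), illustrative
A = [[-lam1, 0.0],
     [lam1, -lam2]]              # parent decays into daughter
t = 2.0
E = mat_exp(A, t)
N0 = [1.0, 0.0]                  # start with pure parent
N = [sum(E[i][j] * N0[j] for j in range(2)) for i in range(2)]
```

The parent follows exp(-lam1*t) and the daughter the two-term Bateman expression, which the series reproduces to machine precision for this small norm.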
Bouzakis, K.D. [Aristoteles Univ., Thessaloniki (Greece). Dept. of Mech. Eng.; Vidakis, N. [Aristoteles Univ., Thessaloniki (Greece). Dept. of Mech. Eng.; Leyendecker, T. [CemeCon, 52068 Aachen (Germany); Lemmer, O. [CemeCon, 52068 Aachen (Germany); Fuss, H.G. [CemeCon, 52068 Aachen (Germany); Erkens, G. [CemeCon, 52068 Aachen (Germany)
1996-12-15
The impact test, in combination with a finite element method (FEM) simulation, is used to determine stress values that characterise the fatigue behaviour of thin hard coatings, such as TiAlN, TiAlCN, CrN, MoN, etc. The successive impacts of a cemented carbide ball onto a coated probe induce high contact loads, which can vary in amplitude and cause plastic deformation in the substrate. In the present paper FEM calculations are used in order to determine the critical stress values, which lead to coating fatigue failure. The parametric FEM simulation developed considers elastic behaviour for the coating and elastic plastic behaviour for the substrate. The results of the FEM calculations are correlated to experimental data, as well as to SEM observations of the imprints and to microspectrum analyses within the contact region. Herewith, critical values for various stress components, which are responsible for distinctive fatigue failure modes of the coating-substrate compounds can be obtained. (orig.)
A five-colour colour-coded mapping method for DCE-MRI analysis of head and neck tumours
Yuan, J.; Chow, S.K.K.; Yeung, D.K.W.; King, A.D.
2012-01-01
Aim: To devise a method to convert the time–intensity curves (TICs) of head and neck dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data into a pixel-by-pixel colour-coded map for identifying normal tissues and tumours. Materials and methods: Twenty-three patients with head and neck squamous cell carcinoma (HNSCC) underwent DCE-MRI. TIC patterns of primary tumours, metastatic nodes, and normal tissues were assessed and a program was devised to convert the patterns into a classified colour-coded map. The enhancement patterns of tumours and normal tissue structures were evaluated and categorized into nine grades (0–8) based on the predominance of coloured pixels on maps. Results: Five identified TIC patterns were converted into a colour-coded map consisting of red (maximum enhancement), brown (continuous slow rise-up), yellow (rapid wash-in and wash-out), green (rapid wash-in and plateau), and blue (rapid wash-in and rise-up). The colour-coded map distinguished all 21 primary tumours and 15 metastatic nodes from normal structures. Primary tumours and metastatic nodes were colour coded as predominantly yellow (grades 1–2) in 17/21 and 6/15, green (grades 3–5) in 3/21 and 5/15, and blue (grades 6–7) in 1/21 and 4/15, respectively. Vessels were coded red in 46/46 (grade 0) and muscles were coded brown in 23/23 (grade 8). Salivary glands, thyroid glands, and palatine tonsils were coded into predominantly yellow (grade 1) in 46/46 and 10/10 and 18/22, respectively. Conclusion: DCE-MRI derived five-colour-coded mapping provides an objective easy-to-interpret method to assess the dynamic enhancement pattern of head and neck cancers.
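A pixel classifier for the five TIC patterns above might look like the sketch below. The peak index, slope cut-offs, and the 0.9 enhancement cap are invented thresholds for illustration, not the paper's actual classification rules.

```python
# Hypothetical classifier for the five TIC colour classes.
def classify_tic(curve, peak_index=5, rapid=0.1, flat=0.02):
    """curve: enhancement values normalized to [0, 1] at uniform times."""
    wash_in = (curve[peak_index] - curve[0]) / peak_index
    late = (curve[-1] - curve[peak_index]) / (len(curve) - 1 - peak_index)
    if max(curve) > 0.9:
        return "red"      # maximum enhancement (vessels)
    if wash_in < rapid:
        return "brown"    # continuous slow rise-up
    if late < -flat:
        return "yellow"   # rapid wash-in and wash-out
    if late > flat:
        return "blue"     # rapid wash-in and rise-up
    return "green"        # rapid wash-in and plateau
```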
Performance Analysis of FEM Algorithmson GPU and Many-Core Architectures
Khurram, Rooh
2015-04-27
The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of the future CPU-only Exascale systems will be unsustainable, thus accelerators such as graphic processing units (GPUs) and many-integrated-core (MIC) will likely be the integral part of the TOP500 (http://www.top500.org/) supercomputers, beyond 2020. The emerging supercomputer architecture will bring new challenges for the code developers. Continuum mechanics codes will particularly be affected, because the traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In the previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on the comparative study of finite element codes, using PETSc and AmgX solvers on CPU and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers, who are contemplating a CPU to accelerator transition.
Fem Modelling of Lumbar Vertebra System
Rimantas Kačianauskas
2014-02-01
The article presents modeling of the human lumbar vertebra and its deformation analysis using the finite element method. The problem of tissue degradation is raised. Using computer-aided modeling with SolidWorks software, models of the lumbar vertebra (L1) and of the vertebra system L1-L4 were created. The article contains an analysis of the social and medical problem, a description of the modeling methods, and the results of deformation tests for the single-vertebra model and for the four-vertebra model (L1-L4).
Embedded 3D shape measurement system based on a novel spatio-temporal coding method
Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong
2016-11-01
Structured light measurement has been widely used since the 1970s in industrial component inspection, reverse engineering, 3D molding, robot navigation, medicine, and many other fields. In order to satisfy the demand for high-speed, high-precision, and high-resolution 3D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. This unique gray vector is then dimensionally reduced to a scalar, which can be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3D measurement, which consists of an ARM and a DSP core and has strong digital signal processing capability. Experimental results demonstrate the feasibility of the proposed method.
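The temporal binary/Gray-coding principle can be sketched as follows: each projector column receives a unique bit sequence across the pattern set, and Gray decoding recovers the column index at each camera pixel. This is the generic textbook scheme, assumed here for illustration, not necessarily the paper's combined spatio-temporal design.

```python
# Gray-code structured-light sketch: pattern generation and decoding.
def gray_encode(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def patterns(num_bits, width):
    """Pattern p, column c -> 0/1 stripe value (bit p, MSB first,
    of the Gray code of column index c)."""
    return [[(gray_encode(c) >> (num_bits - 1 - p)) & 1
             for c in range(width)]
            for p in range(num_bits)]

def decode(bits):
    """Recover the column index from the observed MSB-first bit sequence."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    n = 0
    while g:                      # Gray -> binary: XOR of all right shifts
        n ^= g
        g >>= 1
    return n
```

Because consecutive Gray codewords differ in one bit, a stripe-boundary misread costs at most one column of error, which is why Gray patterns are preferred over plain binary ones.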
Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær
2017-01-01
Aerodynamic and structural dynamic performance of modern wind turbines is routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal, multi-body, or finite-element approach to model the turbine structural dynamics. The present work describes the development of a novel aeroelastic code that combines a three-dimensional viscous-inviscid interactive method, the method for interactive rotor aerodynamic simulations (MIRAS), … Code Comparison Collaboration Project. Simulation tests consist of steady wind inflow conditions with different combinations of yaw error, wind shear, tower shadow and turbine-elastic modeling. Turbulent inflow created by using a Mann box is also considered. MIRAS-FLEX results, such as blade tip…
Callari, C.; Federico, F.
2000-04-01
Laboratory consolidation of structured clayey soils is analysed in this paper. The research is carried out by two different methods. The first one treats the soil as an isotropic homogeneous equivalent Double Porosity (DP) medium. The second method rests on the extensive application of the Finite Element Method (FEM) to combinations of different soils, composing 2D or fully 3D ordered structured media that schematically discretize the complex material. Two reference problems, representing typical situations of 1D laboratory consolidation of structured soils, are considered. For each problem, solution is obtained through integration of the equations governing the consolidation of the DP medium as well as via FEM applied to the ordered schemes composed of different materials. The presence of conventional experimental devices to ensure the drainage of the sample is taken into account through appropriate boundary conditions. Comparison of FEM results with theoretical results clearly points out the ability of the DP model to represent consolidation processes of structurally complex soils. Limits of applicability of the DP model may arise when the rate of fluid exchange between the two porous systems is represented through oversimplified relations. Results of computations, obtained having assigned reasonable values to the meso-structural and to the experimental apparatus parameters, point out that a partially efficient drainage apparatus strongly influences the distribution along the sample and the time evolution of the interstitial water pressure acting in both systems of pores. Data of consolidation tests in a Rowe's cell on samples of artificially fissured clays reported in the literature are compared with the analytical and numerical results showing a significant agreement.
Abe, Alfredo Y.; Santos, Adimir dos
1995-01-01
The present work summarizes the verification of the treatment of self-shielding based on the Bondarenko method in the HAMMER-TECHNION cell code for the PuO2-UO2 critical system using the JENDL-3 nuclear data library. The results obtained are in excellent agreement with the original treatment of self-shielding employed by the HAMMER-TECHNION cell code. (author). 9 refs, 1 fig, 9 tabs
Macek, Jiri; Kral, Pavel
2010-01-01
The content of the presentation was as follows: Conservative versus best estimate approach, Brief description and selection of methodology, Description of uncertainty methods, Examples of the BE methodology. It is concluded that where BE computer codes are used, uncertainty and sensitivity analyses should be included; if best estimate codes + uncertainty are used, the safety margins increase; and BE + BSA is the next step in licensing analyses. (P.A.)
Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance
Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.
1999-01-01
Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5-6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: Is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size, and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ~15-30 for the largest problems, when comparing the multigrid solvers relative to diagonally scaled conjugate gradient.
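The trade-off in the central question above (many cheap steps versus fewer expensive ones) can be felt even in a toy setting: plain conjugate gradient versus diagonally scaled (Jacobi-preconditioned) CG on a symmetric positive-definite system whose diagonal spans four orders of magnitude. The matrix and sizes are illustrative, not from the paper's test suite.

```python
# Preconditioned conjugate gradient in pure Python, with iteration counts.
def cg(apply_A, b, precond=None, tol=1e-8, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = [precond(r[i], i) for i in range(n)] if precond else r[:]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    iters = 0
    while iters < max_iter:
        Ap = apply_A(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        iters += 1
        if max(abs(ri) for ri in r) < tol:
            break
        z = [precond(r[i], i) for i in range(n)] if precond else r[:]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, iters

n = 50
diag = [10.0 ** (4.0 * i / (n - 1)) for i in range(n)]   # 1 .. 1e4

def apply_A(v):
    # SPD: dominant varying diagonal plus weak nearest-neighbour coupling.
    out = [diag[i] * v[i] for i in range(n)]
    for i in range(n - 1):
        out[i] += 0.1 * v[i + 1]
        out[i + 1] += 0.1 * v[i]
    return out

b = [1.0] * n
x_plain, it_plain = cg(apply_A, b)
x_jac, it_jac = cg(apply_A, b, precond=lambda ri, i: ri / diag[i])
```

Diagonal scaling collapses the spectrum to a narrow band around 1, so the preconditioned run needs far fewer (slightly more expensive) iterations, the same trade the paper measures at scale.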
CFD and FEM modeling of PPOOLEX experiments
Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))
2011-01-15
Large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed by using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method is therefore used for preventing the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with LPM are carried out. (Author)
Construction of compact FEM using solenoid-induced helical wiggler
Ohigashi, N.; Tsunawaki, Y.; Fujita, M.; Imasaki, K.; Mima, K.; Nakai, S.
2003-01-01
A prototype compact Free-Electron Maser (FEM) has been designed for operation in a typical small laboratory that does not have sufficient electric source capacity available. The electron energy is 60-120 keV. Because the energy is low, a strong guiding magnetic field is necessary in addition to the wiggler field. To fulfil this condition, a solenoid-induced helical wiggler is applied from the viewpoint of saving electric power within the restricted source capacity. The wiggler, for example, with a period of 12 mm creates a field of 92 G in a guiding field of 3.2 kG. The whole FEM system has been constructed in a small-scale laboratory; it is small enough to occupy an area of only 0.7 x 2.9 m2.
Niu, H.; Wang, H.; Ye, X.
2017-01-01
application. A converter-level finite element method (FEM) simulation is carried out to obtain the ambient temperatures of the electrolytic capacitors and power MOSFETs used in the LED driver, taking into account the impact of the driver enclosure and the thermal coupling among different components....... Therefore, the proposed method bridges the link between the global ambient temperature profile outside the enclosure and the local ambient temperature profiles of the components of interest inside the driver. A quantitative comparison of the estimated annual lifetime consumptions of MOSFETs...
Jagannathan, V.
1985-01-01
A modular computer code system called FEMSYN has been developed to solve the multigroup diffusion theory equations. The methods incorporated in FEMSYN are (i) the finite difference method (FDM), (ii) the finite element method (FEM) and (iii) the single channel flux synthesis method (SCFS). These methods are described in detail in parts II, III and IV of the present report. In this report, a comparison of the accuracy and speed of the different solution methods for some benchmark problems is reported. Input preparation and listings of sample input and output are included in the Appendices. The FEMSYN code has been used to solve a wide variety of reactor core problems and can be used for both LWR and PHWR applications. (author)
Moreau, J; Rabot, H; Robin, C [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1965-07-01
The two codes presented here allow determination of the multiplication constant of media containing fissionable materials in numerous and divided forms; they are based on the Monte Carlo method. The first code applies to x, y, z geometries: the volume to be studied must be divisible into parallelepipeds, the media within each parallelepiped being limited by non-secant surfaces. The second code is intended for r, θ, z geometries. The results include an analysis of collisions in each medium. Applications and examples, with information on time and accuracy, are given. (authors)
Comparative FEM-based Analysis of Multiphase Induction Motor
Leonard Livadaru
2014-09-01
Full Text Available This paper presents a comparative study of a multiphase induction motor that has, alternately, three-, five- and six-phase stator windings. The machine has been designed particularly for this purpose and has individual ring coils placed in each stator slot. The study consists of FEM analyses and mainly examines the particularities of magnetic quantities such as air-gap flux density and electromagnetic torque.
Green's function method and its application to verification of diffusion models of GASFLOW code
Xu, Z.; Travis, J.R.; Breitung, W.
2007-07-01
To validate the diffusion model and the aerosol particle model of the GASFLOW computer code, theoretical solutions of advection-diffusion problems are developed using the Green's function method. The work consists of a theory part and an application part. In the first part, the Green's functions of one-dimensional advection-diffusion problems are solved in infinite, semi-infinite and finite domains with the Dirichlet, Neumann and/or Robin boundary conditions. Novel and effective image systems, devised especially for advection-diffusion problems, are used to find the Green's functions in a semi-infinite domain. The eigenfunction method is utilized to find the Green's functions in a bounded domain; in that case, key steps of a coordinate transform based on a concept of reversed time scale, a Laplace transform and an exponential transform are proposed to solve for the Green's functions. The product rule of the multi-dimensional Green's functions is then discussed in a Cartesian coordinate system: based on the building blocks of one-dimensional Green's functions, the multi-dimensional Green's function solution can be constructed by applying the product rule. Green's function tables are summarized to facilitate their application. In the second part, the obtained Green's function solutions serve as benchmarks for a series of validations of the diffusion model of gas species in the continuous phase and the diffusion model of discrete aerosol particles in the GASFLOW code. Perfect agreement is obtained between the GASFLOW simulations and the Green's function solutions in the case of gas diffusion. Very good consistency is found between the theoretical solutions of the advection-diffusion equations and the numerical particle distributions in advective flows when the drag force between the micron-sized particles and the conveying gas flow obeys Stokes' law of resistance. This situation corresponds to a very small Reynolds number based on the particle
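As an illustration of the one-dimensional building blocks this abstract describes, the following sketch (not the GASFLOW implementation; grid and parameters are chosen for illustration) evaluates the textbook infinite-domain Green's function of the 1-D advection-diffusion equation and numerically checks two of its defining properties:

```python
import numpy as np

def green_inf(x, t, x0, u, D):
    """Green's function of the 1-D advection-diffusion equation
    c_t + u*c_x = D*c_xx on an infinite domain (unit point release
    at x = x0, t = 0): a Gaussian advecting at speed u."""
    return np.exp(-(x - x0 - u * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

x = np.linspace(-5.0, 15.0, 4001)
c = green_inf(x, t=2.0, x0=0.0, u=3.0, D=0.5)

mass = c.sum() * (x[1] - x[0])   # unit release: total mass stays 1
peak = x[np.argmax(c)]           # plume centre advects to x0 + u*t = 6
```

Multi-dimensional solutions then follow from the product rule the abstract mentions, i.e. as products of such one-dimensional kernels.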
Decoy state method for quantum cryptography based on phase coding into faint laser pulses
Kulik, S. P.; Molotkov, S. N.
2017-12-01
We discuss the photon number splitting (PNS) attack in quantum cryptography systems with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack with phase encoding, differs physically from the analogous attack applied to polarization coding. As far as we know, in practice all processing of experimental data to date has been done for phase coding while using formulas derived for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of results, especially where the secrecy criterion in quantum cryptography is concerned.
Novel methods in the Particle-In-Cell accelerator Code-Framework Warp
Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-26
The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
Mode splitting effect in FEMs with oversized Bragg resonators
Peskov, N. Yu.; Sergeev, A. S. [Institute of Applied Physics Russian Academy of Sciences, Nizhny Novgorod (Russian Federation); Kaminsky, A. K.; Perelstein, E. A.; Sedykh, S. N. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Kuzikov, S. V. [Institute of Applied Physics Russian Academy of Sciences, Nizhny Novgorod (Russian Federation); Nizhegorodsky State University, Nizhny Novgorod (Russian Federation)
2016-07-15
Splitting of the fundamental mode in an oversized Bragg resonator with a step of the corrugation phase, which operates over the feedback loop involving the waveguide waves of different transverse structures, was found to be the result of mutual influence of the neighboring zones of the Bragg scattering. Theoretical description of this effect was developed within the framework of the advanced (four-wave) coupled-wave approach. It is shown that mode splitting reduces the selective properties, restricts the output power, and decreases the stability of the narrow-band operating regime in the free-electron maser (FEM) oscillators based on such resonators. The results of the theoretical analysis were confirmed by 3D simulations and “cold” microwave tests. Experimental data on Bragg resonators with different parameters in a 30-GHz FEM are presented. The possibility of reducing the mode splitting by profiling the corrugation parameters is shown. The use of the mode splitting effect for the output power enhancement by passive compression of the double-frequency pulse generated in the FEM with such a resonator is discussed.
An object-oriented class design for the generalized finite element method programming
Dorival Piedade Neto
Full Text Available The Generalized Finite Element Method (GFEM) is a numerical method based on the Finite Element Method (FEM), presenting as its main feature the possibility of improving the solution by means of local enrichment functions. In spite of its advantages, the method demands a complex data structure, which can especially benefit from Object-Oriented Programming (OOP). Even though OOP for the traditional FEM has been extensively described in the technical literature, specific design issues related to the GFEM are still little discussed and not clearly defined. The present article describes an Object-Oriented (OO) class design for the GFEM, aiming at a computational code with a flexible class structure that circumvents the difficulties associated with the method's characteristics. The proposed design is evaluated by means of numerical examples, computed using a code implemented in the Python programming language.
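A minimal sketch of the kind of class design the article discusses, in Python (the article's implementation language). The class and attribute names here are illustrative assumptions, not the authors' actual design; the point is that nodal enrichment changes the dof count only of elements touching the enriched node:

```python
class Node:
    """Mesh node owning its standard dof plus optional enrichment functions."""
    def __init__(self, x):
        self.x = x
        self.enrichments = []   # callables f(x); empty for a plain FEM node

    def n_dofs(self):
        return 1 + len(self.enrichments)


class Element:
    """Two-node 1-D element; its dof count grows with nodal enrichment."""
    def __init__(self, nodes):
        self.nodes = nodes

    def n_dofs(self):
        return sum(n.n_dofs() for n in self.nodes)


# enriching one node locally affects only elements that share it
left, right = Node(0.0), Node(1.0)
elem = Element([left, right])
plain_dofs = elem.n_dofs()                    # 2 before enrichment
left.enrichments.append(lambda x: abs(x))     # e.g. a singular-like enrichment
enriched_dofs = elem.n_dofs()                 # 3 after
```

Keeping enrichments as data attached to nodes, rather than hard-coded in element routines, is one way OOP accommodates the GFEM's variable dof counts.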
Uncertainty analysis methods for quantification of source terms using a large computer code
Han, Seok Jung
1997-02-01
Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR or MAAP, is an essential process in current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as the principal tool for an overall uncertainty analysis in source term quantification, while the LHS is used in the calculation of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance between cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: the first two cases are those in which the distribution is known analytically, and the third is that in which the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is non-zero
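The LHS technique combined with the RSM above can be sketched in a few lines. This is a generic Latin hypercube sampler in NumPy, not the code used in the study: each dimension of the unit cube is split into one stratum per sample, one point is drawn per stratum, and the pairing across dimensions is randomized.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Draw one point per stratum of width 1/n_samples in each dimension,
    then shuffle each column so the pairing across dimensions is random."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])    # in-place permutation of one column
    return u

rng = np.random.default_rng(0)
samples = latin_hypercube(100, 3, rng)

# every dimension has exactly one sample in each of the 100 strata
strata_ok = all(
    len(np.unique((samples[:, j] * 100).astype(int))) == 100 for j in range(3)
)
```

Compared with simple random sampling, this stratification gives much better coverage of each input's marginal distribution for the same number of code runs, which is why LHS is popular for propagating uncertainties through expensive codes.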
Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew
2014-11-28
Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
Experience with the Incomplete Cholesky Conjugate Gradient method in a diffusion code
Hoebel, W.
1985-01-01
For the numerical solution of sparse systems of linear equations arising from finite difference approximation of the multidimensional neutron diffusion equation fast methods are needed. Effective algorithms for scalar computers may not be likewise suitable on vector computers. In the improved version DIXY2 of the Karlsruhe two-dimensional neutron diffusion code for rectangular geometries an Incomplete Cholesky Conjugate Gradient (ICCG) algorithm has been combined with the originally implemented Cyclically Reduced 4-Lines SOR (CR4LSOR) inner iteration method. The combined procedure is automatically activated for slowly converging applications, thus leading to a drastic reduction of iterations as well as CPU-times on a scalar computer. In a follow-up benchmark study necessary modifications to ICCG and CR4LSOR for their use on a vector computer were investigated. It was found that a modified preconditioning for the ICCG algorithm restricted to the block diagonal matrix is an effective method both on scalar and vector computers. With a splitting of the 9-band-matrix in two triangular Cholesky matrices necessary inversions are performed on a scalar machine by recursive forward and backward substitutions. On vector computers an additional factorization of the triangular matrices into four bidiagonal matrices enables Buneman reduction and the recursive inversion is restricted to a small system. A similar strategy can be realized with CR4LSOR if the unvectorizable Gauss-Seidel iteration is replaced by Double Jacobi and Buneman technique for a vector computer. Compared to single line blocking over the original mesh the cyclical 4-lines reduction of the DIXY inner iteration scheme reduces numbers of iterations and CPU-times considerably
Experience with the incomplete Cholesky conjugate gradient method in a diffusion code
Hoebel, W.
1986-01-01
For the numerical solution of sparse systems of linear equations arising from the finite difference approximation of the multidimensional neutron diffusion equation, fast methods are needed. Effective algorithms for scalar computers may not be likewise suitable on vector computers. In the improved version (DIXY2) of the Karlsruhe two-dimensional neutron diffusion code for rectangular geometries, an incomplete Cholesky conjugate gradient (ICCG) algorithm has been combined with the originally implemented cyclically reduced four-line successive overrelaxation (CR4LSOR) inner iteration method. The combined procedure is automatically activated for slowly converging applications, thus leading to a drastic reduction of iterations as well as CPU times on a scalar computer. In a follow-up benchmark study, necessary modifications to ICCG and CR4LSOR for use on a vector computer were investigated. It was found that a modified preconditioning for the ICCG algorithm restricted to the block diagonal matrix is an effective method both on scalar and vector computers. With a splitting of the nine-band matrix in two triangular Cholesky matrices, necessary inversions are performed on a scalar machine by recursive forward and backward substitutions. On vector computers an additional factorization of the triangular matrices into four bidiagonal matrices enables Buneman reduction, and the recursive inversion is restricted to a small system. A similar strategy can be realized with CR4LSOR if the unvectorizable Gauss-Seidel iteration is replaced by Double Jacobi and Buneman techniques for a vector computer. Compared to single-line blocking over the original mesh, the cyclical four-line reduction of the DIXY inner iteration scheme reduces numbers of iterations and CPU times considerably
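The preconditioning idea described in this abstract can be illustrated with a small preconditioned conjugate gradient solver. This is a dense NumPy sketch on a 5-point Laplacian, not the DIXY2 implementation: using the tridiagonal (line) part of the matrix as the preconditioner stands in for the Cholesky factorization restricted to the block diagonal, and the dense `np.linalg.solve` replaces the recursive forward/backward substitutions a production code would use.

```python
import numpy as np

def laplacian_2d(n):
    """Standard 5-point finite-difference Laplacian on an n x n grid."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            if i > 0:     A[k, k - n] = -1.0
            if i < n - 1: A[k, k + n] = -1.0
            if j > 0:     A[k, k - 1] = -1.0
            if j < n - 1: A[k, k + 1] = -1.0
    return A

def pcg(A, b, M, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; M approximates A (here, the
    line-block tridiagonal part, standing in for incomplete Cholesky)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = np.linalg.solve(M, r)
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = np.linalg.solve(M, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

n = 10
A = laplacian_2d(n)
M = np.tril(np.triu(A, -1), 1)   # keep only the tridiagonal (line) part of A
b = np.ones(n * n)
x, iters = pcg(A, b, M)
residual = np.linalg.norm(b - A @ x)
```

The preconditioner solve is cheap because M is banded (in DIXY2, two triangular Cholesky factors inverted by recursion), while it still captures the strong intra-line coupling, which is what cuts the iteration count.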
Seyed Abbas Taher
2011-01-01
Full Text Available In this article, a new fault detection technique is proposed for the squirrel cage induction motor (SCIM), based on detection of rotor bar failure. This type of fault detection is commonly carried out while the motor continues to operate in a steady-state regime. Recently, several methods have been presented for rotor bar failure detection based on evaluation of the start-up transient current. The method proposed here is capable of fault detection immediately after bar breakage. A three-phase SCIM is modelled with the finite element method (FEM) using Maxwell2D software; broken rotor bars are then modelled by the corresponding outer rotor impedance obtained by GA, thereby presenting an analogue model extracted from the FEM to be simulated in a flexible environment such as MATLAB/SIMULINK. To improve the failure recognition, the stator current signal is analysed using the discrete wavelet transform (DWT).
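To illustrate why a DWT helps here, a one-level Haar transform localizes an abrupt change in a current-like signal in time, which a global Fourier spectrum cannot. The signal, sampling and fault model below are invented for illustration and are unrelated to the paper's Maxwell2D/SIMULINK setup:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation and detail coefficients (signal length must be even)."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

t = np.arange(1024) / 1024.0
current = np.sin(2 * np.pi * 50 * t)   # healthy 50 Hz stator-like current
current[513:] += 2.0                   # abrupt change starting at sample 513

approx, detail = haar_dwt(current)
# the jump splits the sample pair (512, 513), so the detail band
# shows a dominant spike at pair index 256
spike_index = int(np.argmax(np.abs(detail)))
```

In practice, multi-level decompositions and finer wavelets are used, but the principle is the same: the detail coefficients time-stamp the transient that follows bar breakage.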
Sam, Ann; Reszka, Stephanie; Odom, Samuel; Hume, Kara; Boyd, Brian
2015-01-01
Momentary time sampling, partial-interval recording, and event coding are observational coding methods commonly used to examine the social and challenging behaviors of children at risk for or with developmental delays or disabilities. Yet there is limited research comparing the accuracy of and relationship between these three coding methods. By…
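The three observational coding methods compared in this abstract can be contrasted on a simulated behavior stream; this hypothetical sketch simply encodes each method's scoring rule (interval length and occurrence rate are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated second-by-second record of a session: True = behavior occurring
stream = rng.random(600) < 0.15        # a 10-minute observation
true_prevalence = stream.mean()

interval = 10                          # 10-second observation intervals
chunks = stream.reshape(-1, interval)

# momentary time sampling: score an interval by its first second only
mts = chunks[:, 0].mean()
# partial-interval recording: score an interval if behavior occurs at all
pir = chunks.any(axis=1).mean()
# event coding: count behavior onsets (False -> True transitions)
events = int(stream[0]) + int(np.sum(~stream[:-1] & stream[1:]))
```

Partial-interval recording systematically overestimates prevalence (an interval is scored whenever any second within it is positive), momentary time sampling is unbiased in expectation, and event coding answers a different question: frequency rather than duration.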
Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.
2014-07-01
For three decades, AREVA has been incrementally implementing upgrades in its BWR and PWR fuel designs, codes and methods, leading to ever greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP™ and AFA 3G technologies, called HTP™-I and AFA 3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12 (the AREVA advanced quaternary alloy for guide tubes), an increase in guide-tube wall thickness, and a stiffened spacer-to-guide-tube connection. An even bigger step forward has been achieved as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP™ product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)
A massively parallel method of characteristic neutral particle transport code for GPUs
Boyd, W. R.; Smith, K.; Forget, B.
2013-01-01
Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption of conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for solving these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared-memory multi-core platforms and GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate a 25-35x speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)
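The kernel at the heart of any MOC solver, analytic attenuation of the angular flux along one characteristic segment through a flat-source region, is standard and can be sketched independently of OpenMOC (the cross sections, sources and track below are made up):

```python
import numpy as np

def moc_segment(psi_in, sigma_t, q, length):
    """Analytic angular-flux update along one characteristic segment:
    psi_out = psi_in * exp(-sigma_t*l) + (q/sigma_t) * (1 - exp(-sigma_t*l)),
    i.e. attenuate the incoming flux and build in the flat-source term."""
    att = np.exp(-sigma_t * length)
    return psi_in * att + (q / sigma_t) * (1.0 - att)

# sweep one track through three flat-source regions: (sigma_t, q, length)
psi = 1.0
for sigma_t, q, length in [(0.5, 0.2, 2.0), (1.2, 0.0, 1.0), (0.8, 0.4, 3.0)]:
    psi = moc_segment(psi, sigma_t, q, length)

# in a very thick region the flux saturates to the source-to-removal ratio q/sigma_t
deep = moc_segment(1.0, 1.0, 0.3, 50.0)
```

Because thousands of such independent segment updates are performed per sweep, the method maps naturally onto GPU threads, which is the parallelism the paper exploits.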
A simple method for simulation of coherent synchrotron radiation in a tracking code
Borland, M.
2000-01-01
Coherent synchrotron radiation (CSR) is of great interest to those designing accelerators as drivers for free-electron lasers (FELs). Although experimental evidence is incomplete, CSR is predicted to have potentially severe effects on the emittance of high-brightness electron beams. The performance of an FEL depends critically on the emittance, current, and energy spread of the beam. Attempts to increase the current through magnetic bunch compression can lead to increased emittance and energy spread due to CSR in the dipoles of such a compressor. The code elegant was used for design and simulation of the bunch compressor for the Low-Energy Undulator Test Line (LEUTL) FEL at the Advanced Photon Source (APS). In order to facilitate this design, a fast algorithm was developed based on the 1-D formalism of Saldin and coworkers. In addition, a plausible method of including CSR effects in drift spaces following the chicane magnets was developed and implemented. The algorithm is fast enough to permit running hundreds of tolerance simulations including CSR for 50 thousand particles. This article describes the details of the implementation and shows results for the APS bunch compressor
Lee, Joo Hee
2006-02-01
There is growing interest in developing pebble bed reactors (PBRs) as a candidate for very high temperature gas-cooled reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor have been based on old finite-difference solvers or on statistical methods. For realistic analysis of PBRs, however, there is a strong desire to make available high-fidelity nodal codes in three-dimensional (r,θ,z) cylindrical geometry. Recently, the Analytic Function Expansion Nodal (AFEN) method, developed quite extensively in Cartesian (x,y,z) and hexagonal-z geometries, was extended to two-group (r,z) cylindrical geometry and gave very accurate results. In this thesis, we develop the method for the full three-dimensional cylindrical (r,θ,z) geometry and implement it in a code named TOPS. The AFEN methodology in this geometry, as in hexagonal geometry, is robust (e.g., no occurrence of singularity), due to the unique feature of the AFEN method that it does not use transverse integration. Transverse integration in the usual nodal methods, however, leads to an impasse: the azimuthal term fails to be transverse-integrated over the r-z surface. We use 13 nodal unknowns in an outer node and 7 nodal unknowns in an innermost node. The general solution of the node can be expressed in terms of these nodal unknowns and updated using the nodal balance equation and the current continuity condition. For more realistic analysis of PBRs, we implemented the Marshak boundary condition to treat the zero-incoming-current boundary condition and the partial current translation (PCT) method to treat voids in the core. The TOPS code was verified in various numerical tests derived from the Dodds problem and the PBMR-400 benchmark problem. The results of the TOPS code show higher accuracy and faster computing times than the VENTURE code, which is based on the finite difference method (FDM)
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
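A minimal sketch of the geometric-centroid strategy (method 1 above), using great-circle distance as a simple stand-in for road travel time; all coordinates and the "cancer center" location are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# hypothetical subjects known only by ZIP code, and a hypothetical center
subjects = [(43.20, -71.54), (43.25, -71.50), (43.22, -71.60)]
center = (43.21, -71.53)

# strategy 1: use a single ZIP centroid as the origin for every subject
centroid = (sum(p[0] for p in subjects) / len(subjects),
            sum(p[1] for p in subjects) / len(subjects))
centroid_est = haversine_km(*centroid, *center)

# reference value: mean of the subjects' individual distances
true_mean = sum(haversine_km(*p, *center) for p in subjects) / len(subjects)
```

A population-weighted centroid (the study's preferred method) replaces the unweighted average above with one weighted by where people actually live within the ZIP code, which is why it outperforms the purely geometric centroid, especially in rural areas.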
Cheng-Yu Yeh
2012-01-01
Full Text Available With the wide availability of protein interaction networks and supporting microarray data, identifying linear paths of biological significance in search of a potential pathway is a challenging issue. We propose a color-coding method based on the characteristics of biological network topology and apply heuristic search to speed up the color-coding method. In the experiments, we tested our method by applying it to two datasets: yeast and human prostate cancer networks with a gene expression data set. Comparisons of our method with other existing methods on known yeast MAPK pathways, in terms of precision and recall, show that it finds the maximum number of proteins and performs comparably well. Moreover, our method is more efficient than previous ones, detecting paths of length 10 within 40 seconds using an Intel 1.73 GHz CPU and 1 GB of main memory under the Windows operating system.
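The underlying color-coding idea (randomly color the vertices with k colors, then search for a "colorful" path by dynamic programming over color subsets) can be sketched as follows. This is the generic randomized algorithm, without the paper's topology-based heuristics; each trial costs O(2^k · edges), and repeated trials drive the failure probability toward zero:

```python
import random

def has_path_on_k_vertices(graph, k, trials=1000, seed=7):
    """Color-coding search for a simple path on k vertices.
    A 'colorful' path uses k distinct colors, hence k distinct vertices,
    so finding one certifies a simple path of the requested length."""
    rng = random.Random(seed)
    vertices = list(graph)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in vertices}
        # color sets of colorful paths currently ending at each vertex
        reachable = {v: {frozenset([color[v]])} for v in vertices}
        for _ in range(k - 1):                 # grow paths one edge at a time
            grown = {v: set() for v in vertices}
            for u in vertices:
                for used in reachable[u]:
                    for w in graph[u]:
                        if color[w] not in used:
                            grown[w].add(used | {color[w]})
            reachable = grown
        if any(len(s) == k for sets in reachable.values() for s in sets):
            return True
    return False

# a 5-vertex path has a path on 5 vertices; a star has no path on 4
path_graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
star_graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

Tracking color subsets instead of vertex subsets is what makes the search tractable: the state space is 2^k per vertex rather than exponential in the network size.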
FAFNER - a fully 3-D neutral beam injection code using Monte Carlo methods
Lister, G.G.
1985-01-01
A computer code is described which models the injection of fast neutral particles into 3-dimensional toroidal plasmas and follows the paths of the resulting fast ions until they are either lost to the system or fully thermalised. A comprehensive model of the neutral beam injection system is included. The code is written especially for use on the CRAY-1 computer; in particular, the modular nature of the program should enable the most time-consuming sections to be vectorised for each particular experiment to be modelled. The effects of plasma contamination by possible injection of impurities, such as oxygen, with the beams are also included. The code may also be readily adapted to plasmas for which a 1- or 2-dimensional description is adequate. It has also been constructed with a view to ready coupling with a transport or equilibrium code. (orig.)
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
Bravenec, Ronald [Fourth State Research, Austin, TX (United States)
2017-11-14
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
Huang, Wen; Koric, Seid; Yu, Xin; Hsia, K Jimmy; Li, Xiuling
2014-11-12
Micro- and nanoscale tubular structures can be formed by strain-induced self-rolled-up nanomembranes. Precision engineering of the shape and dimension determines the performance of devices based on this platform for electronic, optical, and biological applications. A transient quasi-static finite element method (FEM) with moving boundary conditions is proposed as a general approach to design diverse types of three-dimensional (3D) rolled-up geometries. This method captures the dynamic release process of membranes through etching driven by mismatch strain and accurately predicts the final dimensions of rolled-up structures. Guided by the FEM modeling, experimental demonstration using silicon nitride membranes was achieved with unprecedented precision including controlling fractional turns of a rolled-up membrane, anisotropic rolling to form helical structures, and local stress control for 3D hierarchical architectures.
In vitro toxicity of FemOn, FemOn-SiO2 composite, and SiO2-FemOn core-shell magnetic nanoparticles
Toropova YG
2017-01-01
of uncoated, FemOn-SiO2 composite flake-like, and SiO2-FemOn core-shell IONPs on cell viability, function, and morphology were tested 48 h postincubation in human umbilical vein endothelial cell culture. Cell viability and apoptosis/necrosis rates were determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay and an annexin V-phycoerythrin kit, respectively. Cell morphology was evaluated using bright-field microscopy and forward and lateral light scattering profiles obtained with flow cytometry analysis. All tested IONP types were used at three different doses, that is, 0.7, 7.0, and 70.0 µg. Dose-dependent changes in cell morphology, viability, and apoptosis rate were shown. At higher doses, all types of IONPs caused formation of binucleated cells, suggesting impaired cytokinesis. FemOn-SiO2 composite flake-like and SiO2-FemOn core-shell IONPs were characterized by similar profiles of cytotoxicity, whereas bare IONPs were shown to be less toxic. The presence of either a silica core or silica nanoflakes in composite IONPs can promote cytotoxic effects. Keywords: iron oxide nanoparticles, composite nanoparticles, silica coating, silica nanoflakes, cytotoxicity
Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.
1986-11-01
COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods.
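The finite-volume idea can be illustrated on a 1-D steady conduction analogue: integrate the flux balance over each control volume and solve the resulting algebraic system. This is a hedged sketch only, not the COBRA-SFS equations, which also include convection, radiation, and fluid coupling:

```python
import numpy as np

def solve_steady_conduction(n, L, k, T_left, T_right):
    """Finite-volume solution of steady 1-D conduction d/dx(k dT/dx) = 0
    with fixed end temperatures; illustrative sketch only."""
    dx = L / n
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        # conductances to the west/east neighbours; boundary half-cells
        # see twice the conductance because the wall sits at dx/2
        aW = k / dx if i > 0 else 2 * k / dx
        aE = k / dx if i < n - 1 else 2 * k / dx
        A[i, i] = aW + aE
        if i > 0:
            A[i, i - 1] = -aW
        else:
            b[i] += aW * T_left
        if i < n - 1:
            A[i, i + 1] = -aE
        else:
            b[i] += aE * T_right
    return np.linalg.solve(A, b)

# ten cells between 400 K and 300 K walls: exact linear profile
T = solve_steady_conduction(n=10, L=1.0, k=5.0, T_left=400.0, T_right=300.0)
```

For constant conductivity the discrete solution reproduces the exact linear profile at the cell centers.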
Dattoli, Giuseppe
2005-01-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hardly achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...
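The exponential-operator idea can be sketched in a few lines: for a linear evolution equation df/dt = A f, one step of the integrator is f <- exp(A*dt) f. The following is a minimal Python sketch with a small skew-symmetric generator (an assumption for illustration; the actual code works in C++ with operator splittings for the non-linear wake-field terms):

```python
import numpy as np
from scipy.linalg import expm

# Skew-symmetric generator of rotations in a 2-D phase space;
# exp(A*t) is then orthogonal, so the "beam" norm is preserved.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def evolve(state, t, steps):
    """Advance `state` by repeatedly applying the one-step
    evolution operator exp(A*dt), computed numerically."""
    dt = t / steps
    U = expm(A * dt)
    for _ in range(steps):
        state = U @ state
    return state

x0 = np.array([1.0, 0.0])
x = evolve(x0, t=np.pi / 2, steps=100)   # quarter turn in phase space
```

For this generator exp(A*t) is a rotation by t, so a quarter period maps (1, 0) to (0, -1) while preserving the norm.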
Thermal hydraulic calculation of wire-wrapped bundles using a finite element method. Thesee code
Rouzaud, P.; Gay, B.; Verviest, R.
1981-07-01
The physical and mathematical models used in the THESEE code now under development by the CEA/CEN Cadarache are presented. The objective of this code is to predict the fine three-dimensional temperature field in the sodium in a wire-wrapped rod bundle. Numerical results of THESEE are compared with measurements obtained by Belgonucleaire in 1976 in a sodium-cooled seven-rod bundle
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
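The core CADIS arithmetic can be illustrated on a discretized source: the biased source is proportional to the adjoint-weighted source, and the weight-window centers are set so that a born particle's weight times its adjoint importance is constant. A hedged sketch with toy arrays (the real implementations in ADVANTG/MAVRIC work on space-energy meshes from Denovo adjoint solutions):

```python
import numpy as np

def cadis_parameters(q, phi_adj):
    """Given a discretized forward source q and adjoint flux phi_adj,
    return the CADIS biased source pdf q_hat, the weight-window
    centers w, and the estimated response R. Toy arithmetic only."""
    R = np.sum(phi_adj * q)      # adjoint-weighted source = response estimate
    q_hat = phi_adj * q / R      # biased source pdf (sums to 1)
    w = R / phi_adj              # birth weight of particles sampled from q_hat
    return q_hat, w, R

q = np.array([0.5, 0.3, 0.2])          # hypothetical source cells
phi_adj = np.array([2.0, 1.0, 4.0])    # hypothetical adjoint fluxes
q_hat, w, R = cadis_parameters(q, phi_adj)
```

The consistency property is easy to verify: q_hat * w recovers the unbiased source q, so biasing the source and setting the weight windows from the same adjoint keeps the game fair.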
A fuel performance code TRUST VIc and its validation
Ishida, M; Kogai, T [Nippon Nuclear Fuel Development Co. Ltd., Oarai, Ibaraki (Japan)
1997-08-01
This paper describes a fuel performance code, TRUST V1c, developed to analyze the thermal and mechanical behavior of LWR fuel rods. Submodels in the code include FP gas models depicting gaseous swelling, gas release from the pellet, and axial gas mixing. The code has an FEM-based structure to handle the interaction between the thermal and mechanical submodels brought about by the gas models. The code is validated against irradiation data of fuel centerline temperature, FGR, pellet porosity and cladding deformation. (author). 9 refs, 8 figs.
Noh, J. M.; Yoo, J. W.; Joo, H. K.
2004-01-01
In this study, we devised a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method and developed an object-oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions to reduce the large coupled linear system into several small single equations. This method requires no additional technique to accelerate the iteration process for solving the inter-nodal coupled equations, since the derived equations can automatically act as the coarse-mesh rebalance equations. By utilizing object-oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, dynamic memory allocation, and operator overloading, we developed an object-oriented nodal code that facilitates input/output and dynamic memory control, and makes maintenance easy. (authors)
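The even/odd splitting at the heart of the component decomposition can be illustrated in one dimension: any function splits uniquely into an even and an odd part, which is what decouples the large coupled system. A minimal 1-D sketch (the method applies this per direction in three dimensions):

```python
def even_odd_decompose(f):
    """Split a 1-D function f(x) into its even and odd components,
    f(x) = f_even(x) + f_odd(x). 1-D analogue of the component
    decomposition of an intra-nodal flux expansion."""
    f_even = lambda x: 0.5 * (f(x) + f(-x))
    f_odd = lambda x: 0.5 * (f(x) - f(-x))
    return f_even, f_odd

# quadratic + linear: even part keeps 1 + 3x^2, odd part keeps 2x
f = lambda x: 1.0 + 2.0 * x + 3.0 * x ** 2
fe, fo = even_odd_decompose(f)
```

The two components sum back to the original function, and each satisfies its own smaller relation, which is the mechanism that reduces the coupled system to several small single equations.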
Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R
2008-05-15
A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min
2018-03-01
Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
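A toy version of carrier-to-code leveling can be written down directly: the precise but ambiguous geometry-free phase is shifted by the arc-averaged phase-plus-code offset, yielding a low-noise ionospheric observable biased by the (here assumed constant) DCB. This sketch uses simulated observables; MCCL itself additionally estimates epoch-wise rDCB offsets, which this sketch does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
iono = 5.0 + 0.01 * np.arange(n)     # slowly varying slant ionosphere (m)
dcb = 1.3                            # combined DCB, held constant here
ambiguity = -7.2                     # phase arc ambiguity, constant per arc

P_gf = iono + dcb + rng.normal(0, 0.3, n)   # noisy geometry-free code
L_gf = -iono + ambiguity                    # precise geometry-free phase

# Carrier-to-code leveling: the arc average of (code + phase) isolates
# the constant (DCB + ambiguity); subtracting it from the flipped phase
# gives a low-noise ionospheric observable biased only by the DCB.
level = np.mean(P_gf + L_gf)
iono_leveled = -L_gf + level
```

The leveled series equals the true ionosphere plus the DCB plus a single arc-constant noise average, which is why short-term rDCB variability (the subject of the MCCL modification) shows up as an error in standard CCL.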
Hong, S.Y.; Yeater, M.L.
1985-01-01
This paper discusses stress intensity factor calculations and fatigue analysis for a PWR primary coolant piping system. The influence function method is applied to evaluate ASME Code Section XI Appendix A "analysis of flaw indication" for application to a PWR primary piping. Results of the analysis are discussed in detail. (orig.)
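The influence-function evaluation reduces to a weighted integral of the stress profile over the flaw, K = integral of m(x, a) * sigma(x) over the crack depth. A hedged sketch using the textbook centre-crack weight function as a stand-in (the Appendix A procedure uses tabulated influence functions for the actual pipe geometry; the function names here are illustrative):

```python
import numpy as np

def sif_influence(sigma, a, n=2000):
    """Mode-I stress intensity factor from an influence (weight) function:
        K = int_0^a m(x, a) * sigma(x) dx
    with the centre-crack weight function
        m(x, a) = 2 / sqrt(pi*a) / sqrt(1 - (x/a)^2).
    The substitution x = a*sin(t) removes the tip singularity, since
    the 1/sqrt(1-(x/a)^2) factor cancels against the Jacobian a*cos(t)."""
    t = np.linspace(0.0, np.pi / 2, n)
    x = a * np.sin(t)
    integrand = (2.0 / np.sqrt(np.pi * a)) * sigma(x) * a
    dt = t[1] - t[0]
    # trapezoidal rule on the regularized integrand
    return float(np.sum(integrand[:-1] + integrand[1:]) * 0.5 * dt)

# uniform 100 MPa membrane stress over a 20 mm flaw
K = sif_influence(lambda x: 100.0 + 0.0 * x, a=0.02)
```

For uniform stress the result reproduces the closed form K = sigma * sqrt(pi*a), a quick sanity check before applying a nonuniform through-wall stress profile.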
Basombrio, F.G.; Sanchez Sarmiento, G.
1978-01-01
A general code for solving two-dimensional thermo-elastoplastic problems in geometries of arbitrary shape using the finite element method is presented. The initial stress incremental procedure was adopted for given histories of load and temperature. Some classical applications are included. (Auth.)
Zmijarevic, I; Tomashevic, Dj [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)
1988-07-01
This paper presents Chebyshev acceleration of the outer iterations of a nodal diffusion code of high accuracy. Extrapolation parameters, unique for all moments, are calculated using the node-integrated distribution of the fission source. Sample calculations are presented indicating the efficiency of the method. (author)
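Chebyshev acceleration of outer iterations can be sketched with the classical semi-iterative recurrence for a fixed-point problem x = Gx + c, a stand-in for the fission-source iteration (the extrapolation parameters omega play the role of the parameters described above; assumes the iteration matrix has real eigenvalues in [-rho, rho]):

```python
import numpy as np

def chebyshev_solve(G, c, x0, rho, iters):
    """Chebyshev semi-iterative acceleration of x = G x + c,
    assuming the eigenvalues of G are real and lie in [-rho, rho].
    omega_1 = 1, omega_2 = 1/(1 - rho^2/2),
    omega_{k+1} = 1/(1 - rho^2 * omega_k / 4)."""
    x_prev = x0.copy()
    x = G @ x0 + c                  # first sweep is a plain iteration
    omega = 1.0
    for k in range(2, iters + 1):
        if k == 2:
            omega = 1.0 / (1.0 - 0.5 * rho ** 2)
        else:
            omega = 1.0 / (1.0 - 0.25 * rho ** 2 * omega)
        x_new = omega * (G @ x + c - x_prev) + x_prev
        x_prev, x = x, x_new
    return x

G = np.array([[0.0, 0.9], [0.9, 0.0]])   # spectral radius 0.9
c = np.array([1.0, 0.5])
x = chebyshev_solve(G, c, np.zeros(2), rho=0.9, iters=50)
```

With a dominance ratio of 0.9, plain iteration converges like 0.9^k whereas the Chebyshev recurrence converges like roughly 0.63^k, the kind of speed-up that motivates accelerating the outer iterations.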
DYNA3D2000*, Explicit 3-D Hydrodynamic FEM Program
Lin, J.
2002-01-01
1 - Description of program or function: DYNA3D2000 is a nonlinear explicit finite element code for analyzing 3-D structures and solid continua. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many material models are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single-surface contact and automatic contact generation. 2 - Method of solution: Discretization of a continuous model transforms partial differential equations into algebraic equations. A numerical solution is then obtained by solving these algebraic equations through a direct time-marching scheme. 3 - Restrictions on the complexity of the problem: Recent software improvements, including dynamic memory allocation and a very large format description, have eliminated most user-identified limitations and pushed potential problem sizes beyond the reach of most users. The dominant restrictions remain code execution speed and robustness, which the developers constantly strive to improve.
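The direct time-marching idea of item 2 can be sketched for a single degree of freedom with the explicit central-difference scheme (illustrative only; DYNA3D applies this to the assembled, generally nonlinear FEM system, with the time step bounded by the element stability limit):

```python
import numpy as np

def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central-difference time marching for m*x'' + k*x = 0:
        x_{n+1} = 2 x_n - x_{n-1} + dt^2 * a_n,
    started with a Taylor-series fictitious previous step."""
    xs = [x0]
    a0 = -k * x0 / m
    x_prev = x0 - dt * v0 + 0.5 * dt ** 2 * a0
    x = x0
    for _ in range(steps):
        a = -k * x / m
        x_next = 2.0 * x - x_prev + dt ** 2 * a
        x_prev, x = x, x_next
        xs.append(x)
    return np.array(xs)

# natural frequency 2*pi -> period 1.0; dt well below the stability limit
xs = central_difference(m=1.0, k=(2 * np.pi) ** 2,
                        x0=1.0, v0=0.0, dt=1e-3, steps=1000)
```

After exactly one period the displacement returns very close to its initial value, and the amplitude stays bounded, which illustrates why the explicit scheme is attractive as long as the step satisfies the stability limit.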
Electrical performance analysis of HTS synchronous motor based on 3D FEM
Baik, S.K.; Kwon, Y.K.; Kim, H.M.; Lee, J.D.; Kim, Y.C.; Park, G.S.
2010-01-01
A 1-MW class superconducting motor with a High-Temperature Superconducting (HTS) field coil is analyzed and tested. This machine is a prototype intended to demonstrate applicability to generators and industrial motors such as blowers, pumps and compressors installed in large plants. The machine has an HTS field coil made of Bi-2223 HTS wire and conventional copper armature (stator) coils cooled by water. The 1-MW class HTS motor is analyzed by 3D electromagnetic Finite Element Method (FEM) to obtain the magnetic field distribution, self and mutual inductances, and so forth. In particular, the excitation voltage (back EMF) is estimated using the mutual inductance between the armature and field coils and compared with the experimental result. Open- and short-circuit tests were conducted in generator mode while a 1.1-MW rated induction machine rotated the HTS machine. Electrical parameters such as mutual inductance and synchronous inductance are deduced from these tests and also compared with the analysis results from FEM.
Investigation of the fittest shear transfer model used to FEM analysis of RC structures
Endo, Tatumi; Aoyagi, Masao; Endo, Takao
1988-01-01
In order to rationalize the design method of reinforced concrete (RC) structures in nuclear power plants, a structural analysis able to simulate the seismic behavior of RC structures should be established. In this report, shear transfer models at the shear plane to be applied in FEM analysis are investigated. The main conclusions, within the limits of the study, are as follows. 1. Development of shear transfer models at the shear plane: 1) Two shear transfer models are developed for use in 2-dimensional nonlinear FEM analysis. 2) In the first model, reinforcements are modeled by plate elements, and not only the nonlinearity of the concrete surrounding the reinforcement but also the bond-slip relation between concrete and reinforcement is considered. 3) In the second model, reinforcements are modeled by equivalent concrete properties, in which the axial rigidity and dowel effects of the reinforcements are considered. 2. Verification of the suggested models: 1) It is confirmed that the computational results using the above-mentioned models simulate the experimental ones fairly well. 2) Considering application to the analysis of RC structures in design, the model in which reinforcements are modeled by equivalent concrete properties is useful from the viewpoint of accuracy and simplicity. (author)
Androsenko, A.A.; Androsenko, P.A.; Kagalenko, I.Eh.; Mironovich, Yu.N.
1992-01-01
Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte-Carlo method, taking into account data on the adjoint transport equation solution. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free path length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1-approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by the solution of two model problems. 4 refs.; 2 tabs
Grant, C.R.
1981-01-01
Representing a reactor in a simulation with a diffusion code can take a considerable amount of memory and processing time, including areas in which nuclear and geometrical properties are invariant, such as the reflector, water columns, etc. To avoid an explicit representation of these zones, a method employing a matrix was developed, consisting in expressing the net currents of each group as a function of the total flux. Estimates are made for different geometries and introduced into the PUMA diffusion code. Several tests proved a very sound reliability of the results obtained in 2 and 5 groups. (author)
Huffhines, Lindsay; Tunno, Angela M.; Cho, Bridget; Hambrick, Erin P.; Campos, Ilse; Lichty, Brittany; Jackson, Yo
2016-01-01
State social service agency case files are a common mechanism for obtaining information about a child's maltreatment history, yet these documents are often challenging for researchers to access, and then to process in a manner consistent with the requirements of social science research designs. Specifically, accessing and navigating case files is an extensive undertaking, and a task that many researchers have had to manage with little guidance. Even after the files are in hand and the research questions and relevant variables have been clarified, case file information about a child's maltreatment exposure can be idiosyncratic, vague, inconsistent, and incomplete, making coding such information into useful variables for statistical analyses difficult. The Modified Maltreatment Classification System (MMCS) is a popular tool used to guide the process, and though comprehensive, this coding system cannot cover all idiosyncrasies found in case files. It is not clear from the literature how researchers implement this system while accounting for issues outside the purview of the MMCS or that arise during MMCS use. Finally, a large yet reliable file coding team is essential to the process; however, the literature lacks training guidelines and methods for establishing reliability between coders. In an effort to move the field toward a common approach, the purpose of the present discussion is to detail the process used by one large-scale study of child maltreatment, the Studying Pathways to Adjustment and Resilience in Kids (SPARK) project, a longitudinal study of resilience in youth in foster care. The article addresses each phase of case file coding, from accessing case files, to identifying how to measure constructs of interest, to dealing with exceptions to the coding system, to coding variables reliably, to training large teams of coders and monitoring for fidelity. Implications for a comprehensive and efficient approach to case file coding are discussed.
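Reliability between coders is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. This is an assumed choice for illustration (teams may instead use weighted kappa or intraclass correlations for ordinal codes); the example labels are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # chance agreement from each coder's marginal label frequencies
    expected = sum(freq_a[lab] * freq_b[lab]
                   for lab in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

a = ["neglect", "physical", "neglect", "sexual", "neglect", "physical"]
b = ["neglect", "physical", "physical", "sexual", "neglect", "physical"]
kappa = cohens_kappa(a, b)
```

Values near 1 indicate agreement well beyond chance; monitoring kappa per coder pair over time is one way to implement the fidelity checks described above.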
Asayama, Tai
2003-03-01
For the commercialization of fast breeder reactors, the 'System Based Code', a completely new scheme for a code on structural integrity, is being developed. One of the distinguishing features of the System Based Code is that it is able to determine a reasonable total margin for a structure or system by allowing the exchange of margins between various technical items. Detailed estimation of the failure probability of a given combination of technical items and its comparison with a target value is one way to achieve this. However, simpler and easier methods that allow margin exchange without detailed calculation of failure probability are desirable in design. The authors have developed simplified methods such as the 'design factor method' from this viewpoint. This report describes a 'Vector Method', which has been newly developed. The following points are reported: 1) The Vector Method allows margin exchange evaluation on an 'equi-quality assurance plane' using vector calculation. Evaluation is easy and sufficient accuracy is achieved. The equi-quality assurance plane is obtained by a projection of an 'equi-failure probability surface' in an n-dimensional space, which is calculated beforehand for typical combinations of design variables. 2) The Vector Method is considered to give the 'Quality Assurance Index Method' a probabilistic interpretation. 3) An algebraic method is proposed for the calculation of failure probabilities, which is necessary to obtain an equi-failure probability surface. This method calculates failure probabilities without using numerical methods such as Monte Carlo simulation or numerical integration. Under limited conditions, this method is quite effective compared to numerical methods. 4) An illustration of the procedure of margin exchange evaluation is given. It may be possible to use this method to optimize ISI plans even if it is not fully implemented in the System Based Code. (author)
U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward
Brunett, A. J.; Fanning, T. H.
2017-06-26
The United States has extensive experience with the design, construction, and operation of sodium-cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of this rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelop all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that changing the source spacing has an obvious influence on the depth of investigation and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of low- and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy, and is suitable for simulating the response of resistivity LWD tools to guide geosteering.
Zhang, Ziyu; Jiang, Wen; Dolbow, John E.; Spencer, Benjamin W.
2018-01-01
We present a strategy for the numerical integration of partial elements with the eXtended finite element method (X-FEM). The new strategy is specifically designed for problems with propagating cracks through a bulk material that exhibits inelasticity. Following a standard approach with the X-FEM, as the crack propagates new partial elements are created. We examine quadrature rules that have sufficient accuracy to calculate stiffness matrices regardless of the orientation of the crack with respect to the element. This permits the number of integration points within elements to remain constant as a crack propagates, and for state data to be easily transferred between successive discretizations. In order to maintain weights that are strictly positive, we propose an approach that blends moment-fitted weights with volume-fraction based weights. To demonstrate the efficacy of this simple approach, we present results from numerical tests and examples with both elastic and plastic material response.
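The blending of moment-fitted and volume-fraction weights can be sketched in one dimension: keep the original Gauss points, fit weights so that low-order monomials integrate exactly over the partial element, then blend with volume-fraction-scaled parent weights to keep all weights positive. This is a simplified 1-D sketch under stated assumptions (two parent Gauss points, a straight cut), not the paper's scheme for arbitrary crack geometries:

```python
import numpy as np

def blended_weights(nodes, cut, alpha=0.5):
    """Quadrature weights on the partial element [-1, cut] of the
    parent element [-1, 1], keeping the two parent Gauss points:
    moment-fitted weights blended with volume-fraction weights.
    Assumes len(nodes) == 2 (2-point Gauss, parent weights 1, 1)."""
    n = len(nodes)
    degs = np.arange(n)
    # exact moments of 1, x, ... over the partial element [-1, cut]
    moments = (cut ** (degs + 1) - (-1.0) ** (degs + 1)) / (degs + 1)
    V = np.vander(nodes, n, increasing=True).T   # V[j, i] = nodes[i]**j
    w_moment = np.linalg.solve(V, moments)
    # parent Gauss weights scaled by the cut volume fraction
    w_volfrac = np.array([1.0, 1.0]) * (cut + 1.0) / 2.0
    return alpha * w_moment + (1.0 - alpha) * w_volfrac

g = 1.0 / np.sqrt(3.0)
nodes = np.array([-g, g])
w = blended_weights(nodes, cut=0.5, alpha=1.0)   # pure moment fitting
```

With alpha = 1 the rule integrates constants and linears exactly over the partial element; smaller alpha trades some accuracy for weights guaranteed closer to the strictly positive volume-fraction rule.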
Comparison of the THYC and FLICA-3M codes by the pseudo-cubic thin-plate method
Banner, D.; Crecy, F. de.
1993-06-01
The pseudo cubic Spline method (PCSM) is a statistical tool developed by the CEA. It is designed to analyse experimental points and in particular thermalhydraulic data. Predictors of the occurrence of critical heat flux are obtained by using Spline functions. In this paper, predictors have been computed from the same CHF databases by using two different flow analyses to derive local thermal-hydraulic variables at the CHF location. In fact, CEA's FLICA-3M represents rod bundles by interconnected subchannels whereas EDF's THYC code uses a porous 3D approach. In a first step, the PCSM is briefly presented as well as the two codes studied here. Then, the comparison methodology is explained in order to prove that advanced analysis of thermalhydraulic codes can be achieved with the PCSM. (authors). 6 figs., 2 tabs., 5 refs
Implementation of an implicit method into heat conduction calculation of TRAC-PF1/MOD2 code
Akimoto, Hajime; Abe, Yutaka; Ohnuki, Akira; Murao, Yoshio
1990-08-01
A two-dimensional unsteady heat conduction equation is solved in the TRAC-PF1/MOD2 code to calculate temperature transients in the fuel rod. A large CPU time is often required to obtain a stable solution of temperature transients in the TRAC calculation with a small axial node size (less than 1.0 mm), because the heat conduction equation is discretized explicitly. To eliminate the restriction of the maximum time step size by the heat conduction calculation, an implicit method for solving the heat conduction equation was developed and implemented into the TRAC code. Several assessment calculations were performed with the original and modified TRAC codes. It is confirmed that the implicit method is reliable and is successfully implemented into the TRAC code, through comparison with theoretical solutions and assessment calculation results. It is demonstrated that the implicit method makes the heat conduction calculation practical even for analyses of temperature transients with an axial node size of less than 0.1 mm. (author)
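The benefit of the implicit discretization is easy to demonstrate on a 1-D analogue: backward Euler with a tridiagonal (Thomas) solve remains stable for time steps far beyond the explicit limit dt <= dx^2/(2*alpha). A sketch only, not the TRAC model, which is two-dimensional and coupled to the fluid:

```python
import numpy as np

def implicit_step(T, alpha, dx, dt, T_left, T_right):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 with fixed
    end temperatures, solved with the Thomas (tridiagonal) algorithm.
    Unconditionally stable: dt is not limited by the node size dx."""
    n = len(T)
    r = alpha * dt / dx ** 2
    a = np.full(n, -r)              # sub-diagonal
    b = np.full(n, 1.0 + 2.0 * r)   # diagonal
    c = np.full(n, -r)              # super-diagonal
    d = T.copy()
    d[0] += r * T_left
    d[-1] += r * T_right
    for i in range(1, n):           # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)                 # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# dt = 10 s with dx = 0.1 mm: the explicit limit would be ~0.5 ms
T = np.full(20, 500.0)
for _ in range(50):
    T = implicit_step(T, alpha=1e-5, dx=1e-4, dt=10.0,
                      T_left=600.0, T_right=300.0)
```

Despite a mesh ratio r of order 10^4, the solution stays bounded and relaxes to the exact linear steady profile between the two wall temperatures.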
Solution of charged particle transport equation by Monte-Carlo method in the BRANDZ code system
Artamonov, S.N.; Androsenko, P.A.; Androsenko, A.A.
1992-01-01
Consideration is given to the use of the Monte-Carlo method for the solution of the charged particle transport equation and to its implementation in the BRANDZ code system under the conditions of real 3D geometry, using all the data available on radiation-to-matter interaction in multicomponent and multilayer targets. For the implantation problem, BRANDZ results are compared with experiments and with calculations by other codes for complex systems. The results of direct nuclear pumping process simulation for laser-active media by a proton beam are also included. 4 refs.; 7 figs
User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code
Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.
2003-01-01
This manual relates to Version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronic calculations (fissile media, criticality or sub-criticality studies). This makes it possible to calculate k_eff (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows for a point-wise description in energy of cross-sections as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for the multi-group description. (authors)
Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method
Cadinu, F.; Kozlowski, T.; Dinh, T.N.
2007-01-01
Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (e.g. in the downcomer or lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analyses of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multi-scale method (HMM)
Kalkkuhl, J; Hunt, K J; Fritz, H
1999-01-01
A finite element method (FEM)-based neural-network approach to Nonlinear AutoRegressive with eXogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback linearizing control techniques. Its use with a two-degrees-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling of the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.
Prediction of flow-induced dynamic stress in an axial pump impeller using FEM
Gao, J Y; Hou, Y S; Xi, S Z; Cai, Z H; Yao, P P; Shi, H L
2013-01-01
Axial pumps play an important role in water supply and flood control projects. Along with growing requirements for high reliability and large capacity, the dynamic stress of axial pumps has become a key problem. Unsteady flow is a significant cause of structural dynamic stress in a pump. This paper reports a flow-induced dynamic stress simulation in an axial pump impeller at three flow conditions using an FEM code. The pressure pulsation obtained from a flow simulation with a CFD code was set as the force boundary condition. The results show that the maximum stress of the impeller appeared at the joint between the blade and the root flange, near either the trailing edge or the leading edge. The dynamic stress of these two zones was investigated under three flow conditions (0.8Q_d, 1.0Q_d, 1.1Q_d) in the time and frequency domains. The stress frequencies at the zones of maximum stress are 22.9 Hz and 37.5 Hz, as the fundamental frequencies and their harmonics. The fundamental frequencies are nearly equal to the vane passing frequency (22.9 Hz) and 3 times the blade passing frequency (37.5 Hz). The first dominant frequency at the zones of maximum stress is equal to the vane passing frequency due to rotor-stator interaction between the vane and the blade. This study should be helpful for axial pumps in reducing stress, improving structural design and extending fatigue life.
Video coding and decoding devices and methods preserving ppg relevant information
2013-01-01
The present invention relates to a video encoding device (10) for encoding video data and a corresponding video decoding device, wherein during decoding PPG relevant information shall be preserved. For this purpose the video coding device (10) comprises a first encoder (20) for encoding input video
Development of flow network analysis code for block type VHTR core by linear theory method
Lee, J. H.; Yoon, S. J.; Park, J. W.; Park, G. C.
2012-01-01
The VHTR (Very High Temperature Reactor) is a high-efficiency nuclear reactor that is capable of generating hydrogen thanks to its high coolant temperature. A PMR (Prismatic Modular Reactor) type reactor consists of hexagonal prismatic fuel blocks and reflector blocks. The flow paths in the prismatic VHTR core consist of coolant holes, bypass gaps and cross gaps. Complicated flow paths are formed in the core, since the coolant holes and bypass gaps are connected by the cross gaps. The distributed coolant is mixed in the core through the cross gaps, so the flow characteristics cannot be modeled as a simple parallel pipe system. It requires a lot of effort and takes a very long time to analyze the core flow with CFD analysis. Hence, it is important to develop a code for VHTR core flow which can predict the core flow distribution quickly and accurately. In this study, a steady-state flow network analysis code was developed using a flow network algorithm. The developed flow network analysis code was named FLASH, and it was validated against experimental data and CFD simulation results. (authors)
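The core idea of a linearized flow network solver can be sketched for the simplest possible network, parallel channels sharing one pressure drop (a hypothetical illustration of the general technique, not the FLASH algorithm itself; the loss coefficients are made up):

```python
import numpy as np

def split_parallel(K, n, Q_total, iters=60):
    """Linear-theory sweep for the flow split among parallel channels that
    share one pressure drop dp = K_i * Q_i**n. Each sweep linearizes the
    loss as (K_i*Q_i**(n-1))*Q_i around the previous iterate, solves for a
    common dp that conserves total flow, and damps the update (averaging
    successive iterates is the usual cure for oscillation)."""
    K = np.asarray(K, dtype=float)
    Q = np.full(len(K), Q_total / len(K))   # initial equal split
    for _ in range(iters):
        G = 1.0 / (K * Q ** (n - 1))        # linearized conductances
        dp = Q_total / G.sum()              # common pressure drop
        Q = 0.5 * (Q + G * dp)              # damped update
    return Q, dp

# Three hypothetical parallel channels with quadratic (n = 2) losses
Q, dp = split_parallel(K=[1.0, 2.0, 4.0], n=2.0, Q_total=1.0)
```

At convergence every channel sees the same pressure drop while the channel flows sum to the imposed total, which is exactly the balance a network code enforces across coolant holes, bypass gaps, and cross gaps.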
Mohammadnia Meysam
2013-01-01
The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve for the intra-nodal flux analytically. Then, a computer code, named MA.CODE, was developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Some of the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e. AER-FCM-101 and AER-FCM-001.
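The effective multiplication factor such a code computes is the dominant eigenvalue of a fission-source problem, typically found by power iteration; a toy two-node sketch (the matrices below are illustrative numbers, not WIMS-generated group constants):

```python
import numpy as np

# Toy two-node migration (M) and fission (F) operators; k_eff is the
# dominant eigenvalue of inv(M) @ F.
M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
F = np.array([[1.5, 0.0],
              [0.0, 1.5]])

def power_iteration_keff(M, F, iters=200):
    """Fission-source power iteration: repeatedly solve
    M @ phi_new = F @ phi / k, then rescale k by the ratio of
    successive fission-source totals."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(iters):
        phi_new = np.linalg.solve(M, F @ phi / k)
        k *= (F @ phi_new).sum() / (F @ phi).sum()
        phi = phi_new / np.linalg.norm(phi_new)
    return k, phi

k, phi = power_iteration_keff(M, F)
```

For these matrices the dominant eigenvalue of inv(M) @ F is 1.5, so the iteration converges to k_eff = 1.5 with a symmetric flux shape.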
Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics
Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.
1995-01-01
We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior, while the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
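The n-gram entropy measurement is easy to reproduce in outline (a generic sketch; the short toy sequences below merely stand in for the GenBank data):

```python
import math
from collections import Counter

def ngram_entropy(seq, n):
    """Shannon entropy (bits) of the n-gram distribution of a symbol
    sequence. At fixed n, lower entropy means higher redundancy, the
    quantity used to contrast coding and noncoding regions."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive ("redundant") toy sequence scores lower than a mixed one
rep = "ATATATATATATATAT"
mix = "ATCGGATCCGATACGT"
```

On real data the analysis is run for a range of n and on much longer windows, with finite-sample corrections to the entropy estimate.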
Ionescu , Irina; Moës , Nicolas; Cartraud , Patrice; Béringhier , Marianne
2007-01-01
The advances of material characterization by means of imaging techniques require powerful computational methods for numerical analyses. This paper focuses on the advantages of coupling the X-FEM and level sets to solve microstructures with complex geometry. The level set information is obtained from a digital image and then used within an X-FEM computation, where the mesh does not need to conform to the material interface. An example of homogenization is presented.
Sarajaervi, U.; Cronvall, O. [VTT (Finland)]
2007-03-01
Fatigue is produced by cyclic application of stresses through mechanical or thermal loading. A metal subjected to fluctuating stress will fail at stresses much lower than those required to cause fracture in a single application of load. The key parameters are the range of stress variation and the number of its occurrences. Low-cycle fatigue, usually induced by mechanical and thermal loads, is distinguished from high-cycle fatigue, mainly associated with vibration or a high number of small thermal fluctuations. Numerical models describing the fatigue behaviour of austenitic stainless piping steels under cyclic loading, and their applicability to the modelling of low-cycle fatigue, are discussed in this report. In order to describe the cyclic behaviour of the material for analysis with the finite element method (FEM) based analysis code ABAQUS, the test data, i.e. stress-strain curves, have to be processed. A code to process the data throughout the test duration was developed within this study; a description of this code is also given in this report. Input data for ABAQUS were obtained to describe both kinematic and isotropic hardening properties. Further, by combining the result data for various strain amplitudes, a mathematical expression was created which allows defining a parameter surface for cyclic (i.e. isotropic) hardening. Input data for any strain amplitude within the range of minimum and maximum strain amplitudes of the test data can be assessed with the help of the developed 3D stress-strain surface presentation. The modelling of fatigue-induced initiation and growth of cracks was not considered in this study. On the other hand, a considerable part of the fatigue life of nuclear power plant (NPP) piping components is spent in the phase preceding the initiation and growth of cracks. (au)
Kawai, T.
Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896
Nodal DG-FEM solution of high-order Boussinesq-type equations
Engsig-Karup, Allan Peter; Hesthaven, Jan S.; Bingham, Harry B.
2006-01-01
We present a discontinuous Galerkin finite element method (DG-FEM) solution to a set of high-order Boussinesq-type equations for modelling highly nonlinear and dispersive water waves in one and two horizontal dimensions. The continuous equations are discretized using nodal polynomial basis functions of arbitrary order in space on each element of an unstructured computational domain. A fourth-order explicit Runge-Kutta scheme is used to advance the solution in time. Methods for introducing artificial damping to control mild nonlinear instabilities are also discussed. The accuracy and convergence of the model with both h (grid size) and p (order) refinement are verified for the linearized equations, and calculations are provided for two nonlinear test cases in one horizontal dimension: harmonic generation over a submerged bar; and reflection of a steep solitary wave from a vertical wall.
Popuri, Karteek; Cobzas, Dana; Esfandiari, Nina; Baracos, Vickie; Jägersand, Martin
2016-02-01
The proportions of muscle and fat tissues in the human body, referred to as body composition, constitute a vital measurement for cancer patients. Body composition has recently been linked to patient survival and the onset/recurrence of several types of cancers in numerous cancer research studies. This paper introduces a fully automatic framework for the segmentation of muscle and fat tissues from CT images to estimate body composition. We developed a novel finite element method (FEM) deformable model that incorporates a priori shape information via a statistical deformation model (SDM) within the template-based segmentation framework. The proposed method was validated on 1000 abdominal and 530 thoracic CT images, and we obtained very good segmentation results with Jaccard scores in excess of 90% for both the muscle and fat regions.
Thermal modal analysis of novel non-pneumatic mechanical elastic wheel based on FEM and EMA
Zhao, Youqun; Zhu, Mingmin; Lin, Fen; Xiao, Zhen; Li, Haiqing; Deng, Yaoji
2018-01-01
A combination of the Finite Element Method (FEM) and Experimental Modal Analysis (EMA) has been employed here to characterize the structural dynamic response of a mechanical elastic wheel (ME-Wheel) operating under a specific thermal environment. The influence of a high-temperature condition on the structural dynamic response of the ME-Wheel is investigated. The obtained results indicate that the EMA results are in accordance with those obtained using the proposed Finite Element (FE) model, indicating the high reliability of this FE model for modal analysis of the ME-Wheel working under a practical thermal environment. This demonstrates that the structural dynamic response of the ME-Wheel operating under a specific thermal condition can be predicted and evaluated using the proposed analysis method, which is beneficial for the dynamic optimization of the wheel structure, helping to avoid temperature-related tire vibration failure and to improve tire safety.
Generalized concatenated quantum codes
Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei
2009-01-01
We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
Analysis of piping systems by finite element method using code SAP-IV
Cizelj, L.; Ogrizek, D.
1987-01-01
Due to the extensive and multiple use of the computer code SAP-IV, we decided to install it on a VAX 11/750 machine. Installation required a large amount of programming due to great discrepancies between the CDC (the original program version) and the VAX. Testing was performed basically in the field of pipe elements, based on a comparison between results obtained with the codes PSAFE2, DOCIJEV, PIPESD and SAP-V. Besides this, a model of a reactor pressure vessel with 3-D thick shell elements was built. The capabilities show good agreement with the results of the other programs mentioned above. Along with the package installation, graphical postprocessors are being developed for mesh plotting. (author)
Review of solution approach, methods, and recent results of the RELAP5 system code
Trapp, J.A.; Ransom, V.H.
1983-01-01
The present RELAP5 code is based on a semi-implicit numerical scheme for the hydrodynamic model. The basic guidelines employed in the development of the semi-implicit numerical scheme are discussed and the numerical features of the scheme are illustrated by analysis for a simple, but analogous, single-equation model. The basic numerical scheme is recorded and results from several simulations are presented. The experimental results and code simulations are used in a complementary fashion to develop insights into nuclear-plant response that would not be obtained if either tool were used alone. Further analysis using the simple single-equation model is carried out to yield insights that are presently being used to implement a more-implicit multi-step scheme in the experimental version of RELAP5. The multi-step implicit scheme is also described
Du, Q.; Cen, Z.; Zhu, H.
1989-01-01
This paper reports on linear elastic fracture analysis based upon stress intensity factor evaluation, successfully applied to safety assessments of cracked structures. Nozzle junctions are usually subjected to high pressure and thermal loads simultaneously. Within the validity of linear elastic fracture analysis, K can be decomposed into K_P (caused by mechanical loads) and K_τ (caused by thermal loads). Under thermal transient loading, explicit analysis (say, by the FEM or BEM) of K, tracing an entire history for a range of crack depths, may be much more time consuming. Weight function techniques provide efficient means of transforming the problem into the stress computation of the uncracked structure and the generation of influence functions (for the given structure and crack size). In this paper, a combination of the BEM and FEM has been used for the analysis of the cracked nozzle structure via weight function techniques. The influence functions are obtained by the coupled BE-FEM, and the uncracked-structure stresses are computed by the finite element method.
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-09-01
Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using Solidworks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software. These stresses were compared between linear and non-linear analyses. For intrusive and lingual root torque movements, the distribution of stress over the PDL was within the range of optimal stress values as proposed by Lee, but exceeded the force system given by Proffit as optimal for orthodontic tooth movement with linear properties. When the same force load was applied in the non-linear analysis, stresses were higher than in the linear analysis and were beyond the optimal stress range as proposed by Lee for both intrusion and lingual root torque. To obtain the same stress as in the linear analysis, iterations were done using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.
[Application of Finite Element Method in Thoracolumbar Spine Traumatology].
Zhang, Min; Qiu, Yong-gui; Shao, Yu; Gu, Xiao-feng; Zeng, Ming-wei
2015-04-01
The finite element method (FEM) is a mathematical technique using modern computer technology for stress analysis, and has been gradually used in simulating human body structures in the biomechanical field, especially more widely used in the research of thoracolumbar spine traumatology. This paper reviews the establishment of the thoracolumbar spine FEM, the verification of the FEM, and the thoracolumbar spine FEM research status in different fields, and discusses its prospects and values in forensic thoracolumbar traumatology.
Multidimensional method of spatially coupled approximation to the transverse leakage in nodal codes
Jatuff, F.E.
1990-01-01
A natural extension of the polynomial expansion programmed in the RHENO code is presented, which adds to the sum of variable-order one-dimensional functions a number of terms that represent production functions. These new terms, which provide a direct determination of the transverse leakages, are calculated from the new coupling variables among nodes: the 4 fluxes at the rectangle vertices (two-dimensional Cartesian geometry) or the 12 fluxes half-way along the parallelepiped edges (three-dimensional Cartesian geometry). (Author)
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly through a suitable selection of one or more relaxation parameters. An eigenvalue analysis is applied to the error residuals to identify two error eigenvalues and their associated error residuals. Relaxation values can be selected based on these eigenvalues so that the error residuals associated with each are alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
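The idea can be sketched on a toy linear fixed-point iteration (a stand-in for the gun-code field solve; the matrix and its error eigenvalues 0.9 and 0.6 are made up): choosing ω = 1/(1−λ) annihilates the error mode with eigenvalue λ, and alternating the two values reduces both residuals even though ω = 10 used alone would amplify the second mode.

```python
import numpy as np

# Toy stand-in for a slowly converging fixed-point iteration x -> A@x + b
# (spectral radius 0.9 makes plain iteration slow).
A = np.array([[0.9, 0.0],
              [0.0, 0.6]])
b = np.array([0.1, 0.4])

def relaxed_solve(x, omegas, steps):
    """Alternate relaxation parameters chosen from the error eigenvalues:
    one step with omega = 1/(1 - lambda) zeroes the error mode belonging
    to eigenvalue lambda, since the error factor is 1 - omega*(1 - lambda)."""
    for k in range(steps):
        w = omegas[k % len(omegas)]
        x = x + w * (A @ x + b - x)   # relaxed update toward the fixed point
    return x

# Two steps, one relaxation value per error mode, reach the fixed point [1, 1]
x = relaxed_solve(np.zeros(2), omegas=[1 / (1 - 0.9), 1 / (1 - 0.6)], steps=2)
```

Note that ω = 10 applied repeatedly would multiply the λ = 0.6 error by −3 each step; it is only safe in alternation, which is the point the abstract makes.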
Some questions of using coding theory and analytical calculation methods on computers
Nikityuk, N.M.
1987-01-01
The main results of investigations devoted to the theory and practice of error-correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory and analytical computer calculations, essentially new combinational devices, for example parallel encoders, have been developed. Questions concerning the creation of a new algorithm for the calculation of digital functions by computers and problems of devising universal, dynamically reprogrammable logic modules are discussed
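As a generic illustration of the error-correcting machinery involved (standard Hamming(7,4) single-error correction, not the specific encoder designs developed in the work):

```python
import numpy as np

# Hamming(7,4) parity-check matrix: column j (1-based) is the binary
# representation of j, so the syndrome directly names the flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct(word):
    """Compute the syndrome; its value, read as binary, is the 1-based
    position of the single flipped bit (0 means no detectable error)."""
    s = H @ word % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # a valid Hamming codeword
noisy = codeword.copy()
noisy[4] ^= 1                               # flip one bit in transit
```

Because the syndrome computation is a handful of parity sums, it maps naturally onto the fast combinational hardware the abstract describes.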
The codes WAV3BDY and WAV4BDY and the variational Monte Carlo method
Schiavilla, R.
1987-01-01
A description of the codes WAV3BDY and WAV4BDY, which generate the variational ground state wave functions of the A=3 and 4 nuclei, is given, followed by a discussion of the Monte Carlo integration technique, which is used to calculate expectation values and transition amplitudes of operators, and for whose implementation WAV3BDY and WAV4BDY are well suited
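The Monte Carlo integration step can be sketched in one dimension (a generic Metropolis estimate of an expectation value over |ψ|², with a Gaussian toy density standing in for the many-dimensional A=3, 4 variational wave functions):

```python
import random, math

def vmc_expectation(psi2, f, x0=0.0, step=1.0, n=100_000, seed=2):
    """Metropolis Monte Carlo estimate of <f> over the density |psi|^2:
    propose a random move, accept with probability min(1, ratio of
    densities), and average f along the resulting random walk."""
    rng = random.Random(seed)
    x, acc = x0, 0.0
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        if psi2(xp) / psi2(x) > rng.random():
            x = xp
        acc += f(x)
    return acc / n

# Gaussian toy density |psi|^2 ~ exp(-x^2), for which <x^2> = 0.5 exactly
est = vmc_expectation(lambda x: math.exp(-x * x), lambda x: x * x)
```

The same walk, run in the 3A-dimensional coordinate space of the nucleus, yields the expectation values and transition amplitudes mentioned in the abstract.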
Finite element methods in a simulation code for offshore wind turbines
Kurz, Wolfgang
1994-06-01
Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions at sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower etc.).
Yang Xue; Satvat, Nader
2012-01-01
Highlights: ► A two-dimensional numerical code based on the method of characteristics is developed. ► Complex arbitrary geometries are represented by constructive solid geometry and decomposed by unstructured meshing. ► Excellent agreement between Monte Carlo and the developed code is observed. ► High efficiency is achieved by parallel computing. - Abstract: A transport theory code, MOCUM, based on the method of characteristics as the flux solver and with an advanced general geometry processor, has been developed for two-dimensional rectangular- and hexagonal-lattice and full-core neutronics modeling. In the code, the core structure is represented by constructive solid geometry, which uses regularized Boolean operations to build complex geometries from simple polygons. Arbitrary-precision arithmetic is also used in the process of building geometry objects to eliminate the round-off error from the commonly used double-precision numbers. The constructed core frame is then decomposed and refined into a conforming Delaunay triangulation to ensure the quality of the meshes. The code is fully parallelized using OpenMP and is verified and validated by various benchmarks representing rectangular, hexagonal, plate-type and CANDU reactor geometries. Compared with Monte Carlo and deterministic reference solutions, MOCUM results are highly accurate. These characteristics make MOCUM a perfect tool for high-fidelity full-core calculations for current and Gen IV reactor core designs. The detailed representation of reactor physics parameters can enhance the safety margins with acceptable confidence levels, which leads to more economically optimized designs.
Divi Galih Prasetyo Putri
2014-03-01
The evolution and maintenance of a system is a very important process in software engineering, and web applications are no exception. In this process, most developers no longer adhere to the system design. This gives rise to unused methods: parts of the program that are no longer used but still remain in the system. This situation increases the complexity and reduces the understandability of the system. To detect unused methods in a program, a code analysis technique is needed. The static analysis technique used here exploits a call graph, built from the source code, to identify unused methods. The call graph is constructed from the calls between methods. The application detects unused methods in PHP code built with the CodeIgniter framework. The input source code is parsed into an Abstract Syntax Tree (AST), which is then used to analyze the code. The analysis produces a call graph, from which the methods that cannot be traced, and which therefore belong to the unused methods, are detected. The tool has been tested on 5 PHP applications, with an average precision of 0.749 and a recall of 1.
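Once the call graph exists, the detection step amounts to a reachability traversal; a minimal sketch (the method names and entry points below are hypothetical, and the AST-based graph construction is omitted):

```python
def unused_methods(calls, entry_points):
    """Return methods unreachable from any entry point in a call graph.
    `calls` maps a method name to the list of methods it calls (a
    simplification of an AST-derived call graph)."""
    reachable, stack = set(), list(entry_points)
    while stack:
        m = stack.pop()
        if m not in reachable:
            reachable.add(m)
            stack.extend(calls.get(m, []))
    return sorted(set(calls) - reachable)

# Hypothetical controller/model methods of a small web application
graph = {
    "index":      ["get_user", "render"],
    "get_user":   ["query"],
    "render":     [],
    "query":      [],
    "old_export": ["query"],   # never called from any entry point
}
dead = unused_methods(graph, entry_points=["index"])
```

Note that "query" survives even though "old_export" calls it, because it is also reachable from a live path; only whole unreachable methods are reported.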
Sukhovoj, A.M.; Khitrov, V.A.
1982-01-01
A method for improving amplitude resolution when coincident codes are recorded on magnetic tape is suggested. It is shown, using Ge(Li)-detector records of gamma-transition cascades from the ³⁵Cl(n,γ) reaction, that the full width at half maximum of a peak may decrease by a factor of 2.6 for quanta with energies close to the neutron binding energy. There is no loss of efficiency.
Kelly, G.N.; Luykx, F.
1991-01-01
The Commission of the European Communities, within the framework of its 1980-84 radiation protection research programme, initiated a two-year project in 1983 entitled 'Methods for Assessing the Radiological Impact of Accidents' (MARIA). This project was continued in a substantially enlarged form within the 1985-89 research programme. The main objectives of the project were, firstly, to develop a new probabilistic accident consequence code that was modular, incorporated the best features of the codes already in use, could be readily modified to take account of new data and model developments, and would be broadly applicable within the EC; secondly, to acquire a better understanding of the limitations of current models and to develop more rigorous approaches where necessary; and, thirdly, to quantify the uncertainties associated with the model predictions. This research led to the development of the accident consequence code COSYMA (COde SYstem from MAria), which will be made generally available later in 1990. The numerous and diverse studies that have been undertaken in support of this development are summarized in this paper, together with indications of where further effort might be most profitably directed. Consideration is also given to related research directed towards the development of real-time decision support systems for use in off-site emergency management
Kikuchi, Takashi; Yoshida, Tomiji; Omote, Tatsuyuki.
1991-01-01
In the conventional method of managing waste containers by labels attached to them, the data relevant to the wastes contained in the containers are limited. Further, if a label should peel off, there is a possibility that the wastes therein can no longer be identified. In the present invention, therefore, an identification plate is attached in advance, on which machine-readable codes or visually readable letters or numerals are written. The identification codes can then be read remotely at high speed and with high reliability, and the waste containers can be managed solely by their identification codes. Further, the identification codes on the container are made free from aging degradation, which enables the management of waste containers in long-term storage. With such a constitution, since data can be input from an input terminal and a great amount of data, such as data concerning the source of the wastes, can be managed collectively in software, the data can be managed easily. (T.M.)
Wei-I Lee
2016-12-01
The New Taipei City Government developed a Code-checking System (CCS) using Building Information Modeling (BIM) technology to facilitate architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with the information technology involved, the most important issue for the system's development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM), on the basis of design thinking and communication theory, to investigate the semantic differences and cognitive gaps among participants in the design review process and to propose a direction for system development. Our empirical results lead us to recommend grouping, multi-stage screening, and weighted assisted logicalization of non-quantitative building codes to improve the operability of the CCS. Furthermore, the CCS should integrate an Expert Evaluation System (EES) to evaluate design value under qualitative building codes.
NESSUS/EXPERT - An expert system for probabilistic structural analysis methods
Millwater, H.; Palmer, K.; Fink, P.
1988-01-01
An expert system (NESSUS/EXPERT) is presented which provides assistance in using probabilistic structural analysis methods. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator. NESSUS/EXPERT was developed with a combination of FORTRAN and CLIPS, a C language expert system tool, to exploit the strengths of each language.
Frosi, Paolo, E-mail: paolo.frosi@enea.it [Unità Tecnica Fusione-ENEA C.R. Frascati, Via E. Fermi 45, 00044 Frascati (Italy); Mazzone, Giuseppe [Unità Tecnica Fusione-ENEA C.R. Frascati, Via E. Fermi 45, 00044 Frascati (Italy); You, Jeong-Ha [Max Planck Institute of Plasma Physics, Boltzmann Str. 2, 85748 Garching (Germany)
2016-11-01
This paper deals with the early steps in developing a structural FEM model of the DEMO divertor. The study is focused on the thermal and structural analysis of the cassette body, for which a new geometry has been developed: it is foreseen that the plasma-facing components (PFCs) will be placed directly on the cassette, but no choice has yet been made for the dome. For now the model contains only a suitable schematization of the cassette body, and its objective is to analyze the effect produced on it by the main loads (electromagnetic loads, coolant pressure, thermal neutron and convective loads). The available load estimates are those derived from ITER; the assumptions made for a proper translation are described in the paper. At this stage the primary purpose is not to obtain definitive statements about stresses, displacements, temperatures and so on; the authors want to construct a set of FEM models that will support the decisions of the DEMO divertor design in its future development. This set is conceived as a tool to be improved progressively to account for the main refinements that will arise in geometry, in material property data and in load evaluations. Moreover, the main design variables (loads, material properties, some geometric items, mesh element size) are defined as parameters. This work also takes an introductory approach to the future structural verification of the divertor cassette body: to this end, relevant provisions of the Design and Construction Rules for Mechanical Components of Nuclear Installations (RCC-MRx) have been implemented. The FEM code used is Ansys rel. 15.
FEM simulation of TBC failure in a model system
Seiler, P; Baeker, M; Roesier, J [Institut fuer Werkstoffe (IfW), Technische Universitaet Braunschweig (Germany); Beck, T; Schweda, M, E-mail: p.seiler@tu-bs.d [Institut fuer Energieforschung/ Werkstoffstruktur und -Eigenschaften (IEF 2), Forschungszentrum Juelich (Germany)
2010-07-01
In order to study the behavior of the complex failure mechanisms in thermal barrier coatings on turbine blades, a simplified model system is used to reduce the number of system parameters. The artificial system consists of a bond-coat material (fast-creeping Fecralloy or slow-creeping MA956) as the substrate, with a Y{sub 2}O{sub 3} partially stabilized, plasma-sprayed zirconium oxide TBC on top and a TGO between the two layers. A two-dimensional FEM simulation was developed to calculate the growth stress inside the simplified coating system. The simulation permits the study of failure mechanisms by identifying compression and tension areas that are established by the growth of the oxide layer. This provides insight into the possible crack paths in the coating and allows conclusions to be drawn for optimizing real thermal barrier coating systems.
Pregnancy and contraceptive use among women participating in the FEM-PrEP trial.
Callahan, Rebecca; Nanda, Kavita; Kapiga, Saidi; Malahleha, Mookho; Mandala, Justin; Ogada, Teresa; Van Damme, Lut; Taylor, Douglas
2015-02-01
Pregnancy among study participants remains a challenge for trials of new HIV prevention agents despite promotion and provision of contraception. We evaluated contraceptive use, pregnancy incidence, and study drug adherence by contraceptive method among women enrolled in the FEM-PrEP trial of once-daily oral tenofovir disoproxil fumarate and emtricitabine (TDF-FTC) for HIV prevention. We required women to be using effective non-barrier contraception at enrollment. At each monthly follow-up visit, women were counseled on contraceptive use and tested for pregnancy. TDF-FTC adherence was determined by measuring plasma drug concentrations at 4-week intervals. We used Cox proportional hazards models to assess factors associated with incident pregnancy and multivariate logistic regression to examine the relationship between contraceptive method used at enrollment and TDF-FTC adherence. More than half of women were not using effective contraception before enrollment. Ninety-eight percent of these women adopted either injectable (55%) or oral (43%) contraceptives. The overall pregnancy rate was 9.6 per 100 woman-years. Among injectable users and new users of combined oral contraceptives (COCs), the rates were 1.6 and 35.1, respectively. New users of injectables had significantly greater odds of adhering to TDF-FTC than new COC users [odds ratio (95% confidence interval): 4.4 (1.7 to 11.6), P = 0.002], existing COC users [3.1 (1.3 to 7.3), P = 0.01], and existing injectable users [2.4 (1.1 to 5.6), P = 0.04]. Women using COCs during FEM-PrEP, particularly new adopters, were more likely to become pregnant and less likely to adhere to study product than injectable users. HIV prevention trials should consider requiring long-acting methods, including injectables, for study participation.
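The adherence comparison above is reported as an odds ratio with a 95% confidence interval. As background, a minimal sketch of how such an estimate is computed from a 2x2 adherence table with a Wald-style interval; the counts below are invented for illustration and are not the trial's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = adherent in group 1, b = non-adherent in group 1,
    c = adherent in group 2, d = non-adherent in group 2.
    (Counts are hypothetical; this is generic biostatistics, not trial data.)
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to illustrate the calculation:
estimate = odds_ratio_ci(40, 10, 20, 22)
```

A wide interval such as the one reported in the trial typically reflects modest cell counts, which the standard-error formula above makes explicit.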
Simple rules for design of exhaust mufflers and a comparison with four-pole and FEM calculations
Jensen, Morten Skaarup; Ødegaard, John
1999-01-01
For good muffler design it is advisable to use an advanced computational method such as four-pole theory, FEM or BEM. To get a starting point for these methods, and to suggest adjustments to the geometry and materials, it is useful to have some simple rules of thumb. This paper presents a number of such "rules" and illustrates their reliability and limitations by comparison with results from some of the advanced computational methods. At the same time, this investigation also gives a comparison between four-pole theory and BEM.
A FEM based methodology to simulate multiple crack propagation in friction stir welds
Lepore, Marcello; Carlone, Pierpaolo; Berto, Filippo
2017-01-01
The residual stress field was inferred by a thermo-mechanical FEM simulation of the process, considering temperature-dependent elastic-plastic material properties, material softening and isotropic hardening. Afterwards, cracks introduced in the selected locations of the FEM computational domain allow stress...
Simulations of e-beam emittance effects on the performance of the fusion-FEM
Tulupov, A. V.; Caplan, M.; Urbanus, W. H.
1996-01-01
At the FOM-Institute for Plasma Physics, The Netherlands, the construction of the Fusion Free-Electron Maser (FEM) is nearing completion. The design objective of the FEM is to generate 1 MW of microwave power during long pulses in the frequency range of 130 to 260 GHz [1,2]. Applications are in the
Prediction of the FOM FEM experimental results using multi-mode time-dependent simulations
Caplan, M.; Bongers, W. A.; Verhoeven, A. G. A.; van der Geer, C. A. J.; Valentini, M.; Urbanus, W. H.
1998-01-01
The Free Electron Maser (FEM) constructed at the FOM Institute, Netherlands, is now ready to undergo the first set of short-pulse (< 20 μs) experiments to demonstrate the capability of generating 1 MW of microwave power in the range 130-250 GHz. Predicting the FEM performance requires a
Zhang, Dongping
2009-10-26
Lateral forced cooling can significantly increase the temporary overload capacity of a cable system, but the design of such systems requires a time-dependent 3D analysis of the nonlinear thermal behavior, as the cooling water is heated up along the cable route, resulting in position-dependent and time-dependent heat uptake. For this, a new calculation method was developed on the basis of an existing 2D FEM program. The new method enables 3D simulation of force-cooled cable systems, taking into account the potential partial dry-out of the soil and thermal stabilizations. The method was first applied to a 110 kV wind power transmission cable for different configurations and grid conditions. It was found that with lateral forced cooling, the 110 kV system has a temporary 50 percent overload capacity. Further, the thermal characteristics and capacity limits of a force-cooled 380 kV cable system were investigated. According to the results so far, laterally cooled cable systems open up new operating options, with advantages in terms of availability, economic efficiency, and flexibility. (orig.)
Does health promotion need a Code of Ethics? Results from an IUHPE mixed method survey.
Bull, Torill; Riggs, Elisha; Nchogu, Sussy N
2012-09-01
Health promotion is an ethically challenging field involving constant reflection of values across multiple cultures of what is regarded as good and bad health promotion practice. While many disciplines are guided by a Code of Ethics (CoE) no such guide is available to health promoters. The International Union for Health Promotion and Education (IUHPE) has been nominated as a suitable candidate for developing such a code. It is within this context that the IUHPE Student and Early Career Network (ISECN), through its Ethics Working Group, has taken up the challenge of preparing the foundations for a CoE for health promotion. An online survey comprising open and closed-answer questions was used to gather the opinions of IUHPE members regarding the need for a CoE for health promotion. The quantitative data were calculated with descriptive analyses. A thematic analysis approach was used to analyze and interpret the qualitative data. IUHPE members (n = 236) from all global regions responded to the survey. The majority (52%) of the respondents had 11 years' experience or more in the field of health promotion. Ethical dilemmas were commonly encountered. The need for a CoE for health promotion was expressed by 83% of respondents. Respondents also offered their views of possibilities, ideas and challenges regarding the development of a CoE for health promotion. Considering that health promoters encounter ethical dilemmas frequently in their practice, this study reinforces the need to develop a CoE for the field. The recommendations from the survey provide a good basis for future work to develop such a code.
Moravie, Philippe
1997-01-01
Today, in the domain of digitized satellite imagery, the need for high-dimensional images is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which requires real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: the wavelet transform (WT), vector quantization (VQ) and entropy coding (EC). First, we studied and implemented the parallelism of each algorithm in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: three for the Mallat algorithm (WT), three for tree-structured vector quantization (VQ) and two for Huffman coding (EC). As our system has to be multi-purpose, we chose three global architectures from among all 3x3x2 available combinations. Because, for technological reasons, real-time performance is not reached in every case (for all compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropy coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the architecture best suited to real-time image compression was answered by presenting three parallel machines, among them one multi-purpose, embedded machine which might also be used for other applications on board. (author)
New methods of analysis of materials strength data for the ASME Boiler and Pressure Vessel Code
Booker, M.K.; Booker, B.L.P.
1980-01-01
Tensile and creep data of the type used to establish allowable stress levels for the ASME Boiler and Pressure Vessel Code have been examined for type 321H stainless steel. Both inhomogeneous, unbalanced data sets and well-planned homogeneous data sets have been examined. Data have been analyzed by implementing standard manual techniques on a modern digital computer. In addition, more sophisticated techniques, practical only through the use of the computer, have been applied. The result clearly demonstrates the efficacy of computerized techniques for these types of analyses
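As an illustration of the kind of computerized least-squares analysis the report describes, here is a minimal ordinary-least-squares sketch fitting log rupture time against stress. The linear relation and the data values are invented for illustration and are not taken from the report:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Invented illustrative data: stress (MPa) vs. log10(rupture time, h).
stress = [100.0, 120.0, 140.0, 160.0]
log_t = [4.1, 3.6, 3.2, 2.7]
a, b = fit_line(stress, log_t)  # intercept and (negative) slope
```

Manual techniques graph such fits by hand; a computer makes it trivial to refit large, unbalanced data sets and to compare candidate stress-rupture models, which is the efficacy the abstract refers to.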
Qiao, Shan; Jackson, Edward; Coussios, Constantin C; Cleveland, Robin O
2016-09-01
Nonlinear acoustics plays an important role in both diagnostic and therapeutic applications of biomedical ultrasound, and a number of research and commercial software packages are available. In this manuscript, predictions of two solvers available in a commercial software package, pzflex, one using the finite element method (FEM) and the other a pseudo-spectral method, spectralflex, are compared with measurements and with the Khokhlov-Zabolotskaya-Kuznetsov (KZK) Texas code (a finite-difference time-domain algorithm). The pzflex methods solve the continuity equation, momentum equation and equation of state, accounting for nonlinearity to second order, whereas the KZK code solves a nonlinear wave equation with a paraxial approximation for diffraction. Measurements of the field from a single-element 3.3 MHz focused transducer were compared with the simulations, and there was good agreement for the fundamental frequency and the harmonics; however, the FEM pzflex solver incurred a high computational cost to achieve equivalent accuracy. In addition, pzflex results exhibited non-physical oscillations in the spatial distribution of harmonics when the amplitudes were relatively low. It was found that spectralflex was able to accurately capture the nonlinear fields at reasonable computational cost. These results emphasize the need to benchmark nonlinear simulations before using such codes as predictive tools.
A Simple Method for Static Load Balancing of Parallel FDTD Codes
Franek, Ondrej
2016-01-01
A static method for balancing computational loads in parallel implementations of the finite-difference time-domain method is presented. The procedure is fairly straightforward and computationally inexpensive, thus providing an attractive alternative to optimization techniques. The method is descri...
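The specific balancing procedure is not given in this truncated abstract. As generic background, a static load-balancing step can be sketched as a prefix-sum cut of estimated per-cell costs into contiguous slabs of roughly equal total weight; this sketch illustrates the general idea only, not the cited paper's algorithm:

```python
def balance(weights, nprocs):
    """Split a 1-D row of per-cell cost estimates into nprocs contiguous
    slabs of approximately equal total weight (greedy prefix-sum cut).
    Generic illustration of static load balancing, computed once before
    the time-stepping loop starts."""
    total = sum(weights)
    target = total / nprocs
    cuts, acc, start = [], 0.0, 0
    for i, w in enumerate(weights):
        acc += w
        # Cut when this slab reaches its fair share, keeping at least one
        # cell for each of the remaining processes.
        if (acc >= target and len(cuts) < nprocs - 1
                and len(weights) - (i + 1) >= nprocs - 1 - len(cuts)):
            cuts.append((start, i + 1))
            start, acc = i + 1, 0.0
    cuts.append((start, len(weights)))
    return cuts
```

Being computed once from static cost estimates, such a partition costs essentially nothing at runtime, which is the attraction over iterative optimization noted in the abstract.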
Induction Machine with Improved Operating Performances for Electric Trucks. A FEM-Based Analysis
MUNTEANU, A.
2010-05-01
The paper presents a study concerning the performance developed by induction motors intended for the motorization of heavy electric vehicles such as trucks. Taking into consideration the imposed restrictions, the main geometrical parameters resulting from classical design algorithms are presented in a comparative manner. Special attention is dedicated to the winding design, since it has to ensure two synchronous speeds corresponding to 16 and 8 poles, respectively. Moreover, the influence of the rotor slot shape on the improvement of the start-up is analyzed. Finally, a FEM-based study (an approach based on the finite element method) is performed to highlight specific torque and slip values such as the rated, start-up and pull-out ones.
FEM simulation study on relationship of interfacial morphology and residual stress in TBCs
Liqiang Chen; Shengkai Gong; Huibin Xu [School of Materials Science and Engineering, Beihang Univ., Beijing, BJ (China)
2005-07-01
It is generally believed that the failure of TBCs is attributed to spallation occurring in the ceramic coat. The spallation is closely linked with sinuate morphology factors, including amplitude and period, at the TGO/bond coat interface. In this work, the dependence of the residual stress distribution on the sinuate morphology in TBCs has been studied by means of finite element method (FEM) simulation for isothermally annealed specimens. The simulation results indicated that the maximum residual stress exists inside the TGO layer. It was also found that the maximum residual stress occurs at different points: near the TGO/bond coat interface at the peaks of the sinuate interface, and near the TGO/ceramic coat interface at the valleys. The maximum residual stress increased with an increasing ratio of amplitude to period in the sine morphology, which has been confirmed by thermal cycling experimental results. (orig.)
A continuum based fem model for friction stir welding-model development
Buffa, G. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States) and Dipartimento di Tecnologia Meccanica, Produzione e Ingegneria Gestionale, Universita di Palermo, Viale delle Scienze, 90128 Palermo (Italy)]. E-mail: g.buffa@dtpm.unipa.it; Hua, J. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States)]. E-mail: hua.14@osu.edu; Shivpuri, R. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States)]. E-mail: shivpuri.1@osu.edu; Fratini, L. [Dipartimento di Tecnologia Meccanica, Produzione e Ingegneria Gestionale, Universita di Palermo, Viale delle Scienze, 90128 Palermo (Italy)]. E-mail: abaqus@dtpm.unipa.it
2006-03-15
Although friction stir welding (FSW) has been successfully used to join materials that are difficult to weld or unweldable by fusion welding methods, it is still in its early development stage and, therefore, a scientific, knowledge-based predictive model is of significant help for a thorough understanding of the FSW process. In this paper, a continuum-based FEM model for the friction stir welding process is proposed that is 3D Lagrangian, implicit, coupled, and rigid-viscoplastic. This model is calibrated by comparison with experimental results for force and temperature distribution, and is then used to investigate the distribution of temperature and strain in the heat-affected zone and the weld nugget. The model correctly predicts the non-symmetric nature of the FSW process and the relationships between the tool forces and variations in the process parameters. It is found that the effective strain distribution is non-symmetric about the weld line, while the temperature profile is almost symmetric in the weld zone.
3D-FEM Analysis on Geogrid Reinforced Flexible Pavement Roads
Calvarano, Lidia Sarah; Palamara, Rocco; Leonardi, Giovanni; Moraci, Nicola
2017-12-01
Nowadays, the need to increase pavement service life, guarantee high performance, and reduce service and maintenance costs has turned greater attention to the use of reinforcements. This paper presents the findings of a numerical investigation of geogrid-reinforced flexible pavement roads, under wheel traffic loads, using the three-dimensional finite element method (FEM). The results obtained show the effectiveness of glass-fibre grids as reinforcement which, with appropriate design and correct installation, by improving interface shear resistance, can be used to extend the performance of flexible pavements in different ways: by increasing the road service life, providing a relevant contribution against surface rutting, or by decreasing construction costs owing to the reduction in the reinforced HMA layer thickness and thus of the mineral aggregate required for its construction.
An Experimental Simulation to Validate FEM to Predict Transverse Young’s Modulus of FRP Composites
V. S. Sai
2013-01-01
The finite element method finds application in the analysis of FRP composites due to its versatility in obtaining solutions for complex cases that are not tractable by exact classical analytical approaches. A finite element result is questionable unless it is obtained from a converged mesh and properly validated. In the present work, specimens are prepared from metallic materials so that the arrangement of fibers is close to hexagonal packing in a matrix, since a similar arrangement in the case of FRP is difficult to realize owing to the size of the fibers. The transverse Young's moduli of these specimens are determined experimentally. Equivalent FE models are designed, and the corresponding transverse Young's moduli are compared with the experimental results. It is observed that the FE values are in good agreement with the experimental results, thus validating FEM for predicting the transverse modulus of FRP composites.
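For comparison with such FEM predictions, the classical inverse rule of mixtures gives a closed-form first estimate of the transverse modulus. This formula is standard micromechanics background, not the paper's method, and the material values below are typical glass/epoxy figures used purely for illustration:

```python
def transverse_modulus_irom(Ef, Em, Vf):
    """Inverse rule of mixtures estimate of the transverse Young's modulus
    E_T of a unidirectional composite: 1/E_T = Vf/Ef + (1 - Vf)/Em.
    Ef, Em: fiber and matrix moduli; Vf: fiber volume fraction."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

# Typical (illustrative) glass/epoxy values: Ef = 72 GPa, Em = 3.5 GPa, Vf = 0.6
E_T = transverse_modulus_irom(72.0, 3.5, 0.6)
```

The inverse rule of mixtures is known to underestimate E_T relative to experiments and converged FE models of hexagonally packed fibers, which is one motivation for the validation exercise the abstract describes.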
FEM Analysis of Brushless DC Servomotor with Fractional Number of Slots per Pole
BALUTA, G.
2014-02-01
The authors present in this paper an analysis with the finite element method (FEM) of the magnetic circuit of a brushless DC servomotor with a fractional number of slots per pole (9 slots and 10 poles). For this purpose, the FEMM 4.2 software package was used for the analysis. To obtain the waveforms of the back electromotive forces (BEMFs) and the electromagnetic and cogging torque of the servomotor, a program in the LUA scripting language (integrated into the interactive shell of FEMM 4.2) was created. A comparison with a structure with an integer number of slots per pole (18 slots and 6 poles) was also carried out. The analysis results prove that the chosen structure is an optimal solution: sinusoidal BEMF waveforms, improved electromagnetic torque and reduced cogging torque. Therefore, the operating characteristics of the 9-slot/10-pole servomotor manufactured by the Sistem Euroteh Company and included in an integrated electrical drive system are presented in this paper.
A continuum based fem model for friction stir welding-model development
Buffa, G.; Hua, J.; Shivpuri, R.; Fratini, L.
2006-01-01
Although friction stir welding (FSW) has been successfully used to join materials that are difficult to weld or unweldable by fusion welding methods, it is still in its early development stage and, therefore, a scientific, knowledge-based predictive model is of significant help for a thorough understanding of the FSW process. In this paper, a continuum-based FEM model for the friction stir welding process is proposed that is 3D Lagrangian, implicit, coupled, and rigid-viscoplastic. This model is calibrated by comparison with experimental results for force and temperature distribution, and is then used to investigate the distribution of temperature and strain in the heat-affected zone and the weld nugget. The model correctly predicts the non-symmetric nature of the FSW process and the relationships between the tool forces and variations in the process parameters. It is found that the effective strain distribution is non-symmetric about the weld line, while the temperature profile is almost symmetric in the weld zone.
Report on first masing and single mode locking in a prebunched beam FEM oscillator
Cohen, M.; Eichenbaum, A.; Kleinman, H. [Tel-Aviv Univ., Ramat-Aviv (Israel)] [and others]
1995-12-31
The radiation characteristics of a table-top free electron maser (FEM) are described in this paper. The FEM employs a prebunched electron beam and is operated as an oscillator in the low-gain collective (Raman) regime. Using electron beam prebunching, single-mode locking at any one of the possible oscillation modes was obtained. The electron beam is prebunched by a microwave tube section before it is injected into the wiggler. By tuning the electron beam bunching frequency, the FEM oscillation frequency can be locked to any eigenfrequency of the resonant waveguide cavity that lies within the frequency band of net gain of the FEM. The oscillation build-up process is sped up when the FEM operates with a prebunched electron beam, and the build-up time of the radiation is shortened significantly. First measurements of masing with and without prebunching, and a characterization of the emitted radiation, are reported.
Buckling analysis of SMA bonded sandwich structure – using FEM
Katariya, Pankaj V.; Das, Arijit; Panda, Subrata K.
2018-03-01
The thermal buckling strength of a smart sandwich composite structure (bonded with shape memory alloy, SMA) is examined numerically via a higher-order finite element model in association with the marching technique. The excess geometrical distortion of the structure under the elevated thermal environment is modeled through Green's strain function, whereas the material nonlinearity is accounted for with the help of the marching method. The system responses are computed numerically by solving the generalized eigenvalue equations via a customized MATLAB code. The comprehensive behaviour of the current finite element solutions (minimum buckling load parameter) is established by solving an adequate number of numerical examples with the given input parameters. The current numerical model is further extended to examine the influence of various structural parameters of the sandwich panel on the buckling temperature, including the SMA effect, and the results are reported in detail.
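The buckling loads described above come from a generalized eigenvalue problem of the form K v = λ K_G v, where K is the elastic stiffness matrix and K_G the geometric stiffness matrix; the minimum eigenvalue λ is the critical buckling parameter. A minimal sketch for a 2x2 system, purely illustrative (the paper's model is a higher-order FEM solved by a customized MATLAB code):

```python
def min_buckling_load_2x2(K, KG):
    """Smallest eigenvalue of the 2x2 generalized problem K v = lam * KG v,
    found as the smaller root of det(K - lam*KG) = 0, a quadratic in lam.
    Assumes det(KG) != 0 and real roots (illustrative toy case only)."""
    a = KG[0][0] * KG[1][1] - KG[0][1] * KG[1][0]          # det(KG)
    b = -(K[0][0] * KG[1][1] + K[1][1] * KG[0][0]
          - K[0][1] * KG[1][0] - K[1][0] * KG[0][1])
    c = K[0][0] * K[1][1] - K[0][1] * K[1][0]              # det(K)
    disc = (b * b - 4 * a * c) ** 0.5
    return min((-b - disc) / (2 * a), (-b + disc) / (2 * a))
```

For realistic FEM matrices one would of course use a sparse generalized eigensolver rather than this closed-form toy, but the structure of the problem is the same.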
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-01-01
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and the source data are hard to recover from sink nodes before sufficient encoded packets have been collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor network nodes increases along with predefined degree levels, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-12-12
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and the source data are hard to recover from sink nodes before sufficient encoded packets have been collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor network nodes increases along with predefined degree levels, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected.
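Schemes such as LTCDS build on LT-code-style degree distributions, whose shape governs the decoding 'cliff effect' discussed above. As generic background, the ideal soliton distribution of classical LT codes can be sketched as follows; this is standard LT-code material, not the specific PLTD-ALPHA distribution:

```python
def ideal_soliton(k):
    """Ideal soliton degree distribution rho(d) over k source symbols:
    rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d = 2..k.
    Generic LT-code background, not the PLTD-ALPHA distribution."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho
```

The distribution sums to one because 1/(d(d-1)) telescopes to 1 - 1/k over d = 2..k. In practice the robust soliton variant is used instead, precisely because the ideal distribution's decoding behavior is fragile; reshaping the distribution to soften such fragility is the kind of intervention PLTD-ALPHA makes.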
Dias, Mafalda; Seery, David [Astronomy Centre, University of Sussex, Brighton BN1 9QH (United Kingdom); Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk [Department of Theoretical Physics, University of the Basque Country, UPV/EHU, 48040 Bilbao (Spain)
2015-12-01
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
Dias, Mafalda; Seery, David; Frazer, Jonathan
2015-01-01
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
Esterhazy, Sofi; Schneider, Felix; Schöberl, Joachim; Perugia, Ilaria; Bokelmann, Götz
2016-04-01
Research on purely numerical methods for modeling seismic waves has intensified over the last few decades. This development is mainly driven by the fact that, on the one hand, exact analytic solutions do not exist for subsurface models of interest in exploration and global seismology, while, on the other hand, retrieving full seismic waveforms is important to gain insights into spectral characteristics and for the interpretation of seismic phases and amplitudes. Furthermore, computational power has dramatically increased in the recent past, so that it has become worthwhile to perform computations for large-scale problems such as those arising in the field of computational seismology. Algorithms based on the Finite Element Method (FEM) are becoming increasingly popular for the propagation of acoustic and elastic waves in geophysical models, as they provide more geometrical flexibility in terms of complexity as well as heterogeneity of the materials. In particular, we want to demonstrate the benefit of high-order FEMs, as they also provide better control of the accuracy. Our computations are done with the parallel Finite Element Library NGSOLVE on top of the automatic 2D/3D mesh generator NETGEN (http://sourceforge.net/projects/ngsolve/). Furthermore, we are interested in the generation of synthetic seismograms including direct, refracted and converted waves in correlation to the presence of an underground cavity, and in the detailed simulation of the comprehensive wave field inside and around such a cavity that would have been created by a nuclear explosion. The motivation for this application comes from the need to find evidence of a nuclear test, since such tests are forbidden by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). With this approach it is possible for us to investigate the wave field over a large bandwidth of wave numbers. This in turn will help to provide a better understanding of the characteristic signatures of an underground cavity, improve the protocols for
NONE
2013-08-15
The purposes of this study are to develop the safety evaluation methods and analysis codes needed at the design and construction stage of a fast breeder reactor (FBR). In JFY 2012, the following results were obtained. Regarding the safety evaluation methods needed for the safety examination conducted for the reactor establishment permission, development of analysis codes, such as a core damage analysis code, was carried out on the planned schedule. Regarding the safety evaluation methods needed for risk-informed safety regulation, quantification techniques for the event tree using the Continuous Markov chain Monte Carlo method (CMMC method) were studied. (author)
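The CMMC technique itself is not detailed in this record. As a toy illustration of the continuous-time Markov Monte Carlo idea underlying such event-tree quantification, the sketch below estimates the unavailability of a single repairable component; all rates and function names are hypothetical:

```python
import random

def simulate_unavailability(lam, mu, t_end, n_hist, seed=0):
    """Toy continuous-time Markov Monte Carlo: one component alternates
    between 'up' (failure rate lam) and 'down' (repair rate mu), with
    exponentially distributed holding times. Returns the fraction of
    histories found 'down' at time t_end."""
    rng = random.Random(seed)
    down_count = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            rate = lam if up else mu
            t += rng.expovariate(rate)  # time to next state transition
            if t >= t_end:
                break
            up = not up
        if not up:
            down_count += 1
    return down_count / n_hist

# for long mission times the estimate approaches the steady-state
# unavailability lam / (lam + mu)
est = simulate_unavailability(lam=1e-3, mu=1e-2, t_end=1e4, n_hist=2000)
print(round(est, 3))
```

An event-tree application would run many such component histories jointly and tally which accident sequence each sampled history realizes.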
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
St John, C.M.; Sanjeevan, K.
1991-12-01
The HEFF Code combines a simple boundary-element method of stress analysis with closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and the steps taken to assure that the code correctly performs its intended functions. The material contained within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs
Sze, Vivienne; Marpe, Detlev
2014-01-01
Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...
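The full CABAC state machine is beyond this summary, but the core idea of context-based adaptive binary coding can be sketched: estimate P(bit | context) adaptively and charge -log2 P bits per symbol, the length an ideal arithmetic coder would produce. The Krichevsky-Trofimov counter below is a stand-in for CABAC's actual probability-state update, which this record does not specify:

```python
import math

def adaptive_code_length(bits, order=1):
    """Ideal coded size (in bits) of a binary string under a context-adaptive
    model: one Krichevsky-Trofimov counter per context, the context being the
    previous `order` bits. An ideal arithmetic coder outputs
    sum of -log2 P(bit | context) bits."""
    counts = {}               # context tuple -> (zeros seen, ones seen)
    total = 0.0
    ctx = (0,) * order        # assumed all-zero initial context
    for b in bits:
        c0, c1 = counts.get(ctx, (0, 0))
        p1 = (c1 + 0.5) / (c0 + c1 + 1.0)   # KT estimator for P(bit = 1)
        p = p1 if b else 1.0 - p1
        total += -math.log2(p)
        counts[ctx] = (c0 + (b == 0), c1 + (b == 1))
        ctx = ctx[1:] + (b,)
    return total

# a highly predictable source compresses far below 1 bit/symbol once
# the per-context estimates adapt
bits = [1, 0] * 200
print(adaptive_code_length(bits) < len(bits))
```

The throughput problem the chapter discusses is visible here: each symbol's probability depends on previously decoded symbols, which serializes the decode loop.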
An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method
Jingyang Fu
2018-04-01
Unlike GPS, GLONASS, GALILEO and BeiDou-3, it has been confirmed that code multipath biases (CMB), which originate at the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate their adverse effects on absolute precise applications that use code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO and MEO satellite individually on the one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB model have
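A hedged sketch of the correction model's mechanics: with per-satellite corrections estimated at 5° elevation nodes, applying the model is piecewise-linear interpolation in elevation. The node values below are invented for illustration, not estimates from the paper:

```python
def cmb_correction(elev_deg, nodes, values):
    """Piecewise-linear code multipath bias correction (metres) at a given
    elevation, from per-satellite corrections at fixed elevation nodes
    (e.g. every 5 degrees). Outside the node range the end value is held."""
    if elev_deg <= nodes[0]:
        return values[0]
    if elev_deg >= nodes[-1]:
        return values[-1]
    for i in range(1, len(nodes)):
        if elev_deg <= nodes[i]:
            t = (elev_deg - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
            return values[i - 1] + t * (values[i] - values[i - 1])

# hypothetical corrections for one satellite at 5-degree nodes
nodes = [0, 5, 10, 15, 20]
vals = [0.80, 0.55, 0.35, 0.20, 0.10]
print(cmb_correction(7.5, nodes, vals))   # midway between the 5° and 10° values
```

The improved model in the paper amounts to maintaining one such `(nodes, values)` table per satellite at 5° spacing, rather than one table per orbit type at 10° spacing.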
Design and beam transport simulations of a multistage collector for the Israeli EA-FEM
Tecimer, M.; Canter, M.; Efimov, S.; Gover, A.; Sokolowski, J.
2001-12-01
A four-stage asymmetric-type depressed collector has been designed for the Israeli mm-wave FEM, which is driven by a 1.4 MeV, 1.5 A electron beam. After leaving the interaction section, the spent beam has an energy spread of 120 keV and a normalized beam emittance of 75 π mm mrad. Simulations of the beam transport system from the undulator exit through the decelerator tube into the collector have been carried out using the EGUN and GPT codes. The latter has also been employed to study trajectories of the primary and scattered particles within the collector, optimizing the asymmetrical collector geometry and the electrode potentials in the presence of a deflecting magnetic field. The estimated overall system and collector efficiencies reach 50% and 70%, respectively, with a beam recovery of 99.6%. The design aims to attain millisecond-long pulse operation and subsequently 1 kW average power. Simulation results are implemented in a mechanical design that leads to a simple, cost-efficient assembly, eliminating ceramic insulator rings between collector stages and the associated brazing in the manufacturing process. Instead, each copper plate is supported by insulating posts and is freely displaceable within the vacuum chamber. We report on the simulation results of the beam transport and recovery systems and on the mechanical aspects of the multistage collector design.
From LIDAR Scanning to 3d FEM Analysis for Complex Surface and Underground Excavations
Chun, K.; Kemeny, J.
2017-12-01
Light detection and ranging (LIDAR) has become a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One major application is to use the detailed geometrical information of underground structures as the basis for generating three-dimensional numerical models for FEM analysis. To date, however, straightforward techniques for reconstructing numerical models from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating LIDAR scanning with finite element numerical analysis, specifically converting LIDAR 3D point clouds of objects containing complex surface geometry into finite element models. This methodology has been applied to Kartchner Caverns in Arizona for stability analysis. Numerical simulations were performed using the finite element code ABAQUS. The results indicate that the approach based on LIDAR data is effective and provides a reference for similar engineering projects in practice.
Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes
Yang Yu; Cabrillat, M.T.
2005-01-01
The present paper compares the simplified ratchetting analysis methods used in RCC-M, RCC-MR and ASME through several examples. First, the methods in RCC-M and the efficiency diagram in RCC-MR are compared. A special method is used to describe these two approaches as curves in one coordinate system, and their different degrees of conservatism are demonstrated. The RCC-M method is also interpreted in terms of SR (second ratio) and v (efficiency index), which are used in RCC-MR. Hence, the two methods can easily be compared by defining SR as the abscissa and v as the ordinate and plotting their two curves. Second, the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are compared for the case of significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)
Quasi-Newton methods for the acceleration of multi-physics codes
Haelterman, R
2017-08-01
Borges, V.; Sefidvash, F.; Rastogi, E.P.; Huria, H.C.; Krishnani, P.D.
1989-01-01
In order to use existing light water reactor cell calculation codes for a fluidized bed nuclear reactor with spherical fuel cells, an equivalence method has been developed. This method is shown to be adequate for calculating the Dancoff factor. The method was also applied in the LEOPARD code, and the resulting value of k-infinity was compared with that obtained using the DTF IV code; the results showed that the method is adequate for neutronics calculations of the fluidized bed nuclear reactor. (author)
Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji
2009-01-01
The authors develop a numerical code based on the Local Discontinuous Galerkin method for transient groundwater flow and reactive solute transport problems, in order to make it possible to perform three-dimensional performance assessments of radioactive waste repositories at the earliest possible stage. The Local Discontinuous Galerkin method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems with known analytical solutions in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)
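Checking a new code against problems with analytical solutions typically reduces to computing a discrete error norm and an observed convergence order. A minimal sketch of that bookkeeping, not tied to the authors' code:

```python
import math

def l2_error(numeric, exact, dx):
    """Discrete L2 norm of the difference between a numerical solution and
    the analytical solution sampled on the same uniform grid."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(numeric, exact)) * dx)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order from errors on two grids related by a
    uniform refinement factor."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# toy check: a second-order accurate scheme halves dx -> error drops by 4
e_coarse, e_fine = 4.0e-3, 1.0e-3
print(round(observed_order(e_coarse, e_fine), 2))   # → 2.0
```

For a high-order DG discretization one would expect the observed order to track the polynomial degree plus one on smooth analytical solutions.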
Garion, C
2009-01-01
Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing large amounts of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. For this purpose, it is useful to apply an accurate three-dimensional method, such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo method is used. It has been implemented in a Finite Element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in CAD software (CATIA in our case). The interface between the two programs to export the geometry from CATIA to CASTEM is given. The algorithm for particle tracking for collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron- and/or ion-stimulated desorption can all be generated as well as differ...
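A full FE-based Test Particle Monte Carlo tracks molecules through the cavity geometry. The heavily reduced sketch below collapses the geometry-dependent physics into a single per-step capture probability (the rate constants are invented placeholders) and keeps only the time-dependent bookkeeping of a molecule sample with thermal outgassing and a pumping surface:

```python
import random

def tpmc_pressure_history(n0, q_out, p_pump, n_steps, seed=0):
    """Very reduced time-dependent test-particle Monte Carlo: each particle
    represents a molecule. Per time step, a molecule is removed with
    probability p_pump (capture on a pumping surface) and q_out new molecules
    are desorbed (thermal outgassing). Free-molecular flight through the real
    geometry, which an FE implementation would resolve, is collapsed into
    the single capture probability."""
    rng = random.Random(seed)
    n = n0
    history = [n]
    for _ in range(n_steps):
        n = sum(1 for _ in range(n) if rng.random() > p_pump) + q_out
        history.append(n)
    return history

# after a burst of gas (n0 molecules), the count relaxes toward the
# outgassing/pumping balance q_out / p_pump
hist = tpmc_pressure_history(n0=2000, q_out=10, p_pump=0.01, n_steps=1000)
print(hist[-1])
```

The molecule count plays the role of pressure here; the real calculation would additionally bin particles along the cavity axis to get the pressure profile.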
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real-world scenes as we see them every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, the fact that more information is displayed requires supporting technologies such as digital compression to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the existing redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.
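The disparity-compensated prediction step can be sketched in a few lines (constant integer disparity, no luminance correction, names ours, a simplification of the paper's hybrid step): the left row is kept, and only the residual of the right row is coded, which vanishes when the compensation is exact:

```python
def predict_from_left(left, disparity):
    """Disparity-compensated prediction of the right view from the left view
    (constant integer disparity, edge samples clamped)."""
    n = len(left)
    return [left[min(max(i + disparity, 0), n - 1)] for i in range(n)]

def ls_encode(left, right, disparity):
    """Lifting-style joint coding: keep the left row, transmit only the
    prediction residual of the right row."""
    pred = predict_from_left(left, disparity)
    return [r - p for r, p in zip(right, pred)]

def ls_decode(left, residual, disparity):
    """Invert the prediction step exactly, so lossless coding is possible."""
    pred = predict_from_left(left, disparity)
    return [p + d for p, d in zip(pred, residual)]

left  = [10, 10, 40, 80, 80, 90, 90, 50]
right = [10, 40, 80, 80, 90, 90, 50, 50]   # roughly left shifted by one pixel
res = ls_encode(left, right, disparity=1)
assert ls_decode(left, res, disparity=1) == right   # lossless round trip
```

In this toy pair the compensation is exact, so the residual is all zeros; real images would add per-pixel disparity estimation and the luminance-correction step the paper describes.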
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
To meet the requirements of rapidly developing optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids girth-4 cycles and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain at a bit error rate (BER) of 10^-7 is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334,4962) code constructed by the method based on inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1, and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975. Therefore, the constructed QC-LDPC(3780,3540) code is more suitable for optical transmission systems.
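The subgroup-based construction itself is not reproduced here, but the girth-4 property it avoids is easy to test: for QC-LDPC codes built from circulant permutation matrices with shift exponents p[i][j], a length-4 cycle exists iff p[i1][j1] - p[i1][j2] + p[i2][j2] - p[i2][j1] ≡ 0 (mod lift size) for some pair of rows and columns. A sketch with an illustrative exponent matrix (not the paper's):

```python
from itertools import combinations

def has_girth_4(exponent_matrix, lift_size):
    """Standard 4-cycle check for a QC-LDPC exponent matrix: a cycle of
    length 4 exists iff the alternating sum of shifts around some 2x2
    sub-block is 0 modulo the circulant (lift) size."""
    p = exponent_matrix
    rows = range(len(p))
    cols = range(len(p[0]))
    for i1, i2 in combinations(rows, 2):
        for j1, j2 in combinations(cols, 2):
            if (p[i1][j1] - p[i1][j2] + p[i2][j2] - p[i2][j1]) % lift_size == 0:
                return True
    return False

# illustrative exponent matrix p[i][j] = (i * j) mod N; for prime N and these
# small dimensions the alternating sum (i1-i2)(j1-j2) is never 0 mod N
N = 31
P = [[(i * j) % N for j in range(4)] for i in range(3)]
print(has_girth_4(P, N))   # → False
```

A subgroup-based construction would populate the exponent matrix from elements of a multiplicative subgroup chosen so that this check fails for all sub-blocks.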
Uncertainty Methods Framework Development for the TRACE Thermal-Hydraulics Code by the U.S.NRC
Bajorek, Stephen M.; Gingrich, Chester
2013-01-01
The Code of Federal Regulations, Title 10, Part 50.46 requires that Emergency Core Cooling System (ECCS) performance be evaluated for a number of postulated Loss-Of-Coolant Accidents (LOCAs). The rule allows two methods for demonstrating compliance with the acceptance criteria: using a realistic model in the so-called 'Best Estimate' approach, or following the more prescriptive Appendix K to Part 50. Because of the conservatism of Appendix K, recent Evaluation Model submittals to the NRC have used the realistic approach. With this approach, the Evaluation Model must demonstrate that the Peak Cladding Temperature (PCT), the Maximum Local Oxidation (MLO) and the Core-Wide Oxidation (CWO) remain below their regulatory limits with a 'high probability'. Guidance for Best Estimate calculations following 50.46(a)(1) was provided by Regulatory Guide 1.157. This Guide identified a 95% probability level as being acceptable for comparisons of best-estimate predictions to the applicable regulatory limits, but was vague with respect to acceptable methods for determining the code uncertainty, nor did it specify whether a confidence level should be determined. As a result, vendors have developed Evaluation Models utilizing several different methods to combine uncertainty parameters and determine the PCT and other variables to a high probability. In order to quantify the accuracy of TRACE calculations for a wide variety of applications and to audit Best Estimate calculations made by industry, the NRC is developing its own independent methodology to determine the peak cladding temperature and other parameters of regulatory interest to a high probability. Because several methods are in use, and each vendor's methodology ranges different parameters, the NRC method must be flexible and sufficiently general. Not only must the method apply to LOCA analysis for conventional light-water reactors, it must also be extendable to new reactor designs and types of analyses where the acceptance criteria are less
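One widely used way (not necessarily the NRC's final choice, which this record does not specify) to turn a set of best-estimate code runs into a 95%-probability, 95%-confidence statement is Wilks' nonparametric formula: with N runs of randomly sampled uncertainty parameters, the largest computed PCT bounds the 95th percentile with 95% confidence once 1 - 0.95^N ≥ 0.95:

```python
def wilks_one_sided_n(gamma=0.95, beta=0.95):
    """Smallest sample size N such that the maximum of N random code runs
    bounds the gamma-quantile of the output with confidence beta
    (first-order one-sided Wilks criterion: 1 - gamma**N >= beta)."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print(wilks_one_sided_n())   # → 59, the familiar '59 runs' of 95/95 BEPU analyses
```

The attraction of this approach is that N is independent of how many uncertainty parameters each vendor ranges, which is one reason nonparametric statistics fit the flexibility requirement described above.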
Advanced resonance self-shielding method for gray resonance treatment in lattice physics code GALAXY
Koike, Hiroki; Yamaji, Kazuya; Kirimura, Kazuki; Sato, Daisuke; Matsumoto, Hideki; Yamamoto, Akio
2012-01-01
A new resonance self-shielding method based on the equivalence theory is developed for general application to lattice physics calculations. The present scope includes commercial light water reactor (LWR) design applications, which require both calculation accuracy and calculation speed. In order to develop the new method, all the calculation processes from cross-section library preparation to effective cross-section generation are reviewed and reframed by adopting current enhanced methodologies for lattice calculations. The new method is composed of the following four key elements: (1) a cross-section library generation method with a polynomial hyperbolic tangent formulation, (2) a resonance self-shielding method based on the multi-term rational approximation for general lattice geometry and gray resonance absorbers, (3) a spatially dependent gray resonance self-shielding method for generation of the intra-pellet power profile, and (4) an integrated reaction rate preservation method between the multi-group and the ultra-fine-group calculations. From the various verifications and validations, the applicability of the present resonance treatment is fully confirmed. As a result, the new resonance self-shielding method is established, not only by extension of past concentrated effort in the reactor physics research field, but also by unification of newly developed unique and challenging techniques for practical application to lattice physics calculations. (author)
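GALAXY's actual rational-approximation coefficients are not given in this record. As an illustration of the multi-term form used in equivalence-theory self-shielding, the sketch below evaluates the fuel-lump first-flight escape probability with Carlvik's classic two-term set for an isolated cylinder and contrasts it with the one-term Wigner approximation:

```python
def escape_probability(sigma_t, sigma_e, terms=((2.0, 2.0), (-1.0, 3.0))):
    """Multi-term rational approximation of the fuel escape probability:
        P = sum_n beta_n * alpha_n*Sigma_e / (Sigma_t + alpha_n*Sigma_e),
    where Sigma_e is the escape cross section (reciprocal mean chord length).
    Default (beta, alpha) pairs are Carlvik's two-term set for an isolated
    cylinder; the one-term Wigner approximation is the single pair (1, 1)."""
    return sum(b * a * sigma_e / (sigma_t + a * sigma_e) for b, a in terms)

sigma_e = 1.0   # escape cross section in units of 1/mean-chord-length
for sigma_t in (0.1, 1.0, 10.0):
    wigner = escape_probability(sigma_t, sigma_e, terms=((1.0, 1.0),))
    carlvik = escape_probability(sigma_t, sigma_e)
    print(sigma_t, round(wigner, 3), round(carlvik, 3))
```

Both forms reproduce the correct limits (P → 1 as the lump becomes transparent, P → Σe/Σt as it blackens); the extra term buys accuracy in between, which is the motivation for the multi-term treatment of general geometry and gray absorbers in the paper.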
Anderson, D.V.; Shumaker, D.E.
1993-01-01
From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this step in parallel. The authors' code sorts and reorders the particles in spatial order. This allows them to greatly reduce memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. In this hybrid simulation the electrons are modeled as a fluid, and the field equations solved are obtained from the electron momentum equation together with the pre-Maxwell equations (displacement current neglected). Either zero or finite electron mass can be used in the electron model. The resulting field equations are solved with an iteratively explicit procedure which is thus trivial to parallelize. Likewise, the field interpolations and the particle pushing are simple to parallelize. The deposition, sorting, and reordering phases are less simple, and it is for these that the authors present detailed algorithms. They have now successfully tested the parallel version of HOPS in serial mode, and it is being readied for parallel execution on the Cray C-90. They will then port HOPS to a massively parallel computer in the next year.
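The effect of the sort-and-reorder strategy on deposition can be sketched without the plasma physics (illustrative code, not HOPS itself): once particles are sorted by cell index, the many-to-one scatter becomes a sequence of contiguous run sums with no indirect indexing in the inner loop, which is what vectorizes and parallelizes:

```python
def deposit_naive(cell_idx, weights, n_cells):
    """Scalar many-to-one deposition: the indirect-indexing scatter loop
    that vectorizes and parallelizes poorly."""
    rho = [0.0] * n_cells
    for c, w in zip(cell_idx, weights):
        rho[c] += w
    return rho

def deposit_sorted(cell_idx, weights, n_cells):
    """Deposition after spatially sorting the particles: each cell's charge
    is a contiguous run, accumulated with direct sequential access; only one
    store per cell touches rho."""
    particles = sorted(zip(cell_idx, weights))   # spatial reordering
    rho = [0.0] * n_cells
    i = 0
    while i < len(particles):
        c = particles[i][0]
        acc = 0.0
        while i < len(particles) and particles[i][0] == c:
            acc += particles[i][1]               # contiguous, directly indexed
            i += 1
        rho[c] = acc
    return rho

cells = [3, 1, 3, 0, 1, 3, 2, 2]
w = [1.0, 0.5, 2.0, 1.5, 0.5, 1.0, 0.25, 0.75]
assert deposit_naive(cells, w, 4) == deposit_sorted(cells, w, 4)
```

Domain decomposition then follows naturally: disjoint ranges of sorted cells can be deposited by different processors without write conflicts.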
Method of accounting for code safety valve setpoint drift in safety analyses
Rousseau, K.R.; Bergeron, P.A.
1989-01-01
In performing the safety analyses for transients that result in a challenge to the reactor coolant system (RCS) pressure boundary, the general acceptance criterion is that the peak RCS pressure not exceed the American Society of Mechanical Engineers limit of 110% of the design pressure. Without crediting non-safety-grade pressure mitigating systems, protection from this limit is mainly provided by the primary and secondary code safety valves. In theory, the combination of relief capacity and setpoints for these valves is designed to provide this protection. Generally, banks of valves are set at varying setpoints staggered by 15- to 20-psid increments to minimize the number of valves that would open by an overpressure challenge. In practice, however, when these valves are removed and tested (typically during a refueling outage), setpoints are sometimes found to have drifted by >50 psid. This drift should be accounted for during the performance of the safety analysis. This paper describes analyses performed by Yankee Atomic Electric Company (YAEC) to account for setpoint drift in safety valves from testing. The results of these analyses are used to define safety valve operability or acceptance criteria
Radiation field characterization of a BNCT research facility using Monte Carlo method - code MCNP-4B
Hernandez, Antonio Carlos
2002-01-01
Boron Neutron Capture Therapy (BNCT) is a selective cancer treatment that arises as an alternative therapy when the usual techniques (surgery, chemotherapy or radiotherapy) show no satisfactory results. The main proposal of this work is to design a facility for BNCT studies. This facility relies on the use of an AmBe neutron source and on a set of moderators, filters and shielding which will provide the best neutron/gamma beam characteristics for these BNCT studies, i.e., high-intensity thermal and/or epithermal neutron fluxes with the minimum feasible gamma-ray and fast-neutron contaminants. A computational model of the experiment was used to obtain the radiation field at the sample irradiation position. The calculations have been performed with the MCNP 4B Monte Carlo code, and the results obtained can be regarded as satisfactory, i.e., a thermal neutron fluence N_T = 1.35x10^8 n/cm^2, a fast neutron dose of 5.86x10^-10 Gy/N_T and a gamma-ray dose of 8.30x10^-14 Gy/N_T. (author)
Tso, C.F. [Arup (United Kingdom); Hueggenberg, R. [Gesellschaft fuer Nuklear-Behaelter mbH (Germany)
2004-07-01
Drop testing and analysis are the two methods for demonstrating the performance of packages in hypothetical drop accident scenarios. The exact purpose of the tests and the analyses, and the relative prominence of the two in the license application, may depend on the Competent Authority and will vary between countries. The Finite Element Method (FEM) is a powerful analysis tool. A reliable finite element (FE) code, when used correctly and appropriately, will allow a package's behaviour to be simulated reliably. With improvements in computing power, and in the sophistication and reliability of FE codes, it is likely that FEM calculations will increasingly be used as evidence of drop test performance when seeking Competent Authority approval. What is lacking at the moment, however, is a standardised method of assessing an FE code in order to determine whether it is sufficiently reliable or pessimistic. To this end, the project Evaluation of Codes for Analysing the Drop Test Performance of Radioactive Material Transport Containers, funded by the European Commission Directorate-General XVII (now Directorate-General for Energy and Transport) and jointly performed by Arup and Gesellschaft fuer Nuklear-Behaelter mbH, was carried out in 1998. The work consisted of three components: (1) a survey of existing finite element software, with a view to finding codes that may be capable of analysing the drop test performance of radioactive material packages, and producing an inventory of them; (2) development of a set of benchmark problems for evaluating software used to analyse the drop test performance of packages; and (3) evaluation of the finite element codes by testing them against the benchmarks. This paper presents a summary of this work.
Park, Jae-Hong; Kim, Moo-Hwan; Bae, Seong-Won; Byun, Sang-Chul [Pohang University of Science and Technology, Pohang (Korea, Republic of)
1998-03-15
The final objectives of this study are to establish methods for evaluating the integrity of containment building structures and for safety analysis during postulated severe accidents, and to reduce the uncertainty of these methods. To that end, the CONTAIN 1.2 code's models for analysing severe-accident phenomena and the heat transfer between the air inside the containment building and the inner walls have been reviewed and analysed. For the double containment wall provided for the next-generation nuclear reactor, which differs from the previous type of containment, the temperature and pressure rise histories were calculated and compared with the results for previous designs.
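The pressure and temperature response described above can be illustrated with a minimal lumped-parameter energy balance. This is an illustrative sketch only: CONTAIN 1.2 models many coupled phenomena (multi-cell atmospheres, wall heat transfer, condensation), whereas here the containment atmosphere is a single ideal-gas node, and all numbers are hypothetical.

```python
# Minimal lumped-parameter sketch of containment pressure/temperature rise.
# Illustrative only: not CONTAIN 1.2's models; all numbers are hypothetical.

R = 287.0    # specific gas constant of air, J/(kg K)
CV = 718.0   # specific heat of air at constant volume, J/(kg K)

def pressure_temperature_rise(volume_m3, mass_kg, t0_k, heat_j):
    """Return (T, P) after adding heat_j to a closed, rigid gas volume."""
    t1 = t0_k + heat_j / (mass_kg * CV)   # energy balance: Q = m * cv * dT
    p1 = mass_kg * R * t1 / volume_m3     # ideal gas law in the fixed volume
    return t1, p1

# Hypothetical containment free volume: 50,000 m3 of air, 60,000 kg at 300 K,
# receiving 5 GJ of heat from a postulated accident sequence.
t, p = pressure_temperature_rise(50_000.0, 60_000.0, 300.0, 5.0e9)
print(t, p)   # final temperature (K) and absolute pressure (Pa)
```

In a real analysis the heat input itself is time-dependent and partly absorbed by the walls, which is exactly the heat-transfer modelling the study reviews.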
Oh, C.H.; Cho, Z.H.; California Univ., Irvine
1986-01-01
A new phase coding method using a selection gradient for high-speed NMR flow velocity measurement is introduced and discussed. To establish a phase-velocity relationship for flow under the slice-selection gradient and spin-echo RF pulse, the Bloch equation was numerically solved under the assumption that only one directional flow exists, i.e. flow in the direction of slice selection. Details of the numerical solution of the Bloch equation and techniques related to the numerical computations are also given. Finally, using the numerical calculation, high-speed flow velocity measurement was attempted and found to be in good agreement with other complementary controlled measurements. (author)
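The phase-velocity relationship at the heart of such methods can be sketched numerically. The following is not the authors' code: it ignores relaxation and RF pulses and simply integrates the phase accrued by transverse magnetization of a spin moving along a bipolar gradient in the rotating frame, where the gradient waveform and its amplitudes are assumed for illustration.

```python
import math

# Hedged sketch: phase accrued by a spin moving at constant velocity along
# a bipolar gradient (+G for tau, then -G for tau), relaxation neglected.
# For this waveform the net phase is gamma * G * v * tau**2, i.e. linear in v.

GAMMA = 2.675e8          # gyromagnetic ratio of 1H, rad/(s T)

def accrued_phase(velocity, grad=10e-3, tau=1e-3, steps=20000):
    """Integrate d(phase)/dt = -gamma * G(t) * x(t) over a bipolar gradient."""
    dt = 2 * tau / steps
    phase = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                    # midpoint rule
        g = grad if t < tau else -grad        # bipolar gradient waveform
        x = velocity * t                      # spin position along the gradient
        phase += -GAMMA * g * x * dt
    return phase

# Phase grows linearly with velocity: the phase-velocity relationship
# exploited for flow measurement. Stationary spins accrue no net phase.
for v in (0.0, 0.1, 0.2):                     # m/s
    print(v, accrued_phase(v))
```

The linearity in velocity is what makes the phase a usable flow encoder; the paper's contribution is establishing this relationship for the slice-selection gradient itself by solving the full Bloch equation.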
Video coding and decoding devices and methods preserving PPG relevant information
2015-01-01
The present invention relates to a video encoding device (10, 10', 10") and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder
Video coding and decoding devices and methods preserving ppg relevant information
2013-01-01
The present invention relates to a video encoding device (10, 10', 10'') and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder
The probabilistic method of WWER fuel rod strength estimation using the START-3 code
Bibilashvili, Yu.K.; Medvedev, A.V.; Bogatyr, S.M.; Sokolov, F.F.; Khramtsov, M.V.
2001-01-01
In recent years, probabilistic methods have been widely used to determine the influence exerted by the geometry, technology and performance parameters of a fuel rod on the characteristics of its condition. Despite the diversity of probabilistic methods, their basis is the simplest scheme of the Monte Carlo (MC) method. This scheme assumes a large number of realizations of a random variable and the statistical assessment of its characteristics. To generate random values, a pseudo-random number generator is usually used. The use of quasi-random sequence elements in its place substantially reduces machine time, since it promotes quicker convergence of the method. Probabilistic methods used to study the characteristics of a fuel rod's condition can be considered an auxiliary means to deterministic calculations that allows the degree of conservatism of design calculations to be assessed. (author)
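The convergence advantage of quasi-random sequences mentioned above can be demonstrated with a small experiment: estimating the mean of a smooth response over a uniformly distributed input parameter, once with pseudo-random sampling and once with a van der Corput low-discrepancy sequence. The response function is a hypothetical stand-in, not a START-3 model.

```python
import math
import random

# Sketch: pseudo-random vs quasi-random (van der Corput) sampling error
# when estimating the mean of a smooth response. Illustrative only.

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def response(u):
    # hypothetical smooth response to a normalized parameter u in [0, 1)
    return math.sin(math.pi * u)

N = 4096
exact = 2.0 / math.pi                          # true mean of sin(pi*u) on [0, 1)

random.seed(0)
err_pseudo = abs(sum(response(random.random()) for _ in range(N)) / N - exact)
err_quasi = abs(sum(response(van_der_corput(i + 1)) for i in range(N)) / N - exact)

print(err_pseudo, err_quasi)
```

Pseudo-random error shrinks like 1/sqrt(N), while the low-discrepancy estimate converges nearly like 1/N for smooth integrands, which is why it needs far less machine time for the same accuracy.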
Nijhof, A.H.J.; Cludts, Stephan; Fisscher, O.A.M.; Laan, Albertus
2003-01-01
More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code, but many organisations get stuck in the challenge of implementing and maintaining the code. The code then
V. V. Galchenko
2016-12-01
The calculation scheme of a fuel assembly for the preparation of few-group characteristics is described, using the Serpent code. This code uses the Monte Carlo method and continuous-energy microscopic data libraries. The Serpent code is intended for the calculation of fuel assembly characteristics, burnup calculations and the preparation of few-group homogenized macroscopic cross-sections. The results of verification simulations are presented in comparison with other codes (WIMS, HELIOS, NESSEL, etc.) that are used for the neutron-physics analysis of VVER-type fuel.
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality measures need to be in place to maintain quality and to reduce the impact of individual coders as well as of quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common coding procedure showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
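The parallel-coding quality check described above reduces to comparing two coders' assignments at the three-digit rubric level. A minimal sketch of that comparison follows; the certificates and ICD-9 codes are invented examples, not registry data.

```python
# Sketch of the parallel-coding agreement measure: two coders code the same
# death certificates independently, and agreement is computed on the
# three-digit ICD-9 rubric. All codes below are invented examples.

def three_digit_rubric(icd9_code):
    """Reduce a full ICD-9 code such as '410.9' to its three-digit rubric."""
    return icd9_code.split(".")[0]

def rubric_agreement(coder_a, coder_b):
    """Fraction of certificates assigned the same three-digit rubric."""
    assert len(coder_a) == len(coder_b)
    same = sum(
        three_digit_rubric(a) == three_digit_rubric(b)
        for a, b in zip(coder_a, coder_b)
    )
    return same / len(coder_a)

coder_a = ["410.9", "162.9", "431", "250.0", "798.1"]
coder_b = ["410.0", "162.2", "431", "250.1", "799.9"]
print(rubric_agreement(coder_a, coder_b))  # → 0.8
```

Disagreements flagged this way would then be resolved against the dynamic thesaurus, which is what keeps coding uniform across decades and coders.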
2005-01-01
A - Description of program or function: (1) Problems to be solved: MVP/GMVP can solve eigenvalue and fixed-source problems. The multigroup code GMVP can solve forward and adjoint problems for neutron, photon and neutron-photon coupled transport. The continuous-energy code MVP can solve only the forward problems. Both codes can also perform time-dependent calculations. (2) Geometry description: MVP/GMVP employs combinatorial geometry to describe the calculation geometry. It describes spatial regions by the combination of the 3-dimensional objects (BODIes). Currently, the following objects (BODIes) can be used. - BODIes with linear surfaces: half space, parallelepiped, right parallelepiped, wedge, right hexagonal prism; - BODIes with quadratic surface and linear surfaces: cylinder, sphere, truncated right cone, truncated elliptic cone, ellipsoid by rotation, general ellipsoid; - Arbitrary quadratic surface and torus. The rectangular and hexagonal lattice geometry can be used to describe the repeated geometry. Furthermore, the statistical geometry model is available to treat coated fuel particles or pebbles for high temperature reactors. (3) Particle sources: The various forms of energy-, angle-, space- and time-dependent distribution functions can be specified. (4) Cross sections: The ANISN-type PL cross sections or the double-differential cross sections can be used in the multigroup code GMVP. On the other hand, the specific cross section libraries are used in the continuous-energy code MVP. The libraries are generated from the evaluated nuclear data (JENDL-3.3, ENDF/B-VI, JEF-3.0 etc.) by using the LICEM code. The neutron cross sections in the unresolved resonance region are described by the probability table method. The neutron cross sections at arbitrary temperatures are available for MVP by just specifying the temperatures in the input data. (5) Boundary conditions: Vacuum, perfect reflective, isotropic reflective (white), periodic boundary conditions can be
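The combinatorial-geometry idea described in item (2) — regions built as boolean combinations of simple BODIes — can be sketched as a point-membership test. The bodies and the example region below are invented for illustration and are not MVP/GMVP input syntax.

```python
# Sketch of combinatorial geometry: a spatial region is the boolean
# combination of simple BODIes (sphere, cylinder, half space). The region
# below is an invented example, not MVP/GMVP input.

def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r**2

def cylinder_z(cx, cy, r):
    # infinite circular cylinder along the z axis
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 <= r**2

def half_space_z(z0):
    return lambda x, y, z: z >= z0

# Region: inside the cylinder AND above z = 0 AND NOT inside the sphere.
fuel = cylinder_z(0.0, 0.0, 2.0)
plug = sphere(0.0, 0.0, 1.0, 0.5)
base = half_space_z(0.0)

def in_region(x, y, z):
    return fuel(x, y, z) and base(x, y, z) and not plug(x, y, z)

print(in_region(0.0, 0.0, 3.0))   # → True  (in cylinder, above base, off sphere)
print(in_region(0.0, 0.0, 1.0))   # → False (inside the spherical plug)
```

A Monte Carlo transport code uses exactly this kind of membership logic (plus surface-crossing distances) to track particles through lattices of such regions.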
Modelling Sawing of Metal Tubes Through FEM Simulation
Bort, C. M. Giorgio; Bosetti, P.; Bruschi, S.
2011-01-01
The paper presents the development of a numerical model of the sawing process of AISI 304 thin tubes, which are cut with a circular blade with alternating roughing and finishing teeth. The numerical simulation environment is the three-dimensional FEM software Deform v.10.1. The actual trajectories of the teeth were determined by a blade kinematics analysis developed in Matlab. Due to the manufacturing rolling steps and subsequent welding stage, the tube material is characterized by a gradient of properties along its thickness. Consequently, a simplified cutting test was set up and carried out in order to identify the values of relevant material parameters to be used in the numerical model. The dedicated test was the Orthogonal Tube Cutting test (OTC), which was performed on an instrumented lathe. The proposed numerical model was validated by comparing numerical results and experimental data obtained from sawing tests carried out on an industrial machine. The following outputs were compared: the cutting force, the chip thickness, and the chip contact area.
Estimation of POL-iteration methods in fast running DNBR code
Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H. [KAERI, Daejeon (Korea, Republic of)
2016-05-15
In this study, various root-finding methods are applied to the POL-iteration module in SCOMS, and their POL-iteration efficiency is compared with that of the reference method. On the basis of these results, the optimum POL-iteration algorithm is selected. The POL requires iterating until the present local power reaches the limiting power. The search for the limiting power is equivalent to finding the root of a nonlinear equation. The POL-iteration process in the online monitoring system used a variant of the bisection method, which is the most robust algorithm for finding the root of a nonlinear equation. The method, which includes an interval-accelerating factor and a routine for escaping ill-posed conditions, assured the robustness of the SCOMS system. The POL-iteration module in SCOMS must also satisfy a minimum-calculation-time requirement. To meet this requirement, a non-iterative algorithm, a few-channel model and a simple steam table are implemented in SCOMS to improve the calculation time. MDNBR evaluation at a given operating condition requires the DNBR calculation at all axial locations, and an increase in the number of POL iterations increases the calculation load of SCOMS significantly. Therefore, the calculation efficiency of SCOMS is strongly dependent on the number of POL iterations. In the case study, the iterations of the methods show superlinear convergence in finding the limiting power, and the Brent method shows a quadratic convergence speed. These methods are effective and better than the reference bisection algorithm.
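The trade-off described above — bisection's robustness versus the lower iteration counts of superlinear methods — can be sketched on a hypothetical monotone margin function whose zero is the limiting power. The function below is an invented stand-in for the real SCOMS models, and the secant method is used here as a representative superlinear bracketed-start alternative (not necessarily the exact variant compared in the paper).

```python
# Sketch: iteration counts of bisection vs a superlinear method (secant)
# for locating the limiting power. The margin function is hypothetical.

def margin(power):
    # invented monotone "DNBR margin": crosses zero at the limiting power
    return 1.30 - (power / 100.0) ** 1.5

def bisection(f, lo, hi, tol=1e-8):
    """Halve the bracket until it is narrower than tol."""
    n = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        n += 1
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi), n

def secant(f, x0, x1, tol=1e-8, max_iter=100):
    """Superlinear convergence; no derivative needed."""
    n = 0
    while abs(x1 - x0) > tol and n < max_iter:
        fx0, fx1 = f(x0), f(x1)
        x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        n += 1
    return x1, n

root_b, n_b = bisection(margin, 50.0, 200.0)
root_s, n_s = secant(margin, 50.0, 200.0)
print(root_b, n_b)    # limiting power after ~34 halvings
print(root_s, n_s)    # same root in far fewer iterations
```

Bisection needs roughly log2(interval/tol) evaluations regardless of the function, which is why a faster-converging method (Brent-type in the paper) pays off when every evaluation is a full axial DNBR sweep.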
Development of an artificial neural network model integrated with constitutive and FEM models
Kong, L.X.; Hodgson, P.D.
2000-01-01
Although the standard error of the IPANN model developed by Kong and Hodgson is lower than that of the constitutive model, it is found that the prediction of reaction force and torque during rolling with FEM is less accurate for the IPANN model in some deformation regions. It is the summation of the product of strain and stress over the deformation range that contributes most to precise prediction. An ANN model is therefore developed in this work by integrating both the IPANN and FEM models. It is found that the integrated IPANN-FEM model is the most accurate model. (author)
A FEM Modeling of the Concrete Pavement Made of the Recycling Material
Šešlija Miloš
2016-01-01
This paper is a brief review of research focused on formulating a numerical model for a concrete pavement made of recycled material. For the numerical modelling, the finite element method (FEM) with a 3D finite element model was applied, using the software EverFE 2.25. The results of the FEM analysis show the changes in displacements, stresses and deflections for all layers of the road construction model. The next phase of the research used FEM software with appropriate general-purpose non-linear models, which allows the analysis of the real behaviour of the rigid pavement under load.
A comparison of different quasi-Newton acceleration methods for partitioned multi-physics codes
Haelterman, R
2018-02-01