WorldWideScience

Sample records for standard bases finiteness

  1. Evaluation Standard for Safety Coefficient of Roller Compacted Concrete Dam Based on Finite Element Method

    Directory of Open Access Journals (Sweden)

    Bo Li

    2014-01-01

Full Text Available The lack of an evaluation standard for the safety coefficient based on the finite element method (FEM) limits the wide application of FEM to roller compacted concrete dams (RCCD). In this paper, the strength reserve factor (SRF) method is adopted to simulate gradual failure and possible unstable modes of the RCCD system. Entropy theory and catastrophe theory are used to obtain the ultimate bearing resistance and failure criterion of the RCCD. The most dangerous sliding plane for RCCD failure is found using Latin hypercube sampling (LHS) and auxiliary analysis by partial least squares regression (PLSR). Finally, a method for determining the evaluation standard of the RCCD safety coefficient based on FEM is put forward using least squares support vector machines (LSSVM) and particle swarm optimization (PSO). The proposed method is applied to the safety coefficient analysis of the Longtan RCCD in China. The calculation shows that RCCD failure is closely related to RCCD interface strength, and that the Longtan RCCD is safe under the design condition. Considering the RCCD failure characteristics and combining the advantages of several excellent algorithms, the proposed method determines the evaluation standard for the safety coefficient of RCCD based on FEM for the first time and can be generalized to any RCCD.

  2. Non Standard Finite Difference Scheme for Mutualistic Interaction Description

    OpenAIRE

    Gabbriellini, Gianluca

    2012-01-01

One of the more interesting themes in mathematical ecology is the description of the mutualistic interaction between two interacting species. Based on the continuous-time model developed by Holland and DeAngelis (2009) for consumer-resource mutualism, this work deals with the application of the Mickens non-standard finite difference method to transform the continuous-time scheme into a discrete-time one. It has been proved that the Mickens scheme is dynamically consistent with the o...
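As a minimal illustration of the Mickens non-standard finite difference idea (shown here on the simpler logistic equation, not on the Holland-DeAngelis consumer-resource model treated in the record above), the sketch below uses two typical NSFD ingredients: a denominator function phi(h) in place of the raw step h, and a nonlocal discretization of the nonlinear term. The function name and parameter values are illustrative assumptions.

```python
# Hedged sketch of a Mickens-type NSFD scheme for dx/dt = r*x*(1 - x) (logistic equation).
import numpy as np

def nsfd_logistic(x0, r, h, steps):
    """NSFD rules used here:
    - denominator function phi(h) = (exp(r*h) - 1)/r instead of the raw step h,
    - nonlocal treatment of the nonlinearity: x^2 -> x_{k+1} * x_k.
    The explicit update preserves positivity and the fixed points x = 0 and x = 1."""
    phi = (np.exp(r * h) - 1.0) / r
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        # (x_{k+1} - x_k)/phi = r*x_k - r*x_{k+1}*x_k  =>  solve for x_{k+1}
        x[k + 1] = x[k] * (1.0 + phi * r) / (1.0 + phi * r * x[k])
    return x

print(nsfd_logistic(x0=0.1, r=2.0, h=0.5, steps=20)[-1])  # approaches the carrying capacity 1
```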

  3. An inverse method based on finite element model to derive the plastic flow properties from non-standard tensile specimens of Eurofer97 steel

    Directory of Open Access Journals (Sweden)

    S. Knitel

    2016-12-01

Full Text Available A new inverse method was developed to derive the plastic flow properties of non-standard disk tensile specimens, which were designed to fit the irradiation rods used for spallation irradiations in the SINQ (Schweizer Spallations Neutronen Quelle) target at the Paul Scherrer Institute. The inverse method, which makes use of MATLAB and the finite element code ABAQUS, is based upon the reconstruction of the load-displacement curve by a succession of connected small linear segments. To do so, the experimental engineering stress/strain curve is divided into an elastic and a plastic section, and the plastic section is further divided into small segments. Each segment is then used to determine an associated pair of true stress/plastic strain values, representing the constitutive behavior. The main advantage of the method is that it does not rely on a hypothetical analytical expression of the constitutive behavior. To account for the stress/strain gradients that develop in the non-standard specimen, the stress and strain were weighted over the volume of the deforming elements. The method was validated with tensile tests carried out at room temperature on non-standard flat disk tensile specimens as well as on standard cylindrical specimens made of the reduced-activation tempered martensitic steel Eurofer97. While the two specimen geometries presented a significant difference in terms of deformation localization during necking, the same true stress/strain curve was deduced from the inverse method. The potential and usefulness of the inverse method are outlined for irradiated materials that suffer from a large reduction in uniform elongation.

  4. Baryon number dissipation at finite temperature in the standard model

    International Nuclear Information System (INIS)

    Mottola, E.; Raby, S.; Starkman, G.

    1990-01-01

We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ, is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate Γ: γ ⪯ n_f Γ/T³. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs

  5. On the fate of the Standard Model at finite temperature

    Energy Technology Data Exchange (ETDEWEB)

Rose, Luigi Delle; Marzo, Carlo [Università del Salento, Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Via Arnesano, 73100 Lecce (Italy); INFN - Sezione di Lecce, via Arnesano, 73100 Lecce (Italy)]; Urbano, Alfredo [SISSA - International School for Advanced Studies, via Bonomea 256, 34136 Trieste (Italy)]

    2016-05-10

In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve numerically — both at zero and finite temperature — the bounce equation, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.

  6. Finite element analyses for seismic shear wall international standard problem

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.

    1998-04-01

    Two identical reinforced concrete (RC) shear walls, which consist of web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development), Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state-of-the-art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes are described in detail in this report. 16 refs., 60 figs., 16 tabs

  7. Finite element analyses for Seismic Shear Wall International Standard Problem

    International Nuclear Information System (INIS)

    Park, Y.; Hofmayer, C.; Chokshi, N.

    1997-01-01

    In the seismic design of shear wall structures, e.g., nuclear reactor buildings, a linear FEM analysis is frequently used to quantify the stresses under the design loading condition. The final design decisions, however, are still based on empirical design rules established over decades from accumulated laboratory test data. This paper presents an overview of the state-of-the-art on the application of nonlinear FEM analysis to reinforced concrete (RC) shear wall structures under severe earthquake loadings based on the findings obtained during the Seismic Shear Wall International Standard Problem (SSWISP) Workshop in 1996. Also, BNL's analysis results of the International Standard Problem (ISP) shear walls under monotonic static, cyclic static and dynamic loading conditions are described

  8. Stability and non-standard finite difference method of the generalized Chua's circuit

    KAUST Repository

    Radwan, Ahmed G.; Moaddy, K.; Momani, Shaher M.

    2011-01-01

In this paper, we develop a framework to obtain approximate numerical solutions of the fractional-order Chua's circuit with a memristor using a non-standard finite difference method. Chaotic response is obtained with fractional-order elements as well as integer-order elements.

  9. Relations between finite zero structure of the plant and the standard $H_\\infty$ controller order reduction

    NARCIS (Netherlands)

    Watanabe, Takao; Stoorvogel, Antonie Arij

    2001-01-01

    A relation between the finite zero structure of the plant and the standard $H_\\infty$ controller was studied. The mechanism was also investigated using the ARE-based $H_\\infty$ controller that is represented by a free parameter. It was observed that the mechanism of the controller reduction is

  10. Discrete phase space based on finite fields

    International Nuclear Information System (INIS)

    Gibbons, Kathleen S.; Hoffman, Matthew J.; Wootters, William K.

    2004-01-01

The original Wigner function provides a way of representing in phase space the quantum states of systems with continuous degrees of freedom. Wigner functions have also been developed for discrete quantum systems, one popular version being defined on a 2N×2N discrete phase space for a system with N orthogonal states. Here we investigate an alternative class of discrete Wigner functions, in which the field of real numbers that labels the axes of continuous phase space is replaced by a finite field having N elements. There exists such a field if and only if N is a power of a prime; so our formulation can be applied directly only to systems for which the state-space dimension takes such a value. Though this condition may seem limiting, we note that any quantum computer based on qubits meets the condition and can thus be accommodated within our scheme. The geometry of our N×N phase space also leads naturally to a method of constructing a complete set of N+1 mutually unbiased bases for the state space.
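For the smallest prime-power dimension N = 2, the N + 1 = 3 mutually unbiased bases mentioned above are the eigenbases of the three Pauli operators. The short numerical check below (not taken from the paper; basis vectors written out by hand) verifies the defining property |⟨a|b⟩|² = 1/N for vectors drawn from different bases.

```python
# Numerical check that the Pauli Z, X and Y eigenbases of a qubit are mutually unbiased.
import itertools
import numpy as np

z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]
y_basis = [np.array([1, 1j], dtype=complex) / np.sqrt(2),
           np.array([1, -1j], dtype=complex) / np.sqrt(2)]

for (i, B1), (j, B2) in itertools.combinations(enumerate([z_basis, x_basis, y_basis]), 2):
    for a in B1:
        for b in B2:
            overlap = abs(np.vdot(a, b)) ** 2      # |<a|b>|^2 for vectors from different bases
            assert np.isclose(overlap, 0.5), (i, j, overlap)
print("All inter-basis overlaps equal 1/N = 0.5, as required for mutual unbiasedness.")
```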

  11. Finite element based electric motor design optimization

    Science.gov (United States)

    Campbell, C. Warren

    1993-01-01

    The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.

  12. Standard Model Extension and Casimir effect for fermions at finite temperature

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.F., E-mail: alesandroferreira@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900, Cuiabá, Mato Grosso (Brazil); Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC (Canada); Khanna, Faqir C., E-mail: khannaf@uvic.ca [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC (Canada); Department of Physics, University of Alberta, T6J 2J1, Edmonton, Alberta (Canada)

    2016-11-10

Lorentz and CPT symmetries are foundations for important processes in particle physics. Recent studies in the Standard Model Extension (SME) at high energy indicate that these symmetries may be violated. Modifications in the Lagrangian are necessary to achieve a Hermitian Hamiltonian. The fermion sector of the Standard Model Extension is used to calculate the effects of Lorentz and CPT violation on the Casimir effect at zero and finite temperature. The Casimir effect and the Stefan–Boltzmann law at finite temperature are calculated using the thermo field dynamics formalism.

  13. Astronaut Bone Medical Standards Derived from Finite Element (FE) Models of QCT Scans from Population Studies

    Science.gov (United States)

    Sibonga, J. D.; Feiveson, A. H.

    2014-01-01

This work was accomplished in support of the Finite Element [FE] Strength Task Group, NASA Johnson Space Center [JSC], Houston, TX. This group was charged with the task of developing rules for using finite-element [FE] bone-strength measures to construct operating bands for bone health that are relevant to astronauts following exposure to spaceflight. FE modeling is a computational tool used by engineers to estimate the failure loads of complex structures. Recently, some engineers have used this tool to characterize the failure loads of the hip in population studies that also monitored fracture outcomes. A Directed Research Task was authorized in July 2012 to investigate FE data from these population studies to derive the proposed standards of bone health as a function of age and gender. The proposed standards make use of an FE-based index that integrates multiple contributors to bone strength, an expanded evaluation that is critical after an astronaut is exposed to spaceflight. The current index of bone health used by NASA is the measurement of areal BMD. There was a concern voiced by a research and clinical advisory panel that the sole use of areal BMD would be insufficient to fully evaluate the effects of spaceflight on the hip. Hence, NASA may not have a full understanding of fracture risk, both during and after a mission, and may be poorly estimating in-flight countermeasure efficacy. The FE Strength Task Group - composed of principal investigators of the aforementioned population studies and of FE modelers - donated some of its population QCT data to estimate hip bone strength by FE modeling for this specific purpose. Consequently, Human Health Countermeasures [HHC] has compiled a dataset of FE hip strengths, generated by a single FE modeling approach, from human subjects (approx. 1060) with ages covering the age range of the astronauts. The dataset has been analyzed to generate a set of FE strength cutoffs for the following scenarios: a) Qualify an

  14. Finite

    Directory of Open Access Journals (Sweden)

    W.R. Azzam

    2015-08-01

Full Text Available This paper reports the application of a skirted foundation system to study the behavior of foundations with structural skirts adjacent to a sand slope and subjected to earthquake loading. The effect of the adopted skirts in safeguarding the foundation and slope from collapse is studied. The effect of the skirts on controlling horizontal soil movement and decreasing pore water pressure beneath foundations and beside the slopes during an earthquake is investigated. This technique is investigated numerically using finite element analysis. A four-story reinforced concrete building that rests on a raft foundation is idealized as a two-dimensional model with and without skirts. A two-dimensional plane strain program, PLAXIS (dynamic version), is adopted. A series of models for the problem under investigation were run under different skirt depths and locations from the slope crest. The effect of subgrade relative density and skirt thickness is also discussed. Nodal displacements and element strains were analyzed for the foundation with and without skirts and for the different studied parameters. The results showed great effectiveness in increasing the overall stability of the slope and foundation. The soil-footing system confined by such skirts reduced the foundation acceleration; it therefore tends to act as a damping element and relieves the disturbance transmitted to the adjacent slope. This technique can be considered a good method to control slope deformation and decrease slope acceleration during earthquakes.

  15. Finite Elements on Point Based Surfaces

    NARCIS (Netherlands)

    Clarenz, U.; Rumpf, M.; Telea, A.

    2004-01-01

    We present a framework for processing point-based surfaces via partial differential equations (PDEs). Our framework efficiently and effectively brings well-known PDE-based processing techniques to the field of point-based surfaces. Our method is based on the construction of local tangent planes and

  16. Stability and non-standard finite difference method of the generalized Chua's circuit

    KAUST Repository

    Radwan, Ahmed G.

    2011-08-01

In this paper, we develop a framework to obtain approximate numerical solutions of the fractional-order Chua's circuit with a memristor using a non-standard finite difference method. Chaotic response is obtained with fractional-order elements as well as integer-order elements. Stability analysis and the condition of oscillation for the integer-order system are discussed. In addition, the stability analyses for different fractional-order cases are investigated, showing a great sensitivity to small order changes and indicating the poles' locations inside the physical s-plane. The Grünwald-Letnikov method is used to approximate the fractional derivatives. Numerical results are presented graphically and reveal that the non-standard finite difference scheme is an effective and convenient method to solve fractional-order chaotic systems, and to validate their stability. © 2011 Elsevier Ltd. All rights reserved.
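To illustrate the two ingredients named in this record (the Grünwald-Letnikov approximation of the fractional derivative and a non-standard denominator function), the sketch below marches a scalar fractional relaxation test equation D^α x = -λx rather than the memristive Chua circuit; the choice of denominator phi(h) = 1 - exp(-h) and all parameter values are assumptions of this sketch, not the authors' settings.

```python
# Hedged sketch: Grünwald-Letnikov discretization of D^alpha x = -lam*x with a
# non-standard step function phi(h) replacing the plain step h.
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), by the usual recursion."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_relaxation(alpha=0.9, lam=1.0, x0=1.0, h=0.01, steps=2000):
    """Implicitly march D^alpha x = -lam*x; phi(h)^alpha plays the role of h^alpha."""
    phi = 1.0 - np.exp(-h)        # non-standard denominator function (assumption of this sketch)
    w = gl_weights(alpha, steps)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(1, steps + 1):
        hist = np.dot(w[1:k + 1], x[k - 1::-1])       # memory term of the GL operator
        x[k] = -hist / (1.0 + lam * phi ** alpha)     # solve the implicit update for x_k
    return x

print(fractional_relaxation()[-1])   # decays toward the equilibrium x = 0
```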

  17. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

We present a residual based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized by only residual based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  18. Casimir effect at finite temperature for pure-photon sector of the minimal Standard Model Extension

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.F., E-mail: alesandroferreira@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900, Cuiabá, Mato Grosso (Brazil); Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada); Khanna, Faqir C., E-mail: khannaf@uvic.ca [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada)

    2016-12-15

    Dynamics between particles is governed by Lorentz and CPT symmetry. There is a violation of Parity (P) and CP symmetry at low levels. The unified theory, that includes particle physics and quantum gravity, may be expected to be covariant with Lorentz and CPT symmetry. At high enough energies, will the unified theory display violation of any symmetry? The Standard Model Extension (SME), with Lorentz and CPT violating terms, has been suggested to include particle dynamics. The minimal SME in the pure photon sector is considered in order to calculate the Casimir effect at finite temperature.

  19. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    Science.gov (United States)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
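The spatial ingredient of such schemes is a Chebyshev collocation differentiation matrix. The sketch below builds the standard Chebyshev-Gauss-Lobatto differentiation matrix (Trefethen-style construction, an assumption of this sketch) and checks it on a known second derivative; the fractional time stepping of the paper is not reproduced here.

```python
# Hedged sketch of the Chebyshev collocation building block: differentiation matrix on [-1, 1].
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Chebyshev-Gauss-Lobatto points x."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal entries
    return D, x

# Check: the second derivative of sin(pi*x) is -pi^2*sin(pi*x).
D, x = cheb(24)
u = np.sin(np.pi * x)
err = np.max(np.abs(D @ (D @ u) + np.pi ** 2 * np.sin(np.pi * x)))
print("max error of Chebyshev second derivative:", err)
```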

  20. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
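A minimal sketch of the unconstrained core of such a controller is given below: outputs are predicted by convolving candidate inputs with the FIR coefficients, and input moves are penalized with an l2 weight. The function name, the zero-initial-state assumption and all numbers are illustrative; the input and input-rate constraints of the paper would additionally require a QP solver and are omitted here.

```python
# Hedged sketch of a regularized FIR predictive controller (unconstrained least-squares core).
import numpy as np

def fir_mpc_move(h, r, lam):
    """h: impulse response coefficients, r: output reference over the horizon,
    lam: weight on the input-rate (move) penalty. Returns the planned input sequence."""
    N = len(r)
    h_pad = np.concatenate([h, np.zeros(max(0, N - len(h)))])[:N]
    # Lower-triangular convolution (prediction) matrix: y = Theta @ u, zero past inputs assumed.
    Theta = np.array([[h_pad[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
    # First-difference matrix penalizing input moves Delta u_k = u_k - u_{k-1}.
    Dm = np.eye(N) - np.eye(N, k=-1)
    # argmin_u ||r - Theta u||^2 + lam ||Dm u||^2  (closed-form normal equations)
    return np.linalg.solve(Theta.T @ Theta + lam * Dm.T @ Dm, Theta.T @ r)

u = fir_mpc_move(h=np.array([0.0, 0.5, 0.3, 0.15, 0.05]), r=np.ones(20), lam=0.1)
print("first planned input move (applied in receding-horizon fashion):", u[0])
```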

  1. Integrating a logarithmic-strain based hyper-elastic formulation into a three-field mixed finite element formulation to deal with incompressibility in finite-strain elasto-plasticity

    International Nuclear Information System (INIS)

    Dina Al Akhrass; Bruchon, Julien; Drapier, Sylvain; Fayolle, Sebastien

    2014-01-01

This paper deals with the treatment of incompressibility in solid mechanics in finite-strain elasto-plasticity. A finite-strain model proposed by Miehe, Apel and Lambrecht, which is based on a logarithmic strain measure and its work-conjugate stress tensor, is chosen. Its main interest is that it allows for the adoption of standard constitutive models established in a small-strain framework. This model is extended to take into account the plastic incompressibility constraint intrinsically. For that purpose, an extension of this model to a three-field mixed finite element formulation is proposed, involving displacements, a strain variable and pressure as nodal variables, in contrast to standard finite elements. Numerical examples of finite-strain problems are presented to assess the performance of the formulation. To conclude, an industrial case for which the classical under-integrated elements fail is considered. (authors)

  2. Local Projection-Based Stabilized Mixed Finite Element Methods for Kirchhoff Plate Bending Problems

    Directory of Open Access Journals (Sweden)

    Xuehai Huang

    2013-01-01

Full Text Available Based on a stress-deflection variational formulation, we propose a family of local projection-based stabilized mixed finite element methods for Kirchhoff plate bending problems. From the error equations, we obtain error estimates of the approximation to the stress tensor in the energy norm. By a duality argument, error estimates of the approximation to the deflection in the H1-norm are achieved. Then we design an a posteriori error estimator which is closely related to the equilibrium equation, the constitutive equation, and the nonconformity of the finite element spaces. With the help of Zienkiewicz-Guzmán-Neilan element spaces, we prove the reliability of the a posteriori error estimator. The efficiency of the a posteriori error estimator is proved by a standard bubble function argument.

  3. Finite Element Based Design and Optimization for Piezoelectric Accelerometers

    DEFF Research Database (Denmark)

    Liu, Bin; Kriegbaum, B.; Yao, Q.

    1998-01-01

    A systematic Finite Element design and optimisation procedure is implemented for the development of piezoelectric accelerometers. Most of the specifications of accelerometers can be obtained using the Finite Element simulations. The deviations between the simulated and calibrated sensitivities...

  4. Artificial emotional model based on finite state machine

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-mei; WU Wei-guo

    2008-01-01

According to basic emotional theory, an artificial emotional model based on the finite state machine (FSM) was presented. In the finite state machine model of emotion, the emotional space included the basic emotional space and the multiple emotional spaces. The emotion-switching diagram was defined and the transition function was developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform. The model included three subsystems: the input one, the emotion one and the behavior one. In the emotional subsystem, the responses of different personalities to external stimuli were described by defining a personal space. The model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to the other basic emotions. The simulation result is shown to correspond to the emotion-switching law of human beings.
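A toy sketch of the structure described above is given below: an emotional finite state machine whose next state is sampled from a Markov-chain distribution conditioned on the current emotion and the stimulus. The state names, stimuli and probabilities are invented for illustration and do not come from the authors' Stateflow model.

```python
# Toy sketch of an emotional FSM with Markov-chain transitions (illustrative values only).
import random

# transition[(current_emotion, stimulus)] -> probability distribution over next emotions
TRANSITIONS = {
    ("neutral", "praise"): {"happy": 0.8, "neutral": 0.2},
    ("neutral", "insult"): {"angry": 0.6, "sad": 0.3, "neutral": 0.1},
    ("happy",   "insult"): {"neutral": 0.5, "sad": 0.3, "angry": 0.2},
    ("sad",     "praise"): {"neutral": 0.6, "happy": 0.4},
    ("angry",   "praise"): {"neutral": 0.7, "angry": 0.3},
}

def step(state, stimulus):
    """Sample the next emotional state; unknown (state, stimulus) pairs keep the current state."""
    dist = TRANSITIONS.get((state, stimulus), {state: 1.0})
    return random.choices(list(dist), weights=list(dist.values()))[0]

state = "neutral"
for stimulus in ["praise", "insult", "praise"]:
    state = step(state, stimulus)
    print(stimulus, "->", state)
```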

  5. Finite Countermodel Based Verification for Program Transformation (A Case Study

    Directory of Open Access Journals (Sweden)

    Alexei P. Lisitsa

    2015-12-01

Full Text Available Both automatic program verification and program transformation are based on program analysis. In the past decade a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation) for verification of unreachability properties of computing systems were introduced and demonstrated. On the other hand, semantics-based unfold-fold program transformation methods pose diverse kinds of reachability tasks and try to solve them, aiming at improving the semantics tree of the program being transformed. That means some general-purpose verification methods may be used for strengthening program transformation techniques. This paper considers the question of how the finite countermodel method for safety verification might be used in Turchin's supercompilation method. We extract a number of supercompilation sub-algorithms trying to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of the problems.

  6. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2015-01-01

Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for potential problems, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (intra-element field and auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

  7. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
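A much simplified sketch of the recognition task is shown below: each strategy is modeled as a probabilistic finite automaton with deterministic structure (each symbol from a state has one successor and a probability), and an observed action trace is assigned to the strategy with the highest log-likelihood. The automata, symbols and probabilities are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of PFA-based behavioral recognition by maximum log-likelihood.
import math

# pfa[state][symbol] = (probability, next_state); deterministic transition structure assumed.
CLEAN_FIRST = {
    "s0": {"clean": (0.7, "s0"), "move": (0.3, "s1")},
    "s1": {"clean": (0.2, "s0"), "move": (0.8, "s1")},
}
EXPLORE_FIRST = {
    "s0": {"clean": (0.2, "s0"), "move": (0.8, "s1")},
    "s1": {"clean": (0.6, "s0"), "move": (0.4, "s1")},
}

def log_likelihood(pfa, trace, start="s0"):
    """Follow the unique path labelled by the trace and accumulate log-probabilities."""
    state, ll = start, 0.0
    for symbol in trace:
        prob, state = pfa[state][symbol]
        ll += math.log(prob)
    return ll

trace = ["move", "move", "clean", "clean", "clean"]   # observed actions of the agent
scores = {"clean-first": log_likelihood(CLEAN_FIRST, trace),
          "explore-first": log_likelihood(EXPLORE_FIRST, trace)}
print("recognized strategy:", max(scores, key=scores.get))
```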

  8. A systematic study of finite BRST-BV transformations within W-X formulation of the standard and the Sp(2)-extended field-antifield formalism

    Energy Technology Data Exchange (ETDEWEB)

    Batalin, Igor A. [P.N. Lebedev Physical Institute, Moscow (Russian Federation); Tomsk State Pedagogical University, Tomsk (Russian Federation); Bering, Klaus [Masaryk University, Faculty of Science, Brno (Czech Republic); Lavrov, Peter M. [Tomsk State Pedagogical University, Tomsk (Russian Federation); National Research Tomsk State University, Tomsk (Russian Federation)

    2016-03-15

Finite BRST-BV transformations are studied systematically within the W-X formulation of the standard and the Sp(2)-extended field-antifield formalism. The finite BRST-BV transformations are introduced by formulating a new version of the Lie equations. The corresponding finite change of the gauge-fixing master action X and the associated Ward identity are derived. (orig.)

  9. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing

    2015-02-12

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.

  10. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing; Wang, B.; Lubineau, Gilles; Moussawi, Ali

    2015-01-01

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.

  11. On Chudnovsky-Based Arithmetic Algorithms in Finite Fields

    OpenAIRE

    Atighehchi, Kevin; Ballet, Stéphane; Bonnecaze, Alexis; Rolland, Robert

    2015-01-01

Thanks to a new construction of the so-called Chudnovsky-Chudnovsky multiplication algorithm, we design efficient algorithms for both the exponentiation and the multiplication in finite fields. They are tailored to hardware implementation and they allow computations to be parallelized while maintaining a low number of bilinear multiplications. We give an example with the finite field ${\mathbb F}_{16^{13}}$.

  12. Indexing business processes based on annotated finite state automata

    NARCIS (Netherlands)

    Mahleko, B.; Wombacher, Andreas

The existing service discovery infrastructure, with UDDI as the de facto standard, is limited in that it does not support more complex searching based on matching business processes. Two business processes match if they agree on their simple services, their processing order, as well as any mandatory

  13. Face-based smoothed finite element method for real-time simulation of soft tissue

    Science.gov (United States)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  14. Finite element analysis of rotating beams physics based interpolation

    CERN Document Server

    Ganguli, Ranjan

    2017-01-01

    This book addresses the solution of rotating beam free-vibration problems using the finite element method. It provides an introduction to the governing equation of a rotating beam, before outlining the solution procedures using Rayleigh-Ritz, Galerkin and finite element methods. The possibility of improving the convergence of finite element methods through a judicious selection of interpolation functions, which are closer to the problem physics, is also addressed. The book offers a valuable guide for students and researchers working on rotating beam problems – important engineering structures used in helicopter rotors, wind turbines, gas turbines, steam turbines and propellers – and their applications. It can also be used as a textbook for specialized graduate and professional courses on advanced applications of finite element analysis.

  15. Optimization of thermal systems based on finite-time thermodynamics and thermoeconomics

    Energy Technology Data Exchange (ETDEWEB)

    Durmayaz, A. [Istanbul Technical University (Turkey). Department of Mechanical Engineering; Sogut, O.S. [Istanbul Technical University, Maslak (Turkey). Department of Naval Architecture and Ocean Engineering; Sahin, B. [Yildiz Technical University, Besiktas, Istanbul (Turkey). Department of Naval Architecture; Yavuz, H. [Istanbul Technical University, Maslak (Turkey). Institute of Energy

    2004-07-01

The irreversibilities originating from finite-time and finite-size constraints are important in real thermal system optimization. Since classical thermodynamic analysis based on thermodynamic equilibrium does not consider these constraints directly, it is necessary to consider the energy transfer between the system and its surroundings in rate form. Finite-time thermodynamics provides a fundamental starting point for the optimization of real thermal systems by adding the fundamental concepts of heat transfer and fluid mechanics to classical thermodynamics. In this study, optimization studies of thermal systems that consider various objective functions, based on finite-time thermodynamics and thermoeconomics, are reviewed. (author)

  16. Topology optimization of bounded acoustic problems using the hybrid finite element-wave based method

    DEFF Research Database (Denmark)

    Goo, Seongyeol; Wang, Semyung; Kook, Junghwan

    2017-01-01

    This paper presents an alternative topology optimization method for bounded acoustic problems that uses the hybrid finite element-wave based method (FE-WBM). The conventional method for the topology optimization of bounded acoustic problems is based on the finite element method (FEM), which...

  17. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  18. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    Science.gov (United States)

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
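For reference, the FFT route mentioned in this record is often written, under the simplifying assumption of an approximately uniform intensity I0, as an inverse-Laplacian solve of k dI/dz = -I0 ∇²φ in Fourier space. The sketch below is only an illustration of that route; sign conventions and regularization of the zero frequency vary between papers, and the function name and test numbers are assumptions.

```python
# Hedged sketch of an FFT-based transport-of-intensity phase retrieval (uniform-intensity case).
import numpy as np

def tie_phase_fft(dIdz, I0, wavelength, dx):
    """Recover the phase (up to an additive constant) from the axial intensity derivative."""
    k = 2.0 * np.pi / wavelength
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    lap = 4.0 * np.pi ** 2 * (FX ** 2 + FY ** 2)   # (-Laplacian) in Fourier space
    lap[0, 0] = 1.0                                 # avoid division by zero at DC
    phi_hat = (k / I0) * np.fft.fft2(dIdz) / lap
    phi_hat[0, 0] = 0.0                             # the mean phase is undetermined
    return np.real(np.fft.ifft2(phi_hat))

# Tiny usage example with synthetic data (purely illustrative numbers).
phi = tie_phase_fft(dIdz=np.random.randn(64, 64), I0=1.0, wavelength=633e-9, dx=5e-6)
print(phi.shape)
```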

  19. Finite-Time Stabilization and Adaptive Control of Memristor-Based Delayed Neural Networks.

    Science.gov (United States)

    Wang, Leimin; Shen, Yi; Zhang, Guodong

Finite-time stability problem has been a hot topic in control and system engineering. This paper deals with the finite-time stabilization issue of memristor-based delayed neural networks (MDNNs) via two control approaches. First, in order to realize the stabilization of MDNNs in finite time, a delayed state feedback controller is proposed. Then, a novel adaptive strategy is applied to the delayed controller, and finite-time stabilization of MDNNs can also be achieved by using the adaptive control law. Some easily verified algebraic criteria are derived to ensure the stabilization of MDNNs in finite time, and the estimation of the settling time functional is given. Moreover, several finite-time stability results as our special cases for both memristor-based neural networks (MNNs) without delays and neural networks are given. Finally, three examples are provided for the illustration of the theoretical results.

  20. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays

    OpenAIRE

    Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and di...

  1. Stochastic Finite Elements in Reliability-Based Structural Optimization

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.

    1995-01-01

Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables and how a sensitivity analysis of the reliability of a structural system with respect to optimization variables can be performed. A computer implementation is described and an illustrative example is given.

  2. Stochastic Finite Elements in Reliability-Based Structural Optimization

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.

Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables and how a sensitivity analysis of the reliability of a structural system with respect to optimization variables can be performed.

  3. Piezoelectric Accelerometers Modification Based on the Finite Element Method

    DEFF Research Database (Denmark)

    Liu, Bin; Kriegbaum, B.

    2000-01-01

    The paper describes the modification of piezoelectric accelerometers using a Finite Element (FE) method. Brüel & Kjær Accelerometer Type 8325 is chosen as an example to illustrate the advanced accelerometer development procedure. The deviation between the measurement and FE simulation results...

  4. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke (6) and Liu & Der Kiureghian...

  5. CHAOS-BASED ADVANCED ENCRYPTION STANDARD

    KAUST Repository

    Abdulwahed, Naif B.

    2013-01-01

This thesis introduces a new chaos-based Advanced Encryption Standard (AES). The AES is a well-known encryption algorithm that was standardized by the U.S. National Institute of Standards and Technology (NIST) in 2001. The thesis investigates and explores

  6. Isogeometric finite element data structures based on Bézier extraction of T-splines

    NARCIS (Netherlands)

    Scott, M.A.; Borden, M.J.; Verhoosel, C.V.; Sederberg, T.W.; Hughes, T.J.R.

    2011-01-01

    We develop finite element data structures for T-splines based on Bézier extraction generalizing our previous work for NURBS. As in traditional finite element analysis, the extracted Bézier elements are defined in terms of a fixed set of polynomial basis functions, the so-called Bernstein basis. The

  7. Diagnosis of Constant Faults in Read-Once Contact Networks over Finite Bases using Decision Trees

    KAUST Repository

    Busbait, Monther I.

    2014-01-01

    We study the depth of decision trees for diagnosis of constant faults in read-once contact networks over finite bases. This includes diagnosis of 0-1 faults, 0 faults and 1 faults. For any finite basis, we prove a linear upper bound on the minimum

  8. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    Science.gov (United States)

    Predoi, Mihai Valentin

    2014-09-01

The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided waves application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist to model a finite element eigenvalue problem: a two-dimensional cross-section model of the pipe or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with a non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Effective arithmetic in finite fields based on Chudnovsky's multiplication algorithm

    OpenAIRE

    Atighehchi , Kévin; Ballet , Stéphane; Bonnecaze , Alexis; Rolland , Robert

    2016-01-01

Thanks to a new construction of the Chudnovsky and Chudnovsky multiplication algorithm, we design efficient algorithms for both the exponentiation and the multiplication in finite fields. They are tailored to hardware implementation and they allow computations to be parallelized, while maintaining a low number of bilinear multiplications.

  10. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    International Nuclear Information System (INIS)

    Lee, Kye Hyung; Im, Se Yong; Lim, Jae Hyuk; Sohn, Dong Woo

    2015-01-01

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  11. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kye Hyung; Im, Se Yong [KAIST, Daejeon (Korea, Republic of); Lim, Jae Hyuk [KARI, Daejeon (Korea, Republic of); Sohn, Dong Woo [Korea Maritime and Ocean University, Busan (Korea, Republic of)

    2015-02-15

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  12. Fracture criterion for brittle materials based on statistical cells of finite volume

    International Nuclear Information System (INIS)

    Cords, H.; Kleist, G.; Zimmermann, R.

    1986-06-01

An analytical consideration of the Weibull statistical analysis of brittle materials established the necessity of including one additional material constant for a more comprehensive description of the failure behaviour. The Weibull analysis is restricted to infinitesimal volume elements as a consequence of the differential calculus applied. It was found that infinitesimally small elements are in conflict with the basic statistical assumption, and that the differential calculus is in fact not needed, since nowadays most stress analyses are based on finite element calculations, which are most suitable for a subsequent statistical analysis of strength. The size of a finite statistical cell has been introduced as the third material parameter. It should represent the minimum volume containing all statistical features of the material, such as the distribution of pores, flaws and grains. The new approach also contains a unique treatment of failure under multiaxial stresses. The quantity responsible for failure under multiaxial stresses is introduced as a modified strain energy. Sixteen different tensile specimens, including CT specimens, have been investigated experimentally and analyzed with the probabilistic fracture criterion. As a result it can be stated that the failure rates of all types of specimens made from three different grades of graphite are predictable. The accuracy of the prediction is one standard deviation. (orig.) [de]
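As background for the weakest-link evaluation over finite statistical cells described above, the sketch below computes a standard two-parameter Weibull failure probability, P_f = 1 - exp(-Σ_i (V_i/V0)(σ_i/σ0)^m), from cell stresses and volumes such as those obtained in finite element post-processing. The formula is the generic Weibull weakest-link expression, not the paper's modified strain-energy criterion, and all numbers are invented.

```python
# Hedged sketch: generic Weibull weakest-link failure probability over finite cells.
import numpy as np

def weibull_failure_probability(sigma, volume, sigma0, m, V0=1.0):
    """sigma, volume: per-cell stresses and volumes; sigma0, m: Weibull scale and modulus."""
    sigma = np.clip(np.asarray(sigma, dtype=float), 0.0, None)   # compressive cells do not contribute
    risk = np.sum((np.asarray(volume, dtype=float) / V0) * (sigma / sigma0) ** m)
    return 1.0 - np.exp(-risk)

# e.g. four cells taken from an FE post-processing step (made-up numbers)
print(weibull_failure_probability(sigma=[12.0, 18.0, 25.0, -5.0],
                                  volume=[2.0, 2.0, 1.0, 3.0],
                                  sigma0=40.0, m=8.0))
```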

  13. Finite detector based projection model for super resolution CT

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Hengyong; Wang, Ge [Wake Forest Univ. Health Sciences, Winston-Salem, NC (United States). Dept. of Radiology; Virgina Tech, Blacksburg, VA (United States). Biomedical Imaging Div.

    2011-07-01

    For finite detector and focal spot sizes, here we propose a projection model for super resolution CT. First, for a given X-ray source point, a projection datum is modeled as an area integral over a narrow fan-beam connecting the detector elemental borders and the X-ray source point. Then, the final projection value is expressed as the integral obtained in the first step over the whole focal spot support. An ordered-subset simultaneous algebraic reconstruction technique (OS-SART) is developed using the proposed projection model. In the numerical simulation, our method produces super spatial resolution and suppresses high-frequency artifacts. (orig.)

  14. Logic synthesis for FPGA-based finite state machines

    CERN Document Server

    Barkalov, Alexander; Kolopienczyk, Malgorzata; Mielcarek, Kamil; Bazydlo, Grzegorz

    2016-01-01

This book discusses control units represented by the model of a finite state machine (FSM). It contains various original methods and takes into account the peculiarities of field-programmable gate array (FPGA) chips and the FSM model. It shows that one of the peculiarities of FPGA chips is the existence of embedded memory blocks (EMBs). The book is devoted to the solution of logic synthesis problems and to reducing the amount of hardware in control units. The book will be interesting and useful for researchers and PhD students in the areas of Electrical Engineering and Computer Science, as well as for designers of modern digital systems.

  15. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays

    Science.gov (United States)

    2017-01-01

    Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don’t include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and distributed delay (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of our theoretical results. PMID:28931066

  16. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays.

    Science.gov (United States)

    Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and distributed delay (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of our theoretical results.

  17. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays.

    Directory of Open Access Journals (Sweden)

    Chuan Chen

    Full Text Available Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and distributed delay (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of our theoretical results.

  18. Finite element model updating of concrete structures based on imprecise probability

    Science.gov (United States)

    Biswal, S.; Ramaswamy, A.

    2017-09-01

    Imprecise probability based methods are developed in this study for the parameter estimation, in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using Metropolis Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurement only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model, in the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
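
    As a minimal illustration of Bayesian parameter estimation with the Metropolis-Hastings algorithm (without the imprecise-probability generalization or the Abaqus coupling described above), the sketch below updates a single stiffness-like parameter of a cheap surrogate model from noisy measurements; the surrogate, prior, and noise level are assumptions.

```python
# Minimal Metropolis-Hastings sketch for updating one model parameter from
# noisy measurements. A cheap surrogate stands in for the finite element model;
# the imprecise-probability extensions of the abstract are not reproduced.
import numpy as np

rng = np.random.default_rng(1)

def model_response(stiffness, loads):
    # Surrogate for an FE prediction, e.g. deflection = load / stiffness (assumption).
    return loads / stiffness

loads = np.array([10.0, 20.0, 30.0])
true_stiffness = 5.0
sigma = 0.05
measured = model_response(true_stiffness, loads) + rng.normal(0.0, sigma, loads.size)

def log_posterior(stiffness):
    if stiffness <= 0.0:
        return -np.inf
    log_prior = -0.5 * ((stiffness - 4.0) / 2.0) ** 2        # N(4, 2^2) prior (assumption)
    resid = measured - model_response(stiffness, loads)
    log_like = -0.5 * np.sum((resid / sigma) ** 2)
    return log_prior + log_like

samples, current = [], 4.0
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.1)                # random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

posterior = np.array(samples[5000:])                          # discard burn-in
print(f"posterior mean stiffness: {posterior.mean():.3f} +/- {posterior.std():.3f}")
```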

  19. Extension to linear dynamics for hybrid stress finite element formulation based on additional displacements

    Science.gov (United States)

    Sumihara, K.

    Based upon legitimate variational principles, a microscopic-macroscopic finite element formulation for linear dynamics is presented using the Hybrid Stress Finite Element Method. The microscopic application of the Geometric Perturbation introduced by Pian and the introduction of an infinitesimal limit core element (Baby Element) have been consistently combined according to the flexible and inherent interpretation of the legitimate variational principles originally proposed by Pian and Tong. The conceptual development based upon the Hybrid Finite Element Method is extended to linear dynamics with the introduction of physically meaningful higher modes.

  20. Finite element based composite solution for neutron transport problems

    International Nuclear Information System (INIS)

    Mirza, A.N.; Mirza, N.M.

    1995-01-01

    A finite element treatment for solving neutron transport problems is presented. The method employs region-wise discontinuous finite elements for the spatial representation of the neutron angular flux, while spherical harmonics are used for the directional dependence. Composite solutions have been obtained by using different orders of angular approximation in different parts of a system. The method has been successfully implemented for one-dimensional slab and two-dimensional rectangular geometry problems. An overall reduction in the number of nodal coefficients (more than 60% in some cases as compared to conventional schemes) has been achieved without loss of accuracy and with better utilization of computational resources. The method also provides an efficient way of handling physically difficult situations, such as the treatment of voids in duct problems and sharply changing angular flux. It is observed that a great wealth of information about the spatial and directional dependence of the angular flux is obtained much more quickly as compared to the Monte Carlo method, where most of the information is restricted to the locality of immediate interest. (author)

  1. Modeling hemodynamics in intracranial aneurysms: Comparing accuracy of CFD solvers based on finite element and finite volume schemes.

    Science.gov (United States)

    Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui

    2018-06-01

    Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs), but its adoption in clinical practice has been lacking, partially due to a lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations, CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, the accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver, and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver's accuracy is increased by increasing the degree of the polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from the best FV and dGFE approximations are used as the baseline for error quantification. On average, velocity errors for the second-best approximations are approximately 1 cm/s for a [0,125] cm/s velocity magnitude field. Results show that high-order dGFE provides better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.

  2. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation...... of accurate head models to the integration of the models in the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized...... the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  3. Generalized renewal process for repairable systems based on finite Weibull mixture

    International Nuclear Information System (INIS)

    Veber, B.; Nagode, M.; Fajdiga, M.

    2008-01-01

    Repairable systems can be brought to one of three possible states following a repair. These states are: 'as good as new', 'as bad as old' and 'better than old but worse than new'. The probabilistic models traditionally used to estimate the expected number of failures account for the first two states, but they do not properly apply to the last one, which is more realistic in practice. In this paper, a probabilistic model that is applicable to all three after-repair states, called the generalized renewal process (GRP), is applied. Simplistically, GRP addresses the repair assumption by introducing the concept of virtual age into the stochastic point processes to enable them to represent the full spectrum of repair assumptions. The shape of measured or design life distributions of systems can vary considerably, and therefore frequently cannot be approximated by simple distribution functions. The scope of the paper is to prove that a finite Weibull mixture, with positive component weights only, can be used as the underlying distribution of the time to first failure (TTFF) of the GRP model, on condition that the unknown parameters can be estimated. To support the main idea, three examples are presented. In order to estimate the unknown parameters of the GRP model with an m-fold Weibull mixture, the EM algorithm is applied. The GRP model with m mixture components is compared to the standard GRP model based on the two-parameter Weibull distribution by calculating the expected number of failures. It can be concluded that the suggested GRP model with a Weibull mixture with an arbitrary but finite number of components is suitable for predicting failures based on the past performance of the system.
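
    The expected number of failures under a Kijima type-I virtual-age GRP can be estimated by Monte Carlo simulation once the time-to-first-failure distribution is fixed. The sketch below uses a two-component Weibull mixture for the TTFF and draws successive failure times from the conditional distribution given the virtual age; all parameter values (weights, shapes, scales, restoration factor) are illustrative assumptions, and the EM estimation step of the abstract is not reproduced.

```python
# Monte Carlo estimate of the expected number of failures for a generalized
# renewal process (Kijima type-I virtual age) whose time-to-first-failure follows
# a two-component Weibull mixture. Parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

weights = np.array([0.4, 0.6])        # mixture weights (assumption)
shapes  = np.array([1.5, 3.0])        # Weibull shape parameters (assumption)
scales  = np.array([100.0, 400.0])    # Weibull scale parameters (assumption)
q = 0.3                               # restoration factor: 0 = as good as new, 1 = as bad as old

def mixture_cdf(t):
    return np.sum(weights * (1.0 - np.exp(-(t / scales) ** shapes)))

def next_failure(v, rng):
    """Draw the next inter-failure time given virtual age v by inverting the
    conditional CDF  F(t | v) = (F(v + t) - F(v)) / (1 - F(v))  numerically."""
    u = rng.random()
    target = mixture_cdf(v) + u * (1.0 - mixture_cdf(v))
    return brentq(lambda t: mixture_cdf(v + t) - target, 1e-9, 1e7)

def expected_failures(horizon=1000.0, n_paths=2000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_paths):
        t, v = 0.0, 0.0
        while True:
            x = next_failure(v, rng)
            t += x
            if t > horizon:
                break
            total += 1
            v += q * x                # Kijima type-I virtual age update
    return total / n_paths

print("expected number of failures over the horizon:", expected_failures())
```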

  4. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  5. Biquartic Finite Volume Element Method Based on Lobatto-Gauss Structure

    Institute of Scientific and Technical Information of China (English)

    Gao Yan-ni; Chen Yan-li

    2015-01-01

    In this paper, a biquartic finite volume element method based on a Lobatto-Gauss structure is presented for variable coefficient elliptic equations on a rectangular partition. Not only the optimal H1 and L2 error estimates but also some super-convergence properties are available and can be proved for this method. The numerical results obtained by this finite volume element scheme confirm the validity of the theoretical analysis and the effectiveness of the method.

  6. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
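
    The two-phase idea, a coarse partition from structural information followed by refinement, can be illustrated with a small sketch: here the coarse blocks are formed by acceptance status and backward depth (shortest distance to an accepting state over reversed transitions), and the refinement step is a standard Moore-style signature refinement rather than the hash-table procedure of the paper; the example automaton is an assumption.

```python
# Sketch of a two-phase DFA minimization: coarse partition by (accepting, backward depth),
# then Moore-style refinement by transition signatures. The hash-table refinement of the
# paper is replaced by a plain signature refinement; the example DFA is an assumption.
from collections import deque

def backward_depths(transitions, accepting, n_states):
    """Shortest distance from each state to an accepting state (BFS on reversed edges)."""
    reverse = {s: [] for s in range(n_states)}
    for (src, sym), dst in transitions.items():
        reverse[dst].append(src)
    depth = {s: None for s in range(n_states)}
    queue = deque(accepting)
    for s in accepting:
        depth[s] = 0
    while queue:
        s = queue.popleft()
        for p in reverse[s]:
            if depth[p] is None:
                depth[p] = depth[s] + 1
                queue.append(p)
    return depth

def minimize(transitions, accepting, n_states, alphabet):
    depth = backward_depths(transitions, accepting, n_states)
    # Phase 1: coarse partition by acceptance status and backward depth.
    block = {s: (s in accepting, depth[s]) for s in range(n_states)}
    # Phase 2: refine until transition signatures are consistent within each block.
    while True:
        signature = {s: (block[s],) + tuple(block[transitions[(s, a)]] for a in alphabet)
                     for s in range(n_states)}
        if len(set(signature.values())) == len(set(block.values())):
            return len(set(block.values()))      # number of states of the minimal DFA
        block = signature

# Example: 4-state complete DFA over {a, b} in which states 1 and 2 are equivalent.
transitions = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 3, (1, 'b'): 3,
               (2, 'a'): 3, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
print("minimal number of states:", minimize(transitions, accepting={3}, n_states=4, alphabet='ab'))
```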

  7. Parallel Computation on Multicore Processors Using Explicit Form of the Finite Element Method and C++ Standard Libraries

    Directory of Open Access Journals (Sweden)

    Rek Václav

    2016-11-01

    Full Text Available In this paper, the modifications of existing sequential code written in the C or C++ programming language for the calculation of various kinds of structures using the explicit form of the Finite Element Method (Dynamic Relaxation Method, Explicit Dynamics) in the NEXX system are introduced. The NEXX system is the core of the engineering software NEXIS, Scia Engineer, RFEM and RENEX. It supports multithreaded execution, which can now be implemented at the level of the native C++ programming language using its standard libraries. Thanks to the high degree of abstraction that the contemporary C++ programming language provides, a library created in this way can be generalized for other uses of parallelism in computational mechanics.

  8. Standard Model CP-violation and baryon asymmetry; 2, finite temperature

    CERN Document Server

    Gavela-Legazpi, Maria Belen; Orloff, J; Pène, O; Quimbay, C

    1994-01-01

    We consider the scattering of quasi-particles off the boundary created during a first order electroweak phase transition. Spatial coherence is lost due to the quasi-quark damping rate, and we show that reflection on the boundary is suppressed, even at tree-level. Simply on CP considerations, we argue against electroweak baryogenesis in the Standard Model via the charge transport mechanism. A CP asymmetry is produced in the reflection properties of quarks and antiquarks hitting the phase boundary. An effect is present at order \\alpha_W^2 in rate and a regular GIM behaviour is found, which can be expressed in terms of two unitarity triangles. A crucial role is played by the damping rate of quasi-particles in a hot plasma, which is a relevant scale together with M_W and the temperature. The effect is many orders of magnitude below what observation requires.

  9. Technical bases for criticality safety standards

    International Nuclear Information System (INIS)

    Clayton, E.D.

    1980-01-01

    An American National Standard implies a consensus of those substantially concerned with its scope and provisions. The technical basis, or foundation, on which the consensus rests must, in turn, be firmly established and documented for public review. The technical bases of several standards in different stages of completion and acceptance are discussed and reviewed: ANSI/ANS-8.12, 1978, Nuclear Criticality Control and Safety of Homogeneous Plutonium-Uranium Mixtures Outside Reactors (approved July 17, 1978); ANS-8.15, Nuclear Criticality Control of Special Actinide Elements (Draft No. 5 of a newly proposed standard); ANS-8.14, Use of Solutions of Neutron Absorbers for Criticality Control (Draft No. 4 of a newly proposed standard); ANS-8.5 (revision of N16.4, 1971), Use of Borosilicate-Glass Raschig Rings as a Neutron Absorber in Solutions of Fissile Material (Draft No. 5 as a result of the prescribed five-year review and update of the old standard). In each of the preceding, the newly proposed (or revised) limits are based on the extension of experimental data via well established calculations, or by means of independent calculations with adequate margins for uncertainties. The four cases serve to illustrate the insight of the work group members in the establishment of the technical bases for the limits and the level of activity required on their part in the preparation of ANSI standards. A time span of from four up to seven years has not been uncommon for the preparation, review, and acceptance of an ANSI standard. 8 figures. 7 tables

  10. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  11. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  12. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  13. Promoting Culturally Responsive Standards-Based Teaching

    Science.gov (United States)

    Saifer, Steffen; Barton, Rhonda

    2007-01-01

    Culturally responsive standards-based (CRSB) teaching can help bring diverse school communities together and make learning meaningful. Unlike multicultural education--which is an important way to incorporate the world's cultural and ethnic diversity into lessons--CRSB teaching draws on the experiences, understanding, views, concepts, and ways of…

  14. New 2D adaptive mesh refinement algorithm based on conservative finite-differences with staggered grid

    Science.gov (United States)

    Gerya, T.; Duretz, T.; May, D. A.

    2012-04-01

    We present a new 2D adaptive mesh refinement (AMR) algorithm based on stress-conservative finite differences formulated for a non-uniform rectangular staggered grid. The refinement approach is based on repetitive cell splitting organized via a quad-tree construction (every parent cell is split into 4 daughter cells of equal size). Irrespective of the level of resolution, every cell has 5 staggered nodes (2 horizontal velocities, 2 vertical velocities and 1 pressure) for which respective governing equations, boundary conditions and interpolation equations are formulated. The connectivity of the grid is achieved via cross-indexing of grid cells and basic nodal points located in their corners: four corner nodes are indexed for every cell and up to 4 surrounding cells are indexed for every node. The accuracy of the approach depends critically on the formulation of the stencil used at the "hanging" velocity nodes located at the boundaries between different levels of resolution. The most accurate results are obtained for the scheme based on the volume flux balance across the resolution boundary combined with stress-based interpolation of velocity orthogonal to the boundary. We tested this new approach with a number of 2D variable viscosity analytical solutions. Our tests demonstrate that the adaptive staggered grid formulation has convergence properties similar to those obtained in the case of a standard, non-adaptive staggered grid formulation. This convergence is also achieved when the resolution boundary crosses sharp viscosity contrast interfaces. The convergence rates measured are found to be insensitive to scenarios when the transition in grid resolution crosses sharp viscosity contrast interfaces. We compared various grid refinement strategies based on the distribution of different field variables such as viscosity, density and velocity. According to these tests the refinement allows for significant (0.5-1 order of magnitude) increase in the computational accuracy at the same

  15. Groebner Finite Path Algebras

    OpenAIRE

    Leamer, Micah J.

    2004-01-01

    Let K be a field and Q a finite directed multi-graph. In this paper I classify all path algebras KQ and admissible orders with the property that all of their finitely generated ideals have finite Groebner bases.

  16. Coupled thermomechanical behavior of graphene using the spring-based finite element approach

    Energy Technology Data Exchange (ETDEWEB)

    Georgantzinos, S. K., E-mail: sgeor@mech.upatras.gr; Anifantis, N. K., E-mail: nanif@mech.upatras.gr [Machine Design Laboratory, Department of Mechanical Engineering and Aeronautics, University of Patras, Rio, 26500 Patras (Greece); Giannopoulos, G. I., E-mail: ggiannopoulos@teiwest.gr [Materials Science Laboratory, Department of Mechanical Engineering, Technological Educational Institute of Western Greece, 1 Megalou Alexandrou Street, 26334 Patras (Greece)

    2016-07-07

    The prediction of the thermomechanical behavior of graphene using a new coupled thermomechanical spring-based finite element approach is the aim of this work. Graphene sheets are modeled at the nanoscale according to their atomistic structure. Based on molecular theory, the potential energy is defined as a function of temperature, describing the interatomic interactions in different temperature environments. The force field is approximated by suitable straight spring finite elements. Springs simulate the interatomic interactions and interconnect nodes located at the atomic positions. Their stiffness matrix is expressed as a function of temperature. By using appropriate boundary conditions, various different graphene configurations are analyzed and their thermomechanical response is obtained using conventional finite element procedures. A complete parametric study with respect to the geometric characteristics of graphene is performed, and the temperature dependency of the elastic material properties is finally predicted. Comparisons with available published works found in the literature demonstrate the accuracy of the proposed method.

  17. Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam

    Science.gov (United States)

    Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa

    2017-08-01

    In this article, wavelet-based spectral finite element (WSFE) model is formulated for time domain and wave domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of SFE model due to periodicity assumption, especially during inverse Fourier transformation and back to time domain. The high accuracy of WSFE model is then evaluated by comparing its results with those of conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on vibration and wave characteristics, and static and dynamic stabilities of moving beam are investigated.

  18. Matchmaking for Business Processes based on Conjunctive Finite State Automata

    NARCIS (Netherlands)

    Wombacher, Andreas; Fankhauser, Peter; Mahleko, Bendick; Neuhold, Erich

    2005-01-01

    Web services have a potential to enhance B2B e-commerce over the internet by allowing companies and organisations to publish their business processes on service directories where potential trading partners can find them. This can give rise to new business paradigms based on ad-hoc trading relations

  19. Research-based standards for accessible housing

    DEFF Research Database (Denmark)

    Helle, Tina; Iwarsson, Susanne; Brandt, Åse

    Since standards for accessible housing seldom are manifestly based on research and vary cross nationally, it is important to examine if there exists any scientific evidence, supporting these standards. Thus, one aim of this study was to review the literature in search of such scientific evidence...... data on older citizens and their housing environment in Sweden, Germany and Latvia (n=1150), collected with the Housing Enabler instrument. Applying statistical simulation we explored how different national standards for housing design influenced the prevalence of common environmental barriers. Kaplan...... by the database search (n= 2,577), resulting in the inclusion of one publication. Contacts to leading researchers in the field identified five publications. The hand search of 22 journals led to one publication. We have exemplified how the prevalence of common environmental problems in housing environments...

  20. Finite element approximations of the stokes flow problem based upon various variational principles

    International Nuclear Information System (INIS)

    Franca, L.P.; Hughes, T.J.R.; Stenberg, R.

    1989-05-01

    Finite element methods are constructed by adding to the usual Galerkin method terms that are mesh-dependent least-squares forms of the Euler-Lagrange equations. The methods are consistent and possess additional stability compared to the Galerkin method. Finite element interpolations, which are unstable in the Galerkin approach, are now convergent. The methodology is applied to the velocity-pressure formulation, a.k.a. Herrmann's formulation, to the stress-velocity formulation, a.k.a. Hellinger-Reissner's formulation, and to a new formulation based on augmented stress, pressure and velocity.

  1. Controlling chaos in permanent magnet synchronous motor based on finite-time stability theory

    International Nuclear Information System (INIS)

    Du-Qu, Wei; Bo, Zhang

    2009-01-01

    This paper reports that the performance of a permanent magnet synchronous motor (PMSM) degrades due to chaos when its system parameters fall into a certain region. To control the undesirable chaos in the PMSM, a nonlinear controller, which is simple and easy to construct, is presented to achieve finite-time chaos control based on the finite-time stability theory. Computer simulation results show that the proposed controller is very effective. The obtained results may help to maintain the secure operation of industrial servo drive systems. (general)
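
    A rough numerical sketch of the idea: the dimensionless PMSM model below is driven into a chaotic regime and a fractional-power feedback on the speed equation is switched on to suppress the chaotic oscillations in finite time. The parameter set (gamma, sigma), the gains, and the specific control law are illustrative assumptions and not the controller derived in the paper.

```python
# Sketch: chaos suppression in a dimensionless PMSM model by a fractional-power
# (finite-time) feedback added to the speed equation. Parameter values, gains and
# the control law are illustrative assumptions, not the paper's controller.
import numpy as np

gamma, sigma = 20.0, 5.46     # parameter set commonly quoted as chaotic (assumption)
k, alpha = 150.0, 0.5         # controller gain and fractional power (assumption)

def rhs(state, control_on):
    i_d, i_q, w = state
    u = -k * np.sign(w) * abs(w) ** alpha - k * w if control_on else 0.0
    return np.array([-i_d + w * i_q,
                     -i_q - w * i_d + gamma * w,
                     sigma * (i_q - w) + u])

dt, t_end, t_on = 1e-4, 20.0, 10.0   # control switched on at t_on
state = np.array([0.1, 0.1, 0.1])
speed_before, speed_after = [], []
for step in range(int(t_end / dt)):
    t = step * dt
    state = state + dt * rhs(state, control_on=(t >= t_on))   # explicit Euler step
    (speed_before if t < t_on else speed_after).append(state[2])

print("max |speed| before control:", np.max(np.abs(speed_before)))
print("max |speed| in last 2 s with control:", np.max(np.abs(speed_after[-int(2.0 / dt):])))
```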

  2. Finite element analysis of trabecular bone structures : a comparison of image-based meshing techniques

    NARCIS (Netherlands)

    Ulrich, D.; Rietbergen, van B.; Weinans, H.; Rüegsegger, P.

    1998-01-01

    In this study, we investigate if finite element (FE) analyses of human trabecular bone architecture based on 168 μm images can provide relevant information about the bone mechanical characteristics. Three human trabecular bone samples, one taken from the femoral head, one from the iliac crest,

  3. A new Identity Based Encryption (IBE) scheme using extended Chebyshev polynomial over finite fields Zp

    International Nuclear Information System (INIS)

    Benasser Algehawi, Mohammed; Samsudin, Azman

    2010-01-01

    We present a method to extract key pairs needed for the Identity Based Encryption (IBE) scheme from extended Chebyshev polynomials over finite fields Zp. Our proposed scheme relies on the hard problem and the bilinear property of the extended Chebyshev polynomial over Zp. The proposed system is applicable, secure, and reliable.

  4. A finite element perspective on non-linear FFT-based micromechanical simulations

    NARCIS (Netherlands)

    Zeman, J.; de Geus, T.W.J.; Vondřejc, J.; Peerlings, R.H.J.; Geers, M.G.D.

    2016-01-01

    Fourier solvers have become efficient tools to establish structure-property relations in heterogeneous materials. Introduced as an alternative to the Finite Element (FE) method, they are based on fixed-point solutions of the Lippmann-Schwinger type integral equation. Their computational efficiency

  5. A finite element perspective on nonlinear FFT-based micromechanical simulations

    NARCIS (Netherlands)

    Zeman, J.; de Geus, T.W.J.; Vondrejc, J.; Peerlings, R.H.J.; Geers, M.G.D.

    2017-01-01

    Fourier solvers have become efficient tools to establish structure-property relations in heterogeneous materials. Introduced as an alternative to the Finite Element (FE) method, they are based on fixed-point solutions of the Lippmann-Schwinger type integral equation. Their computational efficiency

  6. Diagnosis of three types of constant faults in read-once contact networks over finite bases

    KAUST Repository

    Busbait, Monther I.; Moshkov, Mikhail

    2016-01-01

    We study the depth of decision trees for diagnosis of three types of constant faults in read-once contact networks over finite bases containing only indecomposable networks. For each basis and each type of faults, we obtain a linear upper bound

  7. A Kohn–Sham equation solver based on hexahedral finite elements

    International Nuclear Information System (INIS)

    Fang Jun; Gao Xingyu; Zhou Aihui

    2012-01-01

    We design a Kohn–Sham equation solver based on hexahedral finite element discretizations. The solver integrates three schemes proposed in this paper. The first scheme arranges one a priori locally-refined hexahedral mesh with appropriate multiresolution. The second one is a modified mass-lumping procedure which accelerates the diagonalization in the self-consistent field iteration. The third one is a finite element recovery method which enhances the eigenpair approximations with small extra work. We carry out numerical tests on each scheme to investigate the validity and efficiency, and then apply them to calculate the ground state total energies of the nanosystems C60, C120, and C275H172. It is shown that our solver appears to be computationally attractive for finite element applications in electronic structure study.

  8. A finite volume method for cylindrical heat conduction problems based on local analytical solution

    KAUST Repository

    Li, Wang

    2012-10-01

    A new finite volume method for cylindrical heat conduction problems based on local analytical solution is proposed in this paper with detailed derivation. The calculation results of this new method are compared with the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though the discretized expression of this proposed method is slightly more complex than the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical result shows that the total CPU time of the new method is significantly less than conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
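
    For context, a conventional second-order finite volume discretization of steady radial heat conduction in a cylinder is sketched below; this is the kind of baseline scheme the abstract compares against, not the proposed local-analytical-solution method, and the geometry, conductivity, and boundary temperatures are assumptions.

```python
# Conventional second-order finite volume sketch for steady 1D radial heat conduction,
# d/dr (r k dT/dr) = 0, with fixed temperatures at the inner and outer radii.
# This is a baseline scheme of the kind the abstract compares against, not the
# proposed local-analytical-solution method. Geometry and boundary values are assumptions.
import numpy as np

r_in, r_out, k = 0.01, 0.1, 50.0      # radii [m] and conductivity [W/m/K] (assumptions)
T_in, T_out = 400.0, 300.0            # boundary temperatures [K] (assumptions)
n = 40                                # number of control volumes

faces = np.linspace(r_in, r_out, n + 1)
centers = 0.5 * (faces[:-1] + faces[1:])
dr = faces[1] - faces[0]

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    # Conductance k * r_face / distance across each face (per unit length and radian).
    west = k * faces[i] / (dr / 2 if i == 0 else dr)
    east = k * faces[i + 1] / (dr / 2 if i == n - 1 else dr)
    A[i, i] = -(west + east)
    if i > 0:
        A[i, i - 1] = west
    else:
        b[i] -= west * T_in           # Dirichlet condition at the inner radius
    if i < n - 1:
        A[i, i + 1] = east
    else:
        b[i] -= east * T_out          # Dirichlet condition at the outer radius

T = np.linalg.solve(A, b)
exact = T_in + (T_out - T_in) * np.log(centers / r_in) / np.log(r_out / r_in)
print("max abs error vs analytical log profile [K]:", np.abs(T - exact).max())
```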

  9. A finite volume method for cylindrical heat conduction problems based on local analytical solution

    KAUST Repository

    Li, Wang; Yu, Bo; Wang, Xinran; Wang, Peng; Sun, Shuyu

    2012-01-01

    A new finite volume method for cylindrical heat conduction problems based on local analytical solution is proposed in this paper with detailed derivation. The calculation results of this new method are compared with the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though the discretized expression of this proposed method is slightly more complex than the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical result shows that the total CPU time of the new method is significantly less than conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.

  10. Evaluation of the Ross fast solution of Richards' equation in unfavourable conditions for standard finite element methods

    International Nuclear Information System (INIS)

    Crevoisier, D.; Voltz, M.; Chanzy, A.

    2009-01-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins: 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results in a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solute transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves in every tested scenario to be more robust numerically, and the trade-off between computing time and accuracy is seen to be particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D. (authors)

  11. A voxel-based finite element model for the prediction of bladder deformation

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan [Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands); Radiation Oncology Department, Netherlands Cancer Institute, 1066 CX Amsterdam (Netherlands)

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  12. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  13. Citizen Observatories: A Standards Based Architecture

    Science.gov (United States)

    Simonis, Ingo

    2015-04-01

    A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data and its proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite that allows defining crowdsourcing campaigns, to invite registered and unregistered participants to participate in crowdsourcing campaigns, and to analyze, process, and visualize raw and quality enhanced crowd sourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components that are built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defines profiles of those standards where necessary (e.g. SWE O&M profile, SensorML profile), and implements design decisions based on the motivation to maximize interoperability and reusability of all components. The toolkit contains tools to set up, manage and maintain crowdsourcing campaigns, allows building on-demand apps optimized for the specific sampling focus, supports offline and online sampling modes using modern cell phones with

  14. A finite element-based algorithm for rubbing induced vibration prediction in rotors

    Science.gov (United States)

    Behzad, Mehdi; Alvandi, Mehdi; Mba, David; Jamali, Jalil

    2013-10-01

    In this paper, an algorithm is developed for more realistic investigation of rotor-to-stator rubbing vibration, based on finite element theory with unilateral contact and friction conditions. To model the rotor, cross sections are assumed to be radially rigid. A finite element discretization based on traditional beam theories which sufficiently accounts for axial and transversal flexibility of the rotor is used. A general finite element discretization model considering inertial and viscoelastic characteristics of the stator is used for modeling the stator. Therefore, for contact analysis, only the boundary of the stator is discretized. The contact problem is defined as the contact between the circular rigid cross section of the rotor and “nodes” of the stator only. Next, Gap function and contact conditions are described for the contact problem. Two finite element models of the rotor and the stator are coupled via the Lagrange multipliers method in order to obtain the constrained equation of motion. A case study of the partial rubbing is simulated using the algorithm. The synchronous and subsynchronous responses of the partial rubbing are obtained for different rotational speeds. In addition, a sensitivity analysis is carried out with respect to the initial clearance, the stator stiffness, the damping parameter, and the coefficient of friction. There is a good agreement between the result of this research and the experimental result in the literature.

  15. Diagnosis of constant faults in read-once contact networks over finite bases

    KAUST Repository

    Busbait, Monther I.; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2015-01-01

    We study the depth of decision trees for diagnosis of constant 0 and 1 faults in read-once contact networks over finite bases containing only indecomposable networks. For each basis, we obtain a linear upper bound on the minimum depth of decision trees depending on the number of edges in the networks. For bases containing networks with at most 10 edges we find coefficients for linear bounds which are close to sharp. © 2014 Elsevier B.V. All rights reserved.

  16. Diagnosis of constant faults in read-once contact networks over finite bases

    KAUST Repository

    Busbait, Monther I.

    2015-03-01

    We study the depth of decision trees for diagnosis of constant 0 and 1 faults in read-once contact networks over finite bases containing only indecomposable networks. For each basis, we obtain a linear upper bound on the minimum depth of decision trees depending on the number of edges in the networks. For bases containing networks with at most 10 edges we find coefficients for linear bounds which are close to sharp. © 2014 Elsevier B.V. All rights reserved.

  17. Diagnosis of three types of constant faults in read-once contact networks over finite bases

    KAUST Repository

    Busbait, Monther I.

    2016-03-24

    We study the depth of decision trees for diagnosis of three types of constant faults in read-once contact networks over finite bases containing only indecomposable networks. For each basis and each type of faults, we obtain a linear upper bound on the minimum depth of decision trees depending on the number of edges in networks. For bases containing networks with at most 10 edges, we find sharp coefficients for linear bounds.

  18. Above-knee prosthesis design based on fatigue life using finite element method and design of experiment.

    Science.gov (United States)

    Phanphet, Suwattanarwong; Dechjarern, Surangsee; Jomjanyong, Sermkiat

    2017-05-01

    The main objective of this work is to improve the standard of the existing knee prosthesis design developed by Thailand's Prostheses Foundation of Her Royal Highness The Princess Mother. Experimental structural tests of the existing design, based on ISO 10328, showed that a few components failed due to fatigue under normal cyclic loading below the required number of cycles. Finite element (FE) simulations of the structural tests on the knee prosthesis were carried out. Fatigue life predictions of the knee component materials were modeled based on Morrow's approach. The fatigue life prediction based on the FE model was validated against the corresponding structural test, and the results agreed well. The new designs of the failed components were studied using the design of experiments approach and finite element analysis of the ISO 10328 structural test of knee prostheses under two separate loading cases. Under ultimate loading, the knee prosthesis peak von Mises stress must be less than the yield strength of the knee component's material and the total knee deflection must be lower than 2.5 mm. The fatigue life prediction of all knee components must be higher than 3,000,000 cycles under normal cyclic loading. The design parameters are the thickness of the joint bars, the diameter of the lower connector and the thickness of the absorber-stopper. The optimized knee prosthesis design meeting all the requirements was recommended. An experimental ISO 10328 structural test of the fabricated knee prosthesis based on the optimized design confirmed the finite element prediction. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
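
    As a minimal illustration of a Morrow-type fatigue life estimate (mean-stress-corrected strain-life), the sketch below solves the strain-life equation for the number of reversals at a given strain amplitude and mean stress; all material constants are generic steel-like assumptions, not the knee-prosthesis material data of the study.

```python
# Morrow mean-stress-corrected strain-life sketch:
#   eps_a = (sigma_f' - sigma_m)/E * (2N)^b + eps_f' * (2N)^c
# solved numerically for the fatigue life N. Material constants are generic
# steel-like assumptions, not the knee-prosthesis materials of the study.
from scipy.optimize import brentq

E = 200e3            # Young's modulus [MPa] (assumption)
sigma_f = 900.0      # fatigue strength coefficient [MPa] (assumption)
b = -0.095           # fatigue strength exponent (assumption)
eps_f = 0.35         # fatigue ductility coefficient (assumption)
c = -0.47            # fatigue ductility exponent (assumption)

def morrow_life(strain_amplitude, mean_stress):
    def residual(two_n):
        return ((sigma_f - mean_stress) / E) * two_n ** b + eps_f * two_n ** c - strain_amplitude
    two_n = brentq(residual, 1.0, 1e12)   # reversals to failure
    return two_n / 2.0                    # cycles to failure

print("cycles to failure at 0.2% strain amplitude and 50 MPa mean stress:",
      f"{morrow_life(0.002, 50.0):.3e}")
```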

  19. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.

    Directory of Open Access Journals (Sweden)

    Makoto Ito

    2015-11-01

    Full Text Available Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the "win-stay, lose-switch" strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas for which activities were correlated with individual states of the finite state-based strategy. The signal of internal states at the time of choice was found in DMS, and for clusters of states was found in VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum.

  20. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.

    Science.gov (United States)

    Ito, Makoto; Doya, Kenji

    2015-11-01

    Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the "win-stay, lose-switch" strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas for which activities were correlated with individual states of the finite state-based strategy. The signal of internal states at the time of choice was found in DMS, and for clusters of states was found in VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum.
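
    A toy contrast between the two strategy classes discussed above: a finite state-based "win-stay, lose-switch" agent, whose internal state is simply the action it intends to repeat, versus a value-based Q-learning agent, both run on a two-armed bandit; the reward probabilities and learning parameters are illustrative assumptions, and the rat task of the study is not reproduced.

```python
# Toy comparison of a finite state-based strategy (win-stay, lose-switch) and a
# value-based strategy (Q-learning with softmax) on a two-armed bandit.
# Reward probabilities and learning parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
reward_prob = np.array([0.7, 0.3])        # arm reward probabilities (assumption)
n_trials = 10000

# Finite state-based strategy: the internal state is the action to repeat next.
state = 0
wsls_reward = 0
for _ in range(n_trials):
    action = state
    reward = rng.random() < reward_prob[action]
    wsls_reward += reward
    state = action if reward else 1 - action     # win-stay, lose-switch update

# Value-based strategy: Q-learning with softmax action selection.
q = np.zeros(2)
alpha, beta = 0.1, 3.0                            # learning rate, inverse temperature
q_reward = 0
for _ in range(n_trials):
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    action = rng.choice(2, p=p)
    reward = float(rng.random() < reward_prob[action])
    q_reward += reward
    q[action] += alpha * (reward - q[action])     # action value update

print("win-stay lose-switch mean reward:", wsls_reward / n_trials)
print("Q-learning mean reward:          ", q_reward / n_trials)
```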

  1. A modification of projective spacetime by finite self-interaction models of virtual leptons and quarks and the electroweak GWS standard model

    International Nuclear Information System (INIS)

    Scheurich, H.

    1986-01-01

    Finite-rotation-group models as self-interaction models of virtual leptons and quarks are derived from the projective Dirac equation in a six-dimensional Kleinian space R(3, 3). The quaternion group underlying them is considered as a substructure group of projective spacetime. A finite hyperspherical carrier of the self-interaction models is embedded into projective spacetime by means of the Planck length L0 = (hG/c³)^(1/2) as a physical unit length. The corresponding modification of the metric in the Planck domain turns out to be equivalent to the role of the Higgs field in the electroweak GWS standard model. (author)

  2. Brain-Based Learning and Standards-Based Elementary Science.

    Science.gov (United States)

    Konecki, Loretta R.; Schiller, Ellen

    This paper explains how brain-based learning has become an area of interest to elementary school science teachers, focusing on the possible relationships between, and implications of, research on brain-based learning to the teaching of science education standards. After describing research on the brain, the paper looks at three implications from…

  3. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    Science.gov (United States)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  4. CHAOS-BASED ADVANCED ENCRYPTION STANDARD

    KAUST Repository

    Abdulwahed, Naif B.

    2013-05-01

    This thesis introduces a new chaos-based Advanced Encryption Standard (AES). The AES is a well-known encryption algorithm that was standardized by the U.S. National Institute of Standards and Technology (NIST) in 2001. The thesis investigates and explores the behavior of the AES algorithm by replacing two of its original modules, namely the S-Box and the Key Schedule, with two other chaos-based modules. Three chaos systems are considered in designing the new modules: the Lorenz system with multiplication nonlinearity, the Chen system with sign-module nonlinearity, and a 1D multiscroll system with staircase nonlinearity. The three systems are evaluated on their sensitivity to initial conditions and as Pseudo Random Number Generators (PRNG) after applying a post-processing technique to their output and then performing the NIST SP 800-22 statistical tests. The thesis presents a hardware implementation of dynamic S-Boxes for AES that are populated using the three chaos systems. Moreover, a full MATLAB package to analyze the chaos-generated S-Boxes based on graphical analysis, Walsh-Hadamard spectrum analysis, and image encryption analysis is developed. Although these S-Boxes are dynamic, meaning they are regenerated whenever the encryption key is changed, the analysis results show that such S-Boxes exhibit good properties, such as the Strict Avalanche Criterion (SAC) and nonlinearity, and perform well in the application of image encryption. Furthermore, the thesis presents a new Lorenz-chaos-based key expansion for the AES. Many researchers have pointed out that there are some defects in the original key expansion of AES, which has motivated such a chaos-based key expansion proposal. The newly proposed key schedule is analyzed and assessed in terms of confusion and diffusion by performing the frequency and SAC tests, respectively. The obtained results show that the new proposed design is more secure than the original AES key schedule and other designs proposed in the literature. The proposed
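
    The sketch below is only an illustration of the general recipe described here (a chaotic trajectory post-processed into a key-dependent, dynamic S-Box); it is not the thesis code, and the integration step, transient length and seeding scheme are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch only (not the thesis implementation): derive a 256-entry
# S-Box permutation from a Lorenz trajectory. The idea is that the encryption
# key seeds the initial condition, so changing the key regenerates the box.

def lorenz_x(x0, n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    state = np.array([x0, 0.1, 0.1], dtype=float)
    xs = np.empty(n)
    for i in range(n):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])   # explicit Euler step
        xs[i] = state[0]
    return xs

def chaotic_sbox(key_seed=0.123456):
    traj = lorenz_x(key_seed, n=1256)[1000:]     # drop the transient, keep 256 samples
    return np.argsort(traj).astype(np.uint8)     # ranking the samples gives a permutation

if __name__ == "__main__":
    sbox = chaotic_sbox()
    assert sorted(sbox.tolist()) == list(range(256))   # bijective byte substitution
    print(sbox[:16])
```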

  5. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    Science.gov (United States)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with values recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  6. Strain-based finite elements for the analysis of cylinders with holes and normally intersecting cylinders

    International Nuclear Information System (INIS)

    Sabir, A.B.

    1983-01-01

    A finite element solution to the problems of stress distribution for cylindrical shells with circular and elliptical holes and also for normally intersecting thin elastic cylindrical shells is given. Quadrilateral and triangular curved finite elements are used in the analysis. The elements are of a new class, based on simple independent generalised strain functions insofar as this is allowed by the compatibility equations. The elements also satisfy exactly the requirements of strain-free rigid-body displacements and use only the external 'geometrical' nodal degrees of freedom to avoid the difficulties associated with unnecessary internal degrees of freedom. We first develop strain-based quadrilateral and triangular elements and apply them to the solution of the problem of stress concentrations in the neighbourhood of small and large circular and elliptical holes when the cylinders are subjected to a uniform axial tension. These results are compared with analytical solutions based on shallow-shell approximations and show that the use of these strain-based elements obviates the need for an inordinately large number of elements. Normally intersecting cylinders are common configurations in structural components for nuclear reactor systems, and design information for such configurations is generally lacking. The opportunity is taken in the present paper to provide a finite element solution to this problem. A method of substructuring is introduced to enable the solution of the large non-banded set of simultaneous equations encountered. (orig./HP)

  7. An efficient data structure for three-dimensional vertex-based finite volume method

    Science.gov (United States)

    Akkurt, Semih; Sahin, Mehmet

    2017-11-01

    A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns for the vertex-based finite volume method are investigated and these data are packed/allocated in such a way that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing the CPU time against open-source algorithms.
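
    A minimal sketch of the underlying idea follows: once the mesh is reduced to a list of unique edges, a vertex-based finite volume residual can be assembled by a single loop over edges, independent of the cell shapes. The helper names and the toy two-triangle mesh are invented for the illustration and are not from the paper.

```python
import numpy as np

# Sketch of the idea (not the authors' code): in a vertex-based finite volume
# method, fluxes are exchanged across the faces of dual control volumes, so a
# single edge list (pairs of vertex ids) is enough to drive the residual
# assembly, regardless of the cell shapes present in the mesh.

def build_edges(cells):
    """Collect the unique vertex-to-vertex edges of a cell list given as
    tuples of vertex ids (triangles, quads, arbitrary polygons/polyhedra)."""
    edges = set()
    for cell in cells:
        n = len(cell)
        for i in range(n):
            a, b = cell[i], cell[(i + 1) % n]
            edges.add((min(a, b), max(a, b)))
    return np.array(sorted(edges), dtype=np.int64)

def accumulate_residual(edges, edge_flux, n_vertices):
    """Edge-based gather/scatter loop: each edge adds +flux to one vertex and
    -flux to the other, so conservation holds by construction."""
    res = np.zeros(n_vertices)
    for (i, j), f in zip(edges, edge_flux):
        res[i] += f
        res[j] -= f
    return res

if __name__ == "__main__":
    cells = [(0, 1, 2), (1, 3, 2)]            # two triangles sharing edge (1, 2)
    edges = build_edges(cells)
    flux = np.ones(len(edges))                # placeholder face fluxes
    print(edges)
    print(accumulate_residual(edges, flux, n_vertices=4))
```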

  8. Finite element analysis-based design of a fluid-flow control nano-valve

    International Nuclear Information System (INIS)

    Grujicic, M.; Cao, G.; Pandurangan, B.; Roy, W.N.

    2005-01-01

    A finite element method-based procedure is developed for the design of molecularly functionalized nano-size devices. The procedure is aimed at the single-walled carbon nano-tubes (SWCNTs) used in the construction of such nano-devices and utilizes spatially varying nodal forces to represent electrostatic interactions between the charged groups of the functionalizing molecules. The procedure is next applied to the design of a fluid-flow control nano-valve. The results obtained suggest that, for small-size nano-valves for which both types of analyses are feasible, the finite element-based procedure yields results that are very similar to their molecular modeling counterparts. The procedure is finally applied to optimize the design of a larger-size nano-valve, for which the molecular modeling approach is not practical.

  9. Diagnosis of Constant Faults in Read-Once Contact Networks over Finite Bases using Decision Trees

    KAUST Repository

    Busbait, Monther I.

    2014-05-01

    We study the depth of decision trees for diagnosis of constant faults in read-once contact networks over finite bases. This includes diagnosis of 0-1 faults, 0 faults and 1 faults. For any finite basis, we prove a linear upper bound on the minimum depth of decision tree for diagnosis of constant faults depending on the number of edges in a contact network over that basis. Also, we obtain asymptotic bounds on the depth of decision trees for diagnosis of each type of constant faults depending on the number of edges in contact networks in the worst case per basis. We study the set of indecomposable contact networks with up to 10 edges and obtain sharp coefficients for the linear upper bound for diagnosis of constant faults in contact networks over bases of these indecomposable contact networks. We use a set of algorithms, including one that we create, to obtain the sharp coefficients.

  10. Finite element prediction of the swift effect based on Taylor-type polycrystal plasticity models

    OpenAIRE

    Duchene, Laurent; Delannay, L.; Habraken, Anne

    2004-01-01

    This paper describes the main concepts of the stress-strain interpolation model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The prediction of the Swift effect is investigated and the influence of texture evolution is demonstrated. The LAMEL model is also investigated for the Swift effect prediction. Peer reviewed

  11. Trend analysis using non-stationary time series clustering based on the finite element method

    OpenAIRE

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-01-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...

  12. Analysis for pressure transient of coalbed methane reservoir based on Laplace transform finite difference method

    OpenAIRE

    Lei Wang; Hongjun Yin; Xiaoshuang Yang; Chuncheng Yang; Jing Fu

    2015-01-01

    Based on fractal geometry, a mathematical model of a fractal coalbed methane medium is established using the Langmuir isotherm adsorption formula, Fick's diffusion law and the Laplace transform, taking into account the wellbore storage effect and the skin effect. The Laplace transform finite difference method is used to solve the mathematical model. With Stehfest numerical inversion, the distribution of the dimensionless wellbore flowing pressure and its derivative is obtained in real space. Compared wi...

  13. Three-dimensional parallel edge-based finite element modeling of electromagnetic data with field redatuming

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Čuma, Martin; Zhdanov, Michael

    2015-01-01

    This paper presents a parallelized version of the edge-based finite element method with a novel post-processing approach for numerical modeling of an electromagnetic field in complex media. The method uses an unstructured tetrahedral mesh which can reduce the number of degrees of freedom significantly. The linear system of finite element equations is solved using parallel direct solvers which are robust for ill-conditioned systems and efficient for multiple-source electromagnetic (EM) modeling. We also introduce a novel approach to compute the scalar components of the electric field from the tangential components along each edge based on field redatuming. The method can produce a more accurate result as compared to the conventional approach. We have applied the developed algorithm to compute the EM response for a typical 3D anisotropic geoelectrical model of an off-shore HC reservoir with complex...

  14. Simulation of 3D parachute fluid–structure interaction based on nonlinear finite element method and preconditioning finite volume method

    Directory of Open Access Journals (Sweden)

    Fan Yuxin

    2014-12-01

    Full Text Available A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system at a highly folded configuration. The large shape change during parachute inflation is computed by the nonlinear Newton–Raphson iteration and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. In order to avoid large time expenses during structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) schemes have been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate the numerical convergence speed. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed and the results show similar characteristics compared with experimental results and previous literature.

  15. Recovery Act: Finite Volume Based Computer Program for Ground Source Heat Pump Systems

    Energy Technology Data Exchange (ETDEWEB)

    James A Menart, Professor

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled Finite Volume Based Computer Program for Ground Source Heat Pump Systems. The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The
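
    The sketch below shows, in a deliberately reduced form, the kind of finite-volume energy balance such a tool solves: explicit transient radial conduction in the ground around a single borehole with a prescribed heat-extraction rate. It is not GEO2D or GEO3D, and all property values are placeholder assumptions.

```python
import numpy as np

# Highly simplified sketch (not the GEO2D/GEO3D programs from the report):
# explicit finite-volume solution of transient radial heat conduction in the
# ground around one borehole, with a constant heat-extraction rate imposed at
# the borehole wall. All property values below are placeholder guesses.

def simulate_borehole(q_per_m=-30.0,        # W per metre of borehole (extraction)
                      k=2.0, rho_c=2.3e6,   # ground conductivity, volumetric heat capacity
                      r_in=0.055, r_out=10.0, n_cells=200,
                      t_end=30 * 24 * 3600.0, dt=600.0, T0=12.0):
    r_faces = np.linspace(r_in, r_out, n_cells + 1)
    r_cent = 0.5 * (r_faces[:-1] + r_faces[1:])
    vol = np.pi * (r_faces[1:] ** 2 - r_faces[:-1] ** 2)   # ring volume per metre of depth
    T = np.full(n_cells, T0)
    for _ in range(int(t_end / dt)):
        # conductive heat flow (W/m) across each interior face, positive outward
        q_face = -k * 2.0 * np.pi * r_faces[1:-1] * np.diff(T) / np.diff(r_cent)
        dTdt = np.empty(n_cells)
        dTdt[0] = (q_per_m - q_face[0]) / (rho_c * vol[0])           # borehole-wall cell
        dTdt[1:-1] = (q_face[:-1] - q_face[1:]) / (rho_c * vol[1:-1])
        dTdt[-1] = q_face[-1] / (rho_c * vol[-1])                    # adiabatic far boundary
        T += dt * dTdt
    return r_cent, T

if __name__ == "__main__":
    r, T = simulate_borehole()
    print("borehole-wall temperature after 30 days: %.2f C" % T[0])
```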

  16. Finite Volume Based Computer Program for Ground Source Heat Pump System

    Energy Technology Data Exchange (ETDEWEB)

    Menart, James A. [Wright State University

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled "Finite Volume Based Computer Program for Ground Source Heat Pump Systems." The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump

  17. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
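
    For flavour only, the snippet below applies a plain Hoeffding bound to show how a finite sample forces a pessimistic correction to the estimated error rate; this is a generic textbook-style illustration, not the Bernoulli-sampling analysis of the paper, and the numbers are arbitrary.

```python
import math

# Generic finite-size illustration in the spirit of the abstract (this is a
# plain Hoeffding bound, NOT the security analysis of the paper): when the
# error rate is estimated from k independently sampled test signals, the value
# used in the key-rate formula must be inflated by a statistical-fluctuation
# term that vanishes only in the asymptotic limit.

def pessimistic_error_rate(observed_rate, k, eps=1e-10):
    """Upper bound on the underlying error probability that holds except with
    probability eps, by Hoeffding's inequality for k Bernoulli trials."""
    delta = math.sqrt(math.log(1.0 / eps) / (2.0 * k))
    return min(1.0, observed_rate + delta)

if __name__ == "__main__":
    for k in (10 ** 3, 10 ** 5, 10 ** 7):
        print("k = %8d  ->  bound on error rate: %.4f"
              % (k, pessimistic_error_rate(0.02, k)))
```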

  18. Percolation through voids around overlapping spheres: A dynamically based finite-size scaling analysis

    Science.gov (United States)

    Priour, D. J.

    2014-01-01

    The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction ϕc=0.0317±0.0004 and the correlation length exponent ν =0.92±0.05.
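
    The sketch below gives a crude, lattice-discretized version of the same question (does the void space around randomly placed, fully penetrable spheres still connect two opposite faces of the box?). It is far from the paper's rigorous continuum treatment, and the sphere radius, grid resolution and trial counts are arbitrary assumptions, chosen mainly to make the finite-size character of such estimates visible.

```python
import numpy as np
from collections import deque

# Illustrative toy only (not the paper's large-scale, dynamically based method):
# mark lattice cells blocked by spheres, then flood-fill the void cells from the
# bottom face and ask whether the fill reaches the top face.

def void_percolates(void_fraction, radius=0.1, box=1.0, grid=48, seed=0):
    rng = np.random.default_rng(seed)
    sphere_vol = 4.0 / 3.0 * np.pi * radius ** 3
    # For fully penetrable spheres the void fraction is exp(-eta); invert it.
    eta = -np.log(void_fraction)
    n_spheres = int(round(eta * box ** 3 / sphere_vol))
    centers = rng.random((n_spheres, 3)) * box

    xs = (np.arange(grid) + 0.5) * box / grid
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    blocked = np.zeros((grid, grid, grid), dtype=bool)
    for c in centers:
        blocked |= (X - c[0]) ** 2 + (Y - c[1]) ** 2 + (Z - c[2]) ** 2 < radius ** 2

    visited = np.zeros_like(blocked)
    queue = deque((0, j, k) for j in range(grid) for k in range(grid)
                  if not blocked[0, j, k])
    for cell in queue:
        visited[cell] = True
    while queue:
        i, j, k = queue.popleft()
        if i == grid - 1:
            return True
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (i + di, j + dj, k + dk)
            if all(0 <= n[a] < grid for a in range(3)) and \
                    not blocked[n] and not visited[n]:
                visited[n] = True
                queue.append(n)
    return False

if __name__ == "__main__":
    for phi in (0.02, 0.05):        # below and above the reported phi_c ~ 0.0317
        hits = sum(void_percolates(phi, seed=s) for s in range(3))
        print("void fraction %.2f -> percolates in %d/3 trials" % (phi, hits))
```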

  19. A complementarity-based approach to phase in finite-dimensional quantum systems

    International Nuclear Information System (INIS)

    Klimov, A B; Sanchez-Soto, L L; Guise, H de

    2005-01-01

    We develop a comprehensive theory of phase for finite-dimensional quantum systems. The only physical requirement we impose is that phase is complementary to amplitude. To implement this complementarity we use the notion of mutually unbiased bases, which exist for dimensions that are powers of a prime. For a d-dimensional system (qudit) we explicitly construct d+1 classes of maximally commuting operators, each one consisting of d-1 operators. One of these classes consists of diagonal operators that represent amplitudes (or inversions). By finite Fourier transformation, it is mapped onto ladder operators that can be appropriately interpreted as phase variables. We discuss examples of qubits and qutrits, and show how these results generalize previous approaches
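
    A small numerical check of the construction's flavour is sketched below (not code from the paper): the diagonal amplitude operator is mapped by the finite Fourier transform onto a cyclic ladder operator, and the two eigenbases involved are mutually unbiased.

```python
import numpy as np

# Numerical illustration: in dimension d, the diagonal "amplitude" operator Z
# is carried by the finite Fourier transform onto a ladder (cyclic shift)
# operator X, and every overlap between the two eigenbases has modulus 1/sqrt(d).

d = 5                                    # prime (or prime-power) dimension
omega = np.exp(2j * np.pi / d)
F = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
Z = np.diag([omega ** k for k in range(d)])       # diagonal amplitude operator

X = F.conj().T @ Z @ F                            # Fourier image of Z
shift = np.zeros((d, d), dtype=complex)
for k in range(d):
    shift[(k + 1) % d, k] = 1.0                   # ladder operator |k> -> |k+1 mod d>
print("X acts as a ladder operator:", np.allclose(X, shift))

# Mutual unbiasedness of the computational and Fourier bases
print("bases are mutually unbiased:", np.allclose(np.abs(F), 1.0 / np.sqrt(d)))
```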

  20. On the Stability of the Finite Difference based Lattice Boltzmann Method

    KAUST Repository

    El-Amin, Mohamed; Sun, Shuyu; Salama, Amgad

    2013-01-01

    This paper is devoted to determining the stability conditions for the finite difference based lattice Boltzmann method (FDLBM). In the current scheme, the 9-bit two-dimensional (D2Q9) model is used and the collision term of the Bhatnagar-Gross-Krook (BGK) is treated implicitly. The implicitness of the numerical scheme is removed by introducing a new distribution function different from that being used. Therefore, a new explicit finite-difference lattice Boltzmann method is obtained. Stability analysis of the resulting explicit scheme is done using Fourier expansion. Then, stability conditions in terms of time and spatial steps, relaxation time and the explicit-implicit parameter are determined by calculating the eigenvalues of the given difference system. The determined conditions give the ranges of the parameters that have stable solutions.
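
    The same Fourier (von Neumann) procedure can be demonstrated on a much simpler scheme; the sketch below applies it to first-order upwind advection in 1D rather than to the D2Q9 FDLBM of the paper, purely to show the mechanics of scanning wavenumbers and bounding the amplification factor.

```python
import numpy as np

# Sketch of the Fourier (von Neumann) stability procedure on a toy scheme:
# substitute a plane-wave mode into first-order upwind advection, form the
# amplification factor g(theta), and require |g| <= 1 for every wavenumber.

def max_amplification(courant, n_modes=512):
    theta = np.linspace(0.0, 2.0 * np.pi, n_modes, endpoint=False)
    g = 1.0 - courant * (1.0 - np.exp(-1j * theta))   # upwind amplification factor
    return np.abs(g).max()

if __name__ == "__main__":
    for c in (0.5, 1.0, 1.2):
        gmax = max_amplification(c)
        print("Courant %.1f -> max |g| = %.3f (%s)"
              % (c, gmax, "stable" if gmax <= 1.0 + 1e-12 else "unstable"))
```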

  1. On the Stability of the Finite Difference based Lattice Boltzmann Method

    KAUST Repository

    El-Amin, Mohamed

    2013-06-01

    This paper is devoted to determining the stability conditions for the finite difference based lattice Boltzmann method (FDLBM). In the current scheme, the 9-bit two-dimensional (D2Q9) model is used and the collision term of the Bhatnagar-Gross-Krook (BGK) is treated implicitly. The implicitness of the numerical scheme is removed by introducing a new distribution function different from that being used. Therefore, a new explicit finite-difference lattice Boltzmann method is obtained. Stability analysis of the resulting explicit scheme is done using Fourier expansion. Then, stability conditions in terms of time and spatial steps, relaxation time and the explicit-implicit parameter are determined by calculating the eigenvalues of the given difference system. The determined conditions give the ranges of the parameters that have stable solutions.

  2. Analysis for pressure transient of coalbed methane reservoir based on Laplace transform finite difference method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2015-09-01

    Full Text Available Based on fractal geometry, a mathematical model of a fractal coalbed methane medium is established using the Langmuir isotherm adsorption formula, Fick's diffusion law and the Laplace transform, taking into account the wellbore storage effect and the skin effect. The Laplace transform finite difference method is used to solve the mathematical model. With Stehfest numerical inversion, the distribution of the dimensionless wellbore flowing pressure and its derivative is obtained in real space. Compared with results from the analytical method, the Laplace transform finite difference method proves to be accurate. The influencing factors are analyzed, including the fractal dimension, fractal index, skin factor, wellbore storage coefficient, energy storage ratio, interporosity flow coefficient and adsorption factor. The calculation error of the Laplace transform finite difference method is small. The method is advantageous in well-test applications because the solution at any given time does not rely on results at other times or on a spatial grid.
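
    For reference, the Stehfest inversion mentioned here can be written in a few lines; the sketch below is the generic textbook form of the algorithm, not the authors' reservoir code, and is checked against a transform pair with a known inverse.

```python
import math

# Generic Gaver-Stehfest inversion: numerically recover f(t) from a
# Laplace-space function F(s) at a chosen time t using N (even) weights.

def stehfest_coefficients(n=12):
    assert n % 2 == 0
    half = n // 2
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v.append((-1) ** (half + i) * s)
    return v

def stehfest_invert(F, t, n=12):
    ln2_t = math.log(2.0) / t
    v = stehfest_coefficients(n)
    return ln2_t * sum(vi * F((i + 1) * ln2_t) for i, vi in enumerate(v))

if __name__ == "__main__":
    # Check against a known pair: L{exp(-t)} = 1/(s+1).
    for t in (0.5, 1.0, 2.0):
        print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```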

  3. Finite-time stability and synchronization of memristor-based fractional-order fuzzy cellular neural networks

    Science.gov (United States)

    Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui

    2018-06-01

    This paper mainly studies the finite-time stability and synchronization problems of memristor-based fractional-order fuzzy cellular neural network (MFFCNN). Firstly, we discuss the existence and uniqueness of the Filippov solution of the MFFCNN according to the Banach fixed point theorem and give a sufficient condition for the existence and uniqueness of the solution. Secondly, a sufficient condition to ensure the finite-time stability of the MFFCNN is obtained based on the definition of finite-time stability of the MFFCNN and Gronwall-Bellman inequality. Thirdly, by designing a simple linear feedback controller, the finite-time synchronization criterion for drive-response MFFCNN systems is derived according to the definition of finite-time synchronization. These sufficient conditions are easy to verify. Finally, two examples are given to show the effectiveness of the proposed results.

  4. FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

    Directory of Open Access Journals (Sweden)

    Yanni Li

    2012-01-01

    Full Text Available Searching for the multiple longest common subsequences (MLCS) has significant applications in areas such as bioinformatics, information processing, and data mining. Although a few parallel MLCS algorithms have been proposed, their efficiency and effectiveness are not satisfactory given the increasing complexity and size of biological data. To overcome the shortcomings of existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high-performance parallel computing, a novel finite automaton (FA) based on cloud computing, called FACC, is proposed under the MapReduce parallel framework in order to obtain a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata by preprocessing the sequences, constructing successor tables, and building a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm, FAST-MLCS.
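
    One of the building blocks named here, the successor table, is easy to illustrate in isolation; the sketch below constructs successor tables and uses them in a greedy walk that returns a (not necessarily longest) common subsequence. It is a serial toy, unrelated to the MapReduce deployment, and the greedy rule is an assumption made for the example.

```python
# Toy illustration of successor tables (not the FACC implementation): for a
# sequence s, succ[c][i] is the first position of character c at or after
# index i, which lets matched points be found without rescanning the inputs.

def successor_table(s, alphabet):
    n = len(s)
    succ = {c: [n] * (n + 1) for c in alphabet}   # n means "no further occurrence"
    for i in range(n - 1, -1, -1):
        for c in alphabet:
            succ[c][i] = succ[c][i + 1]
        succ[s[i]][i] = i
    return succ

def common_subsequence_greedy(seqs, alphabet):
    """Greedy walk over the successor tables: repeatedly pick the character
    whose next matched position advances the sequences the least."""
    tables = [successor_table(s, alphabet) for s in seqs]
    pos, out = [0] * len(seqs), []
    while True:
        best = None
        for c in alphabet:
            nxt = [t[c][p] for t, p in zip(tables, pos)]
            if all(x < len(s) for x, s in zip(nxt, seqs)):
                if best is None or max(nxt) < max(best[1]):
                    best = (c, nxt)
        if best is None:
            return "".join(out)
        out.append(best[0])
        pos = [x + 1 for x in best[1]]

if __name__ == "__main__":
    print(common_subsequence_greedy(["ACTAGCTA", "GCTAGTCA"], "ACGT"))
```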

  5. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.

  6. A spatial discretization of the MHD equations based on the finite volume - spectral method

    International Nuclear Information System (INIS)

    Miyoshi, Takahiro

    2000-05-01

    Based on the finite volume - spectral method, we present new discretization formulae for the spatial differential operators in the full system of the compressible MHD equations. In this approach, the cell-centered finite volume method is adopted in a bounded plane (the poloidal plane), while the spectral method is applied to the derivative with respect to the periodic direction perpendicular to the poloidal plane (the toroidal direction). Here, an unstructured grid system composed of arbitrary triangular elements is utilized for constructing the cell-centered finite volume method. In order to maintain the divergence-free constraint of the magnetic field numerically, only the poloidal component of the rotation is defined at the three edges of each triangular element. This poloidal component is evaluated under the assumption that the toroidal component of the operated vector times the radius, RA_φ, is linearly distributed in the element. The present method will be applied to the nonlinear MHD dynamics in a realistic torus geometry without numerical singularities. (author)

  7. Finite Element Method Based Modeling of Resistance Spot-Welded Mild Steel

    Directory of Open Access Journals (Sweden)

    Miloud Zaoui

    Full Text Available This paper deals with refined and simplified finite element models of a mild steel spot-welded specimen, developed and validated based on quasi-static cross-tensile experimental tests. The first model was constructed with a fine discretization of the metal sheet and the spot weld was defined as a special geometric zone of the specimen. This model provided, in combination with experimental tests, the input data for the development of the second model, which was constructed with respect to the mesh size used in the complete car finite element model. This simplified model was developed with coarse shell elements and a spring-type beam element was used to model the spot weld behavior. The global accuracy of the two models was checked by comparing simulated and experimental load-displacement curves and by studying the specimen deformed shapes and the plastic deformation growth in the metal sheets. The obtained results show that both fine and coarse finite element models permit a good prediction of the experimental tests.

  8. Finite Element Analysis of Mechanical Characteristics of Dropped Eggs Based on Fluid-Solid Coupling Theory

    Directory of Open Access Journals (Sweden)

    Song Haiyan

    2017-01-01

    Full Text Available It is important to study the properties and mechanics of egg drop impacts in order to reduce egg loss during processing and logistics and to provide a basis for the protective packaging of egg products. In this paper, we present the results of our study of the effects of the structural parameters on the mechanical properties of an egg using a finite element model of the egg. Based on Fluid-Solid coupling theory, a finite element model of an egg was constructed using ADINA, a finite element calculation and analysis software package. To simplify the model, the internal fluid of the egg was considered to be a homogeneous substance. The egg drop impact was simulated by the coupling solution, and the feasibility of the model was verified by comparison with the experimental results of a drop test. In summary, the modeling scheme was shown to be feasible and the simulation results provide a theoretical basis for the optimum design of egg packaging and egg processing equipment.

  9. Ontology-based information standards development

    OpenAIRE

    Heravi, Bahareh Rahmanzadeh

    2012-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Standards may be argued to be important enablers for achieving interoperability as they aim to provide unambiguous specifications for error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for th...

  10. 77 FR 39385 - Receipts-Based, Small Business Size Standard

    Science.gov (United States)

    2012-07-03

    The NRC is increasing its receipts-based, small business size standard from $6.5 million to $7.0 million. This adjustment applies to the NRC's regulatory programs.

  11. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz

    2016-12-29

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
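
    The sketch below shows the shape of such a GA-based selection loop on a toy objective (capacity of the selected sub-channel under an i.i.d. Rayleigh channel); the channel model, fitness function and GA operators are simplifying assumptions and not the scheme of the paper.

```python
import numpy as np

# Toy sketch only: choose L of N transmit antennas so that the capacity of the
# selected sub-channel is high, using a small genetic algorithm over subsets.

rng = np.random.default_rng(1)
N_TX, N_RX, L, SNR = 32, 4, 6, 10.0
H = (rng.standard_normal((N_RX, N_TX)) + 1j * rng.standard_normal((N_RX, N_TX))) / np.sqrt(2)

def fitness(sel):
    Hs = H[:, sel]
    G = np.eye(N_RX) + (SNR / L) * Hs @ Hs.conj().T
    return np.log2(np.linalg.det(G).real)            # capacity of the selected sub-channel

def random_individual():
    return rng.choice(N_TX, size=L, replace=False)

def crossover(a, b):
    pool = np.unique(np.concatenate([a, b]))          # union of the parents' antennas
    return rng.choice(pool, size=L, replace=False)

def mutate(sel, p=0.3):
    sel = sel.copy()
    if rng.random() < p:                              # swap one antenna for an unused one
        unused = np.setdiff1d(np.arange(N_TX), sel)
        sel[rng.integers(L)] = rng.choice(unused)
    return sel

def ga_select(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            i, j = rng.integers(len(elite), size=2)
            children.append(mutate(crossover(elite[i], elite[j])))
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = ga_select()
    print("selected antennas:", np.sort(best), " capacity: %.2f bit/s/Hz" % fitness(best))
```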

  12. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz; Ide, Anatole; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2016-01-01

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.

  13. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images of patients with lung cancer using a finite element method (FEM). The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF. The magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model was used based on imaging voxel data, and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF, and the product of registration error and DVF, respectively, with R^2 = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of UE values and R-DVF*R-ERR has been established. The mean registration error (N = 8) was 0.9 mm. 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.

  14. Neural Network Based Finite-Time Stabilization for Discrete-Time Markov Jump Nonlinear Systems with Time Delays

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2013-01-01

    Full Text Available This paper deals with the finite-time stabilization problem for discrete-time Markov jump nonlinear systems with time delays and norm-bounded exogenous disturbance. The nonlinearities in different jump modes are parameterized by neural networks. Subsequently, a linear difference inclusion state space representation for a class of neural networks is established. Based on this, sufficient conditions are derived in terms of linear matrix inequalities to guarantee stochastic finite-time boundedness and stochastic finite-time stabilization of the closed-loop system. A numerical example is illustrated to verify the efficiency of the proposed technique.

  15. Neutron Sources for Standard-Based Testing

    Energy Technology Data Exchange (ETDEWEB)

    Radev, Radoslav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McLean, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-11-10

    The DHS TC Standards and the consensus ANSI Standards use 252Cf as the neutron source for performance testing because its energy spectrum is similar to the 235U and 239Pu fission sources used in nuclear weapons. An emission rate of 20,000 ± 20% neutrons per second is used for testing of the radiological requirements both in the ANSI standards and the TCS. Determination of the accurate neutron emission rate of the test source is important for maintaining consistency and agreement between testing results obtained at different testing facilities. Several characteristics in the manufacture and the decay of the source need to be understood and accounted for in order to make an accurate measurement of the performance of the neutron detection instrument. Additionally, neutron response characteristics of the particular instrument need to be known and taken into account as well as neutron scattering in the testing environment.

  16. Quasistatic field simulations based on finite elements and spectral methods applied to superconducting magnets

    International Nuclear Information System (INIS)

    Koch, Stephan

    2009-01-01

    This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation which is applicable in many practical cases. Main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in terms of the design of magnets required for beam guidance, in power engineering as well as in high-voltage engineering. Especially during the first design and optimization phase of respective devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions as well as in the time-scales of the electromagnetic phenomena involved lead to an unacceptably long simulation time or to an inadequately large memory requirement. Under certain circumstances, the simulation itself and, in turn, the desired design improvement becomes even impossible. In the context of this thesis, two strategies aiming at the extension of the range of application for numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort in terms of required degrees of freedom for a given accuracy is achieved. The

  17. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphic-processing-unit parallel frame named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  18. The Numerical Simulation of the Crack Elastoplastic Extension Based on the Extended Finite Element Method

    Directory of Open Access Journals (Sweden)

    Xia Xiaozhou

    2013-01-01

    Full Text Available In the framework of the extended finite element method, an exponential discontinuity function is introduced to reflect the discontinuous character of the crack, and a crack-tip enrichment function made of triangular basis functions and a linear polar-radius function is adopted to describe the displacement field distribution around the elastoplastic crack tip. The linear polar-radius form is chosen to reduce the singularity induced by the plastic yield zone at the crack tip, and the triangular basis functions describe how the displacement varies with the polar angle around the crack tip. Based on the displacement model containing the above enrichment functions, the incremental iterative form of the elastoplastic extended finite element method is derived from the virtual work principle. For a non-uniformly hardening material such as concrete, in order to avoid the asymmetry of the stiffness matrix induced by non-associated plastic flow, a plastic flow rule containing a cross term, based on the principle of least energy dissipation, is adopted. Finally, several numerical examples show that the elastoplastic X-FEM constructed in this paper is valid.

  19. Random Finite Set Based Bayesian Filtering with OpenCL in a Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Biao Hu

    2017-04-01

    Full Text Available While most filtering approaches based on random finite sets have focused on improving performance, in this paper, we argue that computation times are very important in order to enable real-time applications such as pedestrian detection. Towards this goal, this paper investigates the use of OpenCL to accelerate the computation of random finite set-based Bayesian filtering in a heterogeneous system. In detail, we developed an efficient and fully-functional pedestrian-tracking system implementation, which can run under real-time constraints, meanwhile offering decent tracking accuracy. An extensive evaluation analysis was carried out to ensure the fulfillment of sufficient accuracy requirements. This was followed by extensive profiling analysis to spot the potential bottlenecks in terms of execution performance, which were then targeted to come up with an OpenCL accelerated application. Video-throughput improvements from roughly 15 fps to 100 fps (6×) were observed on average while processing typical MOT benchmark videos. Moreover, the worst-case frame processing yielded an 18× advantage from nearly 2 fps to 36 fps, thereby comfortably meeting the real-time constraints. Our implementation is released as open-source code.

  20. Mechanisms of self-organization and finite size effects in a minimal agent based model

    International Nuclear Information System (INIS)

    Alfi, V; Cristelli, M; Pietronero, L; Zaccaria, A

    2009-01-01

    We present a detailed analysis of the self-organization phenomenon in which the stylized facts originate from finite size effects with respect to the number of agents considered and disappear in the limit of an infinite population. By introducing the possibility that agents can enter or leave the market depending on the behavior of the price, it is possible to show that the system self-organizes in a regime with a finite number of agents which corresponds to the stylized facts. The mechanism for entering or leaving the market is based on the idea that a too stable market is unappealing for traders, while the presence of price movements attracts agents to enter and speculate on the market. We show that this mechanism is also compatible with the idea that agents are scared by a noisy and risky market at shorter timescales. We also show that the mechanism for self-organization is robust with respect to variations of the exit/entry rules and that the attempt to trigger the system to self-organize in a region without stylized facts leads to an unrealistic dynamics. We study the self-organization in a specific agent based model but we believe that the basic ideas should be of general validity

  1. Mesh Partitioning Algorithm Based on Parallel Finite Element Analysis and Its Actualization

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available In parallel computing based on finite element analysis, domain decomposition is a key preprocessing technique. Generally, a domain decomposition of a mesh can be realized through partitioning of a graph which is converted from the finite element mesh. This paper discusses the method for graph partitioning and the way to implement mesh partitioning. Relevant software packages are introduced, and the data structures and key functions of Metis and ParMetis are described. The writing, compiling, and testing of a mesh partitioning interface program based on these key functions are performed. The results reveal regularities and characteristics that can guide users who apply graph partitioning algorithms and software to write PFEM programs, and good partitioning results can be achieved by performing mesh partitioning through the program. The interface program can also be used directly by engineering researchers as a module of PFEM software, which lowers the entry barrier for applying graph partitioning algorithms, improves calculation efficiency, and promotes the application of graph theory and parallel computing.
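
    The pipeline described (mesh to graph to partition) can be illustrated with a tiny stand-in; in the sketch below the nodal graph is bisected with networkx's Kernighan-Lin routine purely for illustration, whereas the paper's interface program calls the Metis/ParMetis libraries.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy stand-in for the Metis/ParMetis workflow described above: convert a
# finite element mesh into its nodal graph (each element contributes a clique
# over its vertex ids) and bisect that graph into two subdomains.

def mesh_to_nodal_graph(elements):
    g = nx.Graph()
    for elem in elements:
        for i in range(len(elem)):
            for j in range(i + 1, len(elem)):
                g.add_edge(elem[i], elem[j])
    return g

if __name__ == "__main__":
    # A small strip of triangles with 10 nodes, standing in for a real FE mesh.
    elements = [(0, 1, 5), (1, 6, 5), (1, 2, 6), (2, 7, 6),
                (2, 3, 7), (3, 8, 7), (3, 4, 8), (4, 9, 8)]
    graph = mesh_to_nodal_graph(elements)
    part_a, part_b = kernighan_lin_bisection(graph)
    print(sorted(part_a), sorted(part_b))
```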

  2. The CUBLAS and CULA based GPU acceleration of adaptive finite element framework for bioluminescence tomography.

    Science.gov (United States)

    Zhang, Bo; Yang, Xiang; Yang, Fei; Yang, Xin; Qin, Chenghu; Han, Dong; Ma, Xibo; Liu, Kai; Tian, Jie

    2010-09-13

    In molecular imaging (MI), especially the optical molecular imaging, bioluminescence tomography (BLT) emerges as an effective imaging modality for small animal imaging. The finite element methods (FEMs), especially the adaptive finite element (AFE) framework, play an important role in BLT. The processing speed of the FEMs and the AFE framework still needs to be improved, although the multi-thread CPU technology and the multi CPU technology have already been applied. In this paper, we for the first time introduce a new kind of acceleration technology to accelerate the AFE framework for BLT, using the graphics processing unit (GPU). Besides the processing speed, the GPU technology can get a balance between the cost and performance. The CUBLAS and CULA are two main important and powerful libraries for programming on NVIDIA GPUs. With the help of CUBLAS and CULA, it is easy to code on NVIDIA GPU and there is no need to worry about the details about the hardware environment of a specific GPU. The numerical experiments are designed to show the necessity, effect and application of the proposed CUBLAS and CULA based GPU acceleration. From the results of the experiments, we can reach the conclusion that the proposed CUBLAS and CULA based GPU acceleration method can improve the processing speed of the AFE framework very much while getting a balance between cost and performance.

  3. Dynamic and quantitative method of analyzing service consistency evolution based on extended hierarchical finite state automata.

    Science.gov (United States)

    Fan, Linjun; Tang, Jun; Ling, Yunxiang; Li, Benxian

    2014-01-01

    This paper is concerned with the dynamic evolution analysis and quantitative measurement of primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in service's evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which presents difficulties in the usage of traditional analysis methods. To resolve these problems, a novel dynamic evolution model extended hierarchical service-finite state automata (EHS-FSA) is constructed based on finite state automata (FSA), which formally depict overall changing processes of service consistency states. And also the service consistency evolution algorithms (SCEAs) based on EHS-FSA are developed to quantitatively assess these impact factors. Experimental results show that the bad reusability (17.93% on average) is the biggest influential factor, the noncomposition of atomic services (13.12%) is the second biggest one, and the service version's confusion (1.2%) is the smallest one. Compared with previous qualitative analysis, SCEAs present good effectiveness and feasibility. This research can guide the engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA.

  4. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies are adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, the security issues in the new web technologies are also raised and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases for fuzzing; however, these methods require a significant amount of knowledge and mental efforts to develop test patterns for generating test cases. To decrease the entry barrier of conducting fuzzing, in this study, we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. According to the proposal, fuzzing can be completed through inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment in identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
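
    A minimal sketch of the idea follows: the accepted input format is written down as a small finite state machine, paths from the start state to the accepting state are enumerated up to a bounded length, and each path's token sequence becomes a test pattern. The FSM below is a made-up HTML5-flavoured example, not the paper's generator.

```python
# Toy illustration of FSM-driven test pattern generation (not the paper's tool):
# each transition carries an input token; every bounded path from the start
# state to the accepting state yields one test pattern for the fuzzer to mutate.

FSM = {
    # state: list of (input token, next state)
    "start":  [("<video ", "tag")],
    "tag":    [("src='x'", "attr"), ("autoplay", "attr")],
    "attr":   [(" ", "tag"), (">", "accept")],
    "accept": [],
}

def test_patterns(fsm, start="start", accept="accept", max_len=6):
    """Depth-first enumeration of token paths from start to accept."""
    patterns, stack = [], [(start, [])]
    while stack:
        state, path = stack.pop()
        if state == accept:
            patterns.append("".join(path))
            continue
        if len(path) >= max_len:        # bound path length to cope with cycles
            continue
        for token, nxt in fsm[state]:
            stack.append((nxt, path + [token]))
    return patterns

if __name__ == "__main__":
    for p in test_patterns(FSM):
        print(repr(p))
```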

  5. Dynamic and Quantitative Method of Analyzing Service Consistency Evolution Based on Extended Hierarchical Finite State Automata

    Directory of Open Access Journals (Sweden)

    Linjun Fan

    2014-01-01

    Full Text Available This paper is concerned with the dynamic evolution analysis and quantitative measurement of primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in service’s evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which presents difficulties in the usage of traditional analysis methods. To resolve these problems, a novel dynamic evolution model extended hierarchical service-finite state automata (EHS-FSA) is constructed based on finite state automata (FSA), which formally depict overall changing processes of service consistency states. And also the service consistency evolution algorithms (SCEAs) based on EHS-FSA are developed to quantitatively assess these impact factors. Experimental results show that the bad reusability (17.93% on average) is the biggest influential factor, the noncomposition of atomic services (13.12%) is the second biggest one, and the service version’s confusion (1.2%) is the smallest one. Compared with previous qualitative analysis, SCEAs present good effectiveness and feasibility. This research can guide the engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA.

  6. Design of LPV-Based Sliding Mode Controller with Finite Time Convergence for a Morphing Aircraft

    Directory of Open Access Journals (Sweden)

    Nuan Wen

    2017-01-01

    Full Text Available This paper proposes a finite time convergence sliding mode control (FSMC) strategy based on linear parameter-varying (LPV) methodology for the stability control of a morphing aircraft subject to parameter uncertainties and external disturbances. Based on the Kane method, a longitudinal dynamic model of the morphing aircraft is built. Furthermore, the linearized LPV model of the aircraft in the wing transition process is obtained, whose scheduling parameters are wing sweep angle and wingspan. The FSMC scheme is extended to LPV systems by applying the previous results for linear time-invariant (LTI) systems. The sufficient condition in the form of linear matrix inequality (LMI) constraints is derived for the existence of a reduced-order sliding mode, in which the dynamics can be ensured to keep robust stability and L2 gain performance. The tensor-product (TP) model transformation approach can be directly applied to solve infinite LMIs belonging to the polynomial parameter-dependent LPV system. Then, by the parameter-dependent Lyapunov function stability analysis, the synthesized FSMC is proved to drive the LPV system trajectories toward the predefined switching surface with a finite time arrival. Comparative simulation results in the nonlinear model demonstrate the robustness and effectiveness of this approach.

  7. 77 FR 39442 - Receipts-Based, Small Business Size Standard

    Science.gov (United States)

    2012-07-03

    ... RIN 3150-AJ14 [NRC-2012-0062] Receipts-Based, Small Business Size Standard AGENCY: Nuclear Regulatory... Regulatory Flexibility Act of 1980, as amended. The NRC is proposing to increase its receipts-based, small business size standard from $6.5 million to $7 million to conform to the standard set by the Small Business...

  8. GPU-accelerated 3D neutron diffusion code based on finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)

    2012-07-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application has been hampered by the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, an HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was accelerated using the SOR method and the Chebyshev extrapolation technique. (authors)
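
    The record above contains no code; purely for orientation, the following NumPy sketch shows the kind of one-group, two-dimensional finite-difference diffusion solve (fixed source, Jacobi iteration, zero-flux boundaries) that such a code generalizes to multiple energy groups, three dimensions, and GPU execution. All material data and grid sizes are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # One-group 2D finite-difference diffusion: -D * laplacian(phi) + sigma_a * phi = S
    N, h = 64, 1.0           # grid points per side, mesh spacing (cm)
    D, sigma_a = 1.0, 0.02   # diffusion coefficient, absorption cross section (illustrative)
    S = np.zeros((N, N))
    S[N // 2, N // 2] = 1.0  # point-like fixed source in the centre

    phi = np.zeros((N, N))
    for it in range(5000):                                  # Jacobi iteration
        neighbours = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi_new = ((D / h**2) * neighbours + S) / (4.0 * D / h**2 + sigma_a)
        phi_new[0, :] = phi_new[-1, :] = 0.0                # phi = 0 on the boundary
        phi_new[:, 0] = phi_new[:, -1] = 0.0                # (vacuum-like condition)
        change = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if change < 1e-8:
            break

    print("iterations:", it, "peak flux:", phi.max())
    ```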

  9. GPU-accelerated 3D neutron diffusion code based on finite difference method

    International Nuclear Information System (INIS)

    Xu, Q.; Yu, G.; Wang, K.

    2012-01-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application has been hampered by the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, an HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was accelerated using the SOR method and the Chebyshev extrapolation technique. (authors)

  10. Finite Element Based Design Optimization of WENDELSTEIN 7-X Divertor Targets

    International Nuclear Information System (INIS)

    Plankensteiner, A.; Leuprecht, A.; Schedler, B.; Scheiber, K.; Greuner, H.

    2006-01-01

    deformations, and strains are compared to results of high heat flux tests with reasonable agreement being found. Thus, the calculated temporal and spatial evolution of temperatures, stresses, and strains for the individual design variants are evaluated with special attention being paid to stress measures, plastic strains, and damage parameters indicating the risk of failure. Based on the experimentally confirmed model, the finite element analysis resulted in an optimized design. (author)

  11. Standardized quality in MOOC based learning

    Directory of Open Access Journals (Sweden)

    Maiorescu Irina

    2015-04-01

    Full Text Available Quality in the field of e-learning and, particularly, in the field of MOOCs (Massive Open Online Courses) is a topic of growing importance in both academic institutions and the private sector, as it has generally been proved that quality management can contribute to improving the performance of organizations, regardless of their field of activity. Despite the fact that there are standards relating to quality management in a general manner, professionals, academic staff, specialists and bodies have felt the need for a standardized approach to quality in the e-learning sector. Therefore, in recent years, quality guidelines have been developed in different countries and used for e-learning or distance education (for example the ASTD criteria for e-learning, the BLA Quality Mark, and the Quality Platform Learning by D-ELAN). The current paper aims to give insights into this new form of online education provided by MOOC platforms using this specific quality standard approach.

  12. An explanation for the shape of nanoindentation unloading curves based on finite element simulation

    International Nuclear Information System (INIS)

    Bolshakov, A.; Pharr, G.M.

    1995-01-01

    Current methods for measuring hardness and modulus from nanoindentation load-displacement data are based on Sneddon's equations for the indentation of an elastic half-space by an axially symmetric rigid punch. Recent experiments have shown that nanoindentation unloading data are distinctly curved in a manner which is not consistent with either the flat punch or the conical indenter geometries frequently used in modeling, but are more closely approximated by a parabola of revolution. Finite element simulations for conical indentation of an elastic-plastic material are presented which corroborate the experimental observations, and from which a simple explanation for the shape of the unloading curve is derived. The explanation is based on the concept of an effective indenter shape whose geometry is determined by the shape of the plastic hardness impression formed during indentation

  13. Convergence of Cell Based Finite Volume Discretizations for Problems of Control in the Conduction Coefficients

    DEFF Research Database (Denmark)

    Evgrafov, Anton; Gregersen, Misha Marie; Sørensen, Mads Peter

    2011-01-01

    We present a convergence analysis of a cell-based finite volume (FV) discretization scheme applied to a problem of control in the coefficients of a generalized Laplace equation modelling, for example, a steady state heat conduction. Such problems arise in applications dealing with geometric optimal......, whereas the convergence of the coefficients happens only with respect to the "volumetric" Lebesgue measure. Additionally, depending on whether the stationarity conditions are stated for the discretized or the original continuous problem, two distinct concepts of stationarity at a discrete level arise. We...... provide characterizations of limit points, with respect to FV mesh size, of globally optimal solutions and two types of stationary points to the discretized problems. We illustrate the practical behaviour of our cell-based FV discretization algorithm on a numerical example....

  14. Simulation on Temperature Field of Radiofrequency Lesions System Based on Finite Element Method

    International Nuclear Information System (INIS)

    Xiao, D; Qian, Z; Li, W; Qian, L

    2011-01-01

    This paper mainly describes how to obtain a volume model of the damaged region from a finite element simulation of the temperature field of a radiofrequency ablation lesion system used in treating Parkinson's disease. This volume model reflects, to some degree, the shape and size of the damaged tissue during the treatment, together with its evolution over time and with core temperature. Using the Pennes equation as the heat conduction equation for radiofrequency ablation of biological tissue, the authors obtain the temperature distribution field of the tissue by solving the equations with the finite element method. In order to establish damage models at temperatures of 60 deg. C, 65 deg. C, 70 deg. C, 75 deg. C, 80 deg. C, 85 deg. C and 90 deg. C and at times of 30 s, 60 s, 90 s and 120 s, the Parkinson's disease nucleus model is reduced to a uniform, infinite model with the RF pin at the origin. Theoretical simulations of these models are presented, focusing on the effective lesion size in the horizontal and vertical directions under a variety of conditions. The results yield bivariate complete quadratic nonlinear joint temperature-time models of the maximum damage diameter and maximum height. These models comprehensively reflect the degeneration of the target tissue caused by radiofrequency temperature and duration, and lay the foundation for accurate monitoring of clinical RF treatment of Parkinson's disease in the future.
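
    For readers unfamiliar with the Pennes bioheat equation mentioned above, a minimal explicit finite-difference sketch in one dimension is given below; the paper itself uses a finite element discretization around the RF pin. The tissue properties, perfusion rate and RF heat source here are generic placeholder values.

    ```python
    import numpy as np

    # Pennes bioheat equation, 1D explicit finite differences (illustrative only):
    #   rho*c * dT/dt = k * d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_rf
    rho, c, k = 1050.0, 3600.0, 0.5          # tissue density, heat capacity, conductivity
    rho_b, c_b, w_b = 1060.0, 3600.0, 0.004  # blood properties and perfusion rate
    T_a = 37.0                                # arterial temperature (deg C)

    L, nx = 0.02, 201                         # 2 cm domain, grid points
    dx = L / (nx - 1)
    dt = 0.01                                 # s
    alpha = k / (rho * c)
    assert alpha * dt / dx**2 < 0.5           # explicit stability check

    q = np.zeros(nx)
    q[:5] = 5.0e6                             # placeholder RF heat source near the electrode (W/m^3)

    T = np.full(nx, 37.0)
    for step in range(int(120.0 / dt)):       # simulate 120 s of heating
        lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
        T = T + dt * (alpha * lap
                      + w_b * rho_b * c_b * (T_a - T) / (rho * c)
                      + q / (rho * c))
        T[0] = T[1]                           # insulated end at the electrode side
        T[-1] = 37.0                          # body temperature far from the lesion

    print("peak temperature after 120 s: %.1f degC" % T.max())
    ```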

  15. Mixed finite element-based fully conservative methods for simulating wormhole propagation

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu; Wu, Yuanqing

    2015-01-01

    Wormhole propagation during reactive dissolution of carbonates plays a very important role in the product enhancement of oil and gas reservoir. Because of high velocity and nonuniform porosity, the Darcy–Forchheimer model is applicable for this problem instead of conventional Darcy framework. We develop a mixed finite element scheme for numerical simulation of this problem, in which mixed finite element methods are used not only for the Darcy–Forchheimer flow equations but also for the solute transport equation by introducing an auxiliary flux variable to guarantee full mass conservation. In theoretical analysis aspects, based on the cut-off operator of solute concentration, we construct an analytical function to control and handle the change of porosity with time; we treat the auxiliary flux variable as a function of velocity and establish its properties; we employ the coupled analysis approach to deal with the fully coupling relation of multivariables. From this, the stability analysis and a priori error estimates for velocity, pressure, concentration and porosity are established in different norms. Numerical results are also given to verify theoretical analysis and effectiveness of the proposed scheme.
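
    For reference, the Darcy–Forchheimer momentum relation referred to in this record is commonly written as below (a standard textbook form with gravity omitted, not quoted from the paper); the quadratic velocity term is what distinguishes it from the linear Darcy law.

    ```latex
    % Darcy-Forchheimer momentum balance (standard form, gravity omitted):
    \[
      \frac{\mu}{K}\,\mathbf{u} \;+\; \beta\,\rho\,\lvert\mathbf{u}\rvert\,\mathbf{u}
      \;=\; -\,\nabla p ,
    \]
    % where u is the Darcy velocity, mu the viscosity, K the permeability,
    % rho the fluid density and beta the Forchheimer (inertial) coefficient.
    ```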

  16. Gear hot forging process robust design based on finite element method

    International Nuclear Information System (INIS)

    Xuewen, Chen; Won, Jung Dong

    2008-01-01

    During the hot forging process, the shaping properties and forging quality fluctuate because of die wear, manufacturing tolerances, dimensional variation caused by temperature, differing friction conditions, and so on. In order to control this variation in performance and to optimize the process parameters, a robust design method based on the finite element method is proposed in this paper for the hot forging process. The Taguchi method provides the basic robust design framework, and finite element analysis is incorporated to simulate the hot forging process. In addition, in order to calculate the objective function value, an orthogonal design method is used to arrange the experiments and collect sample points. The ANOVA method is employed to analyze the relationships between the design parameters and design objectives and to find the best parameters. Finally, a case study of the gear hot forging process is conducted. With the objective of reducing the forging force and its variation, the robust design mathematical model is established. The optimal design parameters obtained from this study indicate that the forging force has been reduced and its variation has been controlled.

  17. Mixed finite element-based fully conservative methods for simulating wormhole propagation

    KAUST Repository

    Kou, Jisheng

    2015-10-11

    Wormhole propagation during reactive dissolution of carbonates plays a very important role in the product enhancement of oil and gas reservoir. Because of high velocity and nonuniform porosity, the Darcy–Forchheimer model is applicable for this problem instead of conventional Darcy framework. We develop a mixed finite element scheme for numerical simulation of this problem, in which mixed finite element methods are used not only for the Darcy–Forchheimer flow equations but also for the solute transport equation by introducing an auxiliary flux variable to guarantee full mass conservation. In theoretical analysis aspects, based on the cut-off operator of solute concentration, we construct an analytical function to control and handle the change of porosity with time; we treat the auxiliary flux variable as a function of velocity and establish its properties; we employ the coupled analysis approach to deal with the fully coupling relation of multivariables. From this, the stability analysis and a priori error estimates for velocity, pressure, concentration and porosity are established in different norms. Numerical results are also given to verify theoretical analysis and effectiveness of the proposed scheme.

  18. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    Directory of Open Access Journals (Sweden)

    O. H. Galal

    2013-01-01

    Full Text Available This paper proposes a stochastic finite difference approach based on homogenous chaos expansion (SFDHC). The approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogenous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the linear diffusion equation with a stochastic parameter, and the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both examples, the probability distribution function of the response closely matched the results obtained from Monte Carlo simulation, at an optimized computational cost.
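
    The Karhunen-Loève expansion mentioned above can be illustrated with a small discrete example: the covariance matrix of a 1D random field is eigendecomposed and a truncated set of modes, weighted by independent standard normals, reproduces one realization of the field. The exponential covariance kernel and all numbers below are placeholders, not the paper's specific random field.

    ```python
    import numpy as np

    # Truncated Karhunen-Loeve expansion of a 1D Gaussian random field.
    n, L, corr_len, sigma = 200, 1.0, 0.2, 1.0
    x = np.linspace(0.0, L, n)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

    eigvals, eigvecs = np.linalg.eigh(C)            # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]               # sort modes by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    M = 10                                          # number of retained KL modes
    xi = np.random.default_rng(0).standard_normal(M)        # independent standard normals
    field = eigvecs[:, :M] @ (np.sqrt(eigvals[:M]) * xi)     # one realization of the field

    print("captured variance fraction: %.3f" % (eigvals[:M].sum() / eigvals.sum()))
    ```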

  19. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2018-01-01

    Full Text Available With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.
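
    To make the "critical node" notion above concrete, the toy sketch below flags articulation points of the communication graph (nodes whose failure disconnects the network) and, when a failed node is critical, relocates the nearest non-critical node into its position. The topology, coordinates and distance metric are illustrative; the paper's FSM-based bookkeeping and regional restoration procedure are not reproduced here.

    ```python
    import math
    import networkx as nx

    # Illustrative WSAN topology: node -> (x, y) position.
    pos = {
        "A": (0, 0), "B": (1, 0), "C": (2, 0),   # C bridges the two clusters
        "D": (3, 0), "E": (3, 1), "F": (1, 1),
    }
    G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "F"), ("B", "F")])

    def critical_nodes(graph):
        """Cut vertices: removing any one of these disconnects the network."""
        return set(nx.articulation_points(graph))

    def restore(graph, failed):
        """If the failed node was critical, relocate the nearest non-critical node."""
        crit = critical_nodes(graph)
        neighbours = list(graph[failed])
        graph.remove_node(failed)
        if failed not in crit:
            return None                           # connectivity unaffected, nothing to do
        candidates = [n for n in graph.nodes if n not in crit]
        donor = min(candidates, key=lambda n: math.dist(pos[n], pos[failed]))
        pos[donor] = pos[failed]                  # donor physically moves into the gap
        graph.add_edges_from((donor, m) for m in neighbours if m != donor and m in graph)
        return donor

    print("critical nodes:", critical_nodes(G))
    print("relocated node after failure of C:", restore(G, "C"))
    ```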

  20. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks.

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-08

    With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.

  1. Tooth Fracture Detection in Spiral Bevel Gears System by Harmonic Response Based on Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yuan Chen

    2017-01-01

    Full Text Available Spiral bevel gears offer several advantages, such as a high contact ratio, strong load-carrying capacity, and smooth operation, which make them one of the most widely used components in the high-speed stages of aeronautical transmission systems. Their dynamic characteristics have been addressed by many scholars; however, according to the limited published work, tooth fracture occurrence and its monitoring have rarely been investigated. Therefore, this paper establishes a three-dimensional model and a finite element model of a Gleason spiral bevel gear pair. The model considers the effect of fatigue-induced tooth root fracture on the system. The finite element method is used to generate the mesh, set the boundary conditions, and apply the dynamic load. The harmonic response spectra of the base under tooth fracture are calculated, and the influence of the main parameters on failure monitoring is investigated as well. The results show that a change of torque has an insignificant effect on determining whether or not the system has a tooth fracture. The intermediate frequency interval (200 Hz–1000 Hz) is the best interval for judging tooth fracture occurrence. The best fault test region is located in the working area where the system is going through meshing. The simulation provides a theoretical reference for spiral bevel gear system testing and fault diagnosis.

  2. A molecular-mechanics based finite element model for strength prediction of single wall carbon nanotubes

    International Nuclear Information System (INIS)

    Meo, M.; Rossi, M.

    2007-01-01

    The aim of this work was to develop a finite element model based on molecular mechanics to predict the ultimate strength and strain of single wall carbon nanotubes (SWCNT). The interactions between atoms were modelled by combining non-linear elastic and torsional elastic springs. In particular, this approach seeks to combine molecular mechanics with the finite element method without introducing any non-physical data on the interactions between the carbon atoms, such as a C-C bond moment of inertia or Young's modulus definition. Mechanical properties such as Young's modulus, ultimate strength and strain were calculated for several CNTs. Further, a stress-strain curve for large deformation (up to 70%) is reported for a zig-zag (9,0) nanotube. The results show good agreement with the experimental and numerical results of several authors. A comparison of the mechanical properties of nanotubes with the same diameter and different chirality was carried out. Finally, the influence of defects on the strength and strain of a SWCNT was also evaluated. In particular, the stress-strain curve of a nanotube with a one-vacancy defect was evaluated and compared with the curve of a pristine one, showing a reduction of the ultimate strength and strain for the defective nanotube. The proposed FE model proves to be a reliable tool for simulating the mechanical behaviour of carbon nanotubes in both the linear elastic and the non-linear elastic regimes.

  3. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    Science.gov (United States)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements defined by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the positions of the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from a flexible truss formulation. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway under moving concentrated loads is calculated and compared with experimental data. The results show that the computed tangential tension of the suspension cable with moving loads is consistent with the experiment. The method has high computational precision and meets the requirements of engineering application.
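
    The time-stepping idea described above, point masses driven by element forces and integrated with the central difference scheme, can be sketched as follows for a hanging cable of lumped masses connected by elastic truss segments. The cable data, damping and step size are placeholders, and the VFIFE-specific co-rotational treatment and moving loads are omitted.

    ```python
    import numpy as np

    # Lumped-mass cable: n points connected by elastic truss segments, integrated
    # with the explicit central difference scheme (illustrative values only).
    n, span = 21, 10.0                   # points, horizontal span (m)
    m = 1.0                              # lumped mass per point (kg)
    EA = 1.0e5                           # axial stiffness of each segment (N)
    L0 = span / (n - 1)                  # unstressed segment length
    g = np.array([0.0, -9.81])
    c = 0.2                              # light viscous damping to let the cable settle
    dt, steps = 1.0e-4, 100000

    x = np.column_stack([np.linspace(0.0, span, n), np.zeros(n)])  # initial positions
    x_prev = x.copy()                                              # starts at rest

    def internal_forces(x):
        """Axial segment forces gathered at the points."""
        d = np.diff(x, axis=0)                        # segment vectors
        L = np.linalg.norm(d, axis=1, keepdims=True)
        t = EA * (L - L0) / L0 * d / L                # tension carried by each segment
        f = np.zeros_like(x)
        f[:-1] += t
        f[1:] -= t
        return f

    for step in range(steps):
        f = internal_forces(x) + m * g - c * (x - x_prev) / dt   # internal + gravity + damping
        x_new = 2.0 * x - x_prev + (dt**2 / m) * f               # central difference update
        x_new[0], x_new[-1] = x[0], x[-1]                        # both ends pinned
        x_prev, x = x, x_new

    print("mid-span sag: %.3f m" % -x[n // 2, 1])
    ```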

  4. NUMERICAL SIMULATION OF ELECTRICAL IMPEDANCE TOMOGRAPHY PROBLEM AND STUDY OF APPROACH BASED ON FINITE VOLUME METHOD

    Directory of Open Access Journals (Sweden)

    Ye. S. Sherina

    2014-01-01

    Full Text Available This research studies the peculiarities that arise in the numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration has been given to reducing the approximation error, which originates from drawbacks of the numerical implementation. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison, a forward solver based on the finite element method (FEM), which has gained the most popularity among researchers, was also implemented. The potential distribution calculated with the assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of a problem simulation performed with the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation and resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltages and the calculated boundary voltages on the electrodes. A classical modified Newton-type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation, and numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues of medical applications of EIT.

  5. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-01

    With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms. PMID:29316702

  6. Telemetry Standards, IRIG Standard 106-17, Chapter 22, Network Based Protocol Suite

    Science.gov (United States)

    2017-07-01

    Excerpted snippets from the standard's front matter and Section 22.2 (Network Access Layer; 22.2.1 Physical Layer: connectors and cable media should meet the required electrical or optical properties). Telemetry Standards, IRIG Standard 106-17, Chapter 22: Network-Based Protocol Suite, July 2017.

  7. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    Science.gov (United States)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.

  8. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented and demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantom under controlled conditions; in this context, the method is validated using a CT phantom. Some of the phantom's thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  9. A New Method for 3D Finite Element Modeling of Human Mandible Based on CT Data

    Institute of Scientific and Technical Information of China (English)

    于力牛; 叶铭; 王成焘

    2004-01-01

    This study presents a reliable method for the semi-automatic generation of an FE model that determines both geometrical data and bone properties from patient CT scans. 3D FE analysis is one of the best approaches for predicting the stress and strain distribution in complex bone structures, but its accuracy strongly depends on the precision of the input information. In the geometric reconstruction, various methods of image processing, geometric modeling and finite element analysis are combined and extended. Emphasis is given to the assignment of material properties based on the density values computed from the CT data. Through this technique, a model with high geometric and material fidelity was generated in a straightforward way. Consequently, a patient-specific FE model of the mandible is realized from CT data.

  10. Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management

    Science.gov (United States)

    Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.

    2016-01-01

    A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.

  11. Computational statics and dynamics an introduction based on the finite element method

    CERN Document Server

    Öchsner, Andreas

    2016-01-01

    This book introduces readers to modern computational mechanics based on the finite element method. It helps students succeed in mechanics courses by showing them how to apply the fundamental knowledge they gained in the first years of their engineering education to more advanced topics. In order to deepen readers’ understanding of the derived equations and theories, each chapter also includes supplementary problems. These problems start with fundamental knowledge questions on the theory presented in the chapter, followed by calculation problems. In total over 80 such calculation problems are provided, along with brief solutions for each. This book is especially designed to meet the needs of Australian students, reviewing the mathematics covered in their first two years at university. The 13-week course comprises three hours of lectures and two hours of tutorials per week.

  12. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method that takes advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by a Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and by experimental test data from a laboratory three-story structure.
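
    As a loose illustration of the surrogate idea above, the sketch below fits a Kriging (Gaussian process) model to samples of a generic updating objective and then minimizes the cheap surrogate instead of the expensive model. The objective function is a stand-in; the paper's FDAC-based objective, its FE solver, and the EGO infill strategy are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Stand-in for the expensive objective (in the paper: a misfit between measured
    # and FE-predicted FRFs, expressed through the FDAC, as a function of the
    # updating parameters theta).
    def expensive_objective(theta):
        return np.sum((theta - np.array([0.8, 1.2]))**2) + 0.01 * np.sin(10 * theta[0])

    rng = np.random.default_rng(0)
    X = rng.uniform(0.5, 1.5, size=(30, 2))          # design of experiments
    y = np.array([expensive_objective(t) for t in X])

    # Kriging surrogate of the objective surface.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)

    # Minimize the surrogate (a simple substitute for the EGO infill loop).
    res = minimize(lambda t: gp.predict(t.reshape(1, -1))[0],
                   x0=np.array([1.0, 1.0]), bounds=[(0.5, 1.5), (0.5, 1.5)])
    print("updated parameters (surrogate optimum):", res.x)
    ```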

  13. Static Object Detection Based on a Dual Background Model and a Finite-State Machine

    Directory of Open Access Journals (Sweden)

    Heras Evangelio Rubén

    2011-01-01

    Full Text Available Detecting static objects in video sequences has high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on the subtraction results of two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the framework for interpreting the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated on several public datasets.
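
    A rough OpenCV sketch of the dual-background idea above: two MOG2 background subtractors run at different learning rates, and a per-pixel classification combines their foreground masks. The full state machine of the paper, with its temporal transitions, is condensed here into a single lookup, and the learning rates, thresholds and input file name are placeholders.

    ```python
    import cv2
    import numpy as np

    # Two background models with identical settings except for the learning rate.
    bg_fast = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16, detectShadows=False)
    bg_slow = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16, detectShadows=False)

    cap = cv2.VideoCapture("scene.mp4")   # hypothetical input video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_fast = bg_fast.apply(frame, learningRate=0.02)   # adapts quickly
        fg_slow = bg_slow.apply(frame, learningRate=0.001)  # adapts slowly

        # Per-pixel interpretation of the two masks (a condensed "state table"):
        #   moving object : foreground in both models
        #   static object : already absorbed by the fast model, still foreground in the slow one
        #   background    : foreground in neither
        moving = (fg_fast > 0) & (fg_slow > 0)
        static = (fg_fast == 0) & (fg_slow > 0)

        vis = np.zeros(frame.shape[:2], np.uint8)
        vis[moving] = 128
        vis[static] = 255
        cv2.imshow("static objects in white", vis)
        if cv2.waitKey(1) == 27:          # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()
    ```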

  14. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, C.; Gehrchen, P.M.; Kiaer, T.

    2008-01-01

    STUDY DESIGN: A finite element analysis-based bone remodeling study in human was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws. OBJECTIVE......: The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies...... operated on with pedicle screws between L4 and L5. The stress shielding effect was also examined. The bone remodeling results were compared with prospective bone mineral content measurements of 4 patients. They were measured after surgery, 3-, 6- and 12-months postoperatively. RESULTS: After 1 year...

  15. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are beyond the capability of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculations within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille and Couette flow and of flow in a driven cavity.

  16. Some practical considerations in finite element-based digital image correlation

    KAUST Repository

    Wang, Bo

    2015-04-20

    As an alternative to subset-based digital image correlation (DIC), the finite element-based (FE-based) DIC method has gained increasing attention in the experimental mechanics community. However, a literature survey reveals that some important issues have not been well addressed in the published literature. This work therefore aims to point out a few important considerations in the practical implementation of the FE-based DIC method, along with simple but effective solutions that can effectively tackle these issues. First, to better accommodate the intensity variations of the deformed images that occur in real experiments, a robust zero-mean normalized sum of squared difference criterion, instead of the commonly used sum of squared difference criterion, is introduced to quantify the similarity between reference and deformed elements in FE-based DIC. Second, to reduce the bias error induced by image noise and imperfect intensity interpolation, low-pass filtering of the speckle images with a 5×5 pixel Gaussian filter prior to correlation analysis is presented. Third, to ensure that the iterative calculation of FE-based DIC converges correctly and rapidly, an efficient subset-based DIC method, instead of simple integer-pixel displacement searching, is used to provide an accurate initial guess of the deformation for each calculation point. Also, the effects of various convergence criteria on the efficiency and accuracy of FE-based DIC are carefully examined, and a proper convergence criterion is recommended. The efficacy of these solutions is verified by numerical and real experiments. The results reveal that the improved FE-based DIC offers evident advantages over the existing FE-based DIC method in terms of accuracy and efficiency. © 2015 Elsevier Ltd. All rights reserved.
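
    The zero-mean normalized sum of squared difference criterion mentioned above can be written down compactly; the sketch below evaluates it for two grey-level subsets and illustrates its insensitivity to offset and scale changes in intensity. This is a generic ZNSSD evaluation on synthetic patterns, not the paper's FE-based implementation.

    ```python
    import numpy as np

    def znssd(f, g):
        """Zero-mean normalized sum of squared differences between two subsets.

        0 means a perfect match even if g = a*f + b (linear intensity change);
        larger values mean poorer similarity.
        """
        f = f.astype(float).ravel()
        g = g.astype(float).ravel()
        fz = (f - f.mean()) / np.linalg.norm(f - f.mean())
        gz = (g - g.mean()) / np.linalg.norm(g - g.mean())
        return np.sum((fz - gz) ** 2)

    rng = np.random.default_rng(1)
    ref = rng.random((21, 21))                # reference subset (synthetic speckle)
    intensity_changed = 1.7 * ref + 25.0      # same pattern, offset/scale change only
    unrelated = rng.random((21, 21))          # unrelated pattern

    print("ZNSSD, intensity-changed copy:", znssd(ref, intensity_changed))  # ~0
    print("ZNSSD, unrelated subset:      ", znssd(ref, unrelated))          # ~2
    ```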

  17. An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Wenke Huang

    2016-01-01

    Full Text Available This paper presents an image-based micromechanical modeling approach to predict the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the OTSU thresholding operation was employed to reduce the beam hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of asphalt mixture with the CT images after being processed. In this 3D model, the aggregate phase and air void were considered as elastic materials while the asphalt mastic phase was considered as linear viscoelastic material. The viscoelastic constitutive model of asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT. An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameter, comparisons between the numerical predictions and the observed laboratory results of bending and compression tests were conducted. Finally, the verified digital sample of asphalt mixture was used to predict the asphalt mixture viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture when all the mechanical properties for different phases became available.
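
    One common way to express the linear viscoelastic behaviour of the mastic phase, and a typical choice for such a user material subroutine, is a Prony series for the relaxation modulus; the record does not spell out the exact constitutive form used by the authors, so this is offered only as the standard representation.

    ```latex
    % Generalized Maxwell (Prony series) relaxation modulus, a standard
    % representation for linear viscoelastic asphalt mastic:
    \[
      E(t) \;=\; E_{\infty} \;+\; \sum_{i=1}^{N} E_i \, e^{-t/\tau_i},
    \]
    % where E_infinity is the long-term modulus and (E_i, tau_i) are the
    % Prony coefficients and relaxation times fitted from creep or relaxation tests.
    ```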

  18. Transmission control unit drive based on the AUTOSAR standard

    Science.gov (United States)

    Guo, Xiucai; Qin, Zhen

    2018-03-01

    Automotive embedded system development based on the AUTOSAR standard is a future trend in the automotive electronics industry. The AUTOSAR automotive architecture standard has proposed a development architecture for the transmission control unit (TCU) and has designed its interfaces and configurations in detail. This paper discusses how to drive the TCU based on the AUTOSAR standard architecture. The results show that driving the TCU with the AUTOSAR system improves reliability and shortens development cycles.

  19. 3D CSEM inversion based on goal-oriented adaptive finite element method

    Science.gov (United States)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of the observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Besides, the unstructured inverse mesh efficiently handles multiple-scale structures and allows fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with

  20. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially derived for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  1. An element-based finite-volume method approach for naturally fractured compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Francisco [Federal University of Ceara, Fortaleza (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br; Varavei, Abdoljalil; Sepehrnoori, Kamy [The University of Texas at Austin (United States). Petroleum and Geosystems Engineering Dept.], e-mails: varavei@mail.utexas.edu, kamys@mail.utexas.edu

    2010-07-01

    An element-based finite-volume approach in conjunction with unstructured grids for naturally fractured compositional reservoir simulation is presented. In this approach, both the discrete fracture and the matrix mass balances are taken into account without any additional models to couple the matrix and discrete fractures. The mesh, for two dimensional domains, can be built of triangles, quadrilaterals, or a mix of these elements. However, due to the available mesh generator to handle both matrix and discrete fractures, only results using triangular elements will be presented. The discrete fractures are located along the edges of each element. To obtain the approximated matrix equation, each element is divided into three sub-elements and then the mass balance equations for each component are integrated along each interface of the sub-elements. The finite-volume conservation equations are assembled from the contribution of all the elements that share a vertex, creating a cell vertex approach. The discrete fracture equations are discretized only along the edges of each element and then summed up with the matrix equations in order to obtain a conservative equation for both matrix and discrete fractures. In order to mimic real field simulations, the capillary pressure is included in both matrix and discrete fracture media. In the implemented model, the saturation field in the matrix and discrete fractures can be different, but the potential of each phase in the matrix and discrete fracture interface needs to be the same. The results for several naturally fractured reservoirs are presented to demonstrate the applicability of the method. (author)

  2. 77 FR 68717 - Updating OSHA Standards Based on National Consensus Standards; Head Protection

    Science.gov (United States)

    2012-11-16

    ..., 1918, and 1926 [Docket No. OSH-2011-0184] RIN 1218-AC65 Updating OSHA Standards Based on National Consensus Standards; Head Protection AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Proposed rule; withdrawal. SUMMARY: With this notice, OSHA is withdrawing the proposed rule that...

  3. 77 FR 68684 - Updating OSHA Standards Based on National Consensus Standards; Head Protection

    Science.gov (United States)

    2012-11-16

    ..., 1918, and 1926 [Docket No. OSHA-2011-0184] RIN 1218-AC65 Updating OSHA Standards Based on National Consensus Standards; Head Protection AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Final rule; confirmation of effective date. SUMMARY: OSHA is confirming the effective date of its...

  4. Reconcilable Differences: Standards-based Teaching and Differentiation.

    Science.gov (United States)

    Tomlinson, Carol Ann

    2000-01-01

    There is no contradiction between effective standards-based instruction and differentiation. Curriculum tells teachers what to teach; differentiation tells how. Teachers can challenge all learners by providing standards-based materials and tasks calling for varied difficulty levels, scaffolding, instructional styles, and learning times. (MLH)

  5. AUDIT OF FINANCIAL REPORTS, BASED ON INTERNATIONAL ACCOUNTING STANDARDS

    OpenAIRE

    Islom Kuziev

    2011-01-01

    This article presents the main notions of international standards of financial reporting, the procedure for auditing on the basis of IFRS, the preparation of the auditor's report, the auditor's conclusions, and an analysis of reporting based on the audit procedures. The audit of financial reporting takes into account the international standard on financial reporting 29, "Financial Reporting in Hyperinflationary Economies".

  6. Finite element analysis of the stress distributions in peri-implant bone in modified and standard-threaded dental implants

    Directory of Open Access Journals (Sweden)

    Serkan Dundar

    2016-01-01

    Full Text Available The aim of this study was to examine the stress distributions under three different loads in two dental implants with different geometries and thread types by finite element analysis. For this purpose, two different implant models, Nobel Replace and Nobel Active (Nobel Biocare, Zurich, Switzerland), which are currently used in clinical cases, were constructed using ANSYS Workbench 12.1. The stress distributions on the components of the implant system under three different static loadings were analysed for the two models. The maximum stress values in all components were observed under FIII (300 N) loading with the Nobel Replace implant, whereas the lowest values occurred under FI (150 N) loading with the Nobel Active implant. In all models, the maximum stresses were observed in the neck region of the implants. Increasing the contact between the implant and the bone surface may allow a more uniform distribution of the forces on the dental implant and may protect the bone around the implant; thus, the implant could remain in the mouth for longer periods. Variable-thread tapered implants can increase implant-bone contact.

  7. An enhanced matrix-free edge-based finite volume approach to model structures

    CSIR Research Space (South Africa)

    Suliman, Ridhwaan

    2010-01-01

    Full Text Available application to a number of test-cases. As will be demonstrated, the finite volume approach exhibits distinct advantages over the Q4 finite element formulation. This provides an alternative approach to the analysis of solid mechanics and allows...

  8. Multi-rate sensor fusion-based adaptive discrete finite-time synergetic control for flexible-joint mechanical systems

    International Nuclear Information System (INIS)

    Xue Guang-Yue; Ren Xue-Mei; Xia Yuan-Qing

    2013-01-01

    This paper proposes an adaptive discrete finite-time synergetic control (ADFTSC) scheme based on a multi-rate sensor fusion estimator for flexible-joint mechanical systems in the presence of unmeasured states and dynamic uncertainties. Multi-rate sensors are employed to observe the system states which cannot be directly obtained by encoders due to the existence of joint flexibilities. By using an extended Kalman filter (EKF), the finite-time synergetic controller is designed based on a sensor fusion estimator which estimates states and parameters of the mechanical system with multi-rate measurements. The proposed controller can guarantee the finite-time convergence of tracking errors by the theoretical derivation. Simulation and experimental studies are included to validate the effectiveness of the proposed approach. (general)

  9. Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.

    Science.gov (United States)

    Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian

    2018-06-01

    This article presents a modeling methodology and experimental validation for soft manipulators to obtain the forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots; however, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robot that is suitable for offline path planning and position control. The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on the finite element method, with a numerical optimization based on Lagrange multipliers, to obtain the FKM and IKM. To reduce the dimension of the problem, at each step a projection of the model onto the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed on one of the robots between two other geometric approaches and the approach showcased in this article. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.

  10. Inverse Analysis of Pavement Structural Properties Based on Dynamic Finite Element Modeling and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaochao Tang

    2013-03-01

    Full Text Available With the movement towards implementation of the mechanistic-empirical pavement design guide (MEPDG), an accurate determination of pavement layer moduli is vital for predicting critical mechanistic pavement responses. A backcalculation procedure is commonly used to estimate the pavement layer moduli based on non-destructive falling weight deflectometer (FWD) tests. Backcalculation of flexible pavement layer properties is an inverse problem with known input and output signals, based upon which the unknown parameters of the pavement system are evaluated. In this study, an inverse analysis procedure that combines finite element analysis and a population-based optimization technique, the Genetic Algorithm (GA), has been developed to determine pavement layer structural properties. A lightweight deflectometer (LWD) was used to infer the moduli of instrumented three-layer scaled flexible pavement models. While common practice in backcalculating pavement layer properties still assumes a static FWD load and uses only peak values of the load and deflections, dynamic analysis was conducted here to simulate the impulse LWD load. The recorded time histories of the LWD load were used as the known inputs to the pavement system, while the measured time histories of the surface central deflections and the subgrade deflections measured with a linear variable differential transformer (LVDT) were considered as the outputs. As a result, consistent pavement layer moduli can be obtained through this inverse analysis procedure.
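
    The backcalculation loop described above can be caricatured as follows: a small genetic algorithm searches for layer moduli so that deflections predicted by a forward model match the measured ones. The forward model here is a placeholder function (a real implementation would call the dynamic finite element model), and the bounds, population size and GA operators are illustrative choices rather than the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    bounds = np.array([[100.0, 3000.0],    # asphalt layer modulus (MPa)
                       [50.0, 1000.0],     # base layer modulus (MPa)
                       [20.0, 300.0]])     # subgrade modulus (MPa)

    def forward_model(E):
        """Placeholder for the dynamic FE model: returns synthetic deflections (um)."""
        return 1.0e4 / E + np.array([5.0, 3.0, 1.0]) / np.sqrt(E / 100.0)

    measured = forward_model(np.array([1800.0, 400.0, 90.0]))   # synthetic "LWD data"

    def fitness(E):
        return -np.sum((forward_model(E) - measured) ** 2)      # maximize (minimize misfit)

    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
    for gen in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:20]]             # truncation selection
        kids = []
        for _ in range(20):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            w = rng.random(3)
            child = w * a + (1 - w) * b                          # blend crossover
            child += rng.normal(0.0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])  # mutation
            kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("backcalculated moduli (MPa):", np.round(best, 1))
    ```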

  11. A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems

    International Nuclear Information System (INIS)

    Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun

    2010-01-01

    The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by a mechanically based formulation, a scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the Lagrangian framework and the unknowns are displacements, but in the solution procedure auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elastic problems is similar to the above solution approach, the discretization of the SBFEM and the duality system are combined in this paper to derive the governing equations in the Hamiltonian system by introducing the dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for solving the governing equations of the SBFEM for the displacement and boundary stiffness matrix, especially for cases that cause numerical difficulties in the commonly used eigenvalue method. Numerical examples are used to demonstrate the validity and effectiveness of the PIM for the solution of the boundary static stiffness.

  12. Finite-element modelling and preliminary validation of microneedle-based electrodes for enhanced tissue electroporation.

    Science.gov (United States)

    Houlihan, Ruth; Grygoryev, Konstantin; Zhenfei Ning; Williams, John; Moore, Tom; O'Mahony, Conor

    2017-07-01

    This paper investigates the use of microneedle-based electrodes for enhanced testis electroporation, with specific application to the production of transgenic mice. During the design phase, finite-element software has been used to construct a tissue model and to compare the relative performance of electrodes employing a) conventional flat plates, b) microneedle arrays, and c) invasive needles. Results indicate that microneedle-based electrodes can achieve internal tissue field strengths which are an order of magnitude higher than those generated using conventional flat electrodes, and which are comparable to fields produced using invasive needles. Using a double-sided etching process, conductive microneedle arrays were then fabricated and used in prototype electrodes. In a series of mouse model experiments involving injection of a DNA vector expressing Green Fluorescent Protein (GFP), the performance of flat and microneedle electrodes was compared by measuring GFP expression after electroporation. The main finding, supported by experimental and simulated data, is that use of microneedle-based electrodes significantly enhanced electroporation of testis.

  13. Analysis of elastic-plastic problems using edge-based smoothed finite element method

    International Nuclear Information System (INIS)

    Cui, X.Y.; Liu, G.R.; Li, G.Y.; Zhang, G.Y.; Sun, G.Y.

    2009-01-01

    In this paper, an edge-based smoothed finite element method (ES-FEM) is formulated for stress field determination of elastic-plastic problems using triangular meshes, in which smoothing domains associated with the edges of the triangles are used for the smoothing operations to improve the accuracy and convergence rate of the method. The smoothed Galerkin weak form is adopted to obtain the discretized system equations, and the numerical integration becomes a simple summation over the edge-based smoothing domains. The pseudo-elastic method is employed for the determination of the stress field, and Hencky's total deformation theory is used to define effective elastic material parameters, which are treated as field variables and considered as functions of the final state of the stress field. The effective elastic material parameters are then obtained in an iterative manner based on the strain-controlled projection method from the uniaxial material curve. Some numerical examples are investigated and excellent results have been obtained, demonstrating the effectiveness of the present method.

  14. Development of polygon elements based on the scaled boundary finite element method

    International Nuclear Information System (INIS)

    Chiong, Irene; Song Chongmin

    2010-01-01

    We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of polygonal finite elements is highly anticipated in computational mechanics, as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher-order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and to satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes, and the outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes are verified. Accuracy and convergence of the method are also presented, and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh, achieving the same accuracy and convergence with fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.

  15. Detachments of the subducted Indian continental lithosphere based on 3D finite-frequency tomographic images

    Science.gov (United States)

    Liang, X.; Tian, X.; Wang, M.

    2017-12-01

    The Indian plate collided with the Eurasian plate at about 60 Ma, and there has been about 3000 km of crustal shortening since the continental-continental collision. At least one third of the total amount of crustal shortening between the Indian and Eurasian plates cannot be accounted for by thickened Tibetan crust and surface erosion; a combination of possible transfer of lower crust to the mantle by eclogitization and lateral extrusion is required. Based on the lithosphere-asthenosphere boundary images beneath the Tibetan plateau, there is also at least the same deficit for lithospheric mantle subducted into the upper/lower mantle or extruded laterally with the crust. We therefore have to recover a detailed image of the Indian continental lithosphere beneath the plateau in order to explain this deficit in the mass budget. Combining new teleseismic body waves recorded by the SANDWICH passive seismic array with waveforms from several previous temporary seismic arrays, we carried out finite-frequency tomographic inversions to image the three-dimensional velocity structure beneath the southern and central Tibetan plateau and to examine the possible image of subducted Indian lithosphere in the Tibetan upper mantle. We recovered a continuous high-velocity body in the upper mantle and piecewise high-velocity anomalies in the mantle transition zone. Based on their geometry and relative locations, we interpret these high-velocity anomalies as the Indian lithosphere subducted and detached at different episodes of the plateau evolution. Detachment of the subducted Indian lithosphere should have had a crucial impact on the volcanic activity and uplift history of the plateau.

  16. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R; Braun, Alexandre Luis; Awruch, Armando Miguel; Dalcin, Lisandro

    2015-01-01

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α method (Gα) and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with consistent mass is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.

  17. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R

    2015-02-03

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α method (Gα) and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with consistent mass is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.

  18. A Wavelet-Based Finite Element Method for the Self-Shielding Issue in Neutron Transport

    International Nuclear Information System (INIS)

    Le Tellier, R.; Fournier, D.; Ruggieri, J. M.

    2009-01-01

    This paper describes a new approach for treating the energy variable of the neutron transport equation in the resolved resonance energy range. The aim is to avoid recourse to a case-specific spatially dependent self-shielding calculation when considering a broad group structure. The method consists of a discontinuous Galerkin discretization of the energy using wavelet-based elements. A Σt-orthogonalization of the element basis is presented in order to make the approach tractable for spatially dependent problems. First numerical tests of this method are carried out in a limited framework under the Livolant-Jeanpierre hypotheses in an infinite homogeneous medium. They are mainly focused on the way the wavelet-based element basis is constructed. Indeed, the prior selection of these wavelet functions by a thresholding strategy applied to the discrete wavelet transform of a given quantity is a key issue for the convergence rate of the method. The Canuto thresholding approach applied to an approximate flux is found to yield a nearly optimal convergence in many cases. In these tests, the capability of such a finite element discretization to represent the flux depression in a resonant region is demonstrated; a relative accuracy of 10⁻³ on the flux (in L²-norm) is reached with less than 100 wavelet coefficients per group. (authors)
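
    To give a feel for the kind of thresholding strategy mentioned above, the sketch below applies a multilevel Haar transform to a flux-like vector and keeps only the detail coefficients whose magnitude exceeds a threshold. The flux shape, the threshold rule and the Haar basis are illustrative assumptions; they are not the wavelet elements or the Canuto thresholding of the paper.

    ```python
    import numpy as np

    def haar_forward(x):
        """Full multilevel orthonormal Haar transform of a length-2^k vector."""
        x = x.astype(float).copy()
        coeffs = []
        while len(x) > 1:
            avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            coeffs.append(det)
            x = avg
        coeffs.append(x)
        return coeffs  # detail coefficients fine->coarse, then the scaling coefficient

    def haar_inverse(coeffs):
        x = coeffs[-1].copy()
        for det in reversed(coeffs[:-1]):
            up = np.empty(2 * len(x))
            up[0::2] = (x + det) / np.sqrt(2.0)
            up[1::2] = (x - det) / np.sqrt(2.0)
            x = up
        return x

    # Flux-like profile with a localized "resonance dip" (illustrative only).
    u = np.linspace(0.0, 1.0, 256)
    flux = 1.0 - 0.8 * np.exp(-((u - 0.45) / 0.01) ** 2)

    coeffs = haar_forward(flux)
    threshold = 1e-3 * np.max(np.abs(np.concatenate(coeffs)))
    kept = 0
    for det in coeffs[:-1]:
        mask = np.abs(det) >= threshold
        det[~mask] = 0.0           # discard small detail coefficients in place
        kept += int(mask.sum())

    reconstructed = haar_inverse(coeffs)
    rel_error = np.linalg.norm(reconstructed - flux) / np.linalg.norm(flux)
    print(f"kept {kept} detail coefficients, relative L2 error = {rel_error:.2e}")
    ```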

  19. Finite element-based limit load of piping branch junctions under combined loadings

    International Nuclear Information System (INIS)

    Xuan Fuzhen; Li Peining

    2004-01-01

    The limit load is an important input parameter in engineering defect-assessment procedures and strength design. In the present work, limit load calculations were performed for a total of 100 different piping branch junction models under combined internal pressure and moments using the non-linear finite element (FE) method. Three different existing accumulation rules for the limit load, i.e., a linear equation, a parabolic equation and a quadratic equation, were discussed on the basis of the FE results. A novel limit load solution was developed based on detailed three-dimensional FE limit analyses which accommodate the influence of the geometrical parameters, together with analytical solutions based on equilibrium stress fields. Finally, six experimental results were provided to justify the presented equation. According to the FE limit analysis, the limit load interaction of piping tees under combined pressure and moments is related to the geometrical parameters, especially to the diameter ratio d/D. The limit loads predicted by the presented formula are very close to the experimental data. The resulting limit load solution is given in closed form, and thus can be easily used in practice.

  20. Trend analysis using non-stationary time series clustering based on the finite element method

    Science.gov (United States)

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-05-01

    In order to analyze the low-frequency variability of climate, it is useful to model climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering of a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it does not depend on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC), and the results represent region-specific climate variability despite higher-frequency harmonics in the climatic time series. Next, we investigated the relationship between climatic indices and the clusters/trends detected by this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by the AMO and solar activity.
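
    A much simpler relative of the change-point idea discussed above is a brute-force search for the breakpoint of a piecewise-linear trend. The sketch below, with synthetic temperature anomalies and a single breakpoint, is only meant to illustrate the concept and is unrelated to the actual FEM clustering formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic annual temperature anomalies with a trend change around 1980.
    years = np.arange(1950, 2010)
    signal = np.where(years < 1980, 0.005 * (years - 1950), 0.15 + 0.02 * (years - 1980))
    anomaly = signal + rng.normal(0.0, 0.05, years.size)

    def sse_of_linear_fit(x, y):
        """Sum of squared residuals of an ordinary least-squares line."""
        coeffs = np.polyfit(x, y, 1)
        return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

    # Try every admissible breakpoint and keep the one minimizing the total misfit.
    best_year, best_sse = None, np.inf
    for k in range(5, years.size - 5):          # keep at least 5 points per segment
        sse = (sse_of_linear_fit(years[:k], anomaly[:k])
               + sse_of_linear_fit(years[k:], anomaly[k:]))
        if sse < best_sse:
            best_year, best_sse = years[k], sse

    print("estimated trend change point:", best_year)
    ```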

  1. Parameter estimation of a nonlinear Burger's model using nanoindentation and finite element-based inverse analysis

    Science.gov (United States)

    Hamim, Salah Uddin Ahmed

    Nanoindentation involves probing a hard diamond tip into a material, where the load and the displacement experienced by the tip are recorded continuously. These load-displacement data are a direct function of the material's innate stress-strain behavior. Thus, theoretically, it is possible to extract the mechanical properties of a material through nanoindentation. However, due to various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where the nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting the surrogate model performance are analyzed in order to optimize the performance with respect to the computational cost.
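
    The sketch below illustrates the general flavor of a surrogate-model-based inverse analysis: an inexpensive polynomial surrogate is fitted to a few expensive 'simulations' and then inverted by optimization. The quadratic forward model standing in for the finite element run and the single material parameter are hypothetical; they are not the Burger's model or the calibration data of the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def expensive_simulation(param):
        """Stand-in for a costly FE nanoindentation run: returns a peak load (mN)
        for a given material parameter. Purely illustrative."""
        return 2.0 + 0.8 * param + 0.05 * param ** 2

    # Build the surrogate from a handful of sampled "simulations".
    samples = np.linspace(1.0, 10.0, 6)
    responses = np.array([expensive_simulation(p) for p in samples])
    surrogate = np.polynomial.Polynomial.fit(samples, responses, deg=2)

    # Inverse analysis: find the parameter whose surrogate response matches the
    # measured peak load (here a synthetic "measurement").
    measured_peak_load = expensive_simulation(6.3)
    result = minimize_scalar(lambda p: (surrogate(p) - measured_peak_load) ** 2,
                             bounds=(1.0, 10.0), method="bounded")

    print("calibrated parameter:", round(result.x, 3))
    ```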

  2. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    Science.gov (United States)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use the lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to the derivatives of ψ require the neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the nonlinear WENO weights is used in the smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.
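
    The detection role of interpolating wavelets mentioned above can be illustrated with a toy example: the detail coefficient at a fine-grid point is the difference between the actual value and its interpolation from the coarse grid, and points whose detail exceeds a tolerance are flagged for refinement. The test profile, the tolerance and the midpoint interpolation below are simplifying assumptions, not the lifted wavelets of the paper.

    ```python
    import numpy as np

    # Toy profile with a sharp feature (a smeared shock) on a fine uniform grid.
    x_fine = np.linspace(0.0, 1.0, 257)
    u_fine = np.tanh((x_fine - 0.5) / 0.01)

    # Coarse grid = every second fine point; detail = fine value minus the value
    # interpolated from the two neighbouring coarse points (midpoint interpolation).
    u_coarse = u_fine[0::2]
    midpoint_prediction = 0.5 * (u_coarse[:-1] + u_coarse[1:])
    detail = u_fine[1::2] - midpoint_prediction

    # Flag the fine-grid points whose detail coefficient exceeds the tolerance.
    tolerance = 1e-3
    flagged = np.flatnonzero(np.abs(detail) > tolerance)
    print(f"{flagged.size} of {detail.size} odd fine-grid points flagged,")
    print("around x =", np.round(x_fine[1::2][flagged][[0, -1]], 3))
    ```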

  3. Finite Element Based Lagrangian Vortex Dynamics Model for Wind Turbine Aerodynamics

    International Nuclear Information System (INIS)

    McWilliam, Michael K; Crawford, Curran

    2014-01-01

    This paper presents a novel aerodynamic model based on Lagrangian Vortex Dynamics (LVD) formulated using a Finite Element (FE) approach. The advantage of LVD is improved fidelity over Blade Element Momentum Theory (BEMT) while being faster than numerical Navier-Stokes models (NNSM) in either primitive or velocity-vorticity formulations. The model improves on conventional LVD in three ways. First, the model is based on an error-minimization formulation that can be solved with fast root-finding algorithms. In addition to improving accuracy, this eliminates the intrinsic numerical instability of conventional relaxed-wake simulations. The method has further advantages in optimization and aero-elastic simulations for two reasons: the root-finding algorithm can solve the aerodynamic and structural equations simultaneously, avoiding Gauss-Seidel iteration for compatibility constraints, and the formulation allows for an analytical definition of the sensitivities. The second improvement comes from a new discretization scheme based on an FE formulation and numerical quadrature that decouples the spatial, influencing and temporal meshes. The shape of each trailing filament uses basis functions (interpolating splines) that allow for both local polynomial order and element size refinement. A completely independent scheme distributes the influencing (vorticity) elements along the basis functions. This allows for concentrated elements in the near wake for accuracy and progressively fewer in the far wake for efficiency. Finally, the third improvement is the use of a far-wake model based on semi-infinite vortex cylinders whose radius and strength are related to the wake state. The error-based FE formulation allows the transition to the far wake to occur across a fixed plane.

  4. Finite element type of stress analysis for parts based on S235 JR steel welding

    Directory of Open Access Journals (Sweden)

    C. Babis

    2014-04-01

    Full Text Available The determination of static and/or variable stresses in complex-shaped welded structures is hard to achieve. One solution, though, is the use of the finite element method, implemented by means of various specialized software packages. Nowadays, this method has become very popular due to the high precision of the data obtained through both research and finite element analysis. Hence, the present paper deals with the modelling of the pull-out behaviour of concave and convex welded joints through the finite element method.

  5. Domain decomposition based iterative methods for nonlinear elliptic finite element problems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)

    1994-12-31

    The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
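
    As a concrete toy instance of an overlapping Schwarz iteration for a nonlinear elliptic problem, the sketch below sweeps multiplicatively over two overlapping subdomains of a 1D finite-difference discretization of -u'' + u^3 = f, solving each local problem with Newton's method. The model problem, overlap width and iteration counts are assumptions made for illustration; the presentation itself concerns finite element discretizations and inexact Newton variants.

    ```python
    import numpy as np

    # Model problem: -u'' + u^3 = f on (0, 1), u(0) = u(1) = 0, discretized by
    # centred finite differences (an assumed toy setup, not the paper's tests).
    N = 99                                   # interior grid points
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)
    f = 10.0 * np.sin(np.pi * x)

    def residual(u, idx, u_full):
        """Nonlinear residual on the subdomain `idx`, with frozen exterior values."""
        padded = np.concatenate(([0.0], u_full, [0.0]))
        padded[idx + 1] = u                  # overwrite subdomain unknowns
        lap = (-padded[idx] + 2.0 * padded[idx + 1] - padded[idx + 2]) / h**2
        return lap + padded[idx + 1] ** 3 - f[idx]

    def local_newton(idx, u_full, iters=10):
        """Solve the local subdomain problem by Newton's method."""
        u = u_full[idx].copy()
        for _ in range(iters):
            r = residual(u, idx, u_full)
            J = np.diag(2.0 / h**2 + 3.0 * u**2)
            J += np.diag(np.full(len(idx) - 1, -1.0 / h**2), 1)
            J += np.diag(np.full(len(idx) - 1, -1.0 / h**2), -1)
            u -= np.linalg.solve(J, r)
        return u

    # Two overlapping subdomains, visited in a multiplicative Schwarz sweep.
    left = np.arange(0, 60)
    right = np.arange(40, N)

    u = np.zeros(N)
    for sweep in range(20):
        u[left] = local_newton(left, u)
        u[right] = local_newton(right, u)

    print("global residual norm:", np.linalg.norm(residual(u, np.arange(N), u)))
    ```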

  6. The structure of the Hamiltonian in a finite-dimensional formalism based on Weyl's quantum mechanics

    International Nuclear Information System (INIS)

    Santhanam, T.S.; Madivanane, S.

    1982-01-01

    Any discussion of a finite-dimensional formulation of quantum mechanics involves the Sylvester matrix (finite Fourier transform). In the usual formulation, a remarkable relation exists that gives the Fourier transform as the exponential of the Hamiltonian of a simple harmonic oscillator. In this paper, assuming that such a relation also holds in the finite-dimensional case, we extract the logarithm of the Sylvester matrix, and in some cases this can be interpreted as the Hamiltonian of the truncated oscillator. We calculate the Hamiltonian matrix explicitly for the special cases n = 3, 4. (author)
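
    A quick numerical way to play with this idea is to take the n x n finite Fourier transform matrix and compute its matrix logarithm; writing the transform as the exponential of i(π/2) times a Hermitian matrix gives a candidate Hamiltonian. The normalization and branch chosen below are illustrative assumptions and not necessarily those of the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    def finite_fourier_matrix(n):
        """Unitary Sylvester (finite Fourier transform) matrix of size n."""
        j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

    for n in (3, 4):
        F = finite_fourier_matrix(n)
        # Principal matrix logarithm; writing F = exp(i * (pi/2) * H) gives a
        # candidate Hermitian "Hamiltonian" H (the branch choice is an assumption).
        H = logm(F) / (1j * np.pi / 2)
        print(f"n = {n}")
        print(np.round(H, 3))
        print("reconstruction error:", np.linalg.norm(expm(1j * np.pi / 2 * H) - F))
    ```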

  7. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. The low-level discretizations in this method are obtained by using low-order vector finite elements for the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that the multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term using a null space of the coefficient matrix is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
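
    The sketch below shows the bare bones of a two-grid correction scheme for a 1D Poisson problem, with a Gauss-Seidel smoother, a direct coarse solve standing in for the ICCG bottom solver, and standard full-weighting and linear-interpolation transfer operators. It is a scalar toy, not the high-order/low-order vector element hierarchy of the paper.

    ```python
    import numpy as np

    def poisson_matrix(n):
        """1D Poisson matrix (Dirichlet BCs) on n interior points, h = 1/(n+1)."""
        h = 1.0 / (n + 1)
        return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
                - np.diag(np.ones(n - 1), -1)) / h**2

    def gauss_seidel(A, b, x, sweeps):
        for _ in range(sweeps):
            for i in range(len(b)):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    def two_grid_cycle(A_fine, A_coarse, b, x):
        n = len(b)
        x = gauss_seidel(A_fine, b, x, sweeps=3)            # pre-smoothing
        r = b - A_fine @ x
        r_coarse = 0.25 * (r[0:n - 2:2] + 2.0 * r[1:n - 1:2] + r[2:n:2])  # restriction
        e_coarse = np.linalg.solve(A_coarse, r_coarse)      # exact coarse solve
        e = np.zeros(n)
        e[1:n - 1:2] = e_coarse                             # coarse nodes
        e[0:n - 2:2] += 0.5 * e_coarse                      # linear interpolation
        e[2:n:2] += 0.5 * e_coarse
        x += e
        return gauss_seidel(A_fine, b, x, sweeps=3)         # post-smoothing

    n_fine = 63                                             # coarse grid has 31 points
    A_fine, A_coarse = poisson_matrix(n_fine), poisson_matrix((n_fine - 1) // 2)
    b = np.ones(n_fine)
    x = np.zeros(n_fine)
    for cycle in range(10):
        x = two_grid_cycle(A_fine, A_coarse, b, x)
        print(f"cycle {cycle}: residual norm = {np.linalg.norm(b - A_fine @ x):.3e}")
    ```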

  8. 3D controlled-source electromagnetic modeling in anisotropic medium using edge-based finite element method

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Xiong, Bin; Han, Muran

    2014-01-01

    This paper presents a linear edge-based finite element method for numerical modeling of 3D controlled-source electromagnetic data in an anisotropic conductive medium. We use a nonuniform rectangular mesh in order to capture the rapid change of the diffusive electromagnetic field within the regions of ... The results are in good agreement with the solutions obtained by the integral equation method.

  9. A 3D finite-strain-based constitutive model for shape memory alloys accounting for thermomechanical coupling and martensite reorientation

    Science.gov (United States)

    Wang, Jun; Moumni, Ziad; Zhang, Weihong; Xu, Yingjie; Zaki, Wael

    2017-06-01

    The paper presents a finite-strain constitutive model for shape memory alloys (SMAs) that accounts for thermomechanical coupling and martensite reorientation. The finite-strain formulation is based on a two-tier, multiplicative decomposition of the deformation gradient into thermal, elastic, and inelastic parts, where the inelastic deformation is further split into phase transformation and martensite reorientation components. A time-discrete formulation of the constitutive equations is proposed and a numerical integration algorithm is presented, featuring proper symmetrization of the tensor variables and explicit formulation of the material and spatial tangent operators involved. The algorithm is used for finite element analysis of SMA components subjected to various loading conditions, including uniaxial, non-proportional, isothermal and adiabatic loading cases. The analysis is carried out in the FEA software Abaqus by means of a user-defined material subroutine, which is then utilized to simulate an SMA archwire undergoing large strains and rotations.

  10. A finite-element simulation of galvanic coupling intra-body communication based on the whole human body.

    Science.gov (United States)

    Song, Yong; Zhang, Kai; Hao, Qun; Hu, Lanxin; Wang, Jingwen; Shang, Fuzhou

    2012-10-09

    Simulation based on the finite-element (FE) method plays an important role in the investigation of intra-body communication (IBC). In this paper, a finite-element model of the whole human body for IBC simulation is proposed and verified, and FE simulations of galvanic coupling IBC with different signal transmission paths are achieved. Firstly, a novel finite-element method for modeling the whole human body is proposed, and an FE model of the whole human body for IBC simulation is developed. Secondly, simulations of galvanic coupling IBC with different signal transmission paths are implemented. Finally, the feasibility of the proposed method is verified using in vivo measurements within the frequency range of 10 kHz-5 MHz, from which some important conclusions are deduced. Our results indicate that the proposed method offers significant advantages in the investigation of galvanic coupling intra-body communication.

  11. Neural Network Observer-Based Finite-Time Formation Control of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Caihong Zhang

    2014-01-01

    Full Text Available This paper addresses the leader-following formation problem of nonholonomic mobile robots. In the formation, only the pose (i.e., the position and direction angle) of the leader robot can be obtained by the follower. First, the leader-following formation is transformed into a special trajectory tracking problem. Then, a neural network (NN) finite-time observer of the follower robot is designed to estimate the dynamics of the leader robot. Finally, finite-time formation control laws are developed for the follower robot to track the leader robot with the desired separation and bearing in finite time. The effectiveness of the proposed NN finite-time observer and formation control laws is illustrated by both qualitative analysis and simulation results.

  12. The case for regime-based water quality standards

    Science.gov (United States)

    G.C. Poole; J.B. Dunham; D.M. Keenan; S.T. Sauter; D.A. McCullough; C. Mebane; J.C. Lockwood; D.A. Essig; M.P. Hicks; D.J. Sturdevant; E.J. Materna; S.A. Spalding; J. Risley; M. Deppman

    2004-01-01

    Conventional water quality standards have been successful in reducing the concentration of toxic substances in US waters. However, conventional standards are based on simple thresholds and are therefore poorly structured to address human-caused imbalances in dynamic, natural water quality parameters, such as nutrients, sediment, and temperature. A more applicable type...

  13. Defining a nuclear medical file format based on the DICOM standard

    International Nuclear Information System (INIS)

    He Bin; Jin Yongjie; Li Yulan

    2001-01-01

    With the wide application of computer technology in the medical area, DICOM is becoming the standard for digital imaging and communication. The author discusses how to define a medical imaging file format based on the DICOM standard. The paper also introduces the file format of the ANMIS system defined by the authors, together with the validity and integrity of this format.

  14. Finite Elements Based on Strong and Weak Formulations for Structural Mechanics: Stability, Accuracy and Reliability

    Directory of Open Access Journals (Sweden)

    Francesco Tornabene

    2017-07-01

    Full Text Available The authors present a novel formulation based on the Differential Quadrature (DQ) method, which is used to approximate derivatives and integrals. The resulting scheme has been termed strong- and weak-form finite elements (SFEM or WFEM), according to the numerical scheme employed in the computation. Such numerical methods are applied to solve some structural problems related to the mechanical behavior of plates and shells made of isotropic or composite materials. The main differences between these two approaches lie in the initial formulation, which is strong or weak (variational), and in the implementation of the boundary conditions, which for the former include the continuity of stresses and displacements, whereas the latter can consider the continuity of the displacements or both. The two methodologies also consider a mapping technique to transform an element of general shape described in Cartesian coordinates into the same element in the computational space. Such a technique can be implemented by employing the classic Lagrangian-shaped elements with a fixed number of nodes along the element edges or blending functions which allow an “exact mapping” of the element. In particular, the authors employ NURBS (Non-Uniform Rational B-Splines) for such nonlinear mapping in order to use the “exact” shape of CAD designs.

  15. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Kim, Dong Keon

    2016-01-01

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action; thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.

  16. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which features easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error correction performance over the additive white Gaussian noise (AWGN) channel with the iterative sum-product decoding algorithm (SPA). At a bit error rate (BER) of 10⁻⁶, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
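
    A generic way to assemble a quasi-cyclic parity-check matrix is to expand a small exponent (base) matrix into circulant permutation blocks; the sketch below does this with shift values derived from powers of a primitive element of a small prime field. The field size, base-matrix shape and exponent rule are illustrative assumptions and do not reproduce the specific QC-LDPC(5334,4962) construction of the paper.

    ```python
    import numpy as np

    def circulant_permutation(size, shift):
        """size x size identity matrix cyclically shifted by `shift` columns."""
        return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

    # Small prime field GF(p); alpha = 3 is a primitive root modulo 31, so its
    # powers sweep the whole multiplicative group.
    p, alpha = 31, 3
    rows, cols = 3, 6          # shape of the exponent (base) matrix, chosen arbitrarily

    # Exponent matrix: shift values derived from powers of alpha in GF(p)
    # (the (i+1)*(j+1) exponent rule is an arbitrary illustrative choice).
    exponents = np.array([[pow(alpha, (i + 1) * (j + 1), p) % (p - 1)
                           for j in range(cols)] for i in range(rows)])

    # Expand every entry into a (p-1) x (p-1) circulant permutation block.
    H = np.block([[circulant_permutation(p - 1, int(s)) for s in row]
                  for row in exponents])

    print("parity-check matrix shape:", H.shape)          # (90, 180)
    print("row weights:", np.unique(H.sum(axis=1)))       # regular: every row weight 6
    print("column weights:", np.unique(H.sum(axis=0)))    # regular: every column weight 3
    ```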

  17. Micro-CT based finite element models for elastic properties of glass-ceramic scaffolds.

    Science.gov (United States)

    Tagliabue, Stefano; Rossi, Erica; Baino, Francesco; Vitale-Brovarone, Chiara; Gastaldi, Dario; Vena, Pasquale

    2017-01-01

    In this study, the mechanical properties of porous glass-ceramic scaffolds are investigated by means of three-dimensional finite element models based on micro-computed tomography (micro-CT) scan data. In particular, the quantitative relationship between the morpho-architectural features of the obtained scaffolds, such as macroscopic porosity and strut thickness, and elastic properties, is sought. The macroscopic elastic properties of the scaffolds have been obtained through numerical homogenization approaches using the mechanical characteristics of the solid walls of the scaffolds (assessed through nanoindentation) as input parameters for the numerical simulations. Anisotropic mechanical properties of the produced scaffolds have also been investigated by defining a suitable anisotropy index. A comparison with morphological data obtained through the micro-CT scans is also presented. The proposed study shows that the produced glass-ceramic scaffolds exhibited a macroscopic porosity ranging between 29% and 97%, which corresponds to an average stiffness ranging between 42.4 GPa and 36 MPa. A quantitative estimation of the isotropy of the macroscopic elastic properties has been performed, showing that the samples with higher solid fractions were those closest to an isotropic material. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A study of gradient strengthening based on a finite-deformation gradient crystal-plasticity model

    Science.gov (United States)

    Pouriayevali, Habib; Xu, Bai-Xiang

    2017-11-01

    A comprehensive study of a finite-deformation gradient crystal-plasticity model derived based on Gurtin's framework (Int J Plast 24:702-725, 2008) is carried out here. This systematic investigation of the different roles of the governing components of the model demonstrates the strength of this framework in predicting a wide range of hardening behaviors as well as rate-dependent and scale-variation responses in a single crystal. The model is represented in the reference configuration for the purpose of numerical implementation and then implemented in the FEM software ABAQUS via a user-defined subroutine (UEL). Furthermore, a function of the accumulation rates of dislocations is employed and viewed as a measure of the formation of short-range interactions. Our simulation results reveal that the dissipative gradient strengthening can be identified as a source of isotropic-hardening behavior, which may represent the effect of irrecoverable work introduced by Gurtin and Ohno (J Mech Phys Solids 59:320-343, 2011). The variation of the size dependency at different magnitudes of the rate-sensitivity parameter is also discussed. Moreover, the effect of a distinctive feature of the model, which accounts for the distortion of the crystal lattice in the reference configuration, is reported in this study for the first time. In addition, plastic flow in predefined slip systems and the expansion of the accumulation of GNDs are distinctly observed at varying scales and under different loading conditions.

  19. Finite state projection based bounds to compare chemical master equation models using single-cell data

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Zachary [School of Biomedical Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States); Neuert, Gregor [Department of Molecular Physiology and Biophysics, Vanderbilt University School of Medicine, Nashville, Tennessee 37232 (United States); Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232 (United States); Department of Biomedical Engineering, Vanderbilt University School of Engineering, Nashville, Tennessee 37232 (United States); Munsky, Brian [School of Biomedical Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States); Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States)

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
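
    As a minimal illustration of the finite state projection idea, the sketch below truncates the chemical master equation of a simple birth-death process to a finite number of states, propagates the probability vector with a matrix exponential, and reports the guaranteed truncation error, namely the probability mass that leaks out of the projection. The reaction rates, truncation size and observed state are assumptions for illustration, not the models or data of the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Birth-death process: production at rate k, degradation at rate g * n.
    k, g = 10.0, 1.0
    n_max = 60                       # finite state projection: keep states 0..n_max

    # Truncated CME generator A; probability leaks out at the truncation boundary.
    A = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            A[n + 1, n] += k         # production n -> n+1
        A[n, n] -= k                 # outflow due to production (leaks at n = n_max)
        if n > 0:
            A[n - 1, n] += g * n     # degradation n -> n-1
            A[n, n] -= g * n

    p0 = np.zeros(n_max + 1)
    p0[0] = 1.0                      # start with zero molecules

    t = 2.0
    p = expm(A * t) @ p0             # probability over the projected states at time t

    # FSP guarantee: the leaked mass (1 - sum of p) bounds the error of any
    # probability computed from the projection, e.g. a single-cell observation.
    print("retained probability mass:", p.sum())
    print("P(n = 12 at t = 2) is within", 1.0 - p.sum(), "of the truncated value", p[12])
    ```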

  20. Three-dimensional finite element model for flexible pavement analyses based on field modulus measurements

    International Nuclear Information System (INIS)

    Lacey, G.; Thenoux, G.; Rodriguez-Roa, F.

    2008-01-01

    In accordance with the present development of empirical-mechanistic tools, this paper presents an alternative to traditional analysis methods for flexible pavements, using a three-dimensional finite element formulation based on a linear-elastic perfectly-plastic Drucker-Prager model for the granular soil layers and a linear-elastic stress-strain law for the asphalt layer. From the sensitivity analysis performed, it was found that variations of ±4° in the internal friction angle of the granular soil layers did not significantly affect the analyzed pavement response. On the other hand, a null dilation angle is conservatively proposed for design purposes. The use of a Light Falling Weight Deflectometer is also proposed as an effective and practical tool for on-site elastic modulus determination of granular soil layers. However, the stiffness value obtained from the tested layer should be corrected when the measured peak deflection and the peak force do not occur at the same time. In addition, some practical observations are given to achieve successful field measurements. The importance of using a 3D FE analysis to predict the maximum tensile strain at the bottom of the asphalt layer (related to pavement fatigue) and the maximum vertical compressive strain transmitted to the top of the granular soil layers (related to rutting) is also shown. (author)

  1. Finite Element Based Response Surface Methodology to Optimize Segmental Tunnel Lining

    Directory of Open Access Journals (Sweden)

    A. Rastbood

    2017-04-01

    Full Text Available The main objective of this paper is to optimize the geometrical and engineering characteristics of the concrete segments of a tunnel lining using Finite Element (FE) based Response Surface Methodology (RSM). Input data for the RSM statistical analysis were obtained using the FEM. In the RSM analysis, the thickness (t) and elasticity modulus (E) of the concrete segments, the tunnel height (H), the horizontal-to-vertical stress ratio (K) and the position of the key segment in the tunnel lining ring (θ) were considered as independent input variables. The maximum values of the Mises and Tresca stresses and the tunnel ring displacement (UMAX) were set as responses. Analysis of variance (ANOVA) was carried out to investigate the influence of each input variable on the responses. Second-order polynomial equations in terms of the influencing input variables were obtained for each response. It was found that the elasticity modulus and key segment position variables were not included in the yield stress and ring displacement equations, and that only the tunnel height and stress ratio variables were included in the ring displacement equation. Finally, an optimization analysis of the tunnel lining ring was performed. Since the elasticity modulus and key segment position variables were absent from the equations, their values were kept at the average level and the other variables were varied within the related ranges. The response parameters were set to the minimum. It was concluded that, to obtain optimum values for the responses, the ring thickness and tunnel height must be near their maximum and minimum values, respectively, and the ground stress state must be close to hydrostatic conditions.
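
    The core of this kind of response surface workflow, fitting a second-order polynomial to a small design of FE runs and then optimizing it, can be sketched as follows. The two design variables, the synthetic 'FE results' and the bounds are hypothetical; they only illustrate the fit-then-optimize pattern, not the tunnel lining model itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Pretend these are results of FE runs sampled over two design variables:
    # ring thickness t (m) and tunnel height H (m). Purely synthetic data.
    t = rng.uniform(0.2, 0.5, 30)
    H = rng.uniform(8.0, 15.0, 30)
    response = 120.0 - 150.0 * t + 6.0 * H + 90.0 * t**2 + 0.1 * H**2 - 2.0 * t * H
    response += rng.normal(0.0, 1.0, t.size)          # numerical noise

    # Second-order response surface: y ~ b0 + b1 t + b2 H + b3 t^2 + b4 H^2 + b5 tH.
    X = np.column_stack([np.ones_like(t), t, H, t**2, H**2, t * H])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)

    def surface(v):
        ti, Hi = v
        return beta @ np.array([1.0, ti, Hi, ti**2, Hi**2, ti * Hi])

    # Minimize the fitted response within the design ranges.
    opt = minimize(surface, x0=[0.35, 11.0], bounds=[(0.2, 0.5), (8.0, 15.0)])
    print("optimum (t, H):", np.round(opt.x, 3), " predicted response:", round(opt.fun, 2))
    ```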

  2. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu [School of Architecture and Architectural Engineering, Hanyang University, Ansan (Korea, Republic of); Kim, Dong Keon [Dept. of Architectural Engineering, Dong A University, Busan (Korea, Republic of)

    2016-09-15

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action; thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.

  3. Leak-Before-Break assessment of a welded piping based on 3D finite element method

    International Nuclear Information System (INIS)

    Chen, Mingya; Yu, Weiwei; Chen, Zhilin; Qian, Guian; Lu, Feng; Xue, Fei

    2017-01-01

    Highlights: • The effects of load reduction, strength match, welding width, load level, crack size and constraint are studied. • The results show that the LBB margin depends on the load level. • The results show that a higher strength match of WPJs leads to higher crack-front constraints. • The results show that the engineering method has high precision only if the width of the weld is comparable to the crack depth. - Abstract: The paper studies the effects of load reduction (the discrepancy between design and real loadings), the strength match of the welded piping joint (WPJ), the welding width, the crack size and the crack tip constraint on the Leak-Before-Break (LBB) assessment of a welded piping. The 3D finite element (FE) method is used in the study of a surge line of the steam generator in a nuclear power plant. It is demonstrated that the LBB margin depends on the loading level and that the load reduction effect should be considered. When the loading is high enough, there is a quite large deviation between the J-integral calculated based on the real material properties of the WPJ and that calculated by the engineering method, e.g. the Zahoor handbook of the Electric Power Research Institute (EPRI). The engineering method assumes in the calculation that the whole piping is made of the weld material alone. As the influence of the strength match and welding width is ignored in the engineering method for the J-integral calculation, the engineering method has sufficient precision only if the width of the weld is comparable to the crack depth. A narrower welding width leads to a higher constraint on the plastic deformation in the weld and larger high-stress areas in the base material for a low strength-match WPJ. A higher strength match of WPJs leads to higher crack-front constraints.

  4. SU-F-I-50: Finite Element-Based Deformable Image Registration of Lung and Heart

    Energy Technology Data Exchange (ETDEWEB)

    Penjweini, R [University of Pennsylvania, Philadelphia, Pennsylvania (United States); Kim, M [University of Pennsylvania, Philadelphia, PA (United States); Zhu, T [University Pennsylvania, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Photodynamic therapy (PDT) is used after surgical resection to treat microscopic disease in malignant pleural mesothelioma and to increase survival rates. Although accurate light delivery is imperative to PDT efficacy, the deformation of the pleural volume during surgery affects the delivered light dose. To facilitate treatment planning, we use a finite-element-based (FEM) deformable image registration to quantify the anatomical variation of lung and heart volumes between CT pre- (or post-) surgery and surface contours obtained during PDT using an infrared camera-based navigation system (NDI). Methods: NDI is used during PDT to obtain the cumulative light fluence on every cavity surface point being treated. A wand, consisting of a modified endotracheal tube filled with Intralipid and an optical fiber inside the tube, is used to deliver the light during PDT. The position of the treatment is tracked using an attachment with nine reflective passive markers that are seen by the NDI system. The position points are then plotted as a three-dimensional volume of the pleural cavity using Matlab and Meshlab. A series of computed tomography (CT) scans of the lungs and heart, in the same patient, are also acquired before and after surgery. The NDI and CT contours are imported into COMSOL Multiphysics, where the FEM-based deformable image registration is obtained. The NDI and CT contours acquired during and post-PDT are considered as the reference, and the pre-PDT CT contours are used as the target, which will be deformed. Results: The anatomical variation of the lung and heart volumes, taken at different times with different imaging devices, was determined using our model. The resulting three-dimensional deformation map along the x, y and z axes was obtained. Conclusion: Our model fuses images acquired by different modalities and provides insights into the variation in anatomical structures over time.

  5. Development of a LED based standard for luminous flux

    Science.gov (United States)

    Sardinha, André; Ázara, Ivo; Torres, Miguel; Menegotto, Thiago; Grieneisen, Hans Peter; Borghi, Giovanna; Couceiro, Iakyra; Zim, Alexandre; Muller, Filipe

    2018-03-01

    Incandescent lamps, simple artifacts with a radiation spectrum very similar to that of a black-body emitter, are traditional standards in photometry. Nowadays LEDs are broadly used in lighting, with a great variety of spectra, and it is convenient to use standards for photometry with a spectral distribution similar to that of the measured artifact. Research and development of such standards is under way in several National Metrology Institutes. In Brazil, Inmetro is working on a practical solution for providing a LED-based standard to be used for luminous flux measurements in the field of general lighting. This paper presents the measurements made during the development of a prototype, which will subsequently be characterized in photometric quantities.

  6. Rethinking Game Based Learning: applying pedagogical standards to educational games

    NARCIS (Netherlands)

    Schmitz, Birgit; Kelle, Sebastian

    2010-01-01

    Schmitz, B., & Kelle, S. (2010, 1-6 February). Rethinking Game Based Learning: applying pedagogical standards to educational games. Presentation at JTEL Winter School 2010 on Advanced Learning Technologies, Innsbruck, Austria.

  7. The ASEAN community-based tourism standards: looking beyond certification

    OpenAIRE

    Novelli, M.; Klatte, N.; Dolezal, C.

    2017-01-01

    This paper reports findings from an opportunity study on the appropriateness of implementing community-based tourism standards (CBTS) certification through the Association of Southeast Asian Nations (ASEAN) criteria, as a way to improve sustainable tourism provision in the region. Framed by critical reflections on community-based tourism (CBT) literature and existing sustainable tourism standards (STS) practices, qualitative research consisting of interviews with six key industry experts prov...

  8. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about the subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's one, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.

  9. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number

  10. Autonomy and Accountability in Standards-Based Reform

    Directory of Open Access Journals (Sweden)

    Susan Watson

    2001-08-01

    Full Text Available In this article we discuss the effects of one urban school district's efforts to increase the autonomy and accountability of schools and teams of teachers through a standards-based reform known as team-based schooling. Team-based schooling is designed to devolve decision-making authority down to the school level by increasing teachers' autonomy to make decisions. Increased accountability is enacted in the form of a state-level standards-based initiative. Based on our evaluation over a two-year period involving extensive fieldwork and quantitative analysis, we describe the ways that teachers, teams and school administrators responded to the implementation of team-based schooling. What are the effects of increasing school-level autonomy and accountability in the context of standards-based reform? Our analysis highlights several issues: the "lived reality" of teaming as it interacts with the existing culture within schools, the ways that teachers respond to the pressures created by increased internal and external accountability, and the effects of resource constraints on the effectiveness of implementation. We conclude by using our findings to consider more broadly the trade-off between increased autonomy and accountability on which standards-based reforms like team-based schooling are based.

  11. Stability Analysis of Anchored Soil Slope Based on Finite Element Limit Equilibrium Method

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2016-01-01

    Full Text Available Under plane strain conditions, the finite element limit equilibrium method is used to study some key problems in the stability analysis of anchored slopes. The definition of the safety factor in the slice method is generalized to the FEM. The “true” stress field in the whole structure can be obtained by an elastic-plastic finite element analysis. Then, an optimal search for the most dangerous sliding surface using the Hooke-Jeeves optimization search method is introduced. Three cases of stability analysis, of a natural slope, an anchored slope with seepage, and an excavated anchored slope, are conducted. The differences in the safety factor value, the shape and location of the slip surface, and the anchoring effect among the slice method, the finite element strength reduction method (SRM), and the finite element limit equilibrium method are comparatively analyzed. The results show that the safety factor given by the FEM is greater and the unfavorable slip surface is deeper than those given by the slice method. The finite element limit equilibrium method has high calculation accuracy, and to some extent the effect of the anchor is underestimated by the slice method and overestimated by the SRM.
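
    A finite element limit equilibrium safety factor is essentially the ratio of the available shear resistance to the mobilized shear stress integrated along a trial slip surface, with both taken from the FE stress field. The sketch below computes that ratio for made-up Mohr-Coulomb parameters and made-up stresses sampled along a slip surface; the numbers and the sampling are illustrative assumptions only.

    ```python
    import numpy as np

    # Mohr-Coulomb strength parameters of the soil (assumed values).
    cohesion = 15.0e3            # Pa
    phi = np.radians(30.0)       # internal friction angle

    # Normal stress, shear stress and segment length sampled at points along a
    # trial slip surface. In a real analysis these come from the elastic-plastic
    # FE stress field; here they are synthetic stand-ins.
    n_pts = 50
    sigma_n = 80.0e3 + 40.0e3 * np.sin(np.linspace(0.0, np.pi, n_pts))   # Pa
    tau = 30.0e3 + 15.0e3 * np.sin(np.linspace(0.0, np.pi, n_pts))       # Pa
    seg_len = np.full(n_pts, 0.5)                                        # m

    # Safety factor = integrated shear resistance / integrated mobilized shear.
    resisting = np.sum((cohesion + sigma_n * np.tan(phi)) * seg_len)
    driving = np.sum(tau * seg_len)
    print("safety factor along the trial surface:", round(resisting / driving, 3))
    ```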

  12. A customized fixation plate with novel structure designed by topological optimization for mandibular angle fracture based on finite element analysis.

    Science.gov (United States)

    Liu, Yun-Feng; Fan, Ying-Ying; Jiang, Xian-Feng; Baur, Dale A

    2017-11-15

    The purpose of this study was to design a customized fixation plate for mandibular angle fracture using topological optimization based on the biomechanical properties of the two conventional fixation systems, and compare the results of stress, strain and displacement distributions calculated by finite element analysis (FEA). A three-dimensional (3D) virtual mandible was reconstructed from CT images with a mimic angle fracture and a 1 mm gap between two bone segments, and then a FEA model, including volume mesh with inhomogeneous bone material properties, three loading conditions and constraints (muscles and condyles), was created to design a customized plate using topological optimization method, then the shape of the plate was referenced from the stress concentrated area on an initial part created from thickened bone surface for optimal calculation, and then the plate was formulated as "V" pattern according to dimensions of standard mini-plate finally. To compare the biomechanical behavior of the "V" plate and other conventional mini-plates for angle fracture fixation, two conventional fixation systems were used: type A, one standard mini-plate, and type B, two standard mini-plates, and the stress, strain and displacement distributions within the three fixation systems were compared and discussed. The stress, strain and displacement distributions to the angle fractured mandible with three different fixation modalities were collected, respectively, and the maximum stress for each model emerged at the mandibular ramus or screw holes. Under the same loading conditions, the maximum stress on the customized fixation system decreased 74.3, 75.6 and 70.6% compared to type A, and 34.9, 34.1, and 39.6% compared to type B. All maximum von Mises stresses of mandible were well below the allowable stress of human bone, as well as maximum principal strain. And the displacement diagram of bony segments indicated the effect of treatment with different fixation systems. The

  13. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
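
    For reference, the estimated power spectral density of the random heterogeneity quoted above can be evaluated directly; the sketch below uses the reported values ε = 0.05 and a = 3.1 km and is only a convenience for inspecting the spectrum around the corner wavenumber 1/a.

```python
import numpy as np

# Power spectral density of the random heterogeneity reported in the abstract:
#   P(m) = 8*pi*eps**2*a**3 / (1 + a**2 * m**2)**2, with eps = 0.05 and a = 3.1 km.
eps, a = 0.05, 3.1   # fractional fluctuation and correlation length (km)

def P(m):
    """PSD as a function of wavenumber m (1/km)."""
    return 8.0 * np.pi * eps**2 * a**3 / (1.0 + (a * m) ** 2) ** 2

m = np.logspace(-2, 1, 5)               # wavenumbers around the corner at 1/a
print(np.column_stack([m, P(m)]))
```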

  14. Application of Finite Element Modeling Methods in Magnetic Resonance Imaging-Based Research and Clinical Management

    Science.gov (United States)

    Fwu, Peter Tramyeon

    The medical image is very complex by its nature. Modeling built upon the medical image is challenging due to the lack of an analytical solution. The finite element method (FEM) is a numerical technique which can be used to solve partial differential equations. It relies on the transformation of a continuous domain into solvable discrete sub-domains. In three-dimensional space, FEM is capable of dealing with complicated structures and heterogeneous interiors. That makes FEM an ideal tool to approach medical-image based modeling problems. In this study, I address three modeling problems: (1) photon transport inside the human breast, by implementing the radiative transfer equation to simulate diffuse optical spectroscopy imaging (DOSI) in order to measure the percent density (PD), which has been proven to be a cancer risk factor in mammography. Our goal is to use MRI as the ground truth to optimize the DOSI scanning protocol to get a consistent measurement of PD. Our result shows that the DOSI measurement is position and depth dependent and that a proper scanning scheme and body configuration are needed; (2) heat flow in the prostate, by implementing the Pennes bioheat equation to evaluate the cooling performance of regional hypothermia during robot-assisted radical prostatectomy for the individual patient in order to achieve the optimal cooling setting. Four factors are taken into account during the simulation: blood abundance, artery perfusion, cooling balloon temperature, and the anatomical distance. The result shows that blood abundance, prostate size, and anatomical distance are significant factors for the equilibrium temperature of the neurovascular bundle; (3) shape analysis of the hippocampus, using radial distance mapping and two registration methods to find the correlation of sub-regional change with age and cognitive performance, which might not be revealed in a volumetric analysis. The result gives a fundamental knowledge of normal distribution in young
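
    A minimal one-dimensional explicit finite-difference sketch of the Pennes bioheat equation, of the kind used in the prostate-cooling model above, is given below; all tissue, blood and boundary parameters are generic illustrative values (assumptions), not patient-specific data from the study.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m
# Parameter values are generic illustrative numbers, not patient-specific.
k, rho, c = 0.5, 1050.0, 3600.0           # tissue conductivity, density, specific heat
w_b, rho_b, c_b = 0.0005, 1060.0, 3770.0  # perfusion rate (1/s), blood density, blood heat capacity
T_a, q_m = 37.0, 400.0                    # arterial temperature (C), metabolic heat (W/m^3)

nx, dx, dt = 51, 1e-3, 0.05               # 5 cm of tissue, stability-safe time step
T = np.full(nx, 37.0)

for _ in range(int(600 / dt)):            # 10 minutes of cooling
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    perf = w_b * rho_b * c_b * (T_a - T[1:-1])
    T[1:-1] += dt * (k * lap + perf + q_m) / (rho * c)
    T[0], T[-1] = 15.0, 37.0              # cooled surface vs. deep-tissue boundary

print(T[::10])                            # temperature profile every centimetre
```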

  15. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    Science.gov (United States)

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable correlation analysis assessed the various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  16. Rational bases and generalized barycentrics applications to finite elements and graphics

    CERN Document Server

    Wachspress, Eugene

    2016-01-01

    This three-part volume explores theory for construction of rational interpolation functions for continuous patchwork approximation.  Authored by the namesake of the Wachspress Coordinates, the book develops construction of basis functions for a broad class of elements which have widespread graphics and finite element application. Part one is the 1975 book A Rational Finite Element Basis (with minor updates and corrections) written by Dr. Wachspress.  Part two describes theoretical advances since 1975 and includes analysis of elements not considered previously.  Part three consists of annotated MATLAB programs implementing theory presented in parts one and two.

  17. The simulation of electrostatic coupling intra-body communication based on the finite-element method

    Institute of Scientific and Technical Information of China (English)

    Song Yong; Zhang Kai; Yang Guang; Zhu Kang; Hao Qun

    2011-01-01

    In this paper, the computer simulation of electrostatic coupling IBC is investigated using the developed finite-element models, in which (a) the incidence and reflection of the electronic signal in the upper arm model were analyzed using electromagnetic wave theory; (b) the finite-element models of electrostatic coupling IBC were developed using the electromagnetic analysis package of the ANSYS software; (c) the signal attenuation of electrostatic coupling IBC was simulated under the conditions of different signal frequencies, electrode directions, electrode sizes and transmission distances. Finally, some important conclusions are deduced on the basis of the simulation results.

  18. 77 FR 42988 - Updating OSHA Construction Standards Based on National Consensus Standards; Head Protection...

    Science.gov (United States)

    2012-07-23

    .... OSHA-2011-0184] RIN 1218-AC65 Updating OSHA Construction Standards Based on National Consensus... Administration (OSHA), Department of Labor. ACTION: Direct final rule; correction. SUMMARY: OSHA is correcting a... confusion resulting from a drafting error. OSHA published the DFR on June 22, 2012 (77 FR 37587). OSHA also...

  19. 77 FR 43018 - Updating OSHA Construction Standards Based on National Consensus Standards; Head Protection...

    Science.gov (United States)

    2012-07-23

    .... OSHA-2011-0184] RIN 1218-AC65 Updating OSHA Construction Standards Based on National Consensus... Health Administration (OSHA), Department of Labor. ACTION: Notice of proposed rulemaking; correction. SUMMARY: OSHA is correcting a notice of proposed rulemaking (NPRM) with regard to the construction...

  20. Cloud-Based Collaborative Writing and the Common Core Standards

    Science.gov (United States)

    Yim, Soobin; Warschauer, Mark; Zheng, Binbin; Lawrence, Joshua F.

    2014-01-01

    The Common Core State Standards emphasize the integration of technology skills into English Language Arts (ELA) instruction, recognizing the demand for technology-based literacy skills to be college- and career-ready. This study aims to examine how collaborative cloud-based writing is used in a Colorado school district, where one-to-one…

  1. Finite element based bladder modeling for image-guided radiotherapy of bladder cancer

    NARCIS (Netherlands)

    Chai, Xiangfei; van Herk, Marcel; van de Kamer, Jeroen B.; Hulshof, Maarten C. C. M.; Remeijer, Peter; Lotz, Heidi T.; Bel, Arjan

    2011-01-01

    Purpose: A biomechanical model was constructed to give insight into pelvic organ motion as a result of bladder filling changes. Methods: The authors used finite element (FE) modeling to simulate bladder wall deformation caused by urine inflow. For ten volunteers, a series of MRI scans of the pelvic

  2. Stress and Deformation Analysis in Base Isolation Elements Using the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Claudiu Iavornic

    2011-01-01

    Full Text Available Modern tools such as the Finite Element Method can be used to study the behavior of elastomeric isolation systems. The simulation results obtained in this way provide a large series of data about the behavior of elastomeric isolation bearings under different types of loads and help in making the right decisions regarding the geometrical optimizations needed to improve such devices.

  3. Elastically deformable models based on the finite element method accelerated on graphics hardware using CUDA

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    Elastically deformable models have found applications in various areas ranging from mechanical sciences and engineering to computer graphics. The method of Finite Elements has been the tool of choice for solving the underlying PDE, when accuracy and stability of the computations are more important

  4. Thermodynamic modeling of the Ca-Sn system based on finite temperature quantities from first-principles and experiment

    International Nuclear Information System (INIS)

    Ohno, M.; Kozlov, A.; Arroyave, R.; Liu, Z.K.; Schmid-Fetzer, R.

    2006-01-01

    The thermodynamic model of the Ca-Sn system was obtained, utilizing the first-principles total energies and heat capacities calculated from 0 K to the melting points of the major phases. Since the first-principles result for the formation energy of the dominating Ca₂Sn intermetallic phase is drastically different from the reported experimental data, we performed two types of thermodynamic modeling: one based on the first-principles output and the other based on the experimental data. In the former modeling, the Gibbs energies of the intermetallic compounds were fully quantified from the first-principles finite temperature properties and the superiority of the former thermodynamic description is demonstrated. It is shown that it is the combination of finite temperature first-principles calculations and the Calphad modeling tool that provides a sound basis for identifying and deciding on conflicting key thermodynamic data in the Ca-Sn system.

  5. Development of a finite-element-based design sensitivity analysis for buckling and postbuckling of composite plates

    Directory of Open Access Journals (Sweden)

    Guo Ruijiang

    1995-01-01

    Full Text Available A finite element based sensitivity analysis procedure is developed for buckling and postbuckling of composite plates. This procedure is based on the direct differentiation approach combined with the reference volume concept. Linear elastic material model and nonlinear geometric relations are used. The sensitivity analysis technique results in a set of linear algebraic equations which are easy to solve. The procedure developed provides the sensitivity derivatives directly from the current load and responses by solving the set of linear equations. Numerical results are presented and are compared with those obtained using finite difference technique. The results show good agreement except at points near critical buckling load where discontinuities occur. The procedure is very efficient computationally.

  6. Finite element based nonlinear normalization of human lumbar intervertebral disc stiffness to account for its morphology.

    Science.gov (United States)

    Maquer, Ghislain; Laurent, Marc; Brandejsky, Vaclav; Pretterklieber, Michael L; Zysset, Philippe K

    2014-06-01

    Disc degeneration, usually associated with low back pain and changes of intervertebral stiffness, represents a major health issue. As the intervertebral disc (IVD) morphology influences its stiffness, the link between mechanical properties and degenerative grade is partially lost without an efficient normalization of the stiffness with respect to the morphology. Moreover, although the behavior of soft tissues is highly nonlinear, only linear normalization protocols have been defined so far for the disc stiffness. Thus, the aim of this work is to propose a nonlinear normalization based on finite elements (FE) simulations and evaluate its impact on the stiffness of human anatomical specimens of lumbar IVD. First, a parameter study involving simulations of biomechanical tests (compression, flexion/extension, bilateral torsion and bending) on 20 FE models of IVDs with various dimensions was carried out to evaluate the effect of the disc's geometry on its compliance and establish stiffness/morphology relations necessary to the nonlinear normalization. The computed stiffness was then normalized by height (H), cross-sectional area (CSA), polar moment of inertia (J) or moments of inertia (Ixx, Iyy) to quantify the effect of both linear and nonlinear normalizations. In the second part of the study, T1-weighted MRI images were acquired to determine H, CSA, J, Ixx and Iyy of 14 human lumbar IVDs. Based on the measured morphology and pre-established relation with stiffness, linear and nonlinear normalization routines were then applied to the compliance of the specimens for each quasi-static biomechanical test. The variability of the stiffness prior to and after normalization was assessed via coefficient of variation (CV). The FE study confirmed that larger and thinner IVDs were stiffer while the normalization strongly attenuated the effect of the disc geometry on its stiffness. Yet, notwithstanding the results of the FE study, the experimental stiffness showed consistently

  7. A finite volume solver for three dimensional debris flow simulations based on a single calibration parameter

    Science.gov (United States)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter

    2016-04-01

    Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase accounting for the air is kept separate to account for the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as
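
    The fine-material suspension above follows a Herschel-Bulkley law, τ = τ_y + K·γ̇ⁿ. The sketch below evaluates a simply regularised apparent viscosity from that law; the parameter values are illustrative assumptions, whereas in the cited model they are functions of water content, solid concentration and clay mineralogy.

```python
import numpy as np

# Herschel-Bulkley rheology for the fine-material suspension:
#   tau = tau_y + K * gamma_dot**n   (for tau > tau_y)
# Illustrative parameters; in the cited model they depend on water content,
# volumetric solid concentration and clay mineralogy.
tau_y, K, n = 50.0, 20.0, 0.4     # yield stress (Pa), consistency (Pa s^n), flow index

def effective_viscosity(gamma_dot, gamma_min=1e-3):
    """Shear-rate-dependent apparent viscosity with a simple regularisation."""
    g = np.maximum(gamma_dot, gamma_min)   # avoid division by zero at rest
    return tau_y / g + K * g ** (n - 1.0)

for g in (0.01, 0.1, 1.0, 10.0):
    print(g, effective_viscosity(g))       # shear-thinning above the yield stress
```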

  8. A Smoothed Finite Element-Based Elasticity Model for Soft Bodies

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2017-01-01

    Full Text Available One of the major challenges in mesh-based deformation simulation in computer graphics is to deal with mesh distortion. In this paper, we present a novel mesh-insensitive and softer method for simulating deformable solid bodies under the assumptions of linear elastic mechanics. A face-based strain smoothing method is adopted to alleviate mesh distortion instead of the traditional spatial adaptive smoothing method. Then, we propose a way to combine the strain smoothing method and the corotational method. With this approach, the amplitude and frequency of transient displacements are slightly affected by the distorted mesh. Realistic simulation results are generated under large rotation using a linear elasticity model without adding significant complexity or computational cost to the standard corotational FEM. Meanwhile, softening effect is a by-product of our method.

  9. Experimental research and use of finite elements method on mechanical behaviors of honeycomb structures assembled with epoxy-based adhesives reinforced with nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Akkus, Harun [Technical Sciences Vocational School, Amasya University, Amasya (Turkey)]; Duzcukoglu, Hayrettin; Sahin, Omer Sinan [Mechanical Engineering Department, Selcuk University, Selcuk (Turkey)]

    2017-01-15

    This study utilized experimental and finite element methods to investigate the mechanical behavior of aluminum honeycomb structures under compression. Aluminum honeycomb composite structures were subjected to pressing experiments according to the standard ASTM C365. Resistive forces in response to compression and maximum compressive force values were measured. Structural damage was observed. In the honeycomb structure, the cell width decreased as the compressive force increased. Results obtained with finite element models generated using ANSYS Workbench 15 were validated. Experimental results paralleled the finite element modeling results. The ANSYS results were approximately 85 % reliable.

  10. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    Science.gov (United States)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on "cook-and-look" approach. This approach is time-consuming, labor intensive and expensive and may not result in optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal profiles of various heterogeneous foods such as multi-component meal (chicken nuggets and mashed potato), multi-component and multi-layered meal (lasagna), and multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using finite element method in commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets as compared 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor assisted microwave heating of a

  11. Standardized computer-based organized reporting of EEG

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C.

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the second, revised version of SCORE (Standardized Computer-based Organized Reporting of EEG), which is presented in this paper. The revised terminology was implemented in a software package (SCORE EEG), which was tested in clinical practice on 12,160 EEG recordings. Standardized terms implemented in SCORE are used to report the features of clinical relevance, extracted while assessing the EEGs. In the end, the diagnostic significance is scored, using a standardized list of terms. SCORE has specific modules for scoring seizures (including seizure semiology and ictal EEG patterns), neonatal recordings (including features specific for this age group), and for Critical Care EEG Terminology.

  12. Application of Finite Element, Phase-field, and CALPHAD-based Methods to Additive Manufacturing of Ni-based Superalloys.

    Science.gov (United States)

    Keller, Trevor; Lindwall, Greta; Ghosh, Supriyo; Ma, Li; Lane, Brandon M; Zhang, Fan; Kattner, Ursula R; Lass, Eric A; Heigel, Jarred C; Idell, Yaakov; Williams, Maureen E; Allen, Andrew J; Guyer, Jonathan E; Levine, Lyle E

    2017-10-15

    Numerical simulations are used in this work to investigate aspects of microstructure and microsegregation during rapid solidification of a Ni-based superalloy in a laser powder bed fusion additive manufacturing process. Thermal modeling by finite element analysis simulates the laser melt pool, with surface temperatures in agreement with in situ thermographic measurements on Inconel 625. Geometric and thermal features of the simulated melt pools are extracted and used in subsequent mesoscale simulations. Solidification in the melt pool is simulated on two length scales. For the multicomponent alloy Inconel 625, microsegregation between dendrite arms is calculated using the Scheil-Gulliver solidification model and DICTRA software. Phase-field simulations, using Ni-Nb as a binary analogue to Inconel 625, produced microstructures with primary cellular/dendritic arm spacings in agreement with measured spacings in experimentally observed microstructures and a lesser extent of microsegregation than predicted by DICTRA simulations. The composition profiles are used to compare thermodynamic driving forces for nucleation against experimentally observed precipitates identified by electron and X-ray diffraction analyses. Our analysis lists the precipitates that may form from FCC phase of enriched interdendritic compositions and compares these against experimentally observed phases from 1 h heat treatments at two temperatures: stress relief at 1143 K (870 °C) or homogenization at 1423 K (1150 °C).
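
    Microsegregation between dendrite arms in the abstract above is computed with the Scheil-Gulliver model. A minimal sketch of that relation for a binary Ni-Nb analogue is given below; the partition coefficient and nominal composition are assumed illustrative values, not output of the DICTRA calculations.

```python
import numpy as np

# Scheil-Gulliver microsegregation for a binary analogue (Ni-Nb):
#   C_s = k * C_0 * (1 - f_s)**(k - 1)
# k and C0 are assumed illustrative values, not those from the DICTRA runs.
k, C0 = 0.5, 4.0          # partition coefficient, nominal Nb content (wt.%)

fs = np.linspace(0.0, 0.99, 100)        # solid fraction
Cs = k * C0 * (1.0 - fs) ** (k - 1.0)   # solid composition at the interface
Cl = C0 * (1.0 - fs) ** (k - 1.0)       # interdendritic liquid composition

print(Cs[0], Cl[-1])   # first solid to form vs. last, Nb-enriched liquid
```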

  13. Flexibility of short DNA helices with finite-length effect: From base pairs to tens of base pairs

    International Nuclear Information System (INIS)

    Wu, Yuan-Yan; Bao, Lei; Zhang, Xi; Tan, Zhi-Jie

    2015-01-01

    Flexibility of short DNA helices is important for biological functions such as nucleosome formation and DNA-protein recognition. Recent experiments suggest that short DNAs of tens of base pairs (bps) may have apparently higher flexibility than those of kilo bps, while there is still debate about such high flexibility. In the present work, we have studied the flexibility of short DNAs with finite lengths of 5–50 bps by all-atomistic molecular dynamics simulations and Monte Carlo simulations with the worm-like chain model. Our microscopic analyses reveal that short DNAs have apparently high flexibility, which is attributed to the significantly strong bending and stretching flexibilities of ∼6 bps at each helix end. Correspondingly, the apparent persistence length l_p of short DNAs increases gradually from ∼29 nm to ∼45 nm as DNA length increases from 10 to 50 bps, in accordance with the available experimental data. Our further analyses show that short DNAs excluding ∼6 bps at each helix end have similar flexibility to those of kilo bps and can be described by the worm-like chain model with l_p ∼ 50 nm.
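
    For context, the worm-like chain description referred to above relates the persistence length to the mean-square end-to-end distance. The sketch below evaluates that standard relation for short fragments using a nominal l_p of 50 nm and 0.34 nm rise per base pair; it is only an illustration of the model, not a reproduction of the cited simulations.

```python
import numpy as np

# Worm-like chain mean-square end-to-end distance:
#   <R^2> = 2*lp*L*(1 - (lp/L)*(1 - exp(-L/lp)))
# 0.34 nm rise per base pair; lp = 50 nm is the nominal long-DNA value.
rise = 0.34   # nm per base pair

def mean_sq_end_to_end(n_bp, lp=50.0):
    L = n_bp * rise                      # contour length (nm)
    return 2.0 * lp * L * (1.0 - (lp / L) * (1.0 - np.exp(-L / lp)))

for n in (10, 20, 30, 50):
    # Root-mean-square end-to-end distance approaches the contour length
    # for short, nearly rigid fragments.
    print(n, np.sqrt(mean_sq_end_to_end(n)))
```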

  14. A study of science leadership and science standards in exemplary standards-based science programs

    Science.gov (United States)

    Carpenter, Wendy Renae

    The purpose for conducting this qualitative study was to explore best practices of exemplary standards-based science programs and instructional leadership practices in a charter high school and in a traditional high school. The focus of this study included how twelve participants aligned practices to National Science Education Standards to describe their science programs and science instructional practices. This study used a multi-site case study qualitative design. Data were obtained through a review of literature, interviews, observations, review of educational documents, and researcher's notes collected in a field log. The methodology used was a multi-site case study because of the potential, through cross analysis, for providing greater explanation of the findings in the study (Merriam, 1988). This study discovered six characteristics about the two high school's science programs that enhance the literature found in the National Science Education Standards; (a) Culture of expectations for learning-In exemplary science programs teachers are familiar with a wide range of curricula. They have the ability to examine critically and select activities to use with their students to promote the understanding of science; (b) Culture of varied experiences-In exemplary science programs students are provided different paths to learning, which help students, take in information and make sense of concepts and skills that are set forth by the standards; (c) Culture of continuous feedback-In exemplary science programs teachers and students work together to engage students in ongoing assessments of their work and that of others as prescribed in the standards; (d) Culture of Observations-In exemplary science programs students, teachers, and principals reflect on classroom instructional practices; teachers receive ongoing evaluations about their teaching and apply feedback towards improving practices as outlined in the standards; (e) Culture of continuous learning-In exemplary

  15. FSM-F: Finite State Machine Based Framework for Denial of Service and Intrusion Detection in MANET.

    Science.gov (United States)

    N Ahmed, Malik; Abdullah, Abdul Hanan; Kaiwartya, Omprakash

    2016-01-01

    Due to the continuous advancements in wireless communication in terms of quality of communication and affordability of the technology, the application area of Mobile Adhoc Networks (MANETs) is growing significantly, particularly in military and disaster management. Considering the sensitivity of the application areas, security in terms of detection of Denial of Service (DoS) attacks and intrusion has become a prime concern in research and development in the area. The security systems suggested in the past have a state recognition problem: the system is not able to accurately identify the actual state of the network nodes due to the absence of a clear definition of node states. In this context, this paper proposes a framework based on a Finite State Machine (FSM) for denial of service and intrusion detection in MANETs. In particular, an Interruption Detection system for the Adhoc On-demand Distance Vector protocol (ID-AODV) is presented based on a finite state machine. The packet dropping and sequence number attacks are closely investigated, and detection systems for both types of attacks are designed. The major functional modules of ID-AODV include a network monitoring system, a finite state machine and an attack detection model. Simulations are carried out in the network simulator NS-2 to evaluate the performance of the proposed framework. A comparative evaluation of the performance is also performed against the state-of-the-art techniques RIDAN and AODV. The performance evaluations attest to the benefits of the proposed framework in terms of providing better security against denial of service and intrusion attacks.

  16. FSM-F: Finite State Machine Based Framework for Denial of Service and Intrusion Detection in MANET.

    Directory of Open Access Journals (Sweden)

    Malik N Ahmed

    Full Text Available Due to the continuous advancements in wireless communication in terms of quality of communication and affordability of the technology, the application area of Mobile Adhoc Networks (MANETs) is growing significantly, particularly in military and disaster management. Considering the sensitivity of the application areas, security in terms of detection of Denial of Service (DoS) attacks and intrusion has become a prime concern in research and development in the area. The security systems suggested in the past have a state recognition problem: the system is not able to accurately identify the actual state of the network nodes due to the absence of a clear definition of node states. In this context, this paper proposes a framework based on a Finite State Machine (FSM) for denial of service and intrusion detection in MANETs. In particular, an Interruption Detection system for the Adhoc On-demand Distance Vector protocol (ID-AODV) is presented based on a finite state machine. The packet dropping and sequence number attacks are closely investigated, and detection systems for both types of attacks are designed. The major functional modules of ID-AODV include a network monitoring system, a finite state machine and an attack detection model. Simulations are carried out in the network simulator NS-2 to evaluate the performance of the proposed framework. A comparative evaluation of the performance is also performed against the state-of-the-art techniques RIDAN and AODV. The performance evaluations attest to the benefits of the proposed framework in terms of providing better security against denial of service and intrusion attacks.
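
    Both records above describe a finite-state-machine view of node behaviour. As a minimal illustration of the idea, the sketch below tracks a single node through a small FSM driven by observed events; the states, events and transition rules are invented for illustration and are not the actual states of ID-AODV.

```python
# Minimal finite-state-machine sketch for per-node behaviour tracking, in the
# spirit of ID-AODV. States, events and transitions here are illustrative only.
TRANSITIONS = {
    ("NORMAL", "packet_forwarded"): "NORMAL",
    ("NORMAL", "packet_dropped"): "SUSPICIOUS",
    ("SUSPICIOUS", "packet_forwarded"): "NORMAL",
    ("SUSPICIOUS", "packet_dropped"): "MALICIOUS",
    ("NORMAL", "bad_sequence_number"): "MALICIOUS",
    ("SUSPICIOUS", "bad_sequence_number"): "MALICIOUS",
}

def run_fsm(events, state="NORMAL"):
    """Feed observed events for one node through the FSM; MALICIOUS is absorbing."""
    for e in events:
        if state == "MALICIOUS":
            break
        state = TRANSITIONS.get((state, e), state)
    return state

print(run_fsm(["packet_forwarded", "packet_dropped", "packet_dropped"]))  # -> MALICIOUS
```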

  17. Extensions to a nonlinear finite element axisymmetric shell model based on Reissner's shell theory

    International Nuclear Information System (INIS)

    Cook, W.A.

    1981-01-01

    A finite element shell-of-revolution model has been developed to analyze shipping containers under severe impact conditions. To establish the limits for this shell model, I studied the basic assumptions used in its development; these are listed in this paper. Several extensions were evident from the study of these limits: a thick shell, a plastic hinge, and a linear normal stress. (orig./HP)

  18. Sparse direct solver for large finite element problems based on the minimum degree algorithm

    Czech Academy of Sciences Publication Activity Database

    Pařík, Petr; Plešek, Jiří

    2017-01-01

    Vol. 113, November (2017), pp. 2-6, ISSN 0965-9978 R&D Projects: GA ČR(CZ) GA15-20666S; GA MŠk(CZ) EF15_003/0000493 Institutional support: RVO:61388998 Keywords: sparse direct solution * finite element method * large sparse linear systems Subject RIV: JR - Other Machinery OECD field: Mechanical engineering Impact factor: 3.000, year: 2016 https://www.sciencedirect.com/science/article/pii/S0965997817302582

  19. Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh

    International Nuclear Information System (INIS)

    Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao

    2008-01-01

    A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparsely structured Hamiltonian matrices. The method is well suited for parallel implementation without using the Fourier transform. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with optimal mesh density in different regions.

  20. Stress analysis and deformation prediction of sheet metal workpieces based on finite element simulation

    OpenAIRE

    Ren Penghao; Wang Aimin; Wang Xiaolong; Zhang Yanlin

    2017-01-01

    After machining of aluminum alloy sheet metal parts, the release of residual stress will cause a large deformation. To solve this problem, this paper takes an aluminum alloy sheet aerospace workpiece as an example, establishes the theoretical model of elastic deformation and the finite element model, places a quantitative initial stress in each element of the machining area, and analyses the stress release and resulting deformation by simulation. Through simulative analysis of the deformation under different initial stress releases ...

  1. Investigation of High-Speed Cryogenic Machining Based on Finite Element Approach

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashaki

    Full Text Available Abstract The simulation of the cryogenic machining process has rarely been studied, because a three-dimensional model and a long process duration are required in the finite element method. In this study, to overcome this limitation, a 2.5D finite element model has been developed for the cryogenic machining process using the commercial finite element software ABAQUS, and the chip formation procedure was investigated under more realistic assumptions. In the proposed method, liquid nitrogen is used as the coolant. For modeling friction during the tool-chip interaction, the Coulomb law has been used. To simulate the plasticity behavior and the failure criterion, the Johnson-Cook model was used, and unlike previous investigations, thermal and mechanical properties of the materials as functions of temperature were applied in the software. After examining the accuracy of the model against available experimental data, the effects of parameters such as rake angle and cutting speed, as well as dry machining of the aluminum alloy, were studied using a coupled dynamic temperature solution. Results indicated that at a cutting velocity of 10 m/s, cryogenic cooling decreased the tool temperature by 60 percent in comparison with dry machining. Furthermore, the chips produced by cryogenic machining were continuous and without fracture, in contrast to dry machining.
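
    The plasticity and thermal-softening behaviour above is described by the Johnson-Cook model. A minimal evaluation sketch of the Johnson-Cook flow stress is given below; the material constants are generic aluminium-alloy-style values assumed for illustration, not the temperature-dependent data used in the cited study.

```python
import numpy as np

# Johnson-Cook flow stress:
#   sigma = (A + B*eps**n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T_star**m)
# Constants below are typical aluminium-alloy-order values, for illustration only.
A, B, n, C, m = 324.0, 114.0, 0.42, 0.002, 1.34   # MPa, MPa, -, -, -
epsdot0, T_room, T_melt = 1.0, 25.0, 630.0        # 1/s, deg C, deg C

def johnson_cook(eps, epsdot, T):
    """Flow stress (MPa) at plastic strain eps, strain rate epsdot (1/s), temperature T (C)."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * eps**n) * (1.0 + C * np.log(epsdot / epsdot0)) * (1.0 - T_star**m)

# Thermal softening: the same strain and strain rate at a cooler vs. hotter cutting zone
print(johnson_cook(0.5, 1e4, 100.0), johnson_cook(0.5, 1e4, 400.0))
```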

  2. An induction-based magnetohydrodynamic 3D code for finite magnetic Reynolds number liquid-metal flows in fusion blankets

    International Nuclear Information System (INIS)

    Kawczynski, Charlie; Smolentsev, Sergey; Abdou, Mohamed

    2016-01-01

    Highlights: • A new induction-based magnetohydrodynamic code was developed using a finite difference method. • The code was benchmarked against purely hydrodynamic and MHD flows for low and finite magnetic Reynolds number. • Possible applications of the new code include liquid-metal MHD flows in the breeder blanket during unsteady events in the plasma. - Abstract: Most numerical analysis performed in the past for MHD flows in liquid-metal blankets were based on the assumption of low magnetic Reynolds number and involved numerical codes that utilized electric potential as the main electromagnetic variable. One limitation of this approach is that such codes cannot be applied to truly unsteady processes, for example, MHD flows of liquid-metal breeder/coolant during unsteady events in plasma, such as major plasma disruptions, edge-localized modes and vertical displacements, when changes in plasmas occur at millisecond timescales. Our newly developed code MOONS (Magnetohydrodynamic Object-Oriented Numerical Solver) uses the magnetic field as the main electromagnetic variable to relax the limitations of the low magnetic Reynolds number approximation for more realistic fusion reactor environments. The new code, written in Fortran, implements a 3D finite-difference method and is capable of simulating multi-material domains. The constrained transport method was implemented to evolve the magnetic field in time and assure that the magnetic field remains solenoidal within machine accuracy at every time step. Various verification tests have been performed including purely hydrodynamic flows and MHD flows at low and finite magnetic Reynolds numbers. Test results have demonstrated very good accuracy against known analytic solutions and other numerical data.

  3. An induction-based magnetohydrodynamic 3D code for finite magnetic Reynolds number liquid-metal flows in fusion blankets

    Energy Technology Data Exchange (ETDEWEB)

    Kawczynski, Charlie; Smolentsev, Sergey, E-mail: sergey@fusion.ucla.edu; Abdou, Mohamed

    2016-11-01

    Highlights: • A new induction-based magnetohydrodynamic code was developed using a finite difference method. • The code was benchmarked against purely hydrodynamic and MHD flows for low and finite magnetic Reynolds number. • Possible applications of the new code include liquid-metal MHD flows in the breeder blanket during unsteady events in the plasma. - Abstract: Most numerical analysis performed in the past for MHD flows in liquid-metal blankets were based on the assumption of low magnetic Reynolds number and involved numerical codes that utilized electric potential as the main electromagnetic variable. One limitation of this approach is that such codes cannot be applied to truly unsteady processes, for example, MHD flows of liquid-metal breeder/coolant during unsteady events in plasma, such as major plasma disruptions, edge-localized modes and vertical displacements, when changes in plasmas occur at millisecond timescales. Our newly developed code MOONS (Magnetohydrodynamic Object-Oriented Numerical Solver) uses the magnetic field as the main electromagnetic variable to relax the limitations of the low magnetic Reynolds number approximation for more realistic fusion reactor environments. The new code, written in Fortran, implements a 3D finite-difference method and is capable of simulating multi-material domains. The constrained transport method was implemented to evolve the magnetic field in time and assure that the magnetic field remains solenoidal within machine accuracy at every time step. Various verification tests have been performed including purely hydrodynamic flows and MHD flows at low and finite magnetic Reynolds numbers. Test results have demonstrated very good accuracy against known analytic solutions and other numerical data.

  4. 6 CFR 27.230 - Risk-based performance standards.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Risk-based performance standards. 27.230 Section 27.230 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY... the facility and that discourages abuse through established disciplinary measures; (4) Deter, Detect...

  5. Thermal-Diffusivity-Based Frequency References in Standard CMOS

    NARCIS (Netherlands)

    Kashmiri, S.M.

    2012-01-01

    In recent years, a lot of research has been devoted to the realization of accurate integrated frequency references. A thermal-diffusivity-based (TD) frequency reference provides an alternative method of on-chip frequency generation in standard CMOS technology. A frequency-locked loop locks the

  6. Maintenance Staffing Standards for Zero-Based Budgeting.

    Science.gov (United States)

    Adams, Matthew C.; And Others

    1998-01-01

    Discusses school preventive maintenance and the variables associated with maintenance staffing standards that address a zero-based budgeting environment. Explores preventive-maintenance measurement for staffing requirements, defines staffing levels and job descriptions, and outlines the factors to consider when creating a maintenance program and…

  7. Finite element method for the rising and the slip of column-plate base for usual connections

    Directory of Open Access Journals (Sweden)

    Alliche A.

    2010-06-01

    Full Text Available In the present paper, a finite element approach for calculating the rising and the relative slip of steel base plate connections is proposed. Two types of connections are studied. The first consists of a base plate welded to the column end and attached to the reinforced concrete foundation by two anchor bolts. These bolts are placed on the major axis of the I-shaped section used as the column, one anchor bolt on each side of the web. In the second configuration, the connection includes a base plate and four anchor bolts placed outside the flanges of the I-shaped or hollow section. To take into account the real behaviour of this connection, a finite element model is developed which considers the geometrical and material nonlinearities of the contact and the cracking in the concrete foundation. To study the rising of the base plate, an approach treating the contact-friction problem between the base plate and the foundation is developed. This approach is based on a unilateral contact law to which Coulomb friction is added. The numerical resolution is ensured by the augmented Lagrangian method. For the behaviour of the concrete foundation, the developed model is based on a compressive elastoplastic model. Curves of rising height versus rotation and rising height versus slip displacement are plotted.

  8. Proprietary, standard, and government-supported nuclear data bases

    International Nuclear Information System (INIS)

    Poncelet, C.G.; Ozer, O.; Harris, D.R.

    1975-07-01

    This study presents an assessment of the complex situation surrounding nuclear data bases for nuclear power technology. Requirements for nuclear data bases are identified as regards engineering functions and system applications for the many and various user groups that rely on nuclear data bases. Current practices in the development and generation of nuclear data sets are described, and the competitive aspect of design nuclear data set development is noted. The past and current role of the federal government in nuclear data base development is reviewed, and the relative merits of continued government involvement are explored. National policies of the United States and other industrial countries regarding the availability of nationally supported nuclear data information are reviewed. Current proprietary policies of reactor vendors regarding design library data sets are discussed along with the basis for such proprietary policies. The legal aspects of protective policies are explored as are their impacts on the nuclear power industry as a whole. The effect of the regulatory process on the availability and documentation of nuclear data bases is examined. Current nuclear data standard developments are reviewed, including a discussion of the standard preparation process. Standards currently proposed or in preparation that directly relate to nuclear data bases are discussed in some detail. (auth)

  9. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    Full Text Available The major challenge with the fractal image/video coding technique is that it requires a long encoding time. Therefore, how to reduce the encoding time remains the key research issue in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using the modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e., the displacement of pixels, and WFA is used for the coding as it behaves like fractal coding (FC). WFA represents an image (frame or motion-compensated prediction error) based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentations and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on the combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of the performance evaluation parameters, i.e., mean absolute difference (MAD) and average search points required per frame. The mean absolute difference (MAD) distortion function is used as the block distortion measure (BDM). Finally, the developed approaches, namely, MTSS and WFA, MTSS and FC, and Plane FC (applied on every frame), are compared with each other. The experiments are carried out on the standard uncompressed video databases, namely, akiyo, bus, mobile, suzie, traffic, football, soccer, ice etc. Developed
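
    As background to the modified search, the sketch below implements the classic three-step search (TSS) for a single block with the SAD criterion; the rectangular-plus-hexagonal pattern of MTSS is not reproduced here, and the toy frames are synthetic data made up for the example.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def three_step_search(block, ref, y0, x0, step=4):
    """Classic three-step search: probe 9 points around the centre with steps 4, 2, 1."""
    h, w = block.shape
    best_y, best_x, best = y0, x0, sad(block, ref, y0, x0)
    while step >= 1:
        cy, cx = best_y, best_x
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                    cost = sad(block, ref, y, x)
                    if cost < best:
                        best, best_y, best_x = cost, y, x
        step //= 2
    return (best_y - y0, best_x - x0), best   # motion vector and its SAD

# Toy frames: a smooth synthetic image and a block extracted at a known offset.
yy, xx = np.mgrid[0:64, 0:64]
ref = 50.0 * np.sin(0.3 * yy) + 50.0 * np.cos(0.2 * xx)
cur_block = ref[18:34, 23:39].copy()              # true position (18, 23)
print(three_step_search(cur_block, ref, 16, 20))  # should report a vector near (2, 3)
```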

  10. Image standards in Tissue-Based Diagnosis (Diagnostic Surgical Pathology)

    Directory of Open Access Journals (Sweden)

    Vollmer Ekkehard

    2008-04-01

    Full Text Available Abstract Background Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. Aims To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. Theory and experiences Images used in tissue-based diagnosis present with pathology – specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease – image combination, human – diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. Technological aspects should deal with image

  11. Standardized computer-based organized reporting of EEG

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C.

    2017-01-01

    Standardized terminology for computer-based assessment and reporting of EEG has been previously developed in Europe. The International Federation of Clinical Neurophysiology established a taskforce in 2013 to develop this further, and to reach international consensus. This work resulted in the second, revised version of SCORE (Standardized Computer-based Organized Reporting of EEG), which is presented in this paper. The revised terminology was implemented in a software package (SCORE EEG), which was tested in clinical practice on 12,160 EEG recordings. Standardized terms implemented in SCORE are used to report the features of clinical relevance, extracted while assessing the EEGs. Selection of the terms is context sensitive: initial choices determine the subsequently presented sets of additional choices. This process automatically generates a report and feeds these features into a database...

  12. Development of a three-dimensional neutron transport code DFEM based on the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1996-01-01

    A three-dimensional neutron transport code, DFEM, has been developed using the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which both space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of the neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. The eigenvalues and flux distributions for real-scale fast reactors and the NEA benchmark problems were presented and discussed in comparison with the results of other transport codes. (author)

  13. Modeling of storage tank settlement based on the United States standards

    Directory of Open Access Journals (Sweden)

    Gruchenkova Alesya

    2018-01-01

    Full Text Available Up to 60% of storage tanks in operation have uneven settlement of the outer bottom contour, which often leads to accidents. Russian and foreign regulatory documents have different requirements for strain limits of metal structures. There is an increasing need for harmonizing regulatory documents. The aim of this study is to theoretically justify and to assess the possibility of applying the U.S. standards for specifying the allowable settlement of storage tanks used in Russia. The allowable uneven settlement was calculated for a vertical steel tank (VST-20000 according to API-653, a standard of the American Petroleum Institute. The calculated allowable settlement levels were compared with those established by Russian standards. Based on the finite element method, the uneven settlement development process of a storage tank was modeled. Stress-strain state parameters of tank structures were obtained at the critical levels established in API-653. Relationships of maximum equivalent stresses in VST metal structures to the vertical settlement component for settlement zones of 6 to 72 m in length were determined. When the uneven settlement zone is 6 m in length, the limit state is found to be caused by 30-mm vertical settlement, while stresses in the wall exceed 330 MPa. When the uneven settlement zone is 36 m in length, stresses reach the yield point only at 100-mm vertical settlement.

  14. Finite element based design optimization of WENDELSTEIN 7-X divertor components under high heat flux loading

    International Nuclear Information System (INIS)

    Plankensteiner, A.; Leuprecht, A.; Schedler, B.; Scheiber, K.-H.; Greuner, H.

    2007-01-01

    In the divertor of the nuclear fusion experiment WENDELSTEIN 7-X (W7-X), plasma-facing high heat flux target elements have to withstand severe loading conditions. The thermally induced mechanical stressing turns out to be most critical with respect to lifetime predictions of the target elements. Therefore, different design variants of these CFC flat-tile armoured high heat flux components have been analysed with the finite element package ABAQUS, aiming at the derivation of an optimized component design under high heat flux conditions. The investigated design variants also comprise promising alterations in the cooling channel design and castellation of the CFC flat tiles which, however, are evaluated to be critical from a system integration and manufacturing standpoint, respectively. Therefore, the numerical study presented here mainly comprises a reference variant that is studied comparatively with a variant incorporating a bi-layer-type AMC-Cu/OF-Cu interlayer at the CFC/Cu interface. The thermo-mechanical material characteristics are accounted for in the finite element models, with elastic-plastic properties being assigned to the metallic sections CuCrZr, AMC-Cu and OF-Cu, respectively, and orthotropic nonlinear-elastic properties being used for the CFC sections. The calculated temporal and spatial evolution of temperatures, stresses and strains for the individual design variants is evaluated, with special attention being paid to stress measures, plastic strains and damage parameters indicating the risk of failure of the CFC and the CFC/Cu interface, respectively. In this way the finite element analysis makes it possible to numerically derive an optimized design variant within the framework of the expected operating conditions in W7-X.

  15. Parametric optimization and design validation based on finite element analysis of hybrid socket adapter for transfemoral prosthetic knee.

    Science.gov (United States)

    Kumar, Neelesh

    2014-10-01

    Finite element analysis has been universally employed for stress and strain analysis in lower extremity prosthetics. The socket adapter was the principal subject of interest due to its importance in deciding the knee motion range. This article focuses on the static and dynamic stress analysis of the hybrid adapter designed by the authors. A standard mechanical design validation approach using von Mises stress was followed. Four materials were considered for the analysis, namely, carbon fiber, oil-filled nylon, Al-6061, and mild steel. The paper analyses the static and dynamic stresses on the designed hybrid adapter, which incorporates features of conventional male and female socket adapters. The finite element analysis was carried out for different possible angles of knee flexion, simulating static and dynamic gait situations. Research was carried out on available socket adapter designs. The mechanical design of the hybrid adapter was conceptualized and a CAD model was generated using the Inventor modelling software. Static and dynamic stress analyses were carried out on different materials for optimization. The finite element analysis was carried out with the software Autodesk Inventor Professional Ver. 2011. The peak value of von Mises stress occurred in the neck region of the adapter and in the lower face region at the rod eye-adapter junction in the static and dynamic analyses, respectively. Oil-filled nylon was found to be the best material among the four with respect to strength, weight, and cost. Research investigations on newer materials for the development of improved prostheses will immensely benefit amputees. The study analyses the static and dynamic stresses on the knee joint adapter to identify a better material for the hybrid adapter design. © The International Society for Prosthetics and Orthotics 2013.
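
    As a concrete illustration of the von Mises design-validation check described above, the following minimal Python sketch computes the equivalent stress from a stress state and compares it with assumed yield strengths for the four candidate materials; the stress components and strength values are illustrative placeholders, not the paper's data.

        import math

        # Hypothetical yield strengths in MPa (placeholders, not the paper's data)
        yield_strength = {"carbon fiber": 600.0, "oil-filled nylon": 80.0,
                          "Al-6061": 275.0, "mild steel": 250.0}

        def von_mises(sx, sy, sz, txy, tyz, tzx):
            """Equivalent (von Mises) stress from the six stress components."""
            return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                             + 3.0 * (txy**2 + tyz**2 + tzx**2))

        # Example stress state (MPa) at the critical neck region, purely illustrative
        s_eq = von_mises(45.0, 12.0, 0.0, 18.0, 0.0, 5.0)
        for material, sy_limit in yield_strength.items():
            status = "OK" if s_eq < sy_limit else "exceeds yield"
            print(f"{material:16s} sigma_vm = {s_eq:6.1f} MPa -> {status}")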

  16. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  17. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Science.gov (United States)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
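
    The second-moment statistics mentioned above can be turned into a failure probability with a very simple reliability model. The sketch below assumes independent, normally distributed resistance and load effect, which is only a hedged stand-in for the PFEM-based formulation; the numerical values are invented.

        from scipy.stats import norm

        # Assumed means and standard deviations (illustrative values only)
        mu_R, sigma_R = 120.0, 10.0   # resistance (capacity)
        mu_S, sigma_S = 80.0, 15.0    # load effect (demand)

        # Second-moment reliability index for the margin M = R - S (R, S independent normal)
        beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5
        p_failure = norm.cdf(-beta)

        print(f"reliability index beta = {beta:.3f}, P_f = {p_failure:.3e}")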

  18. Extensions to a nonlinear finite-element axisymmetric shell model based on Reissner's shell theory

    International Nuclear Information System (INIS)

    Cook, W.A.

    1981-01-01

    Extensions to shell analysis not usually associated with shell theory are described in this paper. These extensions involve thick shells, nonlinear materials, a linear normal stress approximation, and a changing shell thickness. A finite element shell-of-revolution model has been developed to analyze nuclear material shipping containers under severe impact conditions. To establish the limits for this shell model, the basic assumptions used in its development were studied; these are listed in this paper. Several extensions were evident from the study of these limits: a thick shell, a plastic hinge, and a linear normal stress

  19. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  20. Probabilistic finite element stiffness of a laterally loaded monopile based on an improved asymptotic sampling method

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2015-01-01

    The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
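
    For readers unfamiliar with quasi-random sampling, the following sketch contrasts Sobol and plain Monte Carlo sampling when estimating the distribution of a response quantity; the stiffness model and the lognormal shear-strength assumption are placeholders, not the monopile model of the paper.

        import numpy as np
        from scipy.stats import norm, qmc

        rng = np.random.default_rng(0)

        def stiffness(c_u):
            """Stand-in response model: foundation stiffness as a function of undrained shear strength."""
            return 50.0 + 0.8 * c_u

        def sample_response(u):
            """Map uniform(0,1) samples to an assumed lognormal shear strength, then to stiffness."""
            c_u = np.exp(np.log(60.0) + 0.2 * norm.ppf(u))
            return stiffness(c_u)

        # Sobol quasi-random sampling (2**10 points) versus plain Monte Carlo
        sobol = qmc.Sobol(d=1, scramble=True, seed=0)
        u_qmc = sobol.random_base2(m=10).ravel()
        u_mc = rng.random(2**10)

        k_qmc, k_mc = sample_response(u_qmc), sample_response(u_mc)
        print("Sobol mean/std:", k_qmc.mean(), k_qmc.std())
        print("MC    mean/std:", k_mc.mean(), k_mc.std())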

  1. Complex wavenumber Fourier analysis of the B-spline based finite element method

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav

    2014-01-01

    Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479

  2. Spatially dependent burnup implementation into the nodal program based on the finite element response matrix method

    International Nuclear Information System (INIS)

    Yoriyaz, H.

    1986-01-01

    In this work a spatial burnup scheme and feedback effects have been implemented into the FERM ('Finite Element Response Matrix') program. The spatially dependent neutronic parameters have been considered at three levels: zonewise calculation, assemblywise calculation and pointwise calculation. Flux and power distributions and the multiplication factor were calculated and compared with the results obtained by the CITATION program. These comparisons showed that the processing time with the FERM code was hundreds of times shorter, and no significant difference was observed in the assembly-average power distribution. (Author) [pt

  3. OPTIMIZATION-BASED APPROACH TO TILING OF FINITE AREAS WITH ARBITRARY SETS OF WANG TILES

    Directory of Open Access Journals (Sweden)

    Marek Tyburec

    2017-11-01

    Full Text Available Wang tiles proved to be a convenient tool for the design of aperiodic tilings in computer graphics and in materials engineering. While there are several algorithms for the generation of finite-sized tilings, they exploit the specific structure of individual tile sets, which prevents their general usage. In this contribution, we reformulate the NP-complete tiling generation problem as a binary linear program, together with its linear and semidefinite relaxations suitable for the branch and bound method. Finally, we assess the performance of the established formulations on the generation of several aperiodic tilings reported in the literature, and conclude that the linear relaxation is better suited for the problem.
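
    The record formulates tiling generation as a binary linear program; as a simpler point of reference, the sketch below fills a small region by plain backtracking over a made-up Wang tile set, which makes the edge-matching constraints explicit. The tile set (north, east, south, west colours) is purely hypothetical.

        # Each Wang tile is (north, east, south, west) edge colours; a hypothetical set.
        TILES = [(0, 1, 0, 1), (1, 0, 1, 0), (0, 0, 1, 1), (1, 1, 0, 0)]

        def tile_region(rows, cols):
            """Fill a rows x cols region so adjacent edges match; returns tile indices or None."""
            grid = [[None] * cols for _ in range(rows)]

            def fits(r, c, t):
                n, e, s, w = TILES[t]
                if r > 0 and TILES[grid[r - 1][c]][2] != n:   # south edge of tile above must match
                    return False
                if c > 0 and TILES[grid[r][c - 1]][1] != w:   # east edge of tile to the left must match
                    return False
                return True

            def place(k):
                if k == rows * cols:
                    return True
                r, c = divmod(k, cols)
                for t in range(len(TILES)):
                    if fits(r, c, t):
                        grid[r][c] = t
                        if place(k + 1):
                            return True
                        grid[r][c] = None
                return False

            return grid if place(0) else None

        print(tile_region(4, 4))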

  4. Finite fields and applications

    CERN Document Server

    Mullen, Gary L

    2007-01-01

    This book provides a brief and accessible introduction to the theory of finite fields and to some of their many fascinating and practical applications. The first chapter is devoted to the theory of finite fields. After covering their construction and elementary properties, the authors discuss the trace and norm functions, bases for finite fields, and properties of polynomials over finite fields. Each of the remaining chapters details applications. Chapter 2 deals with combinatorial topics such as the construction of sets of orthogonal latin squares, affine and projective planes, block designs, and Hadamard matrices. Chapters 3 and 4 provide a number of constructions and basic properties of error-correcting codes and cryptographic systems using finite fields. Each chapter includes a set of exercises of varying levels of difficulty which help to further explain and motivate the material. Appendix A provides a brief review of the basic number theory and abstract algebra used in the text, as well as exercises rel...

  5. Combinatorics of transformations from standard to non-standard bases in Brauer algebras

    International Nuclear Information System (INIS)

    Chilla, Vincenzo

    2007-01-01

    Transformation coefficients between standard bases for irreducible representations of the Brauer centralizer algebra B_f(x) and split bases adapted to the B_{f_1}(x) × B_{f_2}(x) ⊂ B_f(x) subalgebra (f_1 + f_2 = f) are considered. After providing the suitable combinatorial background, based on the definition of the i-coupling relation on nodes of the subduction grid, we introduce a generalized version of the subduction graph which extends the one given in Chilla (2006 J. Phys. A: Math. Gen. 39 7657) for symmetric groups. Thus, we can describe the structure of the subduction system arising from the linear method and give an outline of the form of the solution space. An ordering relation on the grid is also given and then, as in the case of symmetric groups, the choices of the phases and of the free factors governing the multiplicity separations are discussed

  6. A non-conformal finite element/finite volume scheme for the non-structured grid-based approximation of low Mach number flows

    International Nuclear Information System (INIS)

    Ansanay-Alex, G.

    2009-01-01

    The development of simulation codes aimed at a precise simulation of fires requires a precise approach to flame front phenomena by using very fine grids. The need to take different spatial scales into consideration leads to local grid refinement and to a discretization with a homogeneous grid for computing time and memory purposes. The author reports the approximation of the non-linear convection term, the scalar advection-diffusion in finite volumes, and numerical simulations of a flow in a bent tube, of a three-dimensional laminar flame and of a low Mach number anisothermal flow. Non-conformal finite elements are also presented (Rannacher-Turek and Crouzeix-Raviart elements)
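
    A minimal one-dimensional finite-volume sketch of the scalar advection-diffusion step mentioned above is given below (first-order upwind convection, central diffusion, explicit Euler); it is only an illustration of the discretized operator, not the non-conformal finite element/finite volume scheme of the report.

        import numpy as np

        def advect_diffuse(phi, u, D, dx, dt):
            """One explicit step of 1D advection-diffusion with upwind convection (u > 0 assumed)."""
            phi_new = phi.copy()
            # interior cells only; boundary cells kept fixed for simplicity
            conv = -u * (phi[1:-1] - phi[:-2]) / dx                       # upwind convective term
            diff = D * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2     # central diffusive term
            phi_new[1:-1] += dt * (conv + diff)
            return phi_new

        # demo: advection-diffusion of a step profile
        nx, dx, u, D = 100, 0.01, 1.0, 1e-3
        dt = 0.4 * min(dx / u, dx**2 / (2 * D))    # simple explicit stability bound
        phi = np.where(np.arange(nx) < 20, 1.0, 0.0)
        for _ in range(50):
            phi = advect_diffuse(phi, u, D, dx, dt)
        print(phi.round(2))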

  7. CONVEC: a computer program for transient incompressible fluid flow based on quadratic finite elements. Part 1: theoretical aspects

    International Nuclear Information System (INIS)

    Laval, H.

    1981-01-01

    This report describes the theoretical and numerical aspects of the finite element computer code CONVEC designed for the transient analysis of two-dimensional plane or three-dimensional axisymmetric incompressible flows including the effects of heat transfer. The governing equations for the above class of problems are the time-dependent incompressible Navier-Stokes equations and the thermal energy equation. The general class of flow problems analysed by CONVEC is discussed and the equations for the initial-boundary value problem are presented. A brief description of the finite element method and the weighted residual formulation is given. The numerical solution of the incompressible equations is achieved by using a fractional step method. The mass lumping process associated with an explicit time integration scheme is described. The time integration is analysed and the stability conditions are derived. Numerical applications are presented. Standard problems of natural and forced convection are solved and the solutions obtained are compared with other numerical solutions published in the literature

  8. A Finite Element Model of a MEMS-based Surface Acoustic Wave Hydrogen Sensor

    Directory of Open Access Journals (Sweden)

    Walied A. Moussa

    2010-02-01

    Full Text Available Hydrogen plays a significant role in various industrial applications, but careful handling and continuous monitoring are crucial since it is explosive when mixed with air. Surface Acoustic Wave (SAW) sensors provide desirable characteristics for hydrogen detection due to their small size, low fabrication cost, ease of integration and high sensitivity. In this paper a finite element model of a Surface Acoustic Wave sensor is developed using ANSYS12© and tested for hydrogen detection. The sensor consists of a YZ-lithium niobate substrate with interdigital electrodes (IDTs) patterned on the surface. A thin palladium (Pd) film is added on the surface of the sensor due to its high affinity for hydrogen. With increased hydrogen absorption the palladium hydride structure undergoes a phase change due to the formation of the β-phase, which deteriorates the crystal structure. Therefore, with increasing hydrogen concentration the stiffness and the density are significantly reduced. The values of the modulus of elasticity and the density at different hydrogen concentrations in palladium are utilized in the finite element model to determine the corresponding SAW sensor response. Results indicate that with increasing hydrogen concentration the wave velocity decreases and the attenuation of the wave is reduced.

  9. Facilitating Stewardship of scientific data through standards based workflows

    Science.gov (United States)

    Bastrakova, I.; Kemp, C.; Potter, A. K.

    2013-12-01

    scientific data acquisition and analysis requirements and effective interoperable data management and delivery. This includes participating in national and international dialogue on development of standards, embedding data management activities in business processes, and developing scientific staff as effective data stewards. Similar approach is applied to the geophysical data. By ensuring the geophysical datasets at GA strictly follow metadata and industry standards we are able to implement a provenance based workflow where the data is easily discoverable, geophysical processing can be applied to it and results can be stored. The provenance based workflow enables metadata records for the results to be produced automatically from the input dataset metadata.

  10. Specialized Finite Set Statistics (FISST)-Based Estimation Methods to Enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO)

    Science.gov (United States)

    2016-08-17

    Specialized Finite Set Statistics (FISST)-based Estimation Methods to Enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO) ... in terms of specialized Geostationary Earth Orbit (GEO) elements to estimate the state of resident space objects in the geostationary regime. (Report AFRL-RV-PS-TR-2016-0114)

  11. Standards-Based Wireless Sensor Networking Protocols for Spaceflight Applications

    Science.gov (United States)

    Wagner, Raymond S.

    2010-01-01

    Wireless sensor networks (WSNs) have the capacity to revolutionize data gathering in both spaceflight and terrestrial applications. WSNs provide a huge advantage over traditional, wired instrumentation since they do not require wiring trunks to connect sensors to a central hub. This allows for easy sensor installation in hard to reach locations, easy expansion of the number of sensors or sensing modalities, and reduction in both system cost and weight. While this technology offers unprecedented flexibility and adaptability, implementing it in practice is not without its difficulties. Recent advances in standards-based WSN protocols for industrial control applications have come a long way to solving many of the challenges facing practical WSN deployments. In this paper, we will overview two of the more promising candidates - WirelessHART from the HART Communication Foundation and ISA100.11a from the International Society of Automation - and present the architecture for a new standards-based sensor node for networking and applications research.

  12. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    Science.gov (United States)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  13. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    Science.gov (United States)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of the code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that when the bit error rate (BER) is 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
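
    The sketch below shows, in generic form, how an exponent (base) matrix can be built from powers of a primitive element of a prime field and expanded into circulant permutation matrices; the field size, matrix dimensions and exponent rule are toy assumptions and do not reproduce the specific girth-conditioned construction of the paper.

        import numpy as np

        def qc_ldpc_parity_matrix(p=7, rows=3, cols=4, lift=7):
            """Toy QC-LDPC parity-check matrix built from the multiplicative group of GF(p)."""
            g = 3                                   # 3 is a primitive element of GF(7)
            # exponent (base) matrix: shifts derived from products of group elements
            powers = [pow(g, k, p) for k in range(max(rows, cols))]
            B = np.array([[(powers[i] * powers[j]) % p for j in range(cols)]
                          for i in range(rows)])
            # expand each entry into a lift x lift circulant permutation matrix
            I = np.eye(lift, dtype=int)
            H = np.block([[np.roll(I, shift=B[i, j], axis=1) for j in range(cols)]
                          for i in range(rows)])
            return B, H

        B, H = qc_ldpc_parity_matrix()
        print("base matrix:\n", B)
        print("H shape:", H.shape, " column weight:", H.sum(axis=0)[0])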

  14. An automation of physics research on base of open standards

    International Nuclear Information System (INIS)

    Smirnov, V.A.

    1997-01-01

    A wide range of problems is considered concerning the automation of set-ups at the Laboratory of High Energies, JINR, oriented towards experimental research in high energy and relativistic nuclear physics. The electronics of the discussed automation systems is implemented in open standards. The main peculiarities of the process of creating automation tools for experimental set-ups, stands and accelerators are shown. Some possibilities for building accelerator control subsystems on the basis of industrial automation methods and techniques are discussed

  15. 77 FR 37587 - Updating OSHA Standards Based on National Consensus Standards; Head Protection

    Science.gov (United States)

    2012-06-22

    ... Z89.1-2003 as Appendix E, to the main text. Adds ``ASTM E1164-02 Colorimetry--Standard Practice for ...'' and complete citations for standards on colorimetry, headforms, and ...

  16. 77 FR 37617 - Updating OSHA Standards Based on National Consensus Standards; Head Protection

    Science.gov (United States)

    2012-06-22

    ... Z89.1-2003 as Appendix E, to the main text. Adds ``ASTM E1164-02 Colorimetry--Standard Practice for ...'' and complete citations for standards on colorimetry, headforms, and ...

  17. Improving Stiffness-to-weight Ratio of Spot-welded Structures based upon Nonlinear Finite Element Modelling

    Science.gov (United States)

    Zhang, Shengyong

    2017-07-01

    Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An effort to increase the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted for reducing weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate “typical” vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.

  18. An interactive algorithm for identifying multiattribute measurable value functions based on finite-order independence of structural difference

    International Nuclear Information System (INIS)

    Tamura, Hiroyuki; Hikita, Shiro

    1985-01-01

    In this paper, we develop an interactive algorithm for identifying multiattribute measurable value functions based on the concept of finite-order independence of structural difference. This concept includes Dyer and Sarin's weak difference independence as a special case. The algorithm developed is composed of four major parts: (1) formulation of the problem, (2) assessment of normalized conditional value functions and structural difference functions, (3) assessment of corner values, and (4) assessment of the order of independence of structural difference and selection of the model. A hypothetical numerical example of a trade-off analysis for siting a nuclear power plant is included. (author)

  19. Efficient Lattice-Based Signcryption in Standard Model

    Directory of Open Access Journals (Sweden)

    Jianhua Yan

    2013-01-01

    Full Text Available Signcryption is a cryptographic primitive that can perform digital signature and public-key encryption simultaneously at a significantly reduced cost. This advantage makes it highly useful in many applications. However, most existing signcryption schemes are seriously challenged by the rapid development of quantum computing. As an interesting stepping stone in the post-quantum cryptographic community, two lattice-based signcryption schemes were proposed recently. But both of them were merely proved to be secure in the random oracle model. Therefore, the main contribution of this paper is to propose a new lattice-based signcryption scheme that can be proved to be secure in the standard model.

  20. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time
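
    To make the "direct re-initialization" idea concrete, the following simplified sketch works on a structured grid (the method itself targets unstructured meshes): interface points are located from sign changes of the level set along grid lines, and nodes inside a narrow band are reset to signed distances; the grid, band width and test function are assumptions for illustration.

        import numpy as np

        def interface_points(phi, xs, ys):
            """Interface points from sign changes of phi along grid lines (linear interpolation)."""
            pts = []
            nx, ny = phi.shape
            for i in range(nx):
                for j in range(ny):
                    if i + 1 < nx and phi[i, j] * phi[i + 1, j] < 0:
                        t = phi[i, j] / (phi[i, j] - phi[i + 1, j])
                        pts.append((xs[i] + t * (xs[i + 1] - xs[i]), ys[j]))
                    if j + 1 < ny and phi[i, j] * phi[i, j + 1] < 0:
                        t = phi[i, j] / (phi[i, j] - phi[i, j + 1])
                        pts.append((xs[i], ys[j] + t * (ys[j + 1] - ys[j])))
            return np.array(pts)

        def direct_reinitialize(phi, xs, ys, band_width):
            """Reset phi to a signed distance to its zero level set, only for nodes in a narrow band."""
            pts = interface_points(phi, xs, ys)
            X, Y = np.meshgrid(xs, ys, indexing="ij")
            dist = np.min(np.hypot(X[..., None] - pts[:, 0], Y[..., None] - pts[:, 1]), axis=-1)
            band = dist < band_width
            phi_new = phi.copy()
            phi_new[band] = np.sign(phi[band]) * dist[band]
            return phi_new

        xs = ys = np.linspace(-1.0, 1.0, 41)
        X, Y = np.meshgrid(xs, ys, indexing="ij")
        phi0 = X**2 + Y**2 - 0.25                 # zero set: circle of radius 0.5 (not a distance function)
        phi1 = direct_reinitialize(phi0, xs, ys, band_width=0.2)
        print(abs(phi1[20, :]).min())             # near zero close to the interface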

  1. B-spline based finite element method in one-dimensional discontinuous elastic wave propagation

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Okrouhlík, Miloslav; Berezovski, A.; Gabriel, Dušan; Kopačka, Ján; Plešek, Jiří

    2017-01-01

    Roč. 46, June (2017), s. 382-395 ISSN 0307-904X R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) EF15_003/0000493 Grant - others:AV ČR(CZ) DAAD-16-12; AV ČR(CZ) ETA-15-03 Program:Bilaterální spolupráce; Bilaterální spolupráce Institutional support: RVO:61388998 Keywords : discontinuous elastic wave propagation * B-spline finite element method * isogeometric analysis * implicit and explicit time integration * dispersion * spurious oscillations Subject RIV: BI - Acoustics OBOR OECD: Acoustics Impact factor: 2.350, year: 2016 http://www.sciencedirect.com/science/article/pii/S0307904X17300835

  2. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  3. Bending Moment Calculations for Piles Based on the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-xin Jie

    2013-01-01

    Full Text Available Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, pile, and sheet pile wall was made to investigate the bending moment computational methods. The analyses demonstrated that shear locking is not significant for the passive pile embedded in soil. Therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated with stress and less significant for that calculated with displacement. Although computing the bending moment with displacement requires fewer grids across the pile section, it sometimes results in variation of the results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet pile wall, whereas the resulting bending moments may be different. Calculated results of bending moment may differ greatly with different grid partitions and computational methods. Therefore, a comparison of results is necessary when performing the analysis.
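
    The two ways of recovering the bending moment discussed above can be summarized in a few lines: integrate axial stress times lever arm over the cross-section, or multiply EI by the curvature obtained from the deflection. The beam data in the sketch below are illustrative only.

        import numpy as np

        E, I = 210e9, 8.0e-6          # illustrative elastic modulus (Pa) and second moment of area (m^4)

        # (1) From stresses: M = sum(sigma_i * z_i * dA_i) over strips across the section
        z = np.linspace(-0.1, 0.1, 21)            # fibre positions across a 0.2 m deep section (m)
        dA = 0.05 * (z[1] - z[0])                 # strip area for a 0.05 m wide section (m^2)
        sigma = 1.2e6 * z / 0.1                   # linear stress profile, 1.2 MPa at the extreme fibre
        M_from_stress = np.sum(sigma * z * dA)

        # (2) From displacements: M = E * I * w''(x), curvature by finite differences
        x = np.linspace(0.0, 2.0, 51)
        w = 1e-4 * x**2 * (3 * 2.0 - x)           # illustrative cantilever-like deflection shape (m)
        curvature = np.gradient(np.gradient(w, x), x)
        M_from_displacement = E * I * curvature

        print(f"M from stresses: {M_from_stress:.1f} N·m")
        print(f"M from displacements at mid-span: {M_from_displacement[25]:.1f} N·m")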

  4. Design of Thermal Barrier Coatings Thickness for Gas Turbine Blade Based on Finite Element Analysis

    Directory of Open Access Journals (Sweden)

    Biao Li

    2017-01-01

    Full Text Available Thermal barrier coatings (TBCs are deposited on the turbine blade to reduce the temperature of underlying substrate, as well as providing protection against the oxidation and hot corrosion from high temperature gas. Optimal ceramic top-coat thickness distribution on the blade can improve the performance and efficiency of the coatings. Design of the coatings thickness is a multiobjective optimization problem due to the conflicts among objectives of high thermal insulation performance, long operation durability, and low fabrication cost. This work developed a procedure for designing the TBCs thickness distribution for the gas turbine blade. Three-dimensional finite element models were built and analyzed, and weighted-sum approach was employed to solve the multiobjective optimization problem herein. Suitable multiregion top-coat thickness distribution scheme was designed with the considerations of manufacturing accuracy, productivity, and fabrication cost.
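
    A hedged sketch of the weighted-sum scalarization is given below: two invented objective models (insulation loss versus cost/durability penalty) as functions of top-coat thickness are combined with assumed weights and minimized; the models, bounds and weights are placeholders, not the paper's finite element based objectives.

        from scipy.optimize import minimize_scalar

        # Illustrative objective models in terms of top-coat thickness t (mm)
        def insulation_loss(t):
            """Lower is better: thin coatings insulate poorly."""
            return 1.0 / (0.2 + t)

        def cost_and_risk(t):
            """Lower is better: thick coatings cost more and are less durable."""
            return 0.6 * t + 0.3 * t**2

        def weighted_sum(t, w=(0.7, 0.3)):
            """Weighted-sum scalarization of the two conflicting objectives."""
            return w[0] * insulation_loss(t) + w[1] * cost_and_risk(t)

        res = minimize_scalar(weighted_sum, bounds=(0.1, 2.0), method="bounded")
        print(f"optimal thickness ≈ {res.x:.2f} mm, scalarized objective = {res.fun:.3f}")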

  5. Implementing finite state machines in a computer-based teaching system

    Science.gov (United States)

    Hacker, Charles H.; Sitte, Renate

    1999-09-01

    Finite State Machines (FSM) are models for functions commonly implemented in digital circuits such as timers, remote controls, and vending machines. Teaching FSM is a core part of the curriculum of many university digital electronics or discrete mathematics subjects. Students often have difficulties grasping the theoretical concepts in the design and analysis of FSM. This has prompted the author to develop MS-Windows™-compatible software, WinState, which provides a tutorial-style teaching aid for understanding the mechanisms of FSM. The animated computer screen is ideal for visually conveying the required design and analysis procedures. WinState complements other software for combinatorial logic previously developed by the author, and enhances the existing teaching package by adding sequential logic circuits. WinState enables the construction of a student's own FSM, which can be simulated to test the design for functionality and possible errors.
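
    A minimal Python counterpart of the kind of deterministic FSM a student might build in WinState is sketched below; the vending-machine states and transitions are invented for illustration.

        class FSM:
            """Deterministic finite state machine: transitions maps (state, input) -> next state."""

            def __init__(self, start, transitions, accepting=()):
                self.state = start
                self.transitions = transitions
                self.accepting = set(accepting)

            def step(self, symbol):
                self.state = self.transitions[(self.state, symbol)]
                return self.state

            def run(self, symbols):
                for s in symbols:
                    self.step(s)
                return self.state in self.accepting

        # Toy vending machine: accepts once 30 cents (in 10-cent coins) have been inserted
        transitions = {
            ("0c", "coin"): "10c",
            ("10c", "coin"): "20c",
            ("20c", "coin"): "30c",
            ("30c", "coin"): "30c",   # extra coins are ignored
        }
        vending = FSM(start="0c", transitions=transitions, accepting=["30c"])
        print(vending.run(["coin", "coin", "coin"]))   # True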

  6. Stress analysis and deformation prediction of sheet metal workpieces based on finite element simulation

    Directory of Open Access Journals (Sweden)

    Ren Penghao

    2017-01-01

    Full Text Available After the machining of aluminum alloy sheet metal parts, the release of residual stress will cause a large deformation. To solve this problem, this paper takes an aluminum alloy sheet aerospace workpiece as an example, establishes the theoretical model of elastic deformation and the finite element model, places a quantitative initial stress in each element of the machining area, and analyses the stress release and the resulting deformation by simulation. Through simulative analyses of the workpiece deformation for different initial stress releases, a linear relationship between initial stress and deformation is found; through simulative analysis of coupled-direction stress release, it is found that the deformation caused by coupled-direction stress is the superposition of the deformations caused by the single-direction stresses. The research results provide important theoretical support for setting stress thresholds and controlling the deformation of workpieces in production practice.

  7. 78 FR 35559 - Updating OSHA Standards Based on National Consensus Standards; Signage

    Science.gov (United States)

    2013-06-13

    ...; Signage AGENCY: Occupational Safety and Health Administration (OSHA), Department of Labor. ACTION: Direct... signage standards by adding references to the latest versions of the American National Standards Institute... earlier ANSI standards, ANSI Z53.1-1967, Z35.1-1968 and Z35.2-1968, in its signage standards, thereby...

  8. The integrated indicator of sustainable urban development based on standardization

    Directory of Open Access Journals (Sweden)

    Leonova Tatiana

    2018-01-01

    Full Text Available The paper justifies the need to design a system of planned indicators for sustainable urban development in accordance with the requirements of international standards and the Russian standard GOST R ISO 37120-2015, and to estimate their actual achievement on the basis of complex qualimetric models. An analysis of opinions on this issue and an overview of Russian normative documents for assessing the effectiveness of municipalities, including urban development, are presented. General methodological principles and the sequence for the construction of qualimetric models, as well as formulas for the calculation of complex indicators taking into account the specific weights obtained on the basis of expert assessment, are presented; the need for careful selection of experts and for determination of the consistency of expert opinions is indicated. The advantages and disadvantages of this approach are shown. Conclusions are drawn on the use of qualimetric models for sustainable urban development.
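
    As a rough illustration of a complex qualimetric indicator, the sketch below combines normalized single indicators with specific weights derived from expert scores; the indicator names, targets and scores are invented placeholders, not the indicators of GOST R ISO 37120-2015.

        import numpy as np

        # Invented single indicators: (actual value, planned/target value), higher is better
        indicators = {
            "green space per capita": (9.5, 12.0),
            "public transport share": (0.42, 0.50),
            "recycled waste share":   (0.35, 0.40),
        }

        # Expert scores for each indicator's importance (e.g. averaged over a panel)
        expert_scores = np.array([7.0, 9.0, 6.0])
        weights = expert_scores / expert_scores.sum()          # normalized specific weights

        # Normalize each indicator as achievement of its target, capped at 1.0
        achievement = np.array([min(actual / target, 1.0)
                                for actual, target in indicators.values()])

        integrated_indicator = float(np.dot(weights, achievement))
        print(f"integrated sustainable-development indicator = {integrated_indicator:.3f}")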

  9. Workshop : ACPSEM/ARPS competency based standards project

    International Nuclear Information System (INIS)

    Collins, L.

    1996-01-01

    The ACPSEM together with the Australian Radiation Protection Society has been working for nearly two years now on a competency based standards project for the professions of medical and health physicists. Competencies are being used increasingly in industry and the professions as a means of determining skill levels. For example, all the medical radiation technology streams have a CBS system in the final stages of development, and our engineering colleagues have completed theirs. Last year a draft document was sent to all members asking for feedback. Following a vote of funding by both bodies, a project officer (Dr David Waggett) has been appointed, and has produced a very much improved set of competency standards covering all significant subspecialties in our profession. This workshop will detail the work done so far, and preview the draft document. A healthy discussion will be encouraged, as the project steering group will shortly be arranging the next steps in the process. (author)

  10. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    Science.gov (United States)

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.
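
    The threshold projection step mentioned above is commonly implemented as a smoothed Heaviside function of the filtered density; the sketch below shows that projection together with a crude smoothing stand-in for the Helmholtz filter, with beta (sharpness) and eta (threshold) as the usual parameters. It is a one-dimensional illustration, not the edge-element formulation of the paper.

        import numpy as np

        def threshold_projection(rho_filtered, beta=8.0, eta=0.5):
            """Smoothed Heaviside projection of a filtered density field onto near-0/1 values."""
            num = np.tanh(beta * eta) + np.tanh(beta * (rho_filtered - eta))
            den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
            return num / den

        def smooth_filter(rho, passes=3):
            """Stand-in for a Helmholtz-type filter: repeated 3-point averaging (1D, for illustration)."""
            out = rho.copy()
            for _ in range(passes):
                out[1:-1] = (out[:-2] + 2.0 * out[1:-1] + out[2:]) / 4.0
            return out

        rho = (np.random.default_rng(1).random(50) > 0.5).astype(float)  # raw 0/1 design variables
        rho_phys = threshold_projection(smooth_filter(rho))
        print(rho_phys.round(2))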

  11. Analysis of wave motion in one-dimensional structures through fast-Fourier-transform-based wavelet finite element method

    Science.gov (United States)

    Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping

    2017-07-01

    This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in One-Dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.
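
    The FFT-based frequency-domain solve can be illustrated on a single degree of freedom: transform the load, divide by the dynamic stiffness K − ω²M + iωC, and transform back. The parameters and load in the sketch below are illustrative, and the zero-initial-condition/periodicity assumptions of the discrete Fourier transform apply; the BSWI element construction itself is not reproduced.

        import numpy as np

        # Illustrative SDOF parameters (~5 Hz natural frequency)
        M, C, K = 1.0, 0.05, (2 * np.pi * 5.0) ** 2 * 1.0

        # Time grid and a short transient load pulse
        n, dt = 4096, 1e-3
        t = np.arange(n) * dt
        f = np.where(t < 0.02, 1.0, 0.0)                       # rectangular force pulse

        # FFT-based solve: U(omega) = F(omega) / (K - omega^2 M + i omega C)
        F = np.fft.rfft(f)
        omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
        U = F / (K - omega**2 * M + 1j * omega * C)
        u = np.fft.irfft(U, n=n)

        print("peak displacement:", u.max())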

  12. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage is investigated in this paper. The objective of designing controllers is to guarantee the stability and anti-disturbance performance of the closed-loop systems while some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes or strong winds happen in a very short time and that it is often the peak values that damage the structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and the H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. A variational numerical method based on finite elements for the nonlinear solution characteristics of the periodically forced Chen system

    Science.gov (United States)

    Khan, Sabeel M.; Sunny, D. A.; Aqeel, M.

    2017-09-01

    Nonlinear dynamical systems and their solutions are very sensitive to initial conditions and therefore need to be approximated carefully. In this article, we present and analyze nonlinear solution characteristics of the periodically forced Chen system with the application of a variational method based on the concept of finite time-elements. Our approach is based on the discretization of physical time space into finite elements where each time-element is mapped to a natural time space. The solution of the system is then determined in natural time space using a set of suitable basis functions. The numerical algorithm is presented and implemented to compute and analyze nonlinear behavior at different time-step sizes. The obtained results show an excellent agreement with the classical RK-4 and RK-5 methods. The accuracy and convergence of the method is shown by comparing numerically computed results with the exact solution for a test problem. The presented method has shown a great potential in dealing with the solutions of nonlinear dynamical systems and thus can be utilized in delineating different features and characteristics of their solutions.
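
    For reference, the sketch below integrates the unforced Chen system with a standard RK45 solver, the kind of classical reference solution the article compares against; the parameter set a = 35, b = 3, c = 28 is the commonly used chaotic one, and the periodic forcing term of the paper is omitted here.

        import numpy as np
        from scipy.integrate import solve_ivp

        def chen(t, state, a=35.0, b=3.0, c=28.0):
            """Unforced Chen system with the commonly used chaotic parameter set."""
            x, y, z = state
            return [a * (y - x),
                    (c - a) * x - x * z + c * y,
                    x * y - b * z]

        sol = solve_ivp(chen, (0.0, 20.0), [-7.0, 0.0, 4.0], method="RK45", max_step=1e-3)
        print("final state:", sol.y[:, -1])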

  14. PDE-based geophysical modelling using finite elements: examples from 3D resistivity and 2D magnetotellurics

    Science.gov (United States)

    Schaa, R.; Gross, L.; du Plessis, J.

    2016-04-01

    We present a general finite-element solver, escript, tailored to solve geophysical forward and inverse modeling problems in terms of partial differential equations (PDEs) with suitable boundary conditions. Escript’s abstract interface allows geoscientists to focus on solving the actual problem without being experts in numerical modeling. General-purpose finite element solvers have found wide use especially in engineering fields and find increasing application in the geophysical disciplines as these offer a single interface to tackle different geophysical problems. These solvers are useful for data interpretation and for research, but can also be a useful tool in educational settings. This paper serves as an introduction into PDE-based modeling with escript where we demonstrate in detail how escript is used to solve two different forward modeling problems from applied geophysics (3D DC resistivity and 2D magnetotellurics). Based on these two different cases, other geophysical modeling work can easily be realized. The escript package is implemented as a Python library and allows the solution of coupled, linear or non-linear, time-dependent PDEs. Parallel execution for both shared and distributed memory architectures is supported and can be used without modifications to the scripts.

  15. PDE-based geophysical modelling using finite elements: examples from 3D resistivity and 2D magnetotellurics

    International Nuclear Information System (INIS)

    Schaa, R; Gross, L; Du Plessis, J

    2016-01-01

    We present a general finite-element solver, escript, tailored to solve geophysical forward and inverse modeling problems in terms of partial differential equations (PDEs) with suitable boundary conditions. Escript’s abstract interface allows geoscientists to focus on solving the actual problem without being experts in numerical modeling. General-purpose finite element solvers have found wide use especially in engineering fields and find increasing application in the geophysical disciplines as these offer a single interface to tackle different geophysical problems. These solvers are useful for data interpretation and for research, but can also be a useful tool in educational settings. This paper serves as an introduction into PDE-based modeling with escript where we demonstrate in detail how escript is used to solve two different forward modeling problems from applied geophysics (3D DC resistivity and 2D magnetotellurics). Based on these two different cases, other geophysical modeling work can easily be realized. The escript package is implemented as a Python library and allows the solution of coupled, linear or non-linear, time-dependent PDEs. Parallel execution for both shared and distributed memory architectures is supported and can be used without modifications to the scripts. (paper)

  16. Spectral responsivity-based calibration of photometer and colorimeter standards

    Science.gov (United States)

    Eppeldauer, George P.

    2013-08-01

    Several new generation transfer- and working-standard illuminance meters and tristimulus colorimeters have been developed at the National Institute of Standards and Technology (NIST) [1] to measure all kinds of light sources with low uncertainty. The spectral and broad-band (illuminance) responsivities of the photometer (Y) channels of two tristimulus meters were determined at both the Spectral Irradiance and Radiance Responsivity Calibrations using Uniform Sources (SIRCUS) facility and the Spectral Comparator Facility (SCF) [2]. The two illuminance responsivities agreed within 0.1% with an overall uncertainty of 0.2% (k = 2), which is a factor of two improvement over the present NIST photometric scale. The first detector-based tristimulus color scale [3] was realized. All channels of the reference tristimulus colorimeter were calibrated at the SIRCUS. The other tristimulus meters were calibrated at the SCF and also against the reference meter on the photometry bench in broad-band measurement mode. The agreement between detector- and source-based calibrations was within 3 K when a tungsten lamp-standard was measured at 2856 K and 3100 K [4]. The color-temperature uncertainty of tungsten lamp measurements was 4 K (k = 2) between 2300 K and 3200 K, which is a factor of two improvement over the presently used NIST source-based color temperature scale. One colorimeter was extended with an additional (fifth) channel to apply software implemented matrix corrections. With this correction, the spectral mismatch caused color difference errors were decreased by a factor of 20 for single-color LEDs.

  17. A study on the nonlinear finite element analysis of reinforced concrete structures: shell finite element formulation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Jin; Seo, Jeong Moon

    2000-08-01

    The main goal of this research is to establish a methodology for the finite element analysis of containment buildings capable of predicting not only global behaviour but also local failure modes. In this report, we summarize some existing numerical analysis techniques to be improved for the containment building. In other words, a complete description of the standard degenerated shell finite element formulation is provided for nonlinear stress analysis of nuclear containment structures. A shell finite element is derived using the degenerated solid concept which does not rely on a specific shell theory. Reissner-Mindlin assumptions are adopted to consider the transverse shear deformation effect. In order to minimize the sensitivity of the constitutive equation to structural types, a microscopic material model is adopted. The four solution algorithms based on the standard Newton-Raphson method are discussed. Finally, two numerical examples are carried out to test the performance of the adopted shell model.
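
    The standard Newton-Raphson iteration underlying the four solution algorithms can be sketched on a small nonlinear system: repeatedly solve the linearized system with the Jacobian (the tangent stiffness in the finite element setting) and update the unknowns. The two-equation residual below is purely illustrative.

        import numpy as np

        def residual(u):
            """Illustrative nonlinear residual R(u) = 0 (stands in for internal minus external forces)."""
            return np.array([u[0] ** 3 + u[1] - 1.0,
                             u[1] ** 3 - u[0] + 1.0])

        def tangent(u):
            """Jacobian of the residual (stands in for the tangent stiffness matrix)."""
            return np.array([[3.0 * u[0] ** 2, 1.0],
                             [-1.0, 3.0 * u[1] ** 2]])

        u = np.array([0.5, -0.5])                      # initial guess
        for it in range(20):
            R = residual(u)
            if np.linalg.norm(R) < 1e-10:
                break
            u = u - np.linalg.solve(tangent(u), R)
        print(f"converged after {it} iterations to u = {u}")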

  18. A study on the nonlinear finite element analysis of reinforced concrete structures: shell finite element formulation

    International Nuclear Information System (INIS)

    Lee, Sang Jin; Seo, Jeong Moon

    2000-08-01

    The main goal of this research is to establish a methodology for the finite element analysis of containment buildings capable of predicting not only global behaviour but also local failure modes. In this report, we summarize some existing numerical analysis techniques to be improved for the containment building. In other words, a complete description of the standard degenerated shell finite element formulation is provided for nonlinear stress analysis of nuclear containment structures. A shell finite element is derived using the degenerated solid concept which does not rely on a specific shell theory. Reissner-Mindlin assumptions are adopted to consider the transverse shear deformation effect. In order to minimize the sensitivity of the constitutive equation to structural types, a microscopic material model is adopted. The four solution algorithms based on the standard Newton-Raphson method are discussed. Finally, two numerical examples are carried out to test the performance of the adopted shell model

  19. Setting Standards for Medically-Based Running Analysis

    Science.gov (United States)

    Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.

    2015-01-01

    Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394

  20. Portable atomic frequency standard based on coherent population trapping

    Science.gov (United States)

    Shi, Fan; Yang, Renfu; Nian, Feng; Zhang, Zhenwei; Cui, Yongshun; Zhao, Huan; Wang, Nuanrang; Feng, Keming

    2015-05-01

    In this work, a portable atomic frequency standard based on coherent population trapping is designed and demonstrated. To achieve a portable prototype, a single transverse mode 795 nm VCSEL modulated by a 3.4 GHz RF source is used as a pump laser which generates the coherent light fields. The pump beams pass through a vapor cell containing atom gas and buffer gas. This vapor cell is surrounded by a magnetic shield and placed inside a solenoid which applies a longitudinal magnetic field to lift the degeneracy of the Zeeman energy levels and to separate the resonance signal, which has no first-order magnetic field dependence, from the field-dependent resonances. The electrical control system comprises two control loops. The first one locks the laser wavelength to the minimum of the absorption spectrum; the second one locks the modulation frequency and the output standard frequency. Furthermore, we designed the micro physical package and successfully realized the locking of a portable coherent population trapping atomic frequency standard prototype. The short-term frequency stability of the whole system is measured to be 6×10^-11 at an averaging time of 1 s, and reaches 5×10^-12 at an averaging time of 1000 s.

  1. Full wave simulation of waves in ECRIS plasmas based on the finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Torrisi, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania, Italy and Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell' Informazione, delle Infrastrutture e dell' Energia Sostenibile (DIIES), Via Graziella, I (Italy); Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania (Italy); Di Donato, L. [Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica (DIEEI), Viale Andrea Doria 6, 95125 Catania (Italy); Sorbello, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania, Italy and Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica (DIEEI), Viale Andrea Doria 6, 95125 Catania (Italy); Isernia, T. [Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell' Informazione, delle Infrastrutture e dell' Energia Sostenibile (DIIES), Via Graziella, I-89100 Reggio Calabria (Italy)

    2014-02-12

    This paper describes the modeling and the full wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in the full-3D non-uniform magnetized plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of the cavity effect on wave propagation and the effects of a resonant surface. These studies are the pillars for an improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (beam intensity distribution and charge state, especially). Any new project concerning advanced ECRIS design will benefit from an adequate modeling of self-consistent wave absorption simulations.

  2. Finite element analysis of electroactive polymer and magnetoactive elastomer based actuation for origami folding

    Science.gov (United States)

    Zhang, Wei; Ahmed, Saad; Masters, Sarah; Ounaies, Zoubeida; Frecker, Mary

    2017-10-01

    The incorporation of smart materials such as electroactive polymers and magnetoactive elastomers in origami structures can result in active folding using external electric and magnetic stimuli, showing promise in many origami-inspired engineering applications. In this study, 3D finite element analysis (FEA) models are developed using COMSOL Multiphysics software for three configurations that incorporate a combination of active and passive material layers, namely: (1) a single-notch unimorph folding configuration actuated using only external electric field, (2) a double-notch unimorph folding configuration actuated using only external electric field, and (3) a bifold configuration which is actuated using multi-field (electric and magnetic) stimuli. The objectives of the study are to verify the effectiveness of the FEA models to simulate folding behavior and to investigate the influence of geometric parameters on folding quality. Equivalent mechanical pressure and surface stress are used as external loads in the FEA to simulate electric and magnetic fields, respectively. Compared quantitatively with experimental data, FEA captured the folding performance of electric actuation well for notched configurations and magnetic actuation for a bifold structure, but underestimated electric actuation for the bifold structure. By investigating the impact of geometric parameters and locations to place smart materials, FEA can be used in design, avoiding trial-and-error iterations of experiments.

  3. Development of triple scale finite element analyses based on crystallographic homogenization methods

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2004-01-01

    A crystallographic homogenization procedure is implemented in a piezoelectric and elastic-crystalline plastic finite element (FE) code to assess the macro-continuum properties of piezoelectric ceramics and of BCC and FCC sheet metals. The triple-scale hierarchical structure consists of an atom cluster, a crystal aggregation and a macro-continuum. In this paper, we focus on a triple-scale numerical analysis for piezoelectric material and apply it to assess a macro-continuum material property. First, we calculate the material properties of the perovskite crystals of piezoelectric materials, XYO3 (such as BaTiO3 and PbTiO3), by employing the ab initio molecular analysis code CASTEP. Next, measured results of SEM and EBSD observations of the crystal orientation distributions, shapes and boundaries of a real material (BaTiO3) are employed to define the inhomogeneity of the crystal aggregation, which corresponds to a unit cell of the micro-structure and satisfies the periodicity condition. This procedure is featured as a first scaling up, from the molecular level to the crystal aggregation. Finally, the conventional homogenization procedure is implemented in the FE code to evaluate a macro-continuum property. This final procedure is featured as a second scaling up, from the crystal aggregation (unit cell) to the macro-continuum. This triple-scale analysis is applied to the design of a piezoelectric ceramic and finds an optimum crystal orientation distribution, in which the macroscopic piezoelectric constant d33 has a maximum value.

  4. Finite element analysis of the mechanical properties of cellular aluminium based on micro-computed tomography

    International Nuclear Information System (INIS)

    Veyhl, C.; Belova, I.V.; Murch, G.E.; Fiedler, T.

    2011-01-01

    Research highlights: → Elastic and plastic anisotropy is observed for both materials → Both show qualitatively similar characteristics with quantitative differences → Distinctly higher mechanical properties for closed-cell foam → The 'big' and 'small' models show good agreement for the closed-cell foam. - Abstract: In the present paper, the macroscopic mechanical properties of open-cell M-Pore sponge (porosity of 91-93%) and closed-cell Alporas foam (porosity of 80-86%) are investigated. The complex geometry of these cellular materials is scanned by micro-computed tomography and used in finite element (FE) analysis. The mechanical properties are determined by uni-axial compression simulations in three perpendicular directions (x-, y- and z-direction). M-Pore and Alporas exhibit the same qualitative mechanical characteristics but with quantitative differences. In both cases, strong anisotropy is observed for Young's modulus and the 0.002 offset yield stress. Furthermore, for the investigated relative density range a linear dependence between relative density and mechanical properties is found. Finally, a distinctly higher Young's modulus and 0.002 offset yield stress is observed for Alporas.

  5. Research on burnout fault of moulded case circuit breaker based on finite element simulation

    Science.gov (United States)

    Xue, Yang; Chang, Shuai; Zhang, Penghe; Xu, Yinghui; Peng, Chuning; Shi, Erwei

    2017-09-01

    Among failure events of molded case circuit breakers, overheating of the molded case near the wiring terminal accounts for a very significant proportion. Burnout faults have become an important factor restricting the development of molded case circuit breakers. This paper uses finite element simulation software to establish a multi-physics coupled model of a molded case circuit breaker. The model can simulate operation and reveal the temperature distribution. The simulation results show that the temperature near the wiring terminal of the molded case circuit breaker, especially on the incoming side of the live wire, is much higher than in other areas. The steady-state and transient simulation results show that the temperature at the wiring terminals increases abnormally when the contact resistance of the wiring terminals is increased. This is consistent with the frequent occurrence of burnout of the molded case in this area. Therefore, this paper holds that burnout failure of the molded case circuit breaker is mainly caused by an abnormal increase of the contact resistance of the wiring terminal.

  6. Comparison of Internal Fixations for Distal Clavicular Fractures Based on Loading Tests and Finite Element Analyses

    Directory of Open Access Journals (Sweden)

    Rina Sakai

    2014-01-01

    Full Text Available It is difficult to apply strong and stable internal fixation to a fracture of the distal end of the clavicle because it is unstable, the distal clavicle fragment is small, and the fractured region is near the acromioclavicular joint. In this study, to identify a superior internal fixation method for unstable distal clavicular fracture, we compared three types of internal fixation (tension band wiring, scorpion, and LCP clavicle hook plate). Firstly, loading tests were performed, in which fixations were evaluated using bending stiffness and torsional stiffness as indices, followed by finite element analysis to evaluate fixability using the stress and strain as indices. The bending and torsional stiffness were significantly higher in the artificial clavicles fixed with the two types of plate than in that fixed by tension band wiring (P<0.05). No marked stress concentration on the clavicle was noted in the scorpion because the arm plate did not interfere with the acromioclavicular joint, suggesting that favorable shoulder joint function can be achieved. The stability of fixation with the LCP clavicle hook plate and the scorpion was similar, and plate fixations were stronger than fixation by tension band wiring.

  7. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph; Hoel, Hå kon; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations

  8. Updating OSHA Standards Based on National Consensus Standards; Eye and Face Protection. Final rule.

    Science.gov (United States)

    2016-03-25

    On March 13, 2015, OSHA published in the Federal Register a notice of proposed rulemaking (NPRM) to revise its eye and face protection standards for general industry, shipyard employment, marine terminals, longshoring, and construction by updating the references to national consensus standards approved by the American National Standards Institute (ANSI). OSHA received no significant objections from commenters and therefore is adopting the amendments as proposed. This final rule updates the references in OSHA's eye and face standards to reflect the most recent edition of the ANSI/International Safety Equipment Association (ISEA) eye and face protection standard. It removes the oldest-referenced edition of the same ANSI standard. It also amends other provisions of the construction eye and face protection standard to bring them into alignment with OSHA's general industry and maritime standards.

  9. Accessing the Common Core Standards for Students with Learning Disabilities: Strategies for Writing Standards-Based IEP Goals

    Science.gov (United States)

    Caruana, Vicki

    2015-01-01

    Since the reauthorization of the Individuals With Disabilities Education Act (IDEA) in 2004, standards-based individualized education plans (IEPs) have been an expectation for serving students with disabilities in the K-12 public school setting. Nearly a decade after the mandates calling for standards-based IEPs, special educators still struggle…

  10. Measuring Item Fill-Rate Performance in a Finite Horizon

    OpenAIRE

    Douglas J. Thomas

    2005-01-01

    The standard treatment of fill rate relies on stationary and serially independent demand over an infinite horizon. Even if demand is stationary, managers are held accountable for performance over a finite horizon. In a finite horizon, the fill rate is a random variable. Studying the distribution is relevant because a vendor may be subject to financial penalty if she fails to achieve her target fill rate over a specified finite period. It is known that for a zero lead time, base-stock model, t...

  11. Performance-based standards for South African car-carriers

    CSIR Research Space (South Africa)

    De Saxe, C

    2012-12-01

    Full Text Available Until recently, car-carriers in South Africa operated under abnormal load permits allowing a finite relaxation of legal height and length limits. This practice is being phased out, and exemption will only be granted if a car-carrier complies...

  12. Accurate kinematic measurement at interfaces between dissimilar materials using conforming finite-element-based digital image correlation

    KAUST Repository

    Tao, Ran

    2016-02-11

    Digital image correlation (DIC) is now an extensively applied full-field measurement technique with subpixel accuracy. A systematic drawback of this technique, however, is the smoothing of the kinematic field (e.g., displacements and strains) across interfaces between dissimilar materials, where the deformation gradient is known to be large. This can become an issue when a high level of accuracy is needed, for example, in the interfacial region of composites or joints. In this work, we describe the application of a global conforming finite-element-based DIC technique to obtain precise kinematic fields at interfaces between dissimilar materials. Speckle images from both numerical and actual experiments, processed by the described global DIC technique, captured the sharp strain gradient at the interface better than local subset-based DIC. © 2016 Elsevier Ltd. All rights reserved.

  13. An unstructured finite volume solver for two phase water/vapour flows based on an elliptic oriented fractional step method

    International Nuclear Information System (INIS)

    Mechitoua, N.; Boucker, M.; Lavieville, J.; Pigny, S.; Serre, G.

    2003-01-01

    Based on experience gained at EDF and CEA, a more general and robust 3-dimensional (3D) multiphase flow solver has been under development for over three years. This solver, based on an elliptic oriented fractional step approach, is able to simulate multicomponent/multiphase flows. Discretization follows a 3D fully unstructured finite volume approach, with a collocated arrangement of all variables. The non-linear coupling between pressure and volume fractions and a symmetric treatment of all fields are taken into account in the iterative procedure within the time step. This greatly enforces the realizability of the volume fractions (i.e. 0 < α < 1) without artificial numerical fixes. Applications to widespread test cases such as static sedimentation, water hammer and phase separation are shown to assess the accuracy and the robustness of the flow solver in different flow conditions encountered in nuclear reactor pipes. (authors)

  14. [Remodeling simulation of human femur under bed rest and spaceflight circumstances based on three dimensional finite element analysis].

    Science.gov (United States)

    Yang, Wenting; Wang, Dongmei; Lei, Zhoujixin; Wang, Chunhui; Chen, Shanguang

    2017-12-01

    Astronauts exposed to the weightless environment of long-term spaceflight may suffer loss of bone density and mass because the mechanical stimulus is smaller than its normal value. This study built a three-dimensional finite element model of the human femur to simulate its remodeling during a bed rest experiment. The remodeling parameters of this finite element model were validated by comparing the experimental and numerical results. Then, the remodeling process of the human femur in the weightless environment was simulated, and the remodeling function of time was derived. The loading magnitude and the number of loading cycles on the femur in the weightless environment were then increased to simulate exercise against bone loss. The simulation results showed that increasing the loading magnitude is more effective in diminishing bone loss than increasing the number of loading cycles, which demonstrates that exercise of sufficient intensity could help resist bone loss during long-term spaceflight. Finally, this study simulated the bone recovery process after spaceflight. It was found that the bone resorption rate is larger than the bone formation rate. We advise that astronauts should take exercise during spaceflight to resist bone loss.
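
    The record does not state the specific remodeling law embedded in the model above; a common strain-energy-driven rule of the Huiskes type, in which the apparent density drifts toward a mechanical set point and the stiffness follows a power law of density, can nevertheless be sketched as follows. All function names, coefficients and the daily update step are illustrative assumptions, not the authors' values.

```python
import numpy as np

def remodel(rho, stimulus, setpoint=0.0036, rate=1.0, dt=1.0,
            rho_min=0.01, rho_max=1.74):
    """One explicit step of a simple strain-energy-based remodeling rule:
    d(rho)/dt = rate * (stimulus - setpoint), clipped to physical bounds.
    stimulus: strain energy density per unit mass [J/g],
    rho: apparent density [g/cm^3]."""
    rho_new = rho + rate * (stimulus - setpoint) * dt
    return np.clip(rho_new, rho_min, rho_max)

def youngs_modulus(rho, c=3790.0, gamma=3.0):
    """Power-law stiffness-density relation E = c * rho**gamma [MPa]."""
    return c * rho**gamma

# Reduced loading (bed rest / spaceflight) lowers the stimulus below the set
# point, so the density drifts downward until loading is restored.
rho = 0.8
for day in range(120):
    rho = remodel(rho, stimulus=0.0020, dt=1.0)   # under-loading
print(rho, youngs_modulus(rho))
```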

  15. Dynamic analysis of a needle insertion for soft materials: Arbitrary Lagrangian-Eulerian-based three-dimensional finite element analysis.

    Science.gov (United States)

    Yamaguchi, Satoshi; Tsutsui, Kihei; Satake, Koji; Morikawa, Shigehiro; Shirai, Yoshiaki; Tanaka, Hiromi T

    2014-10-01

    Our goal was to develop a three-dimensional finite element model that enables dynamic analysis of needle insertion into soft materials. To demonstrate large deformation and fracture, we used the arbitrary Lagrangian-Eulerian (ALE) method for fluid analysis. We performed ALE-based finite element analysis for 3% agar gel and three types of copper needle with bevel tips. To evaluate the simulation results, we compared the needle deflection and insertion force with corresponding experimental results acquired with a uniaxial manipulator. We studied the shear stress distribution of the agar gel on various time scales. For bevel angles of 30°, 45°, and 60°, the differences in needle deflection between the two sets of results were 2.424, 2.981, and 3.737 mm, respectively. For the insertion force, there was no significant difference in the mismatching area error (p<0.05) between the simulation and experimental results. Our results have the potential to be a stepping stone toward pre-operative surgical planning to estimate an optimal needle insertion path for MR image-guided microwave coagulation therapy and for analyzing large deformation and fracture in biological tissues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Prediction of Path Deviation in Robot Based Incremental Sheet Metal Forming by Means of a New Solid-Shell Finite Element Technology and a Finite Elastoplastic Model with Combined Hardening

    Science.gov (United States)

    Kiliclar, Yalin; Laurischkat, Roman; Vladimirov, Ivaylo N.; Reese, Stefanie

    2011-08-01

    The presented project deals with a robot-based incremental sheet metal forming process, called roboforming, which has been developed at the Chair of Production Systems. It is characterized by flexible shaping using a freely programmable, path-synchronous movement of two industrial robots. The final shape is produced by the incremental infeed of the forming tool in the depth direction and its movement along the part contour in the lateral direction. However, the geometries formed in roboforming deviate several millimeters from the reference geometry. This results from the compliance of the involved machine structures and the springback effects of the workpiece. The project aims to predict these deviations caused by compliance and to carry out a compensative path planning based on this prediction. Therefore a planning tool is implemented which compensates for the robots' compliance and the springback effects of the sheet metal. The forming process is simulated by means of a finite element analysis using a material model developed at the Institute of Applied Mechanics (IFAM). It is based on the multiplicative split of the deformation gradient in the context of hyperelasticity and combines nonlinear kinematic and isotropic hardening. Low-order finite elements used to simulate thin sheet structures, such as those used in the experiments, have the major problem of locking, a nonphysical stiffening effect. For an efficient finite element analysis a special solid-shell finite element formulation based on reduced integration with hourglass stabilization has been developed. To circumvent different locking effects, the enhanced assumed strain (EAS) and the assumed natural strain (ANS) concepts are included in this formulation. Having such powerful tools available, we obtain more accurate geometries.

  17. Finite Discrete Gabor Analysis

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    frequency bands at certain times. Gabor theory can be formulated for both functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems...... can also be related by sampling and periodization. This thesis extends on this theory by showing new results for window construction. It also provides a discussion of the problems associated to discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems...... on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exists for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...

  18. Fast Computation of Ground Motion Shaking Map base on the Modified Stochastic Finite Fault Modeling

    Science.gov (United States)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

    Rapid regional MMI mapping soon after a moderate-to-large earthquake is crucial for loss estimation, emergency services and the planning of emergency action by the government. Many countries pay different degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has been well developed with the advances in computation technology and earthquake science. The computational simulation of strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, where strong-motion observations are sparse or even completely missing in some areas, the development of strong ground motion simulation methods has become an important means of quantitatively estimating strong motion intensity. Among the simulation models, the stochastic finite fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In the finite fault model, a large fault is divided into N subfaults, and each subfault is considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point-source method developed by Boore, and then summed at the observation point, with proper time delays, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of dynamic corner frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independently of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 for the
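
    The core of the finite-fault summation described above is bookkeeping: each subfault contributes a point-source time series delayed by the rupture propagation time from the hypocenter plus the wave travel time to the site. The toy sketch below reproduces only that summation; the stochastic point-source signal that EXSIM generates from an omega-squared spectrum is replaced here by a placeholder wavelet, and the geometry, velocities and amplitudes are assumed values.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 40.0, dt)
beta, v_rupt = 3.5, 2.8              # shear-wave and rupture velocities [km/s], assumed
site = np.array([30.0, 0.0, 0.0])    # observation point [km]
hypo = np.array([0.0, 0.0, 8.0])     # rupture nucleation point [km]

def subfault_signal(t, t0, amp):
    """Placeholder for the stochastic point-source contribution: a damped
    wavelet arriving at time t0 (EXSIM would generate filtered noise here)."""
    tau = np.clip(t - t0, 0.0, None)
    return amp * tau * np.exp(-4.0 * tau) * np.sin(2.0 * np.pi * 2.0 * tau)

# 10 x 5 grid of subfault centres on a vertical fault plane along x
xs, zs = np.meshgrid(np.arange(0.5, 10.5), 5.0 + np.arange(0.5, 5.5))
total = np.zeros_like(t)
for x, z in zip(xs.ravel(), zs.ravel()):
    sub = np.array([x, 0.0, z])
    delay = (np.linalg.norm(sub - hypo) / v_rupt       # rupture propagation time
             + np.linalg.norm(site - sub) / beta)      # wave travel time to the site
    total += subfault_signal(t, delay, amp=1.0 / xs.size)

print("peak 'ground motion' of the toy model:", total.max())
```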

  19. A finite-volume HLLC-based scheme for compressible interfacial flows with surface tension

    Energy Technology Data Exchange (ETDEWEB)

    Garrick, Daniel P. [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States); Owkes, Mark [Department of Mechanical and Industrial Engineering, Montana State University, Bozeman, MT (United States); Regele, Jonathan D., E-mail: jregele@iastate.edu [Department of Aerospace Engineering, Iowa State University, Ames, IA (United States)

    2017-06-15

    Shock waves are often used in experiments to create a shear flow across liquid droplets to study secondary atomization. Similar behavior occurs inside of supersonic combustors (scramjets) under startup conditions, but it is challenging to study these conditions experimentally. In order to investigate this phenomenon further, a numerical approach is developed to simulate compressible multiphase flows under the effects of surface tension forces. The flow field is solved via the compressible multicomponent Euler equations (i.e., the five equation model) discretized with the finite volume method on a uniform Cartesian grid. The solver utilizes a total variation diminishing (TVD) third-order Runge–Kutta method for time-marching and second order TVD spatial reconstruction. Surface tension is incorporated using the Continuum Surface Force (CSF) model. Fluxes are upwinded with a modified Harten–Lax–van Leer Contact (HLLC) approximate Riemann solver. An interface compression scheme is employed to counter numerical diffusion of the interface. The present work includes modifications to both the HLLC solver and the interface compression scheme to account for capillary force terms and the associated pressure jump across the gas–liquid interface. A simple method for numerically computing the interface curvature is developed and an acoustic scaling of the surface tension coefficient is proposed for the non-dimensionalization of the model. The model captures the surface tension induced pressure jump exactly if the exact curvature is known and is further verified with an oscillating elliptical droplet and Mach 1.47 and 3 shock-droplet interaction problems. The general characteristics of secondary atomization at a range of Weber numbers are also captured in a series of simulations.
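
    For reference, the flux evaluation in such a scheme hinges on the HLLC approximate Riemann solver. The sketch below is a minimal single-material 1-D Euler version of the HLLC flux (Toro's formulation with simple direct wave-speed estimates); it omits the five-equation multicomponent extension, the interface compression and the capillary pressure-jump modifications discussed in the record.

```python
import numpy as np

GAMMA = 1.4

def primitive(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def flux(U):
    rho, u, p = primitive(U)
    return np.array([rho * u, rho * u * u + p, u * (U[2] + p)])

def hllc_flux(UL, UR):
    """HLLC approximate Riemann flux for the 1-D Euler equations."""
    rhoL, uL, pL = primitive(UL)
    rhoR, uR, pR = primitive(UR)
    aL = np.sqrt(GAMMA * pL / rhoL)
    aR = np.sqrt(GAMMA * pR / rhoR)
    SL = min(uL - aL, uR - aR)                 # left wave-speed estimate
    SR = max(uL + aL, uR + aR)                 # right wave-speed estimate
    Sstar = (pR - pL + rhoL * uL * (SL - uL) - rhoR * uR * (SR - uR)) \
            / (rhoL * (SL - uL) - rhoR * (SR - uR))
    if SL >= 0.0:
        return flux(UL)
    if SR <= 0.0:
        return flux(UR)

    def star_state(U, rho, u, p, S):
        # Intermediate (star-region) state behind the S wave
        coef = rho * (S - u) / (S - Sstar)
        return coef * np.array([1.0, Sstar,
                                U[2] / rho + (Sstar - u) * (Sstar + p / (rho * (S - u)))])

    if Sstar >= 0.0:
        return flux(UL) + SL * (star_state(UL, rhoL, uL, pL, SL) - UL)
    return flux(UR) + SR * (star_state(UR, rhoR, uR, pR, SR) - UR)

# Sod shock-tube interface states as an illustrative test
UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
print(hllc_flux(UL, UR))
```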

  20. Efficient inversion of volcano deformation based on finite element models : An application to Kilauea volcano, Hawaii

    Science.gov (United States)

    Charco, María; González, Pablo J.; Galán del Sastre, Pedro

    2017-04-01

    The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes worldwide and therefore one of the best monitored volcanoes around the world. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being extensively used to monitor ground deformation in volcanic areas. The quantitative interpretation of such surface deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region. However, our approach allows the total number of Green functions to be reduced to the number of observation points by using the above-mentioned reciprocity relationship. This new methodology is able to accurately represent magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material property distributions, which will eventually lead to a better description of the status of the volcano.

  1. Electrical Resistivity Tomography using a finite element based BFGS algorithm with algebraic multigrid preconditioning

    Science.gov (United States)

    Codd, A. L.; Gross, L.

    2018-03-01

    We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems on parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and to construct a first-level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.
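
    The essential structure of such an inversion, a misfit-plus-regularization objective minimized by a BFGS-type method that never forms the dense Hessian, can be illustrated on a toy discrete linear inverse problem. The operator, data and regularization weight below are made up for illustration; the actual method works in function space with finite element forward and adjoint solves and AMG preconditioning.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy linear forward operator G, "true" model and noisy data (all assumed)
n_model, n_data = 50, 30
G = rng.standard_normal((n_data, n_model))
m_true = np.zeros(n_model)
m_true[20:30] = 1.0
d = G @ m_true + 0.05 * rng.standard_normal(n_data)

# First-difference operator used as a smoothing regularizer
D = np.eye(n_model, k=1)[:-1] - np.eye(n_model)[:-1]
alpha = 1.0

def objective(m):
    r = G @ m - d
    return 0.5 * r @ r + 0.5 * alpha * (D @ m) @ (D @ m)

def gradient(m):
    return G.T @ (G @ m - d) + alpha * D.T @ (D @ m)

# Limited-memory BFGS builds an implicit inverse-Hessian approximation,
# so the dense Hessian is never assembled.
res = minimize(objective, np.zeros(n_model), jac=gradient, method="L-BFGS-B")
print(res.fun, np.linalg.norm(res.x - m_true))
```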

  2. Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters.

    Directory of Open Access Journals (Sweden)

    Kwang-Ming Liu

    Full Text Available The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ') and the vital parameters, maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr(-1) < k < 0.103 yr(-1)) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr(-1) < k < 0.358 yr(-1)) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≧ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. The empirical equation for all data pooled was also developed. The λ' values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures.

  3. Building Standards based Science Information Systems: A Survey of ISO and other standards

    Science.gov (United States)

    King, Todd; Walker, Raymond

    Science information systems began with individual researchers maintaining personal collections of data and managing them by using ad hoc, specialized approaches. Today information systems are an enterprise consisting of federated systems that manage and distribute both historical and contemporary data from distributed sources. Information systems have many components. Among these are metadata models, metadata registries, controlled vocabularies and ontologies which are used to describe entities and resources. Other components include services to exchange information and data, tools to populate the system and tools to utilize available resources. When constructing information systems today a variety of standards can be useful. The benefit of adopting standards is clear: it can shorten the design cycle, enhance software reuse and enable interoperability. We look at standards from the International Standards Organization (ISO), International Telecommunication Union (ITU), Organization for the Advancement of Structured Information Standards (OASIS), Internet Engineering Task Force (IETF), and American National Standards Institute (ANSI) which have influenced the development of information systems in the Heliophysics and Planetary sciences. No standard can solve the needs of every community. Individual disciplines often must fill the gap between general-purpose standards and the unique needs of the discipline. To this end individual science disciplines are developing standards. Examples include the International Virtual Observatory Alliance (IVOA), Planetary Data System (PDS)/International Planetary Data Alliance (IPDA), Dublin-Core Science, and the Space Physics Archive Search and Extract (SPASE) consortium. This broad survey of ISO and other standards provides some guidance for the development of information systems. The development of the SPASE data model is reviewed and provides some insights into the value of applying appropriate standards and is used to illustrate

  4. A reference standard-based quality assurance program for radiology.

    Science.gov (United States)

    Liu, Patrick T; Johnson, C Daniel; Miranda, Rafael; Patel, Maitray D; Phillips, Carrie J

    2010-01-01

    The authors have developed a comprehensive radiology quality assurance (QA) program that evaluates radiology interpretations and procedures by comparing them with reference standards. Performance metrics are calculated and then compared with benchmarks or goals on the basis of published multicenter data and meta-analyses. Additional workload for physicians is kept to a minimum by having trained allied health staff members perform the comparisons of radiology reports with the reference standards. The performance metrics tracked by the QA program include the accuracy of CT colonography for detecting polyps, the false-negative rate for mammographic detection of breast cancer, the accuracy of CT angiography detection of coronary artery stenosis, the accuracy of meniscal tear detection on MRI, the accuracy of carotid artery stenosis detection on MR angiography, the accuracy of parathyroid adenoma detection by parathyroid scintigraphy, the success rate for obtaining cortical tissue on ultrasound-guided core biopsies of pelvic renal transplants, and the technical success rate for peripheral arterial angioplasty procedures. In contrast with peer-review programs, this reference standard-based QA program minimizes the possibilities of reviewer bias and erroneous second reviewer interpretations. The more objective assessment of performance afforded by the QA program will provide data that can easily be used for education and management conferences, research projects, and multicenter evaluations. Additionally, such performance data could be used by radiology departments to demonstrate their value over nonradiology competitors to referring clinicians, hospitals, patients, and third-party payers. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. Updating OSHA standards based on national consensus standards. Direct final rule.

    Science.gov (United States)

    2007-12-14

    In this direct final rule, the Agency is removing several references to consensus standards that have requirements that duplicate, or are comparable to, other OSHA rules; this action includes correcting a paragraph citation in one of these OSHA rules. The Agency also is removing a reference to American Welding Society standard A3.0-1969 ("Terms and Definitions") in its general-industry welding standards. This rulemaking is a continuation of OSHA's ongoing effort to update references to consensus and industry standards used throughout its rules.

  6. Discrimination of Inrush from Fault Currents in Power Transformers Based on Equivalent Instantaneous Inductance Technique Coupled with Finite Element Method

    Directory of Open Access Journals (Sweden)

    M. Jamali

    2011-09-01

    Full Text Available The phenomenon of magnetizing inrush is a transient condition which occurs primarily when a transformer is energized. The magnitude of the inrush current may be as high as ten or more times the transformer rated current, which causes malfunction of the protection system. So, for safe running of a transformer, it is necessary to distinguish the inrush current from fault currents. In this paper, an equivalent instantaneous inductance (EII) technique is used to discriminate the inrush current from fault currents. For this purpose, a three-phase power transformer has been simulated in the finite-element-based Maxwell software. This three-phase power transformer has been used to simulate different conditions. Then, the results have been used as inputs to a MATLAB program that implements the equivalent instantaneous inductance technique. The results show that in the case of inrush current the equivalent instantaneous inductance has a drastic variation, while it is almost constant in the cases of fault conditions.

  7. Cluster decay half-lives of trans-lead nuclei based on a finite-range nucleon–nucleon interaction

    Energy Technology Data Exchange (ETDEWEB)

    Adel, A., E-mail: aa.ahmed@mu.edu.sa [Physics Department, Faculty of Science, Cairo University, Giza (Egypt); Physics Department, College of Science, Majmaah University, Zulfi (Saudi Arabia); Alharbi, T. [Physics Department, College of Science, Majmaah University, Zulfi (Saudi Arabia)

    2017-02-15

    Nuclear cluster radioactivity is investigated using microscopic potentials in the framework of the Wentzel–Kramers–Brillouin approximation of quantum tunneling by considering the Bohr–Sommerfeld quantization condition. The microscopic cluster–daughter potential is numerically constructed in the well-established double-folding model. A realistic M3Y-Paris NN interaction with the finite-range exchange part as well as the ordinary zero-range exchange NN force is considered in the present work. The influence of nuclear deformations on the cluster decay half-lives is investigated. Based on the available experimental data, the cluster preformation factors are extracted from the calculated and the measured half-lives of cluster radioactivity. Some useful predictions of cluster emission half-lives are made for emissions of known clusters from possible candidates, which may guide future experiments.

  8. A unified model of hydride cracking based on elasto-plastic energy release rate over a finite crack extension

    International Nuclear Information System (INIS)

    Zheng, X.J.; Metzger, D.R.; Sauve, R.G.

    1995-01-01

    A fracture criterion based on energy balance is proposed for elasto-plastic cracking at hydrides in zirconium, assuming a finite length of crack advance. The proposed elasto-plastic energy release rate is applied to the crack initiation at hydrides in smooth and notched surfaces, as well as the subsequent delayed hydride cracking (DHC) considering limited crack-tip plasticity. For a smooth or notched surface of an elastic body, the fracture parameter is related to the stress intensity factor for the initiated crack. For DHC, a unique curve relates the non-dimensionalized elasto-plastic energy release rate with the length of crack extension relative to the plastic zone size. This fracture criterion explains experimental observations concerning DHC in a qualitative manner. Quantitative comparison with experiments is made for fracture toughness and DHC tests on specimens containing certain hydride structures; very good agreement is obtained. ((orig.))

  9. Structural analysis and incipient failure detection of primary circuit components based on correlation-analysis and finite-element models

    International Nuclear Information System (INIS)

    Olma, B.J.

    1977-01-01

    A method is presented to compute vibrational power spectral densities (VPSD's) of primary circuit components based on a finite-element representation of the primary circuit. First, this method was applied to the sodium-cooled reactor KNK, Karlsruhe. A further application is now being developed for a BWR nuclear power plant. The experimentally determined VPSD's can be considered the output of a multiple input-output system. They have to be explained as the frequency response of a multidimensional mechanical system which is excited by stochastic and deterministic mechanical driving forces. The stochastic mechanical forces are generated by the dynamic pressure fluctuations of the fluid. The deterministic mechanical forces are caused by the pressure fluctuations induced by the main coolant pumps or by standing waves. The excitation matrix can be obtained from measured pressure fluctuations. The vibration transfer function matrix can be computed from the mass matrix, damping matrix and stiffness matrix of a theoretical finite-element model or mass-spring model. Based on this theory the computer code 'STAMPO' has been established. This program has been applied to the KNK reactor. The excitation matrix was created from measured jet-noise pressure fluctuations. The mass, stiffness and damping matrices were extracted from a SAP-IV model of the primary system. The complete VPSD matrix was computed sequentially for each frequency point. The diagonal elements of this matrix represent the vibrational auto-power spectral densities, the off-diagonal elements represent the vibrational cross-power spectral densities. The calculations give good agreement with measured VPSD's. The comparison shows that the measured jet-noise pressure fluctuations act nearly uncorrelated on the structure, whereas the output VPSD's are well correlated
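
    The computation described above amounts to propagating an excitation cross-spectral matrix through the structural frequency-response matrix, S_yy(w) = H(w) S_ff(w) H(w)^H with H(w) = (K - w^2 M + i w C)^(-1). A minimal two-degree-of-freedom illustration is sketched below; the matrices and excitation spectra are assumed stand-ins, not values from the KNK or BWR models.

```python
import numpy as np

# Assumed 2-DOF mass, damping and stiffness matrices (stand-ins for the
# matrices extracted from a finite-element or mass-spring model)
M = np.diag([2.0, 1.0])
K = np.array([[4.0e4, -1.5e4], [-1.5e4, 3.0e4]])
C = 1e-4 * K + 0.5 * M            # Rayleigh damping with assumed coefficients

freqs = np.linspace(1.0, 60.0, 600)          # Hz
Syy = np.empty((freqs.size, 2, 2), dtype=complex)
for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    H = np.linalg.inv(K - w**2 * M + 1j * w * C)   # receptance (transfer) matrix
    Sff = np.diag([1.0, 0.8])                      # assumed uncorrelated excitation PSDs
    Syy[i] = H @ Sff @ H.conj().T

# Diagonal terms: vibrational auto-power spectral densities;
# off-diagonal terms: cross-power spectral densities.
print(np.abs(Syy[:, 0, 0]).max(), np.abs(Syy[:, 0, 1]).max())
```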

  10. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method which needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. But these choices cause two problems in practical applications, namely noise overenhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm
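
    ACE-type algorithms boost the high-frequency part of an image through a gain applied to the deviation from a local mean. The exact model-based gain of the cited paper is not reproduced in the record, so the sketch below uses an illustrative nonlinear gain that stays small at very low LSD (flat, noisy regions) and tapers off at very high LSD (strong edges), which is the qualitative behaviour described above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ace(image, window=15, max_gain=4.0, sigma0=10.0):
    """Adaptive contrast enhancement: out = mean + gain(LSD) * (image - mean).
    The gain used here is an illustrative smooth bump in the LSD, not the
    model-based gain of the cited paper."""
    img = image.astype(float)
    mean = uniform_filter(img, window)
    var = uniform_filter(img**2, window) - mean**2
    lsd = np.sqrt(np.clip(var, 0.0, None))        # local standard deviation
    # gain = 1 at lsd = 0, peaks at lsd = sigma0, decays for strong edges
    gain = 1.0 + (max_gain - 1.0) * (lsd / sigma0) * np.exp(1.0 - lsd / sigma0)
    out = mean + gain * (img - mean)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage on a synthetic low-contrast image
rng = np.random.default_rng(0)
img = (120 + 20 * rng.standard_normal((256, 256))).clip(0, 255)
enhanced = ace(img)
```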

  11. 78 FR 35585 - Updating OSHA Standards Based on National Consensus Standards; Signage

    Science.gov (United States)

    2013-06-13

    ...; Signage AGENCY: Occupational Safety and Health Administration (OSHA), Department of Labor. ACTION: Notice... Administration (``OSHA'' or ``the Agency'') proposes to update its general industry and construction signage... standards, ANSI Z53.1-1967, Z35.1-1968, and Z35.2-1968, in its signage standards, thereby providing...

  12. 78 FR 66642 - Updating OSHA Standards Based on National Consensus Standards; Signage

    Science.gov (United States)

    2013-11-06

    ...; Signage AGENCY: Occupational Safety and Health Administration (OSHA), Department of Labor. ACTION: Final... (78 FR 35559) a direct final rule that revised its signage standards for general industry and... revised its signage standards for general industry at 29 CFR 1910.97, 1910.145, and 1910.261, and...

  13. Finite rotation shells basic equations and finite elements for Reissner kinematics

    CERN Document Server

    Wisniewski, K

    2010-01-01

    This book covers theoretical and computational aspects of non-linear shells. Several advanced topics of shell equations and finite elements - not included in standard textbooks on finite elements - are addressed, and the book includes an extensive bibliography.

  14. A control volume based finite difference method for solving the equilibrium equations in terms of displacements

    DEFF Research Database (Denmark)

    Hattel, Jesper; Hansen, Preben

    1995-01-01

    This paper presents a novel control volume based FD method for solving the equilibrium equations in terms of displacements, i.e. the generalized Navier equations. The method is based on the widely used cv-FDM solution of heat conduction and fluid flow problems involving a staggered grid formulation. ... The resulting linear algebraic equations are solved by line-Gauss-Seidel.

  15. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    Science.gov (United States)

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  16. Field Strain Measurement on the Fiber Scale in Carbon Fiber Reinforced Polymers Using Global Finite-Element Based Digital Image Correlation

    KAUST Repository

    Tao, Ran

    2015-01-01

    is aimed to accurately measure the displacement and strain fields at the fiber-matrix scale in a cross-ply composite. First, the theories of both local subset-based digital image correlation (DIC) and global finite-element based DIC are outlined. Second, in

  17. 78 FR 1256 - Guam Military Base Realignment Contractor Recruitment Standards

    Science.gov (United States)

    2013-01-08

    ... Contractor Recruitment Standards AGENCY: Employment and Training Administration, Labor. ACTION: Final notice... issuing this notice to announce recruitment standards that construction contractors are required to follow... B) by adding a new subsection (6). This provision prohibits contractors engaged in construction...

  18. A technology mapping based on graph of excitations and outputs for finite state machines

    Science.gov (United States)

    Kania, Dariusz; Kulisz, Józef

    2017-11-01

    A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method consists in searching for the minimal set of PAL-based logic blocks that cover a set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new concept of graph: the Graph of Excitations and Outputs. The proposed algorithm was tested using the FSM benchmarks. The obtained results were compared with the classical technology mapping of FSMs.

  19. 76 FR 42395 - Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based Swap...

    Science.gov (United States)

    2011-07-18

    ... Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based Swap Participants...-11] RIN 3235-AL10 Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based...'') relating to external business conduct standards for security-based swap dealers (``SBS Dealers'') and major...

  20. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between the predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone, impedance-tube method, and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  1. Nonlinear micromechanics-based finite element analysis of the interfacial behaviour of FRP-strengthened reinforced concrete beams

    Science.gov (United States)

    Abd El Baky, Hussien

    This research work is devoted to theoretical and numerical studies on the flexural behaviour of FRP-strengthened concrete beams. The objectives of this research are to extend and generalize the results of simple experiments, to recommend new design guidelines based on accurate numerical tools, and to enhance our comprehension of the bond performance of such beams. These numerical tools can be exploited to bridge the existing gaps in the development of analysis and modelling approaches that can predict the behaviour of FRP-strengthened concrete beams. The research effort here begins with the formulation of a concrete model and development of FRP/concrete interface constitutive laws, followed by finite element simulations for beams strengthened in flexure. Finally, a statistical analysis is carried out taking advantage of the aforesaid numerical tools to propose design guidelines. In this dissertation, an alternative incremental formulation of the M4 microplane model is proposed to overcome the computational complexities associated with the original formulation. Through a number of numerical applications, this incremental formulation is shown to be equivalent to the original M4 model. To assess the computational efficiency of the incremental formulation, the "arc-length" numerical technique is also considered and implemented in the original Bazant et al. [2000] M4 formulation. Finally, the M4 microplane concrete model is coded in FORTRAN and implemented as a user-defined subroutine into the commercial software package ADINA, Version 8.4. Then this subroutine is used with the finite element package to analyze various applications involving FRP strengthening. In the first application, a nonlinear micromechanics-based finite element analysis is performed to investigate the interfacial behaviour of FRP/concrete joints subjected to direct shear loadings. The intention of this part is to develop a reliable bond-slip model for the FRP/concrete interface. The bond

  2. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    International Nuclear Information System (INIS)

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
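
    The kind of probability the paper derives in closed form can be checked numerically by Monte Carlo simulation: generate a homogeneous Poisson process on a finite interval and count the realisations in which every gap between consecutive points meets the specified minimum. The rate, interval length and minimum gap below are arbitrary illustration values.

```python
import numpy as np

def prob_all_gaps_at_least(rate, length, min_gap, trials=100_000, seed=1):
    """Monte Carlo estimate of P(all gaps >= min_gap) for a homogeneous
    Poisson process with the given rate on [0, length]. Realisations with
    fewer than two points trivially satisfy the condition."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(rate * length)
        if n < 2:
            hits += 1
            continue
        pts = np.sort(rng.uniform(0.0, length, n))
        if np.diff(pts).min() >= min_gap:
            hits += 1
    return hits / trials

# Even a moderate number density gives a substantial clustering probability:
p = prob_all_gaps_at_least(rate=0.5, length=10.0, min_gap=1.0)
print("P(no clustering) ~", p, "  P(clustering) ~", 1.0 - p)
```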

  3. Data distribution architecture based on standard real time protocol

    International Nuclear Information System (INIS)

    Castro, R.; Vega, J.; Pereira, A.; Portas, A.

    2009-01-01

    The data distribution architecture (DDAR) has been designed to conform to new requirements, taking into account the type of data that will be generated by experiments in the International Thermonuclear Experimental Reactor (ITER). The main goal of this architecture is to implement a system that is able to manage on line all data being generated by an experiment, supporting its distribution for processing, storage, analysis or visualization. The first objective is to have a distribution architecture that supports long pulse experiments (even hours). The described system is able to distribute, using the real time protocol (RTP), stored data or live data generated while the experiment is running. It enables researchers to access data on line instead of waiting for the end of the experiment. Another important objective is scalability, so the presented architecture can easily grow according to actual needs, simplifying estimation and design tasks. A third important objective is security. In this sense, the architecture is based on standards, so complete security mechanisms can be applied, from secure transmission solutions to elaborate access control policies, and it is fully compatible with multi-organization federation systems such as PAPI or Shibboleth.

  4. POTENTIAL OF UAV BASED CONVERGENT PHOTOGRAMMETRY IN MONITORING REGENERATION STANDARDS

    Directory of Open Access Journals (Sweden)

    U. Vepakomma

    2015-08-01

    Full Text Available Several thousand hectares of forest blocks are regenerating after harvest in Canada. Monitoring their performance over different stages of growth is critical in ensuring future productivity and ecological balance. Tools for rapid evaluation can support timely and reliable planning of interventions. Conventional ground surveys or visual image assessments are either time intensive or inaccurate, while alternative operational remote sensing tools are unavailable. In this study, we test the feasibility and strength of convergent photogrammetry with an EO camera on a UAV platform for assessing regeneration performance. Specifically, we evaluated the stocking, spatial density and height distribution of naturally growing (irregularly spaced stems) or planted (regularly spaced stems) conifer regeneration in different phases of growth. A standard photogrammetric workflow was applied to the 785 acquired images for 3D reconstruction of the study sites. The required parameters were derived using an automated single-stem detection algorithm developed in-house. Compared with field survey data, the preliminary results hold promise. Future studies are planned to expand the scope to larger areas and different stand conditions.

  5. Data distribution architecture based on standard real time protocol

    Energy Technology Data Exchange (ETDEWEB)

    Castro, R. [Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense No. 22, 28040 Madrid (Spain)], E-mail: rodrigo.castro@ciemat.es; Vega, J.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense No. 22, 28040 Madrid (Spain)

    2009-06-15

    The data distribution architecture (DDAR) has been designed to conform to new requirements, taking into account the type of data that will be generated by experiments in the International Thermonuclear Experimental Reactor (ITER). The main goal of this architecture is to implement a system that is able to manage on line all data being generated by an experiment, supporting its distribution for processing, storage, analysis or visualization. The first objective is to have a distribution architecture that supports long pulse experiments (even hours). The described system is able to distribute, using the real time protocol (RTP), stored data or live data generated while the experiment is running. It enables researchers to access data on line instead of waiting for the end of the experiment. Another important objective is scalability, so the presented architecture can easily grow according to actual needs, simplifying estimation and design tasks. A third important objective is security. In this sense, the architecture is based on standards, so complete security mechanisms can be applied, from secure transmission solutions to elaborate access control policies, and it is fully compatible with multi-organization federation systems such as PAPI or Shibboleth.

  6. Updating OSHA standards based on national consensus standards. final rule; confirmation of effective date.

    Science.gov (United States)

    2008-03-14

    OSHA is confirming the effective date of its direct final rule that revises a number of standards for general industry that refer to national consensus standards. The direct final rule states that it would become effective on March 13, 2008 unless OSHA receives significant adverse comment on these revisions by January 14, 2008. OSHA received no adverse comments by that date and, therefore, is confirming that the rule will become effective on March 13, 2008.

  7. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc…) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard-compliant Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also targeted maximum computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed); these comments are parsed by means of the free software doxygen, producing high-quality HTML and LaTeX documentation pages; the distributed versioning system referred as git

  8. Finite element limit analysis based plastic limit pressure solutions for cracked pipes

    International Nuclear Information System (INIS)

    Shim, Do Jun; Huh, Nam Su; Kim, Yun Jae; Kim, Young Jin

    2002-01-01

    Based on detailed FE limit analyses, the present paper provides tractable approximations for plastic limit pressure solutions for axial through-wall cracked pipe; axial (inner) surface cracked pipe; circumferential through-wall cracked pipe; and circumferential (inner) surface cracked pipe. Comparisons with existing analytical and empirical solutions show a large discrepancy in circumferential short through-wall cracks and in surface cracks (both axial and circumferential). Being based on detailed 3-D FE limit analysis, the present solutions are believed to be the most accurate, and thus to be valuable information not only for plastic collapse analysis of pressurised piping but also for estimating non-linear fracture mechanics parameters based on the reference stress approach

  9. A fast image encryption system based on chaotic maps with finite precision representation

    International Nuclear Information System (INIS)

    Kwok, H.S.; Tang, Wallace K.S.

    2007-01-01

    In this paper, a fast chaos-based image encryption system with a stream cipher structure is proposed. In order to achieve a fast throughput and facilitate hardware realization, 32-bit precision representation with fixed point arithmetic is assumed. The major core of the encryption system is a pseudo-random keystream generator based on a cascade of chaotic maps, serving the purpose of sequence generation and random mixing. Unlike other existing chaos-based pseudo-random number generators, the proposed keystream generator not only achieves a very fast throughput, but also passes the statistical tests of an up-to-date test suite even under quantization. The overall design of the image encryption system is explained, while a detailed cryptanalysis is given and compared with some existing schemes
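
    The abstract does not specify the particular maps or mixing rule, so the following Python sketch only illustrates the general idea of a keystream generator built from a cascade of chaotic maps evaluated in 32-bit fixed-point arithmetic; the logistic maps, the Q0.32 format and the XOR mixing are illustrative assumptions rather than the authors' design.

```python
def fixed_point_logistic(x, r, frac_bits=32):
    """One logistic-map step x <- r*x*(1-x) in unsigned Q0.32 fixed point."""
    one = 1 << frac_bits
    # Multiply in integer arithmetic and rescale back to Q0.32.
    x_next = (r * x >> frac_bits) * (one - x) >> frac_bits
    return x_next & (one - 1)

def keystream(seed_states, r_params, n_bytes, frac_bits=32):
    """Cascade of logistic maps; states are mixed by XOR to form keystream bytes."""
    states = list(seed_states)
    out = bytearray()
    for _ in range(n_bytes):
        mixed = 0
        for i, r in enumerate(r_params):
            states[i] = fixed_point_logistic(states[i], r, frac_bits)
            mixed ^= states[i]                             # random mixing of the cascade outputs
        out.append((mixed >> (frac_bits - 8)) & 0xFF)      # take the top 8 bits
    return bytes(out)

if __name__ == "__main__":
    one = 1 << 32
    ks = keystream(seed_states=[int(0.37 * one), int(0.71 * one)],
                   r_params=[int(3.99 * one), int(3.97 * one)],
                   n_bytes=16)
    ciphertext = bytes(p ^ k for p, k in zip(b"example plaintext"[:16], ks))
    print(ciphertext.hex())
```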

  10. A Novel Fuzzing Method for Zigbee Based on Finite State Machine

    OpenAIRE

    Baojiang Cui; Shurui Liang; Shilei Chen; Bing Zhao; Xiaobing Liang

    2014-01-01

    With the extensive application of Zigbee, several studies have been devoted to finding vulnerabilities of Zigbee by fuzzing. According to earlier test records, the majority of defects were exposed by a series of test cases. However, the context of malformed inputs is not taken into account in the previous algorithms. In this paper, we propose a refined structure-based fuzzing algorithm for Zigbee based on FSM, FSM-fuzzing. Any malformed input in FSM-Fuzzing is injected to the t...
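
    The truncated abstract outlines the idea of injecting malformed inputs while tracking the protocol's finite state machine. A minimal, generic Python sketch of that idea follows; the toy state machine, the byte-flip mutator and the send/encode callbacks are hypothetical placeholders standing in for the Zigbee protocol model and test harness of the paper.

```python
import random

# Hypothetical protocol FSM: state -> {valid_message: next_state}.
FSM = {
    "idle":     {"beacon_request": "scanning"},
    "scanning": {"association_request": "joining"},
    "joining":  {"data": "joined"},
    "joined":   {"data": "joined", "leave": "idle"},
}

def mutate(message: bytes) -> bytes:
    """Produce a malformed variant of a well-formed message (random byte flips)."""
    data = bytearray(message)
    for _ in range(random.randint(1, 3)):
        data[random.randrange(len(data))] ^= random.randrange(1, 256)
    return bytes(data)

def fsm_fuzz(send, encode, rounds=100):
    """Walk the FSM with valid messages, injecting a malformed input at each state.

    `send(state, payload)` should deliver the payload to the device under test and
    return True if it stayed responsive; `encode(name)` turns a message name into
    bytes. Both are assumptions standing in for a real test harness.
    """
    crashes = []
    for _ in range(rounds):
        state = "idle"
        while state in FSM and FSM[state]:
            name, next_state = random.choice(list(FSM[state].items()))
            malformed = mutate(encode(name))
            if not send(state, malformed):          # context-aware malformed input
                crashes.append((state, malformed))
            send(state, encode(name))               # then advance with the valid message
            state = next_state
            if state == "joined":
                break
    return crashes

if __name__ == "__main__":
    # Stand-in harness: every payload is "accepted"; real use would talk to a device.
    report = fsm_fuzz(send=lambda s, p: True, encode=lambda n: n.encode())
    print(f"{len(report)} suspicious responses")
```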

  11. Mixed finite element simulations in two-dimensional groundwater flow problems

    International Nuclear Information System (INIS)

    Kimura, Hideo

    1989-01-01

    A computer code for groundwater flow in two-dimensional porous media based on the mixed finite element method was developed for accurate approximation of Darcy velocities in the safety evaluation of radioactive waste disposal. The mixed finite element procedure solves for both the Darcy velocities and pressure heads simultaneously in the Darcy equation and continuity equation. Numerical results of a single well pumping at a constant rate in a uniform flow field showed that the mixed finite element method reduces the average error in the Darcy velocities by nearly 50% compared with the standard finite element method. (author)

  12. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  13. X-ray based micromechanical finite element modeling of composite materials

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Emerson, Monica Jane; Jespersen, Kristine Munk

    2016-01-01

    This is a study of a uni-directional non-crimp fabric reinforced epoxy composite material typically used as the load carrying laminate in wind turbine blades. Based on a 3D xray tomography scan, the bundle and fibre/matrix structure of the composite is segmented. This segmentation is used...

  14. Finite-Element Model-Based Design Synthesis of Axial Flux PMBLDC Motors

    DEFF Research Database (Denmark)

    Fasil, Muhammed; Mijatovic, Nenad; Jensen, Bogi Bech

    2016-01-01

    of a unique solution. The designer can later select a design, based on comparing parameters of the designs, which are critical to the application in which the motor will be used. The presented approach makes it easier to define constraints for a design synthesis problem. A detailed description of the setting up...

  15. In-vivo assessment of femoral bone strength using Finite Element Analysis (FEA based on routine MDCT imaging: a preliminary study on patients with vertebral fractures.

    Directory of Open Access Journals (Sweden)

    Hans Liebl

    Full Text Available To experimentally validate a non-linear finite element analysis (FEA) modeling approach for assessing in-vitro fracture risk at the proximal femur and to transfer the method to standard in-vivo multi-detector computed tomography (MDCT) data of the hip, aiming to predict additional hip fracture risk in subjects with and without osteoporosis-associated vertebral fractures, using bone mineral density (BMD) measurements as the gold standard. One fresh-frozen human femur specimen was mechanically tested and fractured, simulating stance and clinically relevant fall loading configurations to the hip. After experimental in-vitro validation, the FEA simulation protocol was transferred to standard contrast-enhanced in-vivo MDCT images to calculate individual hip fracture risk for each of 4 subjects with and without a history of osteoporotic vertebral fractures, matched by age and gender. In addition, FEA-based risk factor calculations were compared to manual femoral BMD measurements of all subjects. In-vitro simulations showed good correlation with the experimentally measured strains both in stance (R2 = 0.963) and fall configuration (R2 = 0.976). The simulated maximum stress overestimated the experimental failure load (4743 N) by 14.7% (5440 N), while the simulated maximum strain overestimated it by 4.7% (4968 N). The simulated failed elements coincided precisely with the experimentally determined fracture locations. BMD measurements in subjects with a history of osteoporotic vertebral fractures did not differ significantly from subjects without fragility fractures (femoral head: p = 0.989; femoral neck: p = 0.366), but showed higher FEA-based risk factors for additional incident hip fractures (p = 0.028). FEA simulations were successfully validated by elastic and destructive in-vitro experiments. In the subsequent in-vivo analyses, MDCT-based FEA risk factor differences for additional hip fractures were not mirrored by the corresponding BMD measurements. Our data suggest that MDCT

  16. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    Science.gov (United States)

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make it particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
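
    The abstract contrasts matrix-free and conventional preconditioned conjugate gradient designs. The small NumPy sketch below illustrates the matrix-free idea: the solver only needs a function that applies the system matrix, never the assembled matrix itself. The operator used here is a simple 1D Laplacian stand-in, not the nonlinear FEM tangent from the paper.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Matrix-free CG: `apply_A(x)` returns A @ x without ever forming A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def laplacian_1d(x):
    """Stand-in operator: tridiagonal 1D Laplacian applied without assembling it."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

if __name__ == "__main__":
    b = np.ones(100)
    x = conjugate_gradient(laplacian_1d, b)
    print(np.linalg.norm(laplacian_1d(x) - b))  # residual should be ~0
```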

  17. Implementation of fatigue model for unidirectional laminate based on finite element analysis: theory and practice

    Directory of Open Access Journals (Sweden)

    D. Carrella-Payan

    2016-10-01

    Full Text Available The aim of this study is to deal with the simulation of intralaminar fatigue damage in unidirectional composites under multi-axial and variable amplitude loadings. The variable amplitude and multi-axial loading are accounted for by using the damage hysteresis operator based on the Brokate method [6]. The proposed damage model for fatigue is based on stiffness degradation laws from Van Paepegem combined with the ‘damage’ cycle jump approach, extended to deal with unidirectional carbon fibres. The parameter identification method is presented here and parameter sensitivities are discussed. The initial static damage of the material is accounted for by using the Ladevèze damage model, and the permanent shear strain accumulation is based on Van Paepegem’s formulation. This approach is implemented into commercial software (Siemens PLM. The validation case is run on a bending test coupon (with arbitrary stacking sequence and load level in order to minimise the risk of inter-laminar damage. This intra-laminar fatigue damage model combines efficient methods with a low number of tests to identify the parameters of the stiffness degradation law; this overall procedure for fatigue life prediction is demonstrated to be cost-efficient at the industrial level. This work concludes with the next challenges to be addressed (validation tests, multiple-loading validation, failure criteria, inter-laminar damages….

  18. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [UNIV DE LYON]; Vassilevski, Yuri [Los Alamos National Laboratory]

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.

  19. Validation of a Methodology to Predict Micro-Vibrations Based on Finite Element Model Approach

    Science.gov (United States)

    Soula, Laurent; Rathband, Ian; Laduree, Gregory

    2014-06-01

    This paper presents the second part of the ESA R&D study called "METhodology for Analysis of structure-borne MICro-vibrations" (METAMIC). After defining an integrated analysis and test methodology to help predict micro-vibrations [1], a full-scale validation test campaign has been carried out. It is based on a bread-board representative of a typical spacecraft (S/C) platform, consisting of a versatile structure made of aluminium sandwich panels equipped with different disturbance sources and a dummy payload made of a silicon carbide (SiC) bench. The bread-board has been instrumented with a large set of sensitive accelerometers and tests have been performed, including background noise measurement, modal characterization and micro-vibration tests. The results provided responses to the perturbation coming from a reaction wheel or cryo-cooler compressors, operated independently and then simultaneously with different operation modes. Using consistent modelling and associated experimental characterization techniques, a correlation status has been assessed by comparing test results with predictions based on the FEM approach. Very good results have been achieved, particularly for the case of a wheel in sweeping rate operation, with test results over-predicted within a reasonable margin of less than a factor of two. Some limitations of the methodology have also been identified for sources operating at a fixed rate or coming with a small number of dominant harmonics, and recommendations have been issued in order to deal with model uncertainties and stay conservative.

  20. Thermoelastic Stress Field Investigation of GaN Material for Laser Lift-off Technique based on Finite Element Method

    International Nuclear Information System (INIS)

    Ting, Wang; Zhan-Zhong, Cui; Li-Xin, Xu

    2009-01-01

    The transient thermoelastic stress fields of GaN films are analyzed by the finite element method for the laser lift-off (LLO) technique. Stress distributions in GaN films irradiated by a pulsed laser with different energy densities are simulated as functions of time and depth. The results show that the high thermoelastic stress distributions in GaN films localize within about 1 μm below the GaN/Al2O3 interface using proper laser parameters. It is also found that GaN films can avoid thermal deformation because the maximum thermoelastic stress of 4.28 GPa is much smaller than the yield strength of GaN (15 GPa). The effects of laser beam dimension and the thickness of GaN films on the stress distribution are also analyzed. The variation range of laser beam dimension as a function of the thickness of GaN films is simulated to keep the GaN films free of thermal deformation. LLO experiments are also carried out. GaN-based light-emitting diodes (LEDs) are separated from sapphire substrates using the parameters obtained from the simulation. Compared with devices before LLO, P–I–V measurements of GaN-based LEDs after LLO show that the electrical and optical characteristics improve greatly, indicating that no stress damage is introduced to GaN films when using the proper parameters obtained by calculation during LLO

  1. A Multiscale Finite Element Model Validation Method of Composite Cable-Stayed Bridge Based on Structural Health Monitoring System

    Directory of Open Access Journals (Sweden)

    Rumian Zhong

    2015-01-01

    Full Text Available A two-step response surface method for multiscale finite element model (FEM) updating and validation is presented with respect to the Guanhe Bridge, a composite cable-stayed bridge on National Highway G15 in China. Firstly, the state equations of both the multiscale and single-scale FEM are established based on the basic equation of structural dynamics to update the multiscale coupling parameters and structural parameters. Secondly, based on the measured data from the structural health monitoring (SHM) system, a Monte Carlo simulation is employed to analyze the uncertainty quantification and transmission, where the uncertainties of the multiscale FEM and measured data were considered. The results indicate that the relative errors between the calculated and measured frequencies are less than 2%, and the overlap ratio indexes of each modal frequency are larger than 80% without considering the average absolute value of relative errors. These results demonstrate that the proposed method can be applied to validate the multiscale FEM, and the validated FEM can reflect the current conditions of the real bridge; thus it can be used as the basis for bridge health monitoring, damage prognosis (DP) and safety prognosis (SP).

  2. 78 FR 65932 - Updating OSHA Standards Based on National Consensus Standards; Signage

    Science.gov (United States)

    2013-11-04

    ...; Signage AGENCY: Occupational Safety and Health Administration (OSHA), Department of Labor. ACTION... accompanied its direct final rule revising its signage standards for general industry and construction. DATES... proposed rule (NPRM) along with the direct final rule (DFR) (see 78 FR 35585) updating its signage...

  3. 77 FR 3503 - Guam Military Base Realignment Contractor Recruitment Standards

    Science.gov (United States)

    2012-01-24

    ... Contractor Recruitment Standards AGENCY: Employment and Training Administration, Labor. ACTION: Notice... issuing this notice to announce the recruitment standards that construction contractors are required to... contractors engaged in construction projects related to the realignment of U.S. military forces from Okinawa...

  4. Adapting standards to the site. Example of Seismic Base Isolation

    International Nuclear Information System (INIS)

    Viallet, Emmanuel

    2014-01-01

    Emmanuel Viallet, Civil Design Manager at EDF engineering center SEPTEN, concluded the morning's lectures with a presentation on how to adapt a standard design to site characteristics. He presented the example of the seismic isolation of the Cruas NPP for which the standard 900 MW design was indeed built on 'anti-seismic pads' to withstand local seismic load

  5. Design Concepts of Polycarbonate-Based Intervertebral Lumbar Cages: Finite Element Analysis and Compression Testing

    Directory of Open Access Journals (Sweden)

    J. Obedt Figueroa-Cavazos

    2016-01-01

    Full Text Available This work explores the viability of 3D printed intervertebral lumbar cages based on biocompatible polycarbonate (PC-ISO®) material. Several design concepts are proposed for the generation of patient-specific intervertebral lumbar cages. The 3D printed material achieved a compressive yield strength of 55 MPa under a specific combination of manufacturing parameters. The literature recommends a reference load of 4,000 N for the design of intervertebral lumbar cages. Under compression testing conditions, the proposed design concepts withstand between 7,500 and 10,000 N of load before yielding. Although some stress concentration regions were found during the analysis, the overall viability of the proposed design concepts was validated.

  6. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside digitalized systems plays a very important role because it may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for some safety-critical software have limitations caused by difficulties in developing input sets in the form of trajectories, i.e. series of successive values of variables. To address these limitations, this study proposed another method which conducts the test using combinations of single values of variables. To substitute combinations of variables for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, logical relations between variables, plant dynamics under certain situations, and the characteristics of obtaining information from digital devices are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic

  7. Groebner bases for finite-temperature quantum computing and their complexity

    International Nuclear Information System (INIS)

    Crompton, P. R.

    2011-01-01

    Following the recent approach of using order domains to construct Groebner bases from general projective varieties, we examine the parity and time-reversal arguments relating to the Wightman axioms of quantum field theory and propose that the definition of associativity in these axioms should be introduced a posteriori to the cluster property in order to generalize the anyon conjecture for quantum computing to indefinite metrics. We then show that this modification, which we define via ideal quotients, does not admit a faithful representation of the Braid group, because the generalized twisted inner automorphisms that we use to reintroduce associativity are only parity invariant for the prime spectra of the exterior algebra. We then use a coordinate prescription for the quantum deformations of toric varieties to show how a faithful representation of the Braid group can be reconstructed and argue that for a degree reverse lexicographic (monomial) ordered Groebner basis, the complexity class of this problem is bounded quantum polynomial.
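
    As a concrete illustration of the object named in the complexity statement, the short SymPy snippet below computes a Groebner basis under a degree reverse lexicographic monomial ordering; the example ideal is arbitrary and the physics content of the paper is not reproduced.

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")

# An arbitrary example ideal; not related to the quantum field theory setting above.
ideal = [x**2 + y*z - 2, x*y + z**2 - 1]

# Groebner basis under the degree reverse lexicographic ("grevlex") monomial ordering.
gb = groebner(ideal, x, y, z, order="grevlex")
print(gb)
```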

  8. A finite element based substructuring procedure for design analysis of large smart structural systems

    International Nuclear Information System (INIS)

    Ashwin, U; Raja, S; Dwarakanathan, D

    2009-01-01

    A substructuring-based design analysis procedure is presented for large smart structural systems using the Craig–Bampton method. The smart structural system is distinctively characterized as an active substructure, modelled as a design problem, and a passive substructure, idealized as an analysis problem. Furthermore, a novel idea has been applied by introducing the electro–elastic coupling into the reduction scheme to solve the global structural control problem in a local domain. As an illustration, a smart composite box beam with surface-bonded actuators/sensors is considered, and results of the local-to-global control analysis are presented to show the potential use of the developed procedure. The present numerical scheme is useful for optimally designing the active substructures to study their locations and coupled structure–actuator interaction, and provides a solution to the global design of large smart structural systems
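
    For readers unfamiliar with the reduction scheme named in the abstract, a compact NumPy sketch of the classical Craig–Bampton reduction is given below; the boundary/interior partitioning, the toy matrices and the number of retained modes are illustrative, and the electro–elastic coupling extension proposed in the paper is not included.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction: keep boundary DOFs plus fixed-interface modes.

    K, M     : full stiffness and mass matrices
    boundary : indices of boundary (interface) DOFs retained physically
    n_modes  : number of fixed-interface normal modes to keep
    Returns reduced (K_r, M_r) and the transformation matrix T.
    """
    n = K.shape[0]
    interior = np.setdiff1d(np.arange(n), boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # Static constraint modes: interior response to unit boundary displacements.
    psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition.
    _, phi = eigh(Kii, Mii)
    phi = phi[:, :n_modes]

    # Assemble the transformation u = T @ [u_boundary; modal coordinates].
    T = np.zeros((n, len(boundary) + n_modes))
    T[boundary, :len(boundary)] = np.eye(len(boundary))
    T[interior, :len(boundary)] = psi
    T[interior, len(boundary):] = phi

    return T.T @ K @ T, T.T @ M @ T, T

if __name__ == "__main__":
    # Toy 6-DOF spring-mass chain as a stand-in for a real FE model.
    n = 6
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.eye(n)
    K_r, M_r, T = craig_bampton(K, M, boundary=np.array([0, n - 1]), n_modes=2)
    print(K_r.shape)  # (4, 4): 2 boundary DOFs + 2 modes
```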

  9. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT.

    Science.gov (United States)

    Park, Justin C; Li, Jonathan G; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-01

    The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the sizes of the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of different sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the MLC rounded edge and transmission. The root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm2 square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm2 beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm2, where the RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm2 beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a volumetric modulated arc

  10. Generalized finite elements

    International Nuclear Information System (INIS)

    Wachspress, E.

    2009-01-01

    Triangles and rectangles are the ubiquitous elements in finite element studies. Only these elements admit polynomial basis functions. Rational functions provide a basis for elements having any number of straight and curved sides. Numerical complexities initially associated with rational bases precluded extensive use. Recent analysis has reduced these difficulties and programs have been written to illustrate effectiveness. Although incorporation in major finite element software requires considerable effort, there are advantages in some applications which warrant implementation. An outline of the basic theory and of recent innovations is presented here. (authors)

  11. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks

    NARCIS (Netherlands)

    Arbabi, Vahid; Pouran, B; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-01-01

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element

  12. The Determining Finite Automata Process

    Directory of Open Access Journals (Sweden)

    M. S. Vinogradova

    2017-01-01

    Full Text Available The theory of formal languages widely uses finite state automata, both in the implementation of the automata-based approach to programming and in the synthesis of logical control algorithms. To ensure unambiguous operation of the algorithms, the synthesized finite state automata must be deterministic. Within the approach to the synthesis of mobile robot controls based, for example, on the theory of formal languages, there are problems concerning the construction of various finite automata, but such finite automata, as a rule, will not be deterministic. The determinization algorithm can be applied to finite automata specified in various ways. The basic ideas of the determinization algorithm can be explained most simply using the representation of a finite automaton in the form of a weighted directed graph. The paper deals with finite automata represented as weighted directed graphs and discusses in detail the procedure for determinizing finite automata represented in this way. It gives a detailed description of the determinization algorithm for finite automata. A large number of examples illustrate the capabilities of the determinization algorithm.
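
    As a complement to the graph-based description above, here is a minimal Python implementation of the standard subset-construction (determinization) algorithm for an NFA given as a transition dictionary; the example automaton is an arbitrary illustration, not one from the paper.

```python
from itertools import chain

def determinize(alphabet, delta, start, accepting):
    """Subset construction: build a DFA whose states are sets of NFA states.

    delta maps (state, symbol) -> set of successor states (missing keys = empty set).
    """
    start_set = frozenset([start])
    dfa_delta = {}
    worklist = [start_set]
    seen = {start_set}
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            # Union of all NFA successors of the states in `current` on `symbol`.
            target = frozenset(chain.from_iterable(
                delta.get((q, symbol), ()) for q in current))
            dfa_delta[(current, symbol)] = target
            if target not in seen:
                seen.add(target)
                worklist.append(target)
    dfa_accepting = {s for s in seen if s & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

if __name__ == "__main__":
    # NFA over {a, b} accepting strings that end in "ab".
    nfa = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}, ("q1", "b"): {"q2"}}
    dfa_states, dfa_delta, dfa_start, dfa_acc = determinize(
        alphabet={"a", "b"}, delta=nfa, start="q0", accepting={"q2"})
    print(len(dfa_states), "DFA states,", len(dfa_acc), "accepting")
```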

  13. Finite element analysis of displacement actuator based on giant magnetostrictive thin film

    Directory of Open Access Journals (Sweden)

    Shaopeng Yu

    2018-05-01

    Full Text Available With the rapid development of science and technology, mechanical and electrical equipment is becoming more and more miniaturized. In order to achieve precise control within less than 1 cm3, the giant magnetostrictive thin film has become a research hotspot. A micro-displacement actuator with planar and arc films is designed using the dynamic coupling model, based on the J-A model and the magneto-mechanical effect method, proposed in this paper. The different structures and thicknesses of the films are analyzed with the COMSOL Multiphysics software when current flows through the driving coil. Comparison of the simulation results with the test results shows that the coupling model is accurate and the structure is reliable. At the same time, MATLAB is used to fit the current density-displacement curve, a higher-order equation is obtained, and the feasibility of the design can then be verified. The actuator with the arc structure has the advantages of small volume, fast response, high precision and easy integration, and has broad application prospects in the fields of vibration control, micro-positioning, robotics and so on.
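
    The abstract mentions fitting the current density-displacement curve with a higher-order equation in MATLAB. A NumPy equivalent of that fitting step is sketched below with made-up sample points; the polynomial degree and the numbers are placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical (current density, tip displacement) samples; not the paper's data.
current_density = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])      # A/mm^2
displacement    = np.array([0.0, 0.8, 2.9, 6.1, 10.4, 15.6, 21.5])   # micrometres

# Least-squares fit of a higher-order (here cubic) polynomial, analogous to MATLAB's
# polyfit; numpy.polyfit returns coefficients from highest to lowest degree.
coeffs = np.polyfit(current_density, displacement, deg=3)
fit = np.poly1d(coeffs)

print("coefficients:", np.round(coeffs, 4))
print("predicted displacement at 1.8 A/mm^2:", round(fit(1.8), 2))
```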

  14. Finite element based stress analysis of BWR internals exposed to accident loads

    Energy Technology Data Exchange (ETDEWEB)

    Altstadt, E; Weiss, F P; Werner, M; Willschuetz, H G

    1998-10-01

    During a hypothetical accident the reactor pressure vessel internals of boiling water reactors can be exposed to considerable loads resulting from temperature gradients and pressure waves. Three dimensional FE models were developed for the core shroud, the upper and the lower core supporting structure, the steam separator pipes and the feed water distributor. The models of core shroud, upper core structure and lower core structure were coupled by means of the substructure technique. All FE models can be used for thermal and for structural mechanical analyses. As an example the FE analysis for the case of a station black-out scenario (loss of power supply for the main circulating pumps) with subsequent emergency core cooling is demonstrated. The transient temperature distributions within the core shroud and within the steam dryer pipes as well were calculated based on the fluid temperatures and the heat transfer coefficients provided by thermo-hydraulic codes. At the maximum temperature gradients in the core shroud, the mechanical stress distribution was computed in a static analysis with the actual temperature field being the load. (orig.)

  15. New finite element-based modeling of reactor core support plate failure

    Energy Technology Data Exchange (ETDEWEB)

    Pandazis, Peter; Lovasz, Liviusz [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Garching (Germany). Forschungszentrum]; Babcsany, Boglarka [Budapest Univ. of Technology and Economics, Budapest (Hungary). Inst. of Nuclear Techniques]; Hajas, Tamas

    2017-12-15

    ATHLET-CD is the severe accident module of the code system AC2 that is designed to simulate core degradation phenomena, including fission product release and transport in the reactor circuit, as well as the late-phase processes in the lower plenum. In case of a severe accident, degradation of the reactor core occurs and the fuel assemblies start to melt. The evolution of such processes is usually accompanied by the failure of the core support plate and relocation of the molten core to the lower plenum. Currently, the criterion for the failure of the support plate applied by ATHLET-CD is a user-defined signal, which can be a specific time or a process variable like mass, temperature, etc. A new method, based on the FEM approach, was developed that could lead in the future to a more realistic criterion for the failure of the core support plate. This paper presents the basic idea and theory of this new method as well as preliminary verification calculations and an outlook on the planned future development.

  16. Finite-element modelling of physics-based hillslope hydrology, Keith Beven, and beyond

    Science.gov (United States)

    Loague, Keith; Ebel, Brian A.

    2016-01-01

    Keith Beven is a voice of reason on the intelligent use of models and the subsequent acknowledgement/assessment of the uncertainties associated with environmental simulation. With several books and hundreds of papers, Keith's work is widespread, well known, and highly referenced. Four of Keith's most notable contributions are the iconic TOPMODEL (Beven and Kirkby, 1979), classic papers on macropores and preferential flow (Beven and Germann, 1982, 2013), two editions of the rainfall-runoff modelling bible (Beven, 2000a, 2012), and the selection/commentary for the first volume from the Benchmark Papers in Hydrology series (Beven, 2006b). Remarkably, the thirty-one papers in his benchmark volume, entitled Streamflow Generation Processes, are not tales of modelling wizardry but describe measurements designed to better understand the dynamics of near-surface systems (quintessential Keith). The impetus for this commentary is Keith's PhD research (Beven, 1975), where he developed a new finite-element model and conducted concept-development simulations based upon the processes identified by, for example, Richards (1931), Horton (1933), Hubbert (1940), Hewlett and Hibbert (1963), and Dunne and Black (1970a,b). Readers not familiar with the different mechanisms of streamflow generation are referred to Dunne (1978).

  17. Finite element analysis of displacement actuator based on giant magnetostrictive thin film

    Science.gov (United States)

    Yu, Shaopeng; Wang, Bowen; Zhang, Changgeng; Cui, Baozhi

    2018-05-01

    With the rapid development of science and technology, mechanical and electrical equipment is becoming more and more miniaturized. In order to achieve precise control within less than 1 cm3, the giant magnetostrictive thin film has become a research hotspot. A micro-displacement actuator with planar and arc films is designed using the dynamic coupling model, based on the J-A model and the magneto-mechanical effect method, proposed in this paper. The different structures and thicknesses of the films are analyzed with the COMSOL Multiphysics software when current flows through the driving coil. Comparison of the simulation results with the test results shows that the coupling model is accurate and the structure is reliable. At the same time, MATLAB is used to fit the current density-displacement curve, a higher-order equation is obtained, and the feasibility of the design can then be verified. The actuator with the arc structure has the advantages of small volume, fast response, high precision and easy integration, and has broad application prospects in the fields of vibration control, micro-positioning, robotics and so on.

  18. Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets

    Directory of Open Access Journals (Sweden)

    Nicolas Robitaille

    2012-01-01

    Full Text Available Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, while we think that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine the intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned at 7 different sites. The ADNI dataset consisted of 795 subjects scanned at more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
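
    The core of STI as described is a per-tissue intensity correspondence derived from joint intensity histograms of the input and standard images. The NumPy sketch below shows one simple way such a joint-histogram-based mapping could be built; the binning, the mode-based correspondence rule and the synthetic images are illustrative assumptions, not the published STI algorithm.

```python
import numpy as np

def joint_histogram_mapping(moving, standard, mask, bins=64):
    """Estimate an intensity mapping from a joint histogram inside a tissue mask.

    For each intensity bin of the moving image, the mapped value is the standard-image
    intensity bin with the highest joint count (a simple mode-based correspondence).
    """
    m = moving[mask].ravel()
    s = standard[mask].ravel()
    hist, m_edges, s_edges = np.histogram2d(m, s, bins=bins)
    s_centers = 0.5 * (s_edges[:-1] + s_edges[1:])
    # Mapping: moving-intensity bin -> most likely corresponding standard intensity.
    mapping = s_centers[np.argmax(hist, axis=1)]
    # Apply the mapping to the whole moving image by bin lookup.
    idx = np.clip(np.digitize(moving, m_edges) - 1, 0, bins - 1)
    return mapping[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    standard = rng.normal(100.0, 10.0, size=(64, 64))
    moving = 0.5 * standard + 30.0 + rng.normal(0.0, 1.0, size=(64, 64))  # scanner shift
    tissue_mask = np.ones_like(standard, dtype=bool)
    corrected = joint_histogram_mapping(moving, standard, tissue_mask)
    print(abs(corrected.mean() - standard.mean()))  # much smaller than the raw offset
```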

  19. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    KAUST Repository

    Jiang, Lijian

    2009-10-02

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information is expensive. In this paper, we propose the use of approximate global information based on partial upscaling. A requirement for partial homogenization is to capture long-range (non-local) effects present in the fine-scale solution, while homogenizing some of the smallest scales. The local information at these smallest scales is captured in the computation of basis functions. Thus, the proposed approach allows us to avoid the computations at the scales that can be homogenized. This results in coarser problems for the computation of global fields. We analyze the convergence of the proposed method. Mathematical formalism is introduced, which allows estimating the errors due to small scales that are homogenized. The proposed method is applied to simulate two-phase flows in heterogeneous porous media. Numerical results are presented for various permeability fields, including those generated using two-point correlation functions and channelized permeability fields from the SPE Comparative Project (Christie and Blunt, SPE Reserv Evalu Eng 4:308-317, 2001). We consider simple cases where one can identify the scales that can be homogenized. For more general cases, we suggest the use of upscaling on the coarse grid with the size smaller than the target coarse grid where multiscale basis functions are constructed. This intermediate coarse grid renders a partially upscaled solution that contains essential non-local information. Numerical examples demonstrate that the use of approximate global information provides better accuracy than purely local multiscale methods. © 2009 Springer Science+Business Media B.V.

  20. Radiological Evaluation Standards in the Radiology Department of Shahid Beheshti Hospital (RAH) YASUJ Based on Radiology standards in 92

    OpenAIRE

    A Kalantari; SAM Khosravani

    2014-01-01

    Background & aim: The performance and safety of radiology personnel's work are among the most important factors for increasing quality and quantity. This study aimed to evaluate the radiological standards in Shahid Beheshti Hospital of Yasuj, Iran, in 2013. Methods: The present cross-sectional study was based on 118 randomly selected radiographs and a ranking checklist, and was performed twice with full knowledge of the radiology standards. Data were analyzed using descri...

  1. Efficiency of Management Systems, Based on International Standards

    Directory of Open Access Journals (Sweden)

    Elena B. Gafforova

    2012-03-01

    Full Text Available The article considers major trends of management systems standardization development and efficiency. The authors determine possible structure of effects in the process of integrated management systems implementation.

  2. Bending analysis of embedded nanoplates based on the integral formulation of Eringen's nonlocal theory using the finite element method

    Science.gov (United States)

    Ansari, R.; Torabi, J.; Norouzzadeh, A.

    2018-04-01

    Due to the capability of Eringen's nonlocal elasticity theory to capture the small length scale effect, it is widely used to study the mechanical behaviors of nanostructures. Previous studies have indicated that in some cases, the differential form of this theory cannot correctly predict the behavior of structure, and the integral form should be employed to avoid obtaining inconsistent results. The present study deals with the bending analysis of nanoplates resting on elastic foundation based on the integral formulation of Eringen's nonlocal theory. Since the formulation is presented in a general form, arbitrary kernel functions can be used. The first order shear deformation plate theory is considered to model the nanoplates, and the governing equations for both integral and differential forms are presented. Finally, the finite element method is applied to solve the problem. Selected results are given to investigate the effects of elastic foundation and to compare the predictions of integral nonlocal model with those of its differential nonlocal and local counterparts. It is found that by the use of proposed integral formulation of Eringen's nonlocal model, the paradox observed for the cantilever nanoplate is resolved.

  3. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    Science.gov (United States)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

    This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using the particle swarm optimization (PSO) algorithm combined with an analysis of the system hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time and provides good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
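
    The GMS friction parameters are identified with PSO against hysteresis data. A generic, minimal PSO loop of the kind that could drive such an identification is sketched below in Python; the objective function here is a stand-in quadratic, not the friction-model fitting problem of the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a vector-valued parameter identification."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions (candidate params)
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    # Stand-in objective: recover "true" parameters [2.0, -1.0, 0.5] from a quadratic.
    true = np.array([2.0, -1.0, 0.5])
    best, err = pso(lambda p: float(np.sum((p - true) ** 2)), bounds=[(-5, 5)] * 3)
    print(np.round(best, 3), err)
```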

  4. Three nonlinear performance relationships in the start-up state of IPMC strips based on finite element analysis

    International Nuclear Information System (INIS)

    Peng, Han Min; Ding, Qing Jun; Hui, Yao; Li, Hua Feng; Zhao, Chun Sheng

    2010-01-01

    Ionic polymer–metal composites (IPMC) are a class of electroactive polymers (EAP), and they currently attract numerous researchers to study their performance characteristics and applications. However, research on their start-up characteristics still requires more attention. In the IPMC start-up state (the moment an actuation voltage is applied at the very beginning), the mechanical performance differs from that in the stable working state (after working for at least 10 min). Therefore, this paper focuses on three performance relationships of an IPMC strip, between its maximal tip deformation and voltage, its maximal stress and voltage, and its maximal strain and voltage, in both states. Different from other reports, we found that they present nonlinear tendencies in the start-up state rather than linear ones. Therefore, based on the equivalent bimorph beam model, a finite element electromechanical coupling calculation module in the ANSYS software was utilized to simulate these characteristics. Furthermore, a test system is introduced to validate the phenomena. As a whole, these three relationships and the FEA method may be beneficial for providing effective control strategies for IPMC actuators, especially in their start-up states

  5. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    Science.gov (United States)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities, and it has an impact on tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two steps of calculation, for the analysis of heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D finite element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first one is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first-step calculation, such as contact pressure, sliding velocity distributions and contact area. Comparisons, on the one hand, between experimental data and the first calculation and, on the other hand, between temperatures measured with embedded thermocouples and the second calculation show good agreement in terms of cutting forces, chip morphology and cutting temperature.

  6. Adaptive twisting sliding mode algorithm for hypersonic reentry vehicle attitude control based on finite-time observer.

    Science.gov (United States)

    Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun

    2018-06-01

    This paper focuses on the adaptive twisting sliding mode control for the Hypersonic Reentry Vehicles (HRVs) attitude tracking issue. The HRV attitude tracking model is transformed into the error dynamics in matched structure, whereas an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. Hence, an adaptive finite-time observer is used to estimate the unknown state. Then, an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. Also, the developed control law avoids the aggressive chattering phenomenon of the existing adaptive twisting approaches because the adaptive gains decrease close to the disturbance once the trajectories reach the sliding surface. Finally, numerical simulations on the attitude control of the HRV are conducted to verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Characterization of Aircraft Structural Damage Using Guided Wave Based Finite Element Analysis for In-Flight Structural Health Management

    Science.gov (United States)

    Seshadri, Banavara R.; Krishnamurthy, Thiagarajan; Ross, Richard W.

    2016-01-01

    The development of multidisciplinary Integrated Vehicle Health Management (IVHM) tools will enable accurate detection, diagnosis and prognosis of damage under normal and adverse conditions during flight. The adverse conditions include loss of control caused by environmental factors, actuator and sensor faults or failures, and structural damage conditions. A major concern is the growth of undetected damage/cracks due to fatigue and low velocity foreign object impact that can reach a critical size during flight, resulting in loss of control of the aircraft. To avoid unstable catastrophic propagation of damage during a flight, load levels must be maintained that are below the load-carrying capacity for damaged aircraft structures. Hence, a capability is needed for accurate real-time predictions of safe load carrying capacity for aircraft structures with complex damage configurations. In the present work, a procedure is developed that uses guided wave responses to interrogate damage. As the guided wave interacts with damage, the signal attenuates in some directions and reflects in others. This results in a difference in signal magnitude as well as phase shifts between signal responses for damaged and undamaged structures. Accurate estimation of damage size and location is made by evaluating the cumulative signal responses at various pre-selected sensor locations using a genetic algorithm (GA) based optimization procedure. The damage size and location is obtained by minimizing the difference between the reference responses and the responses obtained by wave propagation finite element analysis of different representative cracks, geometries and sizes.

  8. Multicomplementary operators via finite Fourier transform

    International Nuclear Information System (INIS)

    Klimov, Andrei B; Sanchez-Soto, Luis L; Guise, Hubert de

    2005-01-01

    A complete set of d + 1 mutually unbiased bases exists in a Hilbert space of dimension d, whenever d is a power of a prime. We discuss a simple construction of d + 1 disjoint classes (each one having d - 1 commuting operators) such that the corresponding eigenstates form sets of unbiased bases. Such a construction works properly for prime dimension. We investigate an alternative construction in which the real numbers that label the classes are replaced by a finite field having d elements. One of these classes is diagonal, and can be mapped to cyclic operators by means of the finite Fourier transform, which allows one to understand complementarity in a similar way as for the position-momentum pair in standard quantum mechanics. The relevant examples of two and three qubits and two qutrits are discussed in detail
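
    For prime dimensions the d + 1 mutually unbiased bases mentioned above can be written down explicitly. The NumPy snippet below uses the well-known construction for an odd prime d (the computational basis plus d Fourier-like bases) and checks the unbiasedness condition |<e|f>|^2 = 1/d; it is given purely as an illustration and is not specific to this paper's finite-field treatment.

```python
import numpy as np

def mub_prime(d):
    """Construct d+1 mutually unbiased bases for an odd prime dimension d.

    Bases are returned as a list of d x d arrays whose columns are basis vectors:
    the computational basis plus d bases with components omega**(k*j*j + j*m)/sqrt(d).
    This standard construction assumes d is an odd prime.
    """
    omega = np.exp(2j * np.pi / d)
    bases = [np.eye(d, dtype=complex)]
    for k in range(d):
        B = np.array([[omega ** (k * j * j + j * m) for m in range(d)] for j in range(d)])
        bases.append(B / np.sqrt(d))
    return bases

if __name__ == "__main__":
    d = 5
    bases = mub_prime(d)
    # Check unbiasedness: |<e|f>|^2 = 1/d for vectors taken from different bases.
    overlaps = [abs(np.vdot(bases[a][:, 0], bases[b][:, 1])) ** 2
                for a in range(d + 1) for b in range(d + 1) if a != b]
    print(len(bases), "bases; overlaps approx 1/d:", np.allclose(overlaps, 1 / d))
```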

  9. [Accreditation of clinical laboratories based on ISO standards].

    Science.gov (United States)

    Kawai, Tadashi

    2004-11-01

    The International Organization for Standardization (ISO) has published two international standards (IS) to be used for the accreditation of clinical laboratories: ISO/IEC 17025:1999 and ISO 15189:2003. Any laboratory accreditation body must satisfy the requirements stated in ISO/IEC Guide 58. In order to maintain the quality of laboratory accreditation bodies worldwide, the International Laboratory Accreditation Cooperation (ILAC) has established the mutual recognition arrangement (MRA). In Japan, the International Accreditation Japan (IAJapan) and the Japan Accreditation Board for Conformity Assessment (JAB) are members of the ILAC/MRA group. In 2003, the Japanese Committee for Clinical Laboratory Standards (JCCLS) and the JAB established the Development Committee of the Clinical Laboratory Accreditation Program (CLAP) in order to establish the CLAP, probably starting in 2005.

  10. OPTIMIZATION OF THE TEMPERATURE CONTROL SCHEME FOR ROLLER COMPACTED CONCRETE DAMS BASED ON FINITE ELEMENT AND SENSITIVITY ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Huawei Zhou

    2016-10-01

    Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC dams that couples the finite element method (FEM with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith based on the actual characteristics of a RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.

  11. Finite Element Based Pelvic Injury Metric Creation and Validation in Lateral Impact for a Human Body Model.

    Science.gov (United States)

    Weaver, Caitlin; Baker, Alexander; Davis, Matthew; Miller, Anna; Stitzel, Joel D

    2018-02-20

    Pelvic fractures are serious injuries resulting in high mortality and morbidity. The objective of this study is to develop and validate local pelvic anatomical, cross-section-based injury risk metrics for a finite element (FE) model of the human body. Cross-sectional instrumentation was implemented in the pelvic region of the Global Human Body Models Consortium (GHBMC M50-O) 50th percentile detailed male FE model (v4.3). In total, 25 lateral impact FE simulations were performed using input data from cadaveric lateral impact tests performed by Bouquet et al. The experimental force-time data were scaled using five normalization techniques, which were evaluated using log rank, Wilcoxon rank sum, and correlation and analysis (CORA) testing. Survival analyses with a Weibull distribution were performed on the experimental peak force (scaled and unscaled) and the simulation test data to generate injury risk curves (IRCs) for total pelvic injury. Additionally, IRCs were developed for regional injury using cross-sectional forces from the simulation results and injuries documented in the experimental autopsies. These regional IRCs were also evaluated using receiver operator characteristic (ROC) curve analysis. Based on the results of all the evaluation methods, the Equal Stress Equal Velocity (ESEV) and ESEV using effective mass (ESEV-EM) scaling techniques performed best. The simulation IRC shows slight under-prediction of injury in comparison to these scaled experimental data curves. However, this difference was determined not to be statistically significant. Additionally, the ROC curve analysis showed moderate predictive power for all regional IRCs.
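
    Survival analysis with a Weibull distribution was used to turn peak forces and injury outcomes into injury risk curves. A simplified sketch of fitting a two-parameter Weibull IRC by maximum likelihood to exact (injury) and right-censored (no injury) force observations is given below with made-up numbers; the data and the reduction of the survival model to a simple censored MLE are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Hypothetical peak-force data (kN): injured specimens (exact) and uninjured (censored).
injured_force = np.array([5.1, 6.3, 4.8, 7.2, 5.9])
censored_force = np.array([3.2, 4.0, 4.5, 3.8])

def neg_log_likelihood(params):
    """Weibull likelihood: density for injuries, survival function for censored tests."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    ll = weibull_min.logpdf(injured_force, shape, scale=scale).sum()
    ll += weibull_min.logsf(censored_force, shape, scale=scale).sum()
    return -ll

result = minimize(neg_log_likelihood, x0=[2.0, 5.0], method="Nelder-Mead")
shape_hat, scale_hat = result.x

# Injury risk curve: probability of injury as a function of peak force.
forces = np.array([3.0, 5.0, 7.0])
print(np.round(weibull_min.cdf(forces, shape_hat, scale=scale_hat), 3))
```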

  12. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Song, T; Jia, X; Gu, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D set of scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam’s dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Doses for four field sizes (6*6cm2, 10*10cm2, 15*15cm2 and 20*20cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
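
    The commissioning tool fits the kernel profile parameters to a measured broad-beam penumbra with the Levenberg-Marquardt algorithm. A toy version of that fitting step, using an error-function penumbra model, is sketched below; the penumbra model, parameter names and synthetic data are illustrative assumptions rather than the abstract's actual kernel.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def penumbra_model(x, params):
    """Toy broad-beam profile: flat field of half-width w blurred by a Gaussian kernel
    of width sigma (an assumed stand-in for the FSPB kernel profile parameters)."""
    w, sigma = params
    return 0.5 * (erf((x + w) / (np.sqrt(2) * sigma)) - erf((x - w) / (np.sqrt(2) * sigma)))

def residuals(params, x, measured):
    return penumbra_model(x, params) - measured

if __name__ == "__main__":
    # Synthetic "measured" 10 cm field profile with a 4 mm penumbra width and noise.
    x = np.linspace(-80.0, 80.0, 161)                      # off-axis position (mm)
    true_params = np.array([50.0, 4.0])
    rng = np.random.default_rng(1)
    measured = penumbra_model(x, true_params) + rng.normal(0.0, 0.005, x.size)

    # Levenberg-Marquardt fit of the profile parameters, as in the commissioning step.
    fit = least_squares(residuals, x0=[45.0, 6.0], args=(x, measured), method="lm")
    print(np.round(fit.x, 2))   # should be close to [50.0, 4.0]
```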

  13. A risk standard based on societal cost with bounded consequences

    International Nuclear Information System (INIS)

    Worledge, D.H.

    1982-01-01

    A risk standard is proposed that relates the frequency of occurrence of single events to the consequences of the events. Maximum consequences and risk aversion are used to give the cumulative risk curve a shape similar to the results of a risk assessment and to bound the expectation of deaths. Societal costs in terms of deaths are used to fix the parameters of the model, together with an approximate comparison with individual risks. The proposed standard is compared with some practical applications of risk assessment to nuclear reactor systems.

  14. Developing a community-based flood resilience measurement standard

    Science.gov (United States)

    Keating, Adriana; Szoenyi, Michael; Chaplowe, Scott; McQuistan, Colin; Campbell, Karen

    2015-04-01

    Given the increased attention to resilience-strengthening in international humanitarian and development work, there has been concurrent interest in its measurement and in the overall accountability of "resilience strengthening" initiatives. The literature is reaching beyond the polemic of defining resilience to its measurement. Similarly, donors increasingly expect organizations to go beyond claiming resilience programming to measuring and showing it. However, key questions must be asked, in particular "Resilience of whom and to what?". There is no one-size-fits-all solution. The approach to measuring resilience depends on the audience and the purpose of the measurement exercise. Deriving a resilience measurement system needs to be based on the question it seeks to answer and needs to be specific. This session highlights key lessons from the Zurich Flood Resilience Alliance approach to developing a flood resilience measurement standard to measure and assess the impact of community-based flood resilience interventions, and to inform decision-making to enhance the effectiveness of these interventions. We draw on experience in methodology development to date, together with lessons from application in two case study sites in Latin America. Attention will be given to the use of a consistent measurement methodology for community resilience to floods over time and place; challenges to measuring a complex and dynamic phenomenon such as community resilience; methodological implications of measuring community resilience versus impact on and contribution to this goal; and using measurement and tools such as cost-benefit analysis to prioritize and inform strategic decision-making for resilience interventions. The measurement tool follows the five categories of the Sustainable Livelihoods Framework and the 4Rs of complex adaptive systems (robustness, rapidity, redundancy and resourcefulness), together referred to as 5C-4R. A recent white paper by the Zurich Flood Resilience Alliance traces the

  15. 77 FR 53769 - Receipts-Based, Small Business Size Standard; Confirmation of Effective Date

    Science.gov (United States)

    2012-09-04

    ... Flexibility Act of 1980, as amended. The NRC is increasing its receipts-based, small business size standard from $6.5 million to $7 million to conform to the standard set by the Small Business Administration.

  16. DNA origami-based standards for quantitative fluorescence microscopy.

    Science.gov (United States)

    Schmied, Jürgen J; Raab, Mario; Forthmann, Carsten; Pibiri, Enrico; Wünsch, Bettina; Dammeyer, Thorben; Tinnefeld, Philip

    2014-01-01

    Validating and testing a fluorescence microscope or a microscopy method requires defined samples that can be used as standards. DNA origami is a new tool that provides a framework to place defined numbers of small molecules such as fluorescent dyes or proteins in a programmed geometry with nanometer precision. The flexibility and versatility in the design of DNA origami microscopy standards makes them ideally suited for the broad variety of emerging super-resolution microscopy methods. As DNA origami structures are durable and portable, they can become a universally available specimen to check the everyday functionality of a microscope. The standards are immobilized on a glass slide, and they can be imaged without further preparation and can be stored for up to 6 months. We describe a detailed protocol for the design, production and use of DNA origami microscopy standards, and we introduce a DNA origami rectangle, bundles and a nanopillar as fluorescent nanoscopic rulers. The protocol provides procedures for the design and realization of fluorescent marks on DNA origami structures, their production and purification, quality control, handling, immobilization, measurement and data analysis. The procedure can be completed in 1-2 d.

  17. Lack of research-based standards for accessible housing

    DEFF Research Database (Denmark)

    Helle, Tina; Brandt, Åse; Slaug, Bjørn

    2011-01-01

    openings at the entrance (defined as ≥75 cm) implied that the proportion of dwellings not meeting it was 11.3%, compared to 64.4% if the standard was set to ≥83 cm. The proportion of individuals defined as having accessibility problems was 4-5% for profiles not using mobility devices, and 57% for profiles using...

  18. Phase transitions in finite systems

    Energy Technology Data Exchange (ETDEWEB)

    Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), DSM-CEA / IN2P3-CNRS, 14 - Caen (France); Gulminelli, F. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire

    2002-07-01

    In this series of lectures we will first review the general theory of phase transitions in the framework of information theory and briefly address some of the well-known mean-field solutions of three-dimensional problems. The theory of phase transitions in finite systems will then be discussed, with special emphasis on the conceptual problems linked to a thermodynamical description of small, short-lived, open systems such as metal clusters and data samples coming from nuclear collisions. The concept of negative heat capacity developed in the early seventies in the context of self-gravitating systems will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first-order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally, a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)

  19. Phase transitions in finite systems

    International Nuclear Information System (INIS)

    Chomaz, Ph.; Gulminelli, F.

    2002-01-01

    In this series of lectures we will first review the general theory of phase transitions in the framework of information theory and briefly address some of the well-known mean-field solutions of three-dimensional problems. The theory of phase transitions in finite systems will then be discussed, with special emphasis on the conceptual problems linked to a thermodynamical description of small, short-lived, open systems such as metal clusters and data samples coming from nuclear collisions. The concept of negative heat capacity developed in the early seventies in the context of self-gravitating systems will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first-order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally, a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)

  20. The Galerkin Finite Element Method for A Multi-term Time-Fractional Diffusion equation

    OpenAIRE

    Jin, Bangti; Lazarov, Raytcho; Liu, Yikan; Zhou, Zhi

    2014-01-01

    We consider the initial/boundary value problem for a diffusion equation involving multiple time-fractional derivatives on a bounded convex polyhedral domain. We analyze a space semidiscrete scheme based on the standard Galerkin finite element method using continuous piecewise linear functions. Nearly optimal error estimates for both the initial data and the inhomogeneous term are derived, covering both smooth and nonsmooth data. Further, we develop a fully discrete scheme based on a finite...
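
    For orientation only (generic notation, not taken from the paper), a multi-term time-fractional diffusion problem of the kind analyzed here can be written as

    $$ \partial_t^{\alpha_1} u + \sum_{i=2}^{m} b_i\,\partial_t^{\alpha_i} u - \Delta u = f \ \ \text{in } \Omega\times(0,T], \qquad u = 0 \ \text{on } \partial\Omega\times(0,T], \qquad u(\cdot,0) = u_0, $$

    where $0<\alpha_m<\dots<\alpha_1<1$ are the orders of the Caputo fractional derivatives and $b_i>0$; the Galerkin scheme discretizes the spatial operator with continuous piecewise linear finite elements.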

  1. Osteoporosis imaging: effects of bone preservation on MDCT-based trabecular bone microstructure parameters and finite element models

    International Nuclear Information System (INIS)

    Baum, Thomas; Grande Garcia, Eduardo; Burgkart, Rainer; Gordijenko, Olga; Liebl, Hans; Jungmann, Pia M.; Gruber, Michael; Zahel, Tina; Rummeny, Ernst J.; Waldt, Simone; Bauer, Jan S.

    2015-01-01

    Osteoporosis is defined as a skeletal disorder characterized by compromised bone strength due to a reduction of bone mass and deterioration of bone microstructure predisposing an individual to an increased risk of fracture. Trabecular bone microstructure analysis and finite element models (FEM) have been shown to improve the prediction of bone strength beyond bone mineral density (BMD) measurements. These computational methods have been developed and validated in specimens preserved in formalin solution or by freezing. However, little is known about the effects of preservation on trabecular bone microstructure and FEM. The purpose of this observational study was to investigate the effects of preservation on trabecular bone microstructure and FEM in human vertebrae. Four thoracic vertebrae were harvested from each of three fresh human cadavers (n = 12). Multi-detector computed tomography (MDCT) images were obtained at baseline and at 3- and 6-month follow-up. In the intervals between MDCT imaging, two vertebrae from each donor were formalin-fixed and frozen, respectively. BMD, trabecular bone microstructure parameters (histomorphometry and fractal dimension), and the FEM-based apparent compressive modulus (ACM) were determined from the MDCT images and validated by mechanical testing to failure of the vertebrae after 6 months. Changes of BMD, trabecular bone microstructure parameters, and FEM-based ACM in formalin-fixed and frozen vertebrae over 6 months ranged between 1.0–5.6% and 1.3–6.1%, respectively, and were not statistically significant (p > 0.05). BMD, trabecular bone microstructure parameters, and FEM-based ACM as assessed at baseline and at 3- and 6-month follow-up correlated significantly with the mechanically determined failure load (r = 0.89–0.99; p < 0.05). The correlation coefficients r were not significantly different for the two preservation methods (p > 0.05). Formalin fixation and freezing for up to six months showed no significant effects on trabecular bone microstructure

  2. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    International Nuclear Information System (INIS)

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-01-01

    Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets representing an arbitrary field shape no longer need to be infinitesimal or identical. As a result, it is possible to represent an arbitrary field shape with a combination of different-sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Results: Root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm2 square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm2 beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm2, where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm2 beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmission without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a
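
    As a side note, the RMSE figure of merit quoted above can be computed directly from two dose grids; a minimal sketch follows, with normalization to the reference maximum chosen here as one plausible convention (array names are illustrative).

```python
# Minimal sketch: RMSE between a test dose grid and a reference (TPS) grid,
# reported as a percentage of the reference maximum dose.
import numpy as np

def rmse_percent(dose_test: np.ndarray, dose_ref: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((dose_test - dose_ref) ** 2))
    return 100.0 * rmse / dose_ref.max()
```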

  3. 3D Finite Element Modelling of Non-Crimp Fabric Based Fibre Composite Based on X-Ray CT Data

    DEFF Research Database (Denmark)

    Jespersen, Kristine Munk; Asp, Leif; Mikkelsen, Lars Pilgaard

    2017-01-01

    initiation and progression in the material. In the current study, the real bundle structure inside a non-crimp fabric based fibre composite is extracted from 3D X-ray CT images and imported into ABAQUS for numerical modelling. The local stress concentrations when loaded in tension caused by the fibre bundle...

  4. A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    A proxy blind signature scheme is a special form of blind signature which allows a designated person, called a proxy signer, to sign on behalf of original signers without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number theory problems: discrete logarithm and bilinear pairings. Unfortunately, these underlying number theory problems will be solvable in the postquantum era. Lattice-based cryptography is enjoying great interest these days, due to implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.

  5. Development of a 3D finite element acoustic model to predict the sound reduction index of stud based double-leaf walls

    Science.gov (United States)

    Arjunan, A.; Wang, C. J.; Yahiaoui, K.; Mynors, D. J.; Morgan, T.; Nguyen, V. B.; English, M.

    2014-11-01

    Building standards incorporating quantitative acoustical criteria to ensure adequate sound insulation are now being implemented. Engineers are making great efforts to design acoustically efficient double-wall structures. Accordingly, efficient simulation models to predict the acoustic insulation of double-leaf wall structures are needed. This paper presents the development of a numerical tool that can predict the frequency-dependent sound reduction index R of stud-based double-leaf walls over the one-third-octave band frequency range. A fully vibro-acoustic 3D model, consisting of two rooms partitioned by a double-leaf wall and considering the structure-acoustic fluid coupling with the existing fluid and structural solvers, is presented. The validity of the finite element (FE) model is assessed by comparison with experimental test results carried out in a certified laboratory. Accurate representation of the structural damping matrix to effectively predict the R values is studied. The possibility of minimising the simulation time using a frequency-dependent mesh model was also investigated. The FEA model presented in this work is capable of predicting the weighted sound reduction index Rw, along with the A-weighted pink noise term C and the A-weighted urban noise term Ctr, within an error of 1 dB. The model developed can also be used to analyse the acoustically induced frequency-dependent geometrical behaviour of the double-leaf wall components to optimise them for best acoustic performance. The FE modelling procedure reported in this paper can be extended to other building components undergoing fluid-structure interaction (FSI) to evaluate their acoustic insulation.
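
    For context, the sound reduction index used in such simulations is conventionally defined from the incident and transmitted sound powers (a standard definition, not a detail specific to this paper):

    $$ R = 10\,\log_{10}\!\left(\frac{W_{\text{incident}}}{W_{\text{transmitted}}}\right) = -10\,\log_{10}\tau \quad \text{[dB]}, $$

    evaluated per one-third-octave band; the single-number rating Rw and the spectrum adaptation terms C and Ctr are then obtained with the weighting procedure of ISO 717-1.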

  6. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    Science.gov (United States)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for assessing the reliability of wheel-set assembly. In the past, most production enterprises have mainly used manual inspection methods to judge the quality of assembly, and cases of misjudgment occur. For this reason, research on the standard is carried out, and the automatic judgment of the press-fit curve is analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  7. Locally Finite Root Supersystems

    OpenAIRE

    Yousofzadeh, Malihe

    2013-01-01

    We introduce the notion of locally finite root supersystems as a generalization of both locally finite root systems and generalized root systems. We classify irreducible locally finite root supersystems.

  8. System for digitalization of medical images based on DICOM standard

    Directory of Open Access Journals (Sweden)

    Čabarkapa Slobodan

    2009-01-01

    According to the DICOM standard, which defines both medical image information and user information, a new system for digitalizing medical images is introduced as part of the main system for archiving and retrieving medical databases. The basic characteristics of this system are described in this paper. Furthermore, an analysis of some important DICOM header tags used in this system is also presented. By choosing the appropriate tags in order to preserve the important information, an efficient system has been created.
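
    As a rough illustration of the kind of header-tag handling described above (not the cited system itself), reading a few DICOM tags with the pydicom library might look like this; the file name and the particular tags are chosen for illustration only.

```python
# Minimal sketch: reading selected DICOM header tags with pydicom.
import pydicom

ds = pydicom.dcmread("example_image.dcm")
print(ds.PatientID)                   # patient information
print(ds.Modality, ds.StudyDate)      # study-level metadata
print(ds.Rows, ds.Columns)            # image matrix size
pixels = ds.pixel_array               # decoded pixel data (requires NumPy)
```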

  9. Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803 1983

    CERN Document Server

    MacKinlay, Alistair F

    1983-01-01

    The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant Safety Standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical mode...

  10. A General Finite Element Scheme for Limit State Analysis and Optimization

    DEFF Research Database (Denmark)

    Damkilde, Lars

    1999-01-01

    Limit State analysis, which is based on a perfect material behaviour, is used in many different applications, primarily within Structural Engineering and Geotechnics. The calculation methods have not reached the same level of automation as Finite Element Analysis for elastic structures. The computer-based systems are more ad hoc and are typically not well integrated with the pre- and postprocessors well known from commercial Finite Element codes. A finite element based formulation of limit state analysis is presented which allows an easy integration with standard Finite Element codes for elastic analysis. In this way the user is able to perform a limit state analysis on the same model used for elastic analysis, only adding data for the yield surface. The method is based on the lower-bound theorem and uses stress-based elements with a linearized yield surface. The mathematical problem
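
    Read as an optimization problem, the lower-bound formulation sketched above leads to a linear program of the following generic form (notation assumed for illustration, not quoted from the paper):

    $$ \max_{\lambda,\;\boldsymbol{\sigma}} \ \lambda \quad \text{subject to} \quad \mathbf{H}\boldsymbol{\sigma} = \lambda\,\mathbf{p}, \qquad \mathbf{A}\boldsymbol{\sigma} \le \mathbf{b}, $$

    where the vector σ collects the element stress parameters, H expresses equilibrium, p is the reference load, and Aσ ≤ b is the linearized yield surface; the optimal λ is the limit load multiplier.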

  11. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the

  12. Finite element and finite difference methods in electromagnetic scattering

    CERN Document Server

    Morgan, MA

    2013-01-01

    This second volume in the Progress in Electromagnetic Research series examines recent advances in computational electromagnetics, with emphasis on scattering, as brought about by new formulations and algorithms which use finite element or finite difference techniques. Containing contributions by some of the world's leading experts, the papers thoroughly review and analyze this rapidly evolving area of computational electromagnetics. Covering topics ranging from the new finite-element based formulation for representing time-harmonic vector fields in 3-D inhomogeneous media using two coupled sca

  13. Mercury Atomic Frequency Standards for Space Based Navigation and Timekeeping

    Science.gov (United States)

    Tjoelker, R. L.; Burt, E. A.; Chung, S.; Hamell, R. L.; Prestage, J. D.; Tucker, B.; Cash, P.; Lutwak, R.

    2012-01-01

    A low-power Mercury Atomic Frequency Standard (MAFS) has been developed and demonstrated on the path towards future space clock applications. A self-contained mercury ion breadboard clock, emulating flight clock interfaces, steering a USO local oscillator, and consuming approximately 40 W, has been operating at JPL for more than a year. This complete, modular ion clock instrument demonstrates that key GNSS size, weight, and power (SWaP) requirements can be achieved while still maintaining the short- and long-term performance demonstrated in previous ground ion clocks. The MAFS breadboard serves as a flexible platform for optimizing further space clock development and guides engineering model design trades towards fabrication of an ion clock for space flight.

  14. A program for constructing finitely presented Lie algebras and superalgebras

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Kornyak, V.V.

    1997-01-01

    The purpose of this paper is to describe a C program, FPLSA, for investigating finitely presented Lie algebras and superalgebras. The underlying algorithm is based on constructing the complete set of relations, also called a standard basis or Groebner basis, of the ideal of the free Lie (super)algebra generated by the input set of relations. The program may be used, in particular, to compute the Lie (super)algebra basis elements and its structure constants, to classify finitely presented algebras depending on the values of parameters in the relations, and to construct the Hilbert series. These problems are illustrated by examples. (orig.)
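
    The free Lie (super)algebra computation itself is noncommutative, but the flavor of completing a set of relations into a standard (Groebner) basis can be illustrated with the commutative polynomial analogue in SymPy; this is purely illustrative and unrelated to the FPLSA code.

```python
# Minimal sketch: Groebner basis of a commutative polynomial ideal (SymPy).
# FPLSA works with free Lie (super)algebras, which require noncommutative
# standard bases; this example only conveys the general idea.
from sympy import groebner, symbols

x, y = symbols("x y")
relations = [x**2 + y, x*y - 1]
basis = groebner(relations, x, y, order="lex")
print(list(basis))
```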

  15. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function

  16. A Risk and Standards Based Approach to Quality Assurance in Australia's Diverse Higher Education Sector

    Science.gov (United States)

    Australian Government Tertiary Education Quality and Standards Agency, 2015

    2015-01-01

    The Australian Government Tertiary Education Quality and Standards Agency's (TEQSA's) role is to assure that quality standards are being met by all registered higher education providers. This paper explains how TEQSA's risk-based approach to assuring higher education standards is applied in broad terms to a diverse sector. This explanation is…

  17. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  18. Study on a Threat-Countermeasure Model Based on International Standard Information

    Directory of Open Access Journals (Sweden)

    Guillermo Horacio Ramirez Caceres

    2008-12-01

    Many international standards exist in the field of IT security. This research is based on the ISO/IEC 15408, 15446, 19791, 13335 and 17799 standards. In this paper, we propose a knowledge base comprising a threat-countermeasure model based on international standards for identifying and specifying threats which affect IT environments. In addition, the proposed knowledge base system aims at fusing similar security control policies and objectives in order to create effective security guidelines for specific IT environments. As a result, a knowledge base of security objectives was developed on the basis of the relationships inside the standards as well as the relationships between different standards. In addition, a web application was developed which displays details about the most common threats to information systems, and for each threat presents a set of related security control policies from different international standards, including ISO/IEC 27002.

  19. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
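
    A back-of-the-envelope illustration of the complex-variable idea behind ZFEM follows; this is the generic complex-step derivative trick, not the authors' UEL implementation, and the creep law and constants are illustrative.

```python
# Minimal sketch: complex-step differentiation, the principle underlying
# complex-variable finite element sensitivity methods such as ZFEM.
# f'(x) ~ Im(f(x + i*h)) / h, with no subtractive cancellation, so h can be tiny.
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    return np.imag(f(x + 1j * h)) / h

# Example: sensitivity of a Norton power-law creep rate to stress.
A, n = 1.0e-20, 5.0                       # illustrative material constants
creep_rate = lambda sigma: A * sigma**n
print(complex_step_derivative(creep_rate, 100.0))   # ~ n * A * 100.0**(n - 1)
```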

  20. Finite element method simulating temperature distribution in skin induced by 980-nm pulsed laser based on pain stimulation.

    Science.gov (United States)

    Wang, Han; Dong, Xiao-Xi; Yang, Ji-Chun; Huang, He; Li, Ying-Xin; Zhang, Hai-Xia

    2017-07-01

    For predicting the temperature distribution within skin tissue in 980-nm laser-evoked potential (LEP) experiments, a five-layer finite element model (FEM-5) was constructed based on the Pennes bio-heat conduction equation and the Lambert-Beer law. The prediction results of the FEM-5 model were verified by ex vivo pig skin and in vivo rat experiments. Thirty ex vivo pig skin samples were used to verify the temperature distribution predicted by the model. The output energy of the laser was 1.8, 3, and 4.4 J. The laser spot radius was 1 mm. The experiment time was 30 s. The laser stimulated the surface of the ex vivo pig skin beginning at 10 s and lasted for 40 ms. A thermocouple thermometer was used to measure the temperature of the surface and internal layers of the ex vivo pig skin, and the sampling frequency was set to 60 Hz. For the in vivo experiments, nine adult male Wistar rats weighing 180 ± 10 g were used to verify the prediction results of the model by tail-flick latency. The output energy of the laser was 1.4 and 2.08 J. The pulse width was 40 ms. The laser spot radius was 1 mm. The Pearson product-moment correlation and Kruskal-Wallis test were used to analyze the correlation and the differences in the data. The results of all experiments showed that the measured and predicted data had no significant difference (P > 0.05) and good correlation (r > 0.9). The safe laser output energy range (1.8-3 J) was also predicted. Using the FEM-5 model prediction, the effective pain depth could be accurately controlled, and the nociceptors could be selectively activated. The FEM-5 model can be extended to guide experimental research and clinical applications for humans.
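
    For reference, the Pennes bio-heat equation with a Lambert-Beer laser source term, on which such a model is typically built, reads (generic notation, assumed rather than quoted from the paper):

    $$ \rho c\,\frac{\partial T}{\partial t} = \nabla\!\cdot\!(k\nabla T) + \rho_b c_b \omega_b\,(T_a - T) + Q_m + Q_{\text{laser}}, \qquad Q_{\text{laser}}(z,t) = \mu_a\,I_0(t)\,e^{-\mu_a z}, $$

    where ρ, c and k are the tissue density, specific heat and thermal conductivity, ω_b the blood perfusion rate, T_a the arterial temperature, Q_m the metabolic heat source, μ_a the absorption coefficient and I_0 the incident irradiance.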

  1. Biomechanics of the press-fit phenomenon in dental implantology: an image-based finite element analysis

    Directory of Open Access Journals (Sweden)

    Frisardi Gianni

    2012-05-01

    Background: A fundamental pre-requisite for clinical success in dental implant surgery is fast and stable implant osseointegration. The press-fit phenomenon occurring at implant insertion induces biomechanical effects in the bone tissues, which ensure implant primary stability. In the field of dental surgery, the understanding of the key factors governing the osseointegration process still remains of utmost importance. A thorough analysis of the biomechanics of dental implantology requires a detailed knowledge of bone mechanical properties as well as an accurate definition of the jaw bone geometry. Methods: In this work, a CT image-based approach, combined with the Finite Element Method (FEM), has been used to investigate the effect of the drill size on the biomechanics of the dental implant technique. A very accurate model of the human mandible bone segment has been created by processing high-resolution micro-CT image data. The press-fit phenomenon has been simulated by FE analyses for different common drill diameters (DA = 2.8 mm, DB = 3.3 mm, and DC = 3.8 mm) with depth L = 12 mm. A virtual implant model has been assumed with a cylindrical geometry having height L = 11 mm and diameter D = 4 mm. Results: The maximum stresses calculated for drill diameters DA, DB and DC were 12.31 GPa, 7.74 GPa and 4.52 GPa, respectively. High strain values were measured in the cortical area for the models of diameters DA and DB, while a uniform distribution was observed for the model of diameter DC. The maximum logarithmic strains, calculated in nonlinear analyses, were ϵ = 2.46, 0.51 and 0.49 for the three models, respectively. Conclusions: This study introduces a very powerful, accurate and non-destructive methodology for investigating the effect of the drill size on the biomechanics of the dental implant technique. Further studies could aim at understanding how different drill

  2. Standardized computer-based organized reporting of EEG:SCORE

    DEFF Research Database (Denmark)

    Beniczky, Sandor; H, Aurlien,; JC, Brøgger,

    2013-01-01

    process, organized by the European Chapter of the International Federation of Clinical Neurophysiology. The Standardised Computer-based Organised Reporting of EEG (SCORE) software was constructed based on the terms and features of the consensus statement and it was tested in clinical practice ... in free-text format. The purpose of our endeavor was to create a computer-based system for EEG assessment and reporting, where the physicians would construct the reports by choosing from predefined elements for each relevant EEG feature, as well as the clinical phenomena (for video-EEG recordings) ... SCORE can potentially improve the quality of EEG assessment and reporting; it will help incorporate the results of computer-assisted analysis into the report, it will make possible the build-up of a multinational database, and it will help in training young neurophysiologists.

  3. Quantum fields at finite temperature and density

    International Nuclear Information System (INIS)

    Blaizot, J.P.

    1991-01-01

    These lectures are an elementary introduction to standard many-body techniques applied to the study of quantum fields at finite temperature and density: perturbative expansion, linear response theory, quasiparticles and their interactions, etc. We emphasize the usefulness of the imaginary time formalism in a wide class of problems, as opposed to many recent approaches based on real time. Properties of elementary excitations in an ultrarelativistic plasma at high temperature or chemical potential are discussed, and recent progress in the study of the quark-gluon plasma is briefly reviewed

  4. Ideas: NCTM Standards-Based Instruction, Grades K-4.

    Science.gov (United States)

    Hynes, Michael C., Ed.

    This document is a collection of activity-based mathematics lessons for grades K-4 from the "Ideas" department in "Arithmetic Teacher: Mathematics Education through the Middle Grades." Each lesson includes background information, objectives, directions, extensions, and student worksheets. A matrix is included which correlates…

  5. Ideas: NCTM Standards-Based Instruction, Grades 5-8.

    Science.gov (United States)

    Hynes, Michael C., Ed.

    This document is a collection of activity-based mathematics lessons for grades 5-8 from the "Ideas" department in "Arithmetic Teacher: Mathematics Education through the Middle Grades." Each lesson includes background information, objectives, directions, extensions, and student worksheets. A matrix is included which correlates…

  6. Conceptualizing Teaching to the Test under Standards-Based Reform

    Science.gov (United States)

    Welsh, Megan E.; Eastwood, Melissa; D'Agostino, Jerome V.

    2014-01-01

    Teacher and school accountability systems based on high-stakes tests are ubiquitous throughout the United States and appear to be growing as a catalyst for reform. As a result, educators have increased the proportion of instructional time devoted to test preparation. Although guidelines for what constitutes appropriate and inappropriate test…

  7. Mechanical Performance of Natural / Natural Fiber Reinforced Hybrid Composite Materials Using Finite Element Method Based Micromechanics and Experiments

    OpenAIRE

    Rahman, Muhammad Ziaur

    2017-01-01

    A micromechanical analysis of the representative volume element (RVE) of a unidirectional flax/jute fiber reinforced epoxy composite is performed using finite element analysis (FEA). To do so, first the effective mechanical properties of flax fiber and jute fiber are evaluated numerically and then used in evaluating the effective properties of the flax/jute/epoxy hybrid composite. Mechanics of Structure Genome (MSG), a new homogenization tool developed at Purdue University, is used to calculate the hom...

  8. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique ... The new method proposed is validated using experimental results on two different permanent magnet machines.

  9. Accounting standards and earnings management : The role of rules-based and principles-based accounting standards and incentives on accounting and transaction decisions

    NARCIS (Netherlands)

    Beest, van F.

    2012-01-01

    This book examines the effect that rules-based and principles-based accounting standards have on the level and nature of earnings management decisions. A cherry-picking experiment is conducted to test the hypothesis that a substitution effect is expected from accounting decisions to transaction decisions.

  10. Use of simple finite elements for mechanical systems impact analysis based on stereomechanics, stress wave propagation, and energy method approaches

    International Nuclear Information System (INIS)

    McCoy, Michael L.; Moradi, Rasoul; Lankarani, Hamid M.

    2011-01-01

    This paper examines the effectiveness of analyzing impact events in mechanical systems for design purposes using simple or low-order finite elements. Traditional impact dynamics analyses of mechanical systems, namely the stereomechanics, energy method, stress-wave propagation and contact mechanics approaches, are limited to very simplified geometries and provide only basic analyses for making predictions and understanding the dominant features of an impact in a mechanical system. In engineering practice, impacted systems present a complexity of geometry, stiffness, mass distributions, contact areas and impact angles that is impossible to analyze and design with the traditional impact dynamics methods. In real cases, the effective tool is the finite element (FE) method. High-end FEA codes, however, may not be available to the typical engineer/designer. This paper provides information on whether impact events in mechanical systems can be successfully modeled using simple or low-order finite elements. FEA models using simple elements are benchmarked against theoretical impact problems and published experimental impact results. As a case study, an FE model using simple plastic beam elements is further tested to predict stresses and deflections in an experimental structural impact
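
    As a small illustration of the stereomechanical approach mentioned above, the post-impact velocities of two colliding rigid bodies follow from momentum conservation and the restitution law; these are generic textbook relations with illustrative values, not the paper's case study.

```python
# Minimal sketch: stereomechanical central impact of two rigid bodies.
# Uses conservation of momentum and e = -(v1' - v2') / (v1 - v2).
def central_impact(m1, v1, m2, v2, e):
    dv = v1 - v2
    v1_post = v1 - (1.0 + e) * m2 * dv / (m1 + m2)
    v2_post = v2 + (1.0 + e) * m1 * dv / (m1 + m2)
    return v1_post, v2_post

print(central_impact(m1=2.0, v1=5.0, m2=1.0, v2=0.0, e=0.8))  # illustrative numbers
```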

  11. Density matrix renormalization group simulations of SU(N ) Heisenberg chains using standard Young tableaus: Fundamental representation and comparison with a finite-size Bethe ansatz

    Science.gov (United States)

    Nataf, Pierre; Mila, Frédéric

    2018-04-01

    We develop an efficient method to perform density matrix renormalization group simulations of the SU(N) Heisenberg chain with open boundary conditions taking full advantage of the SU(N) symmetry of the problem. This method is an extension of the method previously developed for exact diagonalizations and relies on a systematic use of the basis of standard Young tableaux. Concentrating on the model with the fundamental representation at each site (i.e., one particle per site in the fermionic formulation), we have benchmarked our results for the ground-state energy up to N = 8 and up to 420 sites by comparing them with Bethe ansatz results on open chains, for which we have derived and solved the Bethe ansatz equations. The agreement for the ground-state energy is excellent for SU(3) (12 digits). It decreases with N, but it is still satisfactory for N = 8 (six digits). Central charges c are also extracted from the entanglement entropy using the Calabrese-Cardy formula and agree with the theoretical values expected from the SU(N) level-1 Wess-Zumino-Witten conformal field theories.
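
    For reference, the Calabrese-Cardy form used to extract the central charge from an open chain of length L is (standard expression, with c'_1 a non-universal constant):

    $$ S(\ell) = \frac{c}{6}\,\ln\!\left[\frac{2L}{\pi}\,\sin\!\left(\frac{\pi\ell}{L}\right)\right] + c'_1, $$

    where S(ℓ) is the entanglement entropy of a block of ℓ consecutive sites with open boundary conditions; fitting the computed S(ℓ) to this form yields c.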

  12. Transformative Shifts in Art History Teaching: The Impact of Standards-Based Assessment

    Science.gov (United States)

    Ormond, Barbara

    2011-01-01

    This article examines pedagogical shifts in art history teaching that have developed as a response to the implementation of a standards-based assessment regime. The specific characteristics of art history standards-based assessment in the context of New Zealand secondary schools are explained to demonstrate how an exacting form of assessment has…

  13. The use of customised versus population-based birthweight standards in predicting perinatal mortality.

    Science.gov (United States)

    Zhang, X; Platt, R W; Cnattingius, S; Joseph, K S; Kramer, M S

    2007-04-01

    The objective of this study was to critically examine potential artifacts and biases underlying the use of 'customised' standards of birthweight for gestational age (GA). Population-based cohort study. Sweden. A total of 782,303 singletons ≥28 weeks of gestation born in 1992-2001 to Nordic mothers with complete data on birthweight; GA; and maternal age, parity, height, and pre-pregnancy weight. We compared perinatal mortality in four groups of infants based on the following classification of small for gestational age (SGA): non-SGA based on either population-based or customised standards (the reference group), SGA based on the population-based standard only, SGA based on the customised standard only, and SGA according to both standards. We used graphical methods to compare GA-specific birthweight cutoffs for SGA using the two standards and also used logistic regression to control for differences in GA and maternal pre-pregnancy body mass index (BMI) in the four groups. Perinatal mortality, including stillbirth and neonatal death. Customisation led to a large artifactual increase in the proportion of SGA infants born preterm. Adjustment for differences in GA and maternal BMI markedly reduced the excess risk among infants classified as SGA by customised standards only. The large increase in perinatal mortality risk among infants classified as SGA based on customised standards is largely an artifact due to inclusion of more preterm births.

  14. Teachers' Perceptions of the Efficacy of Standards-Based IEP Goals

    Science.gov (United States)

    Smith, Traci Nicole

    2013-01-01

    Although standards-based IEP goals have been mandated in many states for almost a decade, their effectiveness is unknown. Standards-based IEP goals were first created to meet the requirements of No Child Left Behind and Individuals with Disabilities Education Improvement Act, which increased accountability for all students as well as those with…

  15. School Sector and Student Achievement in the Era of Standards Based Reforms

    Science.gov (United States)

    Carbonaro, William; Covay, Elizabeth

    2010-01-01

    The authors examine whether standards based accountability reforms of the past two decades have closed the achievement gap among public and private high school students. They analyzed data from the Education Longitudinal Study (ELS) to examine sector differences in high school achievement in the era of standards based reforms. The authors found…

  16. A standard protocol for describing individual-based and agent-based models

    Science.gov (United States)

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.

  17. Standardized Computer-based Organized Reporting of EEG: SCORE

    Science.gov (United States)

    Beniczky, Sándor; Aurlien, Harald; Brøgger, Jan C; Fuglsang-Frederiksen, Anders; Martins-da-Silva, António; Trinka, Eugen; Visser, Gerhard; Rubboli, Guido; Hjalgrim, Helle; Stefan, Hermann; Rosén, Ingmar; Zarubova, Jana; Dobesberger, Judith; Alving, Jørgen; Andersen, Kjeld V; Fabricius, Martin; Atkins, Mary D; Neufeld, Miri; Plouin, Perrine; Marusic, Petr; Pressler, Ronit; Mameniskiene, Ruta; Hopfengärtner, Rüdiger; Emde Boas, Walter; Wolf, Peter

    2013-01-01

    The electroencephalography (EEG) signal has a high complexity, and the process of extracting clinically relevant features is achieved by visual analysis of the recordings. The interobserver agreement in EEG interpretation is only moderate. This is partly due to the method of reporting the findings in free-text format. The purpose of our endeavor was to create a computer-based system for EEG assessment and reporting, where the physicians would construct the reports by choosing from predefined elements for each relevant EEG feature, as well as the clinical phenomena (for video-EEG recordings). A working group of EEG experts took part in consensus workshops in Dianalund, Denmark, in 2010 and 2011. The faculty was approved by the Commission on European Affairs of the International League Against Epilepsy (ILAE). The working group produced a consensus proposal that went through a pan-European review process, organized by the European Chapter of the International Federation of Clinical Neurophysiology. The Standardised Computer-based Organised Reporting of EEG (SCORE) software was constructed based on the terms and features of the consensus statement and it was tested in the clinical practice. The main elements of SCORE are the following: personal data of the patient, referral data, recording conditions, modulators, background activity, drowsiness and sleep, interictal findings, “episodes” (clinical or subclinical events), physiologic patterns, patterns of uncertain significance, artifacts, polygraphic channels, and diagnostic significance. The following specific aspects of the neonatal EEGs are scored: alertness, temporal organization, and spatial organization. For each EEG finding, relevant features are scored using predefined terms. Definitions are provided for all EEG terms and features. SCORE can potentially improve the quality of EEG assessment and reporting; it will help incorporate the results of computer-assisted analysis into the report, it will make

  18. Developing an integrated electronic nursing record based on standards.

    Science.gov (United States)

    van Grunsven, Arno; Bindels, Rianne; Coenen, Chel; de Bel, Ernst

    2006-01-01

    The Radboud University Nijmegen Medical Centre in the Netherlands is developing a multidisciplinary EHR (Electronic Health Record) based on the latest HL7 v3 (Health Level 7 version 3) D-MIM: Care Provision. As part of this process we are trying to establish which nursing diagnoses and activities are minimally required. These NMDS (Nursing Minimal Data Set) items are mapped or translated to the ICF (for diagnoses) and to CEN 1828 structures (for activities). The mappings will be the foundation for the development of user interfaces for the registration of nursing activities. A homegrown, custom-made, web-based configuration tool is used to exploit the possibilities of HL7 v3. This enables a rapid launch of user interfaces that can accommodate the diversity of health care work processes. The first screens will be developed to support history taking for the nursing chart of the Neurology ward. The screens will contain both Dutch NMDS items and ward-specific information.

  19. Development of an international standard on instruments setpoints based on ISA S67.04 - 1994

    International Nuclear Information System (INIS)

    Quinn, E.L.

    1996-01-01

    This is a summary of the application for and development of an international standard on instrument setpoints, based on the Instrument Society of America (ISA) Standard S67.04 - 1994. The forum in which this new standard was proposed is the International Electrotechnical Commission (IEC), based in Geneva, Switzerland, which is the international commission that oversees electrical and instrumentation standards for all applications around the world. The Instrument Society of America (ISA) is a United States-based society for the advancement of instrumentation and controls related science and technology and has 30,000 members. A division within the ISA is the Standards and Practices Board, which has over 5000 members actively involved in standards development and approval. In 1994, the ISA SP67 Nuclear Power Plant Standards Committee authorized that the IEC be approached to develop and issue an IEC standard on instrument setpoints. This application was formally submitted in January 1995 to the IEC and approved for ballot to member countries in June 1995. Approval for standard development by the IEC was received in October 1995, and the first draft was issued in February 1996; it is currently under review by the IEC working group. It is very important to focus on the approach that the U.S. and other countries are taking toward development of IEC standards that can apply to all nuclear instrumentation applications around the world. By referencing IEC standards in design specifications, vendors can be solicited from many different countries, thereby ensuring that the highest quality products can be used. This also offsets the need to specify individual standards in the specification based on the country that each solicited vendor represents. In summary, this standard development process, with support from the American National Standards Institute (ANSI), will assist U.S. suppliers in competing in the global market for products and services into the next century. (author)

  20. Image-Based Macro-Micro Finite Element Models of a Canine Femur with Implant Design Implications

    Science.gov (United States)

    Ghosh, Somnath; Krishnan, Ganapathi; Dyce, Jonathan

    2006-06-01

    In this paper, a comprehensive model of a bone-cement-implant assembly is developed for a canine cemented femoral prosthesis system. Various steps in this development entail profiling the canine femur contours by computed tomography (CT) scanning, computer aided design (CAD) reconstruction of the canine femur from CT images, CAD modeling of the implant from implant blue prints and CAD modeling of the interface cement. Finite element analysis of the macroscopic assembly is conducted for stress analysis in individual components of the system, accounting for variation in density and material properties in the porous bone material. A sensitivity analysis is conducted with the macroscopic model to investigate the effect of implant design variables on the stress distribution in the assembly. Subsequently, rigorous microstructural analysis of the bone incorporating the morphological intricacies is conducted. Various steps in this development include acquisition of the bone microstructural data from histological serial sectioning, stacking of sections to obtain 3D renderings of void distributions, microstructural characterization and determination of properties and, finally, microstructural stress analysis using a 3D Voronoi cell finite element method. Generation of the simulated microstructure and analysis by the 3D Voronoi cell finite element model provides a new way of modeling complex microstructures and correlating to morphological characteristics. An inverse calculation of the material parameters of bone by combining macroscopic experiments with microstructural characterization and analysis provides a new approach to evaluating properties without having to do experiments at this scale. Finally, the microstructural stresses in the femur are computed using the 3D VCFEM to study the stress distribution at the scale of the bone porosity. Significant difference is observed between the macroscopic stresses and the peak microscopic stresses at different locations.

  1. Testing Linear Temporal Logic Formulae on Finite Execution Traces

    Science.gov (United States)

    Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)

    2001-01-01

    We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
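
    The sketch below is only an illustration of what "an LTL formula holds on a finite trace" means, using a naive recursive evaluator in Python; the paper's actual algorithm rewrites formulae efficiently in Maude rather than evaluating them this way, and the trace/formula encodings here are assumptions made for the example.

    ```python
    # Minimal finite-trace LTL evaluator (illustrative only; exponential in the
    # worst case, unlike the optimized rewriting-based algorithm of the paper).

    def holds(formula, trace, i=0):
        """Check whether `formula` holds at position i of a finite trace.
        A trace is a list of sets of atomic propositions; formulae are nested
        tuples such as ('until', p, q), or plain strings for atoms."""
        if i >= len(trace):                      # past the end of the finite trace
            return False
        op = formula[0] if isinstance(formula, tuple) else 'atom'
        if op == 'atom':
            return formula in trace[i]
        if op == 'not':
            return not holds(formula[1], trace, i)
        if op == 'and':
            return holds(formula[1], trace, i) and holds(formula[2], trace, i)
        if op == 'or':
            return holds(formula[1], trace, i) or holds(formula[2], trace, i)
        if op == 'next':                         # X phi: false at the last state
            return holds(formula[1], trace, i + 1) if i + 1 < len(trace) else False
        if op == 'eventually':                   # F phi
            return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
        if op == 'globally':                     # G phi
            return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
        if op == 'until':                        # phi U psi
            return any(holds(formula[2], trace, j) and
                       all(holds(formula[1], trace, k) for k in range(i, j))
                       for j in range(i, len(trace)))
        raise ValueError(f"unknown operator {op!r}")

    # Example: "every request is eventually acknowledged" on a short trace.
    trace = [{'req'}, set(), {'ack'}, {'req', 'ack'}]
    spec = ('globally', ('or', ('not', 'req'), ('eventually', 'ack')))
    print(holds(spec, trace))   # True
    ```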

  2. Simulation of CNT-AFM tip based on finite element analysis for targeted probe of the biological cell

    Energy Technology Data Exchange (ETDEWEB)

    Yousefi, Amin Termeh, E-mail: at.tyousefi@gmail.com; Miyake, Mikio, E-mail: miyakejaist@gmail.com; Ikeda, Shoichiro, E-mail: sho16.ikeda@gmail.com [ChECA IKohza, Dept. Environmental & Green Technology (EGT), Malaysia, Japan International Institute of Technology (MJIIT), University Technology Malaysia - UTM, Kualalumpur (Malaysia); Mahmood, Mohamad Rusop, E-mail: nano@uitm.gmail.com [NANO-SciTech Centre, Institute of Science, Universiti Teknologi MARA (UiTM), Shah Alam, Selangor (Malaysia)

    2016-07-06

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and also their ability to be functionalized by chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in the computational analysis of biological cells. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored by using an output database (ODB). This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis.

  3. Transient finite element magnetic field calculation method in the anisotropic magnetic material based on the measured magnetization curves

    International Nuclear Information System (INIS)

    Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.

    2006-01-01

    Many magnetic materials are anisotropic. In the 3D finite element method calculation, the anisotropy of the material is taken into account. The anisotropic magnetic material is described with magnetization curves for different magnetization directions. A 3D transient calculation of the rotational magnetic field in the circular sample of the round rotational single sheet tester, considering eddy currents, is made and compared with measurements to verify the correctness of the method and to analyze the magnetic field in the sample.

  4. Radiological Evaluation Standards in the Radiology Department of Shahid Beheshti Hospital (Yasuj), Based on Radiology Standards, 2013

    Directory of Open Access Journals (Sweden)

    A َKalantari

    2014-08-01

    Full Text Available Background & aim: The performance and safety of radiology personnel are among the most important factors in increasing the quality and quantity of service. This study aimed to evaluate compliance with radiological standards in Shahid Beheshti Hospital of Yasuj, Iran, in 2013. Methods: The present cross-sectional study was based on 118 randomly selected radiographs and a checklist of the radiology standards, completed twice with full knowledge of those standards. Data were analyzed using descriptive statistics. Results: Cassette selection (87.3%), patient positioning (76.3%), limb positioning (87.3%), central ray alignment (83.9%) and tube-to-patient distance (68.6%) were performed in accordance with standard practice. Significant relationships (p<0.05) were observed between view and cassette, view and SID, sex and patient position, grid and central ray, grid and SID, request and patient positioning, and between density and both patient and limb position. Conclusions: Staff and students performed at a high level, but protection practices were in poor condition. Therefore, in order to promote radiation protection, education and periodic monitoring should be carried out continuously.

  5. An outcome-based approach for the creation of fetal growth standards: do singletons and twins need separate standards?

    Science.gov (United States)

    Joseph, K S; Fahey, John; Platt, Robert W; Liston, Robert M; Lee, Shoo K; Sauve, Reg; Liu, Shiliang; Allen, Alexander C; Kramer, Michael S

    2009-03-01

    Contemporary fetal growth standards are created by using theoretical properties (percentiles) of birth weight (for gestational age) distributions. The authors used a clinically relevant, outcome-based methodology to determine if separate fetal growth standards are required for singletons and twins. All singleton and twin livebirths between 36 and 42 weeks' gestation in the United States (1995-2002) were included, after exclusions for missing information and other factors (n = 17,811,922). A birth weight range was identified, at each gestational age, over which serious neonatal morbidity and neonatal mortality rates were lowest. Among singleton males at 40 weeks, serious neonatal morbidity/mortality rates were lowest between 3,012 g (95% confidence interval (CI): 3,008, 3,018) and 3,978 g (95% CI: 3,976, 3,980). The low end of this optimal birth weight range for females was 37 g (95% CI: 21, 53) less. The low optimal birth weight was 152 g (95% CI: 121, 183) less for twins compared with singletons. No differences were observed in low optimal birth weight by period (1999-2002 vs. 1995-1998), but small differences were observed for maternal education, race, parity, age, and smoking status. Patterns of birth weight-specific serious neonatal morbidity/neonatal mortality support the need for plurality-specific fetal growth standards.
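
    A rough illustration of the outcome-based idea on synthetic stand-in data: bin birth weights within one gestational age stratum, compute the combined morbidity/mortality rate per bin, and report the bins whose rate stays close to the minimum. The risk curve, bin width and 10% tolerance below are invented for the example and are not the authors' values.

    ```python
    # Illustrative sketch only (synthetic data, not the 1995-2002 US birth cohort).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 200_000
    weight = rng.normal(3400, 450, n)                        # birth weight [g], hypothetical
    risk = 0.01 + 0.00000005 * (weight - 3500) ** 2          # U-shaped risk curve (assumed)
    outcome = rng.random(n) < risk                           # serious morbidity/mortality flag

    df = pd.DataFrame({'weight': weight, 'outcome': outcome})
    df['bin'] = (df['weight'] // 250) * 250                  # 250 g bins

    grouped = df.groupby('bin')['outcome'].agg(rate='mean', count='size')
    grouped = grouped[grouped['count'] >= 500]               # ignore sparse tail bins

    # "Optimal" range = bins whose rate is within 10% of the lowest observed rate.
    threshold = grouped['rate'].min() * 1.10
    optimal = grouped.index[grouped['rate'] <= threshold]
    print(f"optimal birth weight range: {optimal.min():.0f}-{optimal.max() + 250:.0f} g")
    ```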

  6. Parametric design of silo steel framework of concrete mixing station based on the finite element method and MATLAB

    Directory of Open Access Journals (Sweden)

    Long Hui

    2016-01-01

    Full Text Available When the structure of the silo steel framework of a concrete mixing station is designed, the dimension, shape and position parameters of the framework beams usually change as the productivity of the mixing station is adjusted, while the structural type of the framework remains the same. In order to obtain the strength of the silo steel framework rapidly and efficiently, specialized parametric strength computation software is needed for engineering staff who are not familiar with three-dimensional CAD software such as Pro/E or with finite element analysis software. Using the finite element method (FEM), a parametric stress calculation model of the silo steel framework of the concrete mixing station is established, which includes the dimension, shape, position and applied load parameters of each beam, and the parametric calculation program is then written in MATLAB. The stress equations reflect the internal relationship between the stress of the silo steel frames and the dimension, shape, position and load parameters. Finally, an example is presented; the calculation results show the stress of all members and the size and location of the maximum stress, which agrees well with realistic cases.
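
    A minimal sketch of what a parametric finite element stress calculation for a small frame can look like, loosely mirroring the idea described above. The geometry, section, load and the four-member pin-jointed (truss) idealization below are invented for illustration; the real silo framework uses beam elements and many more members, and the paper's program is written in MATLAB rather than Python.

    ```python
    # Parametric stress calculation for a small 2D braced panel (illustrative sketch).
    import numpy as np

    def truss_stresses(height, width, area, E, load):
        """Assemble a 2D truss, solve K u = f, and return member stresses."""
        nodes = np.array([[0.0, 0.0], [width, 0.0], [width, height], [0.0, height]])
        elements = [(0, 3), (1, 2), (2, 3), (0, 2)]    # two verticals, top chord, diagonal
        fixed_dofs = [0, 1, 2, 3]                      # nodes 0 and 1 pinned to the ground
        K = np.zeros((8, 8))
        for n1, n2 in elements:
            dx, dy = nodes[n2] - nodes[n1]
            L = np.hypot(dx, dy)
            c, s = dx / L, dy / L
            k = E * area / L * np.outer([-c, -s, c, s], [-c, -s, c, s])
            dofs = [2 * n1, 2 * n1 + 1, 2 * n2, 2 * n2 + 1]
            K[np.ix_(dofs, dofs)] += k                 # assemble element stiffness
        f = np.zeros(8)
        f[2 * 3] = load                                # horizontal load at node 3
        free = [d for d in range(8) if d not in fixed_dofs]
        u = np.zeros(8)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
        stresses = []
        for n1, n2 in elements:
            dx, dy = nodes[n2] - nodes[n1]
            L = np.hypot(dx, dy)
            c, s = dx / L, dy / L
            elong = (u[2 * n2] - u[2 * n1]) * c + (u[2 * n2 + 1] - u[2 * n1 + 1]) * s
            stresses.append(E * elong / L)             # axial stress = E * strain
        return np.array(stresses)

    # Dimension, section and load parameters can be varied freely (values assumed).
    print(truss_stresses(height=3.0, width=2.0, area=1e-3, E=210e9, load=10e3))
    ```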

  7. Stress Distribution in Single Dental Implant System: Three-Dimensional Finite Element Analysis Based on an In Vitro Experimental Model.

    Science.gov (United States)

    Rezende, Carlos Eduardo Edwards; Chase-Diaz, Melody; Costa, Max Doria; Albarracin, Max Laurent; Paschoeto, Gabriela; Sousa, Edson Antonio Capello; Rubo, José Henrique; Borges, Ana Flávia Sanches

    2015-10-01

    This study aimed to analyze the stress distribution in a single implant system and to evaluate the compatibility of an in vitro model with a finite element (FE) model. The in vitro model consisted of a Brånemark implant; a multiunit abutment set of 5 mm height; a metal-ceramic screw-retained crown, and polyurethane simulating the bone. Deformations were recorded in the peri-implant region on the mesial and distal aspects, after an axial 300 N load was applied at the center of the occlusal aspect of the crown, using strain gauges. This in vitro model was scanned with micro-CT to design a three-dimensional FE model, and the strains in the peri-implant bone region were registered to check the compatibility between the two models. The FE model was used to evaluate the stress distribution in different parts of the system. The values obtained from the in vitro model (20-587 με) and the finite element analysis (81-588 με) were in agreement. The highest stresses due to axial and oblique loads, respectively, were 5.83 and 40 MPa for the cortical bone, 55 and 1200 MPa for the implant, and 80 and 470 MPa for the abutment screw. The FE method proved to be effective for evaluating the deformation around a single implant. Oblique loads lead to higher stress concentrations.

  8. Field Strain Measurement on the Fiber Scale in Carbon Fiber Reinforced Polymers Using Global Finite-Element Based Digital Image Correlation

    KAUST Repository

    Tao, Ran

    2015-05-01

    Laminated composites are materials with complex architecture made of continuous fibers embedded within a polymeric resin. The properties of the raw materials can vary from one point to another, for example due to different local processing conditions or complex geometrical features. A first step towards the identification of these spatially varying material parameters is to image with precision the displacement fields in this complex microstructure when subjected to mechanical loading. This thesis aims to accurately measure the displacement and strain fields at the fiber-matrix scale in a cross-ply composite. First, the theories of both local subset-based digital image correlation (DIC) and global finite-element based DIC are outlined. Second, in-situ secondary electron tensile images obtained by scanning electron microscopy (SEM) are post-processed by both DIC techniques. Finally, it is shown that when global DIC is applied with a conformal mesh, it can capture more accurately sharp local variations in the strain fields, as it takes into account the underlying microstructure. In comparison to subset-based local DIC, finite-element based global DIC is better suited for capturing gradients across the fiber-matrix interfaces.

  9. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. [Purdue Univ., West Lafayette, IN (United States)

    1994-12-31

    Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated, along with the convergence of the algorithm. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
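
    To illustrate the structure of a domain decomposition iteration, the sketch below applies an overlapping Schwarz method to a 1D Poisson model problem discretized by finite differences. The subdomain split, sweep count and model problem are assumptions for the example; the paper's wave propagation setting and its algorithm parameter are not reproduced here.

    ```python
    # Overlapping Schwarz iteration for -u'' = f on [0, 1], u(0) = u(1) = 0.
    import numpy as np

    n = 101
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.ones(n)                       # right-hand side
    u = np.zeros(n)                      # initial guess satisfying the boundary data

    # Two overlapping subdomains (index ranges share grid points 40..60).
    subdomains = [(0, 60), (40, n - 1)]

    def solve_subdomain(u, lo, hi):
        """Solve the local Dirichlet problem on grid points lo..hi, using the
        current values of u at lo and hi as interface/boundary data."""
        m = hi - lo - 1
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = f[lo + 1:hi].copy()
        b[0]  += u[lo] / h**2
        b[-1] += u[hi] / h**2
        u[lo + 1:hi] = np.linalg.solve(A, b)

    for sweep in range(50):
        # Sequential (multiplicative) Schwarz; the additive variant solves the
        # subdomains independently, which is what makes the procedure parallel.
        for lo, hi in subdomains:
            solve_subdomain(u, lo, hi)

    u_exact = 0.5 * x * (1.0 - x)        # exact solution of -u'' = 1
    print("max error:", np.abs(u - u_exact).max())
    ```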

  10. HSTL IO Standard Based Energy Efficient Multiplier Design using Nikhilam Navatashcaramam Dashatah on 28nm FPGA

    DEFF Research Database (Denmark)

    Madhok, Shivani; Pandey, Bishwajeet; Kaur, Amanpreet

    2015-01-01

    standards. Frequency scaling is one of the best energy-efficient techniques for FPGA-based VLSI design and is used in this paper. In the end we can conclude that there is a 23-40% saving in total power dissipation by using the SSTL IO standard at 25 degrees Celsius. The main reason for power...... consumption is leakage power at different IO standards and at different frequencies. In this research work only FPGA implementation has been performed, not UltraScale FPGA....

  11. Risk assessment based on current release standards for radioactive surface contamination

    International Nuclear Information System (INIS)

    Chen, S.Y.

    1993-09-01

    Standards for uncontrolled releases of radioactive surface contamination have been in existence in the United States for about two decades. Such standards have been issued by various agencies, including the US Department of Energy. This paper reviews the technical basis of published standards, identifies areas in need of revision, provides risk interpretations based on current technical knowledge and the regulatory environment, and offers suggestions for improvements

  12. Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803: 1983

    International Nuclear Information System (INIS)

    McKinlay, A.F.; Harlen, F.

    1983-10-01

    The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant safety standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical models of damage mechanisms. This report deals in some detail with the mechanisms of injury from overexposure to optical radiation, in particular with the dependency of the type and degree of damage on wavelength, image size and pulse duration. The maximum permissible exposure levels recommended in BS 4803: 1983 are compared with published data for damage thresholds and the adequacy of the standard is discussed. (author)

  13. Virtual standards of vibration-based defects diagnostics in railway industry

    Directory of Open Access Journals (Sweden)

    Vladimir TETTER

    2009-01-01

    Full Text Available The issues related to testing the functionality claimed by producers of vibration-based diagnostic equipment are considered. The introduction of virtual standards of defects found in bearing and geared assemblies of rolling stock is proposed, and variants of realizing such virtual standards are considered.

  14. A finite element method for SSI time history calculation

    International Nuclear Information System (INIS)

    Ni, X.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modelling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given, first to a linear calculation for which results are compared to those obtained by standard methods. Then results for a non-linear behavior are described

  15. A finite element method for SSI time history calculations

    International Nuclear Information System (INIS)

    Ni, X.M.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modelling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method will be presented, then applications will be given, first to a linear calculation for which results will be compared to those obtained by standard methods. Then results for a non-linear behavior will be described

  16. Finite-temperature behavior of mass hierarchies in supersymmetric theories

    International Nuclear Information System (INIS)

    Ginsparg, P.

    1982-01-01

    It is shown that Witten's mechanism for producing a large gauge hierarchy in supersymmetric theories leads to a novel symmetry behavior at finite temperature. The exponentially large expectation value in such models develops at a critical temperature of order of the small (supersymmetry-breaking) scale. The phase transition can proceed without need of vacuum tunnelling. Models based on Witten's mechanism thus require a reexamination of the standard cosmological treatment of grand unified theories. (orig.)

  17. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    Science.gov (United States)

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate the indentation for poroelastic materials with a wide range of combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as the input and provides the properties of cartilage as the output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage could therefore be estimated very fast, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
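
    The surrogate-modelling idea described above can be sketched in a few lines: train a network on simulated force-time curves and use it to read off the material parameters. The synthetic relaxation-like curves, parameter ranges and network size below are invented stand-ins for the paper's poroelastic finite element simulations.

    ```python
    # Illustrative sketch of a neural-network surrogate for inverse indentation analysis.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 100)                     # time samples [s]

    def fake_indentation_curve(E, k):
        """Hypothetical relaxation-like force history, parameterized by a
        stiffness-like parameter E and a permeability-like parameter k."""
        return E * (0.4 + 0.6 * np.exp(-k * t))

    params = rng.uniform([0.5, 0.05], [15.0, 1.0], size=(2000, 2))   # (E, k) pairs
    curves = np.array([fake_indentation_curve(E, k) for E, k in params])
    curves += rng.normal(0.0, 0.01, curves.shape)        # measurement-like noise

    X_train, X_test, y_train, y_test = train_test_split(curves, params,
                                                        test_size=0.25,
                                                        random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)                            # curves in, parameters out
    print("R^2 on held-out curves:", net.score(X_test, y_test))
    ```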

  18. Investigations on thermal properties, stress and deformation of Al/SiC metal matrix composite based on finite element method

    Directory of Open Access Journals (Sweden)

    K. A. Ramesh Kumar

    2014-09-01

    Full Text Available AlSiC is a metal matrix composite which comprises an aluminium matrix with silicon carbide particles. It is characterized by high thermal conductivity (180-200 W/m K), and its thermal expansion is tuned to match that of other important materials, which creates enormous demand for it in industrial sectors. Although its application is very common, the physics behind Al-SiC formation, functionality and behaviour is intricate, owing to temperature gradients of hundreds of degrees over the volume, occurring on a time scale of a few seconds and involving multiple phases. In this study, various physical, metallurgical and numerical aspects, such as the continuum equations for thermal behaviour, stress and deformation using a finite element (FE) matrix formulation with temperature-dependent material properties, are analyzed. Modelling and simulation studies of Al/SiC composites are a preliminary attempt to view this research work from a computational point of view.

  19. A modal-based reduction method for sound absorbing porous materials in poro-acoustic finite element models.

    Science.gov (United States)

    Rumpler, Romain; Deü, Jean-François; Göransson, Peter

    2012-11-01

    Structural-acoustic finite element models including three-dimensional (3D) modeling of porous media are generally computationally costly. While being the most commonly used predictive tool in the context of noise reduction applications, efficient solution strategies are required. In this work, an original modal reduction technique, involving real-valued modes computed from a classical eigenvalue solver is proposed to reduce the size of the problem associated with the porous media. In the form presented in this contribution, the method is suited for homogeneous porous layers. It is validated on a 1D poro-acoustic academic problem and tested for its performance on a 3D application, using a subdomain decomposition strategy. The performance of the proposed method is estimated in terms of degrees of freedom downsizing, computational time enhancement, as well as matrix sparsity of the reduced system.
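
    The reduction principle behind such modal techniques can be sketched generically: compute real-valued modes from a classical eigenvalue solver, project the operators onto a few of them, and solve the small reduced system. The stand-in matrices and load below are assumptions; the specific poroelastic (Biot) formulation and coupling treated in the paper are not reproduced.

    ```python
    # Generic modal (Galerkin) reduction sketch for a discretized system K u = f.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)
    n, m = 200, 10                                   # full size, retained modes
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)                      # SPD stand-in "stiffness"
    M = np.eye(n)                                    # stand-in "mass"

    w2, Phi = eigh(K, M)                             # real modes, ascending eigenvalues
    Phi_r = Phi[:, :m]                               # keep the lowest m modes

    # A load dominated by the lowest modes, so the truncated basis can capture it.
    f = M @ (Phi[:, :3] @ np.array([1.0, 0.5, 0.25]))

    K_r = Phi_r.T @ K @ Phi_r                        # reduced (m x m) operator
    u_reduced = Phi_r @ np.linalg.solve(K_r, Phi_r.T @ f)
    u_full = np.linalg.solve(K, f)
    print("relative error of the reduced solution:",
          np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full))
    ```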

  20. FPGA-based electrocardiography (ECG) signal analysis system using least-square linear phase finite impulse response (FIR) filter

    Directory of Open Access Journals (Sweden)

    Mohamed G. Egila

    2016-12-01

    Full Text Available This paper presents a proposed design for analyzing electrocardiography (ECG) signals. The methodology employs a highpass least-squares linear-phase finite impulse response (FIR) filtering technique to filter out the baseline wander noise embedded in the input ECG signal to the system. The discrete wavelet transform (DWT) was utilized as a feature extraction methodology to extract the reduced feature set from the input ECG signal. The design uses a back-propagation neural network classifier to classify the input ECG signal. The system is implemented on a Xilinx 3AN-XC3S700AN Field Programmable Gate Array (FPGA) board. A system simulation has been done. The design is compared with some other designs, achieving a total accuracy of 97.8% and a reduction in resource utilization in the FPGA implementation.
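
    The front end of such a pipeline (least-squares linear-phase FIR highpass followed by DWT feature extraction) can be sketched in software as below. The sampling rate, band edges, tap count, wavelet and the synthetic stand-in signal are assumptions for illustration; the paper's fixed-point FPGA implementation and its classifier are not reproduced.

    ```python
    # Software sketch of the filtering and feature-extraction stages (illustrative only).
    import numpy as np
    from scipy.signal import firls, lfilter
    import pywt

    fs = 360.0                                    # sampling rate [Hz], assumed
    # Least-squares linear-phase FIR highpass: stop below ~0.5 Hz (baseline
    # wander), pass above ~1 Hz.  An odd tap count keeps the filter type I.
    taps = firls(401, [0.0, 0.5, 1.0, fs / 2], [0.0, 0.0, 1.0, 1.0], fs=fs)

    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t)             # crude stand-in for an ECG trace
    drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)     # baseline wander
    filtered = lfilter(taps, 1.0, ecg + drift)    # highpass removes the drift

    # Discrete wavelet transform as the feature extractor: keep a small set of
    # statistics of the approximation/detail coefficients.
    coeffs = pywt.wavedec(filtered, 'db4', level=4)
    features = np.array([stat(c) for c in coeffs for stat in (np.mean, np.std)])
    print("feature vector length:", features.size)   # would feed the classifier
    ```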

  1. A study of infrasound propagation based on high-order finite difference solutions of the Navier-Stokes equations.

    Science.gov (United States)

    Marsden, O; Bogey, C; Bailly, C

    2014-03-01

    The feasibility of using numerical simulation of the fluid dynamics equations for the detailed description of long-range infrasound propagation in the atmosphere is investigated. The two-dimensional (2D) Navier-Stokes equations are solved via high-fidelity spatial finite differences and Runge-Kutta time integration, coupled with a shock-capturing filter procedure allowing large amplitudes to be studied. The accuracy of acoustic prediction over long distances with this approach is first assessed in the linear regime thanks to two test cases featuring an acoustic source placed above a reflective ground in a homogeneous and a weakly inhomogeneous medium, solved for a range of grid resolutions. An atmospheric model which can account for realistic features affecting acoustic propagation is then described. A 2D study of the effect of source amplitude on signals recorded at ground level at varying distances from the source is carried out. Modifications both in terms of waveforms and arrival times are described.
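
    The two numerical ingredients named above (high-order spatial finite differences and Runge-Kutta time integration) are illustrated below on 1D linear advection, a drastic simplification chosen only to keep the sketch short; the grid, pulse and CFL number are assumptions, and the 2D Navier-Stokes system, shock-capturing filter and atmospheric model of the paper are not reproduced.

    ```python
    # Fourth-order central differences + classical RK4 on 1D advection (illustrative).
    import numpy as np

    n, L, c = 400, 1.0, 1.0
    dx = L / n
    x = np.arange(n) * dx
    u = np.exp(-300.0 * (x - 0.3) ** 2)           # initial acoustic-like pulse

    def dudx(u):
        """Fourth-order central difference on a periodic grid."""
        return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

    def rhs(u):
        return -c * dudx(u)                        # du/dt = -c du/dx

    dt = 0.4 * dx / c                              # CFL-limited time step
    for step in range(int(0.5 / dt)):              # advance to t ~ 0.5
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    print("pulse peak now near x =", x[np.argmax(u)])   # advected by ~c*t
    ```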

  2. Time-history simulation of civil architecture earthquake disaster relief based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Full Text Available Earthquake action is the main external factor influencing the long-term safe operation of civil construction, especially high-rise buildings. Applying the time-history method to simulate the earthquake response of the foundation and surrounding rock of civil construction is an effective approach for the seismic study of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. The strengthening of materials under high strain rates and the damage of surrounding rock under cyclic loading are considered, and a dynamic constitutive model of rock mass suitable for the seismic analysis of civil buildings is put forward. Finally, through a time-history simulation of the earthquake response of the Shenzhen Children's Palace, the reliability and practicability of the system program are verified in the analysis of practical engineering problems.
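
    The explicit central difference scheme mentioned above is illustrated below on a single-degree-of-freedom system; the mass, damping, stiffness and forcing values are assumptions, and the paper applies the same update to a full 3D finite element model with its own constitutive laws.

    ```python
    # Explicit central-difference time integration of m*a + c*v + k*u = f(t) (sketch).
    import numpy as np

    m, c, k = 1.0, 0.05, 40.0            # mass, damping, stiffness (assumed values)
    dt = 0.005                           # stable: dt < 2/omega_n = 2/sqrt(k/m)
    steps = 4000
    f = lambda t: np.sin(5.0 * t)        # simple forcing standing in for a seismic record

    u = np.zeros(steps + 1)              # displacement history, u[0] = 0
    u_minus = 0.0                        # displacement at t = -dt (system starts at rest)
    for i in range(steps):
        t = i * dt
        u_im1 = u[i - 1] if i > 0 else u_minus
        # Central differences: a = (u[i+1] - 2 u[i] + u[i-1]) / dt^2,
        #                      v = (u[i+1] - u[i-1]) / (2 dt)
        lhs = m / dt**2 + c / (2.0 * dt)
        rhs = (f(t) - (k - 2.0 * m / dt**2) * u[i]
               - (m / dt**2 - c / (2.0 * dt)) * u_im1)
        u[i + 1] = rhs / lhs             # explicit update, no global solve needed

    print("peak displacement:", np.abs(u).max())
    ```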

  3. Investigation on the Crack Behaviour in Kevlar 49 Based Composite Materials using Extended Finite Element Method for Aerospace Applications

    Science.gov (United States)

    Handa, Danish; Sekhar Dondapati, Raja; Kumar, Abhinav

    2017-08-01

    Ductile-to-brittle transition (DTBT) is extensively observed in materials at cryogenic temperatures, leading to brittle failure because crack propagation is not resisted. Owing to their outstanding mechanical and thermal properties, Kevlar 49 composites are widely used in aerospace applications at cryogenic temperatures. Therefore, in this paper, under the assumption of linear elastic fracture mechanics (LEFM), the mechanical characterization of a Kevlar 49 composite is performed using the Extended Finite Element Method (X-FEM) technique in the Abaqus/CAE software. Further, the failure of Kevlar 49 composites due to crack propagation at room temperature and at cryogenic temperature is investigated. Stress, strain and strain energy density are predicted as functions of the width of the Kevlar specimen, indicating that Kevlar 49 composites are suitable for use at cryogenic temperatures.

  4. Status of existing federal environmental risk-based standards applicable to Department of Energy operations

    International Nuclear Information System (INIS)

    Bilyard, G.R.

    1991-09-01

    When conducting its environmental restoration, waste management, and decontamination and decommissioning activities, the US Department of Energy (DOE) must comply with a myriad of regulatory procedures and environmental standards. This paper assesses the status of existing federal risk-based standards that may be applied to chemical and radioactive substances on DOE sites. Gaps and inconsistencies among the existing standards and the technical issues associated with the application of those standards are identified. Finally, the implications of the gaps, inconsistencies, and technical issues on DOE operations are discussed, and approaches to resolving the gaps, inconsistencies, and technical issues are identified. 6 refs

  5. A Critical Examination of IT-21: Thinking Beyond Vendor-Based Standards

    National Research Council Canada - National Science Library

    Trupp, Travis

    1999-01-01

    .... This thesis takes a critical look at the IT-21 policy from an economic, security, availability, procurement, and practical level, and explores the role of vendor-based standards in the Navy computing architecture...

  6. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Fidelis, Krzysztof; Moult, John; Rost, Burkhard; Tramontano, Anna

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods

  7. A finite landscape?

    International Nuclear Information System (INIS)

    Acharya, B.S.; Douglas, M.R.

    2006-06-01

    We present evidence that the number of string/M theory vacua consistent with experiments is finite. We do this both by explicit analysis of infinite sequences of vacua and by applying various mathematical finiteness theorems. (author)

  8. Nilpotent p-local finite groups

    Science.gov (United States)

    Cantarero, José; Scherer, Jérôme; Viruel, Antonio

    2014-10-01

    We provide characterizations of p-nilpotency for fusion systems and p-local finite groups that are inspired by known results for finite groups. In particular, we generalize criteria by Atiyah, Brunetti, Frobenius, Quillen, Stammbach and Tate.

  9. Applying open source data visualization tools to standard based medical data.

    Science.gov (United States)

    Kopanitsa, Georgy; Taranik, Maxim

    2014-01-01

    Presentation of medical data in personal health records (PHRs) requires flexible, platform-independent tools to ensure easy access to the information. The different backgrounds of patients, especially elderly people, require simple graphical presentation of the data. Data in PHRs can be collected from heterogeneous sources. The application of standard-based medical data allows the development of generic visualization methods. Focusing on the deployment of open source tools, in this paper we applied JavaScript libraries to create data presentations for standard-based medical data.

  10. Basic Finite Element Method

    International Nuclear Information System (INIS)

    Lee, Byeong Hae

    1992-02-01

    This book gives a description of the basic finite element method, covering the basic finite element method and data, black box, writing of data, definition of vectors, definition of matrices, matrix multiplication, matrix addition and the unit matrix, the concept of the stiffness matrix in terms of spring force and displacement, the governing equation of an elastic body, the finite element method, Fortran and programming (composition of the computer, order of programming, data cards and Fortran cards), and a finite element program with its application to an inelastic problem.

  11. [Establishment of database with standard 3D tooth crowns based on 3DS MAX].

    Science.gov (United States)

    Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun

    2009-08-01

    A database of standard 3D tooth crowns lays the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. First, some key lines are collected from standard tooth pictures. Then 3DS MAX 9.0 is used to design the digital tooth model based on these lines. During the design process, it is important to refer to the standard plaster tooth model. After testing, the standard tooth models designed with this method prove accurate and adaptable; furthermore, it is very easy to perform operations on the models such as deformation and translation. This method provides a new idea for building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.

  12. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    International Nuclear Information System (INIS)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-01-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the

  13. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the
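
    The idea that "the attributes of each step determine the type of functionality the system generates" can be illustrated with a toy example. The element and attribute names below are invented for illustration and are not taken from the INL prototype or any published schema; the sketch only shows how a CBPS-style renderer could dispatch on step attributes.

    ```python
    # Hypothetical sketch: dispatch on a step's attributes to choose its behaviour.
    import xml.etree.ElementTree as ET

    procedure_xml = """
    <procedure id="OP-123">
      <step id="1" type="instruction">Verify pump P-1A is in standby.</step>
      <step id="2" type="decision" expected="yes">Is suction pressure above 50 psig?</step>
      <step id="3" type="input" unit="psig">Record suction pressure.</step>
    </procedure>
    """

    def render_step(step):
        """Choose the functionality for a step from its 'type' attribute."""
        kind = step.get("type")
        if kind == "instruction":
            return f"[{step.get('id')}] {step.text.strip()}"
        if kind == "decision":
            return f"[{step.get('id')}] {step.text.strip()} (yes/no prompt)"
        if kind == "input":
            return f"[{step.get('id')}] {step.text.strip()} (numeric entry, {step.get('unit')})"
        return f"[{step.get('id')}] unsupported step type {kind!r}"

    root = ET.fromstring(procedure_xml)
    for step in root.findall("step"):
        print(render_step(step))
    ```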

  14. The Standardization Method of Address Information for POIs from Internet Based on Positional Relation

    Directory of Open Access Journals (Sweden)

    WANG Yong

    2016-05-01

    Full Text Available As points of interest (POIs) on the internet widely exhibit incomplete addresses and inconsistent literal expressions, a fast standardization processing method for network POI address information based on spatial constraints is proposed. Based on an extensible address expression model, the address information of a POI is first segmented and its address elements extracted. The address elements are updated by matching against the address tree layer by layer. Then, by defining four types of positional relations, corresponding sets are selected from the standard POI library as candidates for the enrichment and amendment of non-standard addresses. Finally, fast standardized processing of POI address information is achieved with the help of backtracking address elements of minimum granularity. Experiments in this paper prove that address standardization can be realized by this method with high accuracy, in order to build the address database.
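
    The layer-by-layer matching of segmented address elements against an address tree can be illustrated as below. The tree content and place names are invented for the example, and the paper's four positional relations and candidate-POI enrichment step are not reproduced here.

    ```python
    # Hypothetical sketch of layer-by-layer address-tree matching.
    address_tree = {
        "Beijing": {"Haidian District": {"Zhongguancun Street": {}},
                    "Chaoyang District": {"Jianguo Road": {}}},
    }

    def standardize(elements, tree):
        """Walk the address tree level by level, keeping elements that match."""
        matched, node = [], tree
        for element in elements:
            if element in node:
                matched.append(element)
                node = node[element]
            # Unmatched elements would be completed later from candidate POIs
            # selected via positional relations (not shown here).
        return matched

    raw = ["Beijing", "Zhongguancun Street"]       # incomplete: district missing
    print(standardize(raw, address_tree))          # ['Beijing'] -> needs enrichment
    ```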

  15. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.

  16. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.

  17. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, of the operation of the whole application. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate estimation of the dynamic model parameters, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low-cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, the design and parameter settings of a sensor fusion algorithm with an Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
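
    The sensor-fusion idea can be sketched with an Unscented Kalman Filter from the filterpy library, as below. The damped position/velocity process model, the inverse-square Hall-sensor measurement function, the sampling rate and all noise settings are assumptions made for the example; the paper's FEM-derived dynamic model is not reproduced.

    ```python
    # Illustrative UKF sketch for gap estimation from a nonlinear Hall-sensor reading.
    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    dt = 0.001                                   # 1 kHz update rate (assumed)

    def fx(x, dt):
        """State transition for [gap, gap_rate] with light damping (assumed model)."""
        gap, rate = x
        return np.array([gap + rate * dt, 0.999 * rate])

    def hx(x):
        """Hall-sensor output as an assumed inverse-square function of the air gap."""
        gap = x[0]
        return np.array([1.0 / (gap + 0.002) ** 2])

    points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
    ukf.x = np.array([0.005, 0.0])               # initial gap 5 mm, at rest
    ukf.P = np.diag([1e-6, 1e-4])
    ukf.Q = np.diag([1e-10, 1e-8])               # process noise (tuning values)
    ukf.R = np.diag([0.05])                      # sensor noise incl. disturbances

    rng = np.random.default_rng(2)
    true_gap = 0.005
    for k in range(200):
        true_gap += 0.00001 * np.sin(0.05 * k)   # slow gap variation
        z = hx([true_gap, 0.0]) + rng.normal(0.0, 0.2, 1)
        ukf.predict()
        ukf.update(z)
    print("estimated gap [m]:", ukf.x[0])
    ```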

  18. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    Science.gov (United States)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made out of thermal protection material. Results obtained showing details of segmented images; three-dimensional volume-rendered models, finite element meshes generated, and the resulting thermomechanical stress state due to impact loading for the material are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials so improvements in their performance and capacities to handle harsh operating conditions can be made.

  19. Trabecular bone strains around a dental implant and associated micromotions--a micro-CT-based three-dimensional finite element study.

    Science.gov (United States)

    Limbert, Georges; van Lierde, Carl; Muraru, O Luiza; Walboomers, X Frank; Frank, Milan; Hansson, Stig; Middleton, John; Jaecques, Siegfried

    2010-05-07

    The first objective of this computational study was to assess the strain magnitude and distribution within the three-dimensional (3D) trabecular bone structure around an osseointegrated dental implant loaded axially. The second objective was to investigate the relative micromotions between the implant and the surrounding bone. The work hypothesis adopted was that these virtual measurements would be a useful indicator of bone adaptation (resorption, homeostasis, formation). In order to reach these objectives, a microCT-based finite element model of an oral implant implanted into a Berkshire pig mandible was developed along with a robust software methodology. The finite element mesh of the 3D trabecular bone architecture was generated from the segmentation of microCT scans. The implant was meshed independently from its CAD file obtained from the manufacturer. The meshes of the implant and the bone sample were registered together in an integrated software environment. A series of non-linear contact finite element (FE) analyses considering an axial load applied to the top of the implant in combination with three sets of mechanical properties for the trabecular bone tissue was devised. Complex strain distribution patterns are reported and discussed. It was found that considering the Young's modulus of the trabecular bone tissue to be 5, 10 and 15GPa resulted in maximum peri-implant bone microstrains of about 3000, 2100 and 1400. These results indicate that, for the three sets of mechanical properties considered, the magnitude of maximum strain lies within an homeostatic range known to be sufficient to maintain/form bone. The corresponding micro-motions of the implant with respect to the bone microstructure were shown to be sufficiently low to prevent fibrous tissue formation and to favour long-term osseointegration. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. An Investigation of the Engagement of Elementary Students in the NCTM Process Standards after One Year of Standards-Based Instruction

    Science.gov (United States)

    Fillingim, Jennifer Gale

    2010-01-01

    Contemporary mathematics education reform has placed increased emphasis on K-12 mathematics curriculum. Reform-based curricula, often referred to as "Standards-based" due to philosophical alignment with the NCTM Process Standards, have generated controversy among families, educators, and researchers. The mathematics education research…