#### Sample records for numerical technique based

1. A textbook of computer based numerical and statistical techniques

CERN Document Server

Jaiswal, AK

2009-01-01

About the Book: Application of numerical analysis has become an integral part of the life of all modern engineers and scientists. The contents of this book cover both introductory topics and more advanced topics such as partial differential equations. This book differs from many other books in a number of ways. Salient features: the mathematical derivation of each method is given to build the student's understanding of numerical analysis; a variety of solved examples are given; computer programs for almost all numerical methods discussed are presented in the C language.

2. The effect of numerical techniques on differential equation based chaotic generators

KAUST Repository

Zidan, Mohammed A.

2012-07-29

In this paper, we study the effect of the numerical solution accuracy on the digital implementation of differential chaos generators. Four systems are built on a Xilinx Virtex 4 FPGA using Euler, mid-point, and fourth-order Runge-Kutta techniques. The twelve implementations are compared based on the FPGA area used, maximum throughput, maximum Lyapunov exponent, and autocorrelation confidence region. Based on circuit performance and the chaotic response of the different implementations, it was found that the less complicated numerical solutions have better chaotic response and higher throughput.
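The abstract names the three fixed-step solvers but not the four chaotic systems. The sketch below is a hypothetical stand-in, using the Lorenz system rather than the paper's generators, and contrasts forward Euler with fourth-order Runge-Kutta to show how quickly the two trajectories of a chaotic system separate:

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # Classic Lorenz system, used here only as an illustrative chaotic ODE.
    x, y, z = s
    return (sigma*(y - x), x*(rho - z) - y, x*y - beta*z)

def euler_step(s, dt):
    d = lorenz(s)
    return tuple(si + dt*di for si, di in zip(s, d))

def rk4_step(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt*ki for si, ki in zip(s, k3)))
    return tuple(si + dt/6.0*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def simulate(step, dt=0.005, n=2000, s0=(1.0, 1.0, 1.0)):
    s = s0
    for _ in range(n):
        s = step(s, dt)
    return s

se = simulate(euler_step)   # cheap, low-order solver
sr = simulate(rk4_step)     # expensive, high-order solver
```

Both trajectories stay on the attractor, but the per-step truncation difference is amplified exponentially by the positive Lyapunov exponent, so the two solvers produce visibly different chaotic sequences, which is the effect the paper quantifies in hardware.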

3. A Dynamic Operation Permission Technique Based on an MFM Model and Numerical Simulation

International Nuclear Information System (INIS)

Akio, Gofuku; Masahiro, Yonemura

2011-01-01

It is important to support operator activities in an abnormal plant situation where many counter actions must be taken in a relatively short time. The authors previously proposed a technique called dynamic operation permission to decrease human errors, without eliminating the creative ideas of operators coping with an abnormal plant situation, by checking whether the counter action taken is consistent with the emergency operation procedure. If the counter action is inconsistent, a dynamic operation permission system warns the operators. It also explains how and why the counter action is inconsistent and what influence will appear on future plant behavior, using a qualitative influence inference technique based on a Multilevel Flow Modeling (MFM) model. However, the previous dynamic operation permission is not able to explain quantitative effects on future plant behavior. Moreover, many possible influence paths are derived, because qualitative reasoning does not give a solution when positive and negative influences propagate to the same node. This study extends dynamic operation permission by combining qualitative reasoning with numerical simulation. The qualitative reasoning, based on an MFM model of the plant, derives all possible influence propagation paths. Then a numerical simulation predicts future plant behavior in the case of taking a counter action. Influence propagations that do not coincide with the simulation results are excluded from the possible influence paths. The extended technique is implemented in a dynamic operation permission system for an oil refinery plant, for which an MFM model and a static numerical simulator were developed. The results of dynamic operation permission for some abnormal plant situations show improvement in both the accuracy of dynamic operation permission and the quality of the explanation of the effects of the counter action taken.

4. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

Science.gov (United States)

Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

2014-06-20

We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. A device consisting of an array of linear polarizers is introduced to achieve linear truncation of the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and no diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced in order to perform elliptical polarized light reconstruction from two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support the theoretical analysis are presented. An analysis of the resistance of the proposed method against a known public key attack is also provided.

5. The effect of numerical techniques on differential equation based chaotic generators

KAUST Repository

Zidan, Mohammed A.; Radwan, Ahmed G.; Salama, Khaled N.

2012-01-01

In this paper, we study the effect of the numerical solution accuracy on the digital implementation of differential chaos generators. Four systems are built on a Xilinx Virtex 4 FPGA using Euler, mid-point, and fourth-order Runge-Kutta techniques.

6. Numerical analysis of radiation propagation in innovative volumetric receivers based on selective laser melting techniques

Science.gov (United States)

Alberti, Fabrizio; Santiago, Sergio; Roccabruna, Mattia; Luque, Salvador; Gonzalez-Aguilar, Jose; Crema, Luigi; Romero, Manuel

2016-05-01

Volumetric absorbers constitute one of the key elements for achieving high thermal conversion efficiencies in concentrating solar power plants. Regardless of the working fluid or thermodynamic cycle employed, design trends toward higher absorber output temperatures are widespread, leading to a general need for components with high solar absorptance, high conduction within the receiver material, high internal convection, low radiative and convective heat losses, and high mechanical durability. In this context, the use of advanced manufacturing techniques, such as selective laser melting, has allowed for the fabrication of intricate geometries that are capable of fulfilling these requirements. This paper presents a parametric design and analysis of the optical performance of volumetric absorbers of variable porosity, conducted by means of detailed numerical ray tracing simulations. Sections of variable macroscopic porosity along the absorber depth were constructed by the fractal growth of single-cell structures. Measures of performance analyzed include optical reflection losses from the absorber front and rear faces, penetration of radiation into the absorber volume, and radiation absorption as a function of absorber depth. The effects of engineering design parameters such as absorber length and wall thickness, material reflectance, and porosity distribution on the optical performance of absorbers are discussed, and general design guidelines are given.

7. Development of Numerical Analysis Techniques Based on Damage Mechanics and Fracture Mechanics

International Nuclear Information System (INIS)

Chang, Yoon Suk; Lee, Dock Jin; Choi, Shin Beom; Kim, Sun Hye; Cho, Doo Ho; Lee, Hyun Boo

2010-04-01

The scatter of measured fracture toughness data and transferability problems among different crack configurations, geometries, and loading conditions are major obstacles to the application of fracture mechanics. To address these issues, interest in the local approach employing reliable micro-mechanical damage models has recently grown again in connection with progress in computational technology. In the present research, as part of the development of a fracture-mechanical evaluation model for material degradation of the reactor pressure boundary, several investigations of fracture behavior were carried out. In particular, the numerical scheme for determining the key parameters of both the cleavage and ductile fracture estimation models was made more efficient by incorporating a genetic algorithm. Also, with regard to the well-known master curve, newly reported methods such as the bimodal master curve, the randomly inhomogeneous master curve, and single-point estimation were reviewed to deal with homogeneous and inhomogeneous material characteristics. A series of preliminary finite element analyses was conducted to examine the element size effect on micro-mechanical models. Then, a new thickness correction equation was derived from parametric three-dimensional numerical simulations; it is founded on the current test standard, ASTM E1921, but leads to more realistic fracture toughness values. As a result, promising modified master curves as well as fracture toughness diagrams to convert data between pre-cracked V-notched and compact tension specimens were generated. Moreover, a user subroutine implementing the GTN (Gurson-Tvergaard-Needleman) model was developed by adopting Hill's 1948 yield potential theory. By applying the GTN model combined with the subroutine to small punch specimens, the effect of inhomogeneous properties on the fracture behavior of miniature specimens was confirmed. It is therefore anticipated that the aforementioned research results can be utilized.

8. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

Science.gov (United States)

2017-03-01

As noted in [27], this tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves a system of highly nonlinear partial differential equations, with surface effects treated through diffuse-interface models [27]. Numerical simulation of this practical model allows it to be evaluated. The present paper investigates the solution of the tumor-growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions, using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To discretize the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but only conditionally stable. The obtained numerical results confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
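The four-species model itself is not given in the abstract. As a minimal stand-in, the sketch below applies the same semi-implicit Crank-Nicolson time stepping to a scalar 1D diffusion equation, solving the resulting tridiagonal linear system at each time step (all parameter values are illustrative):

```python
import math

def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system: a = sub-, b = main-,
    # c = super-diagonal, d = right-hand side (all Python lists).
    n = len(d)
    cp = [0.0]*n; dp = [0.0]*n
    cp[0] = c[0]/b[0]; dp[0] = d[0]/b[0]
    for i in range(1, n):
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m
        dp[i] = (d[i] - a[i]*dp[i-1])/m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

# Crank-Nicolson for u_t = D u_xx on (0,1), u = 0 at both ends.
N, D, dt, steps = 50, 0.1, 0.01, 100
h = 1.0/(N + 1)
r = D*dt/(2*h*h)
u = [math.sin(math.pi*(i + 1)*h) for i in range(N)]  # interior nodes
a = [-r]*N; b = [1 + 2*r]*N; c = [-r]*N
for _ in range(steps):
    rhs = [(1 - 2*r)*u[i]
           + r*((u[i-1] if i > 0 else 0.0) + (u[i+1] if i < N-1 else 0.0))
           for i in range(N)]
    u = thomas(a, b, c, rhs)  # one linear solve per time step
```

For this initial condition the exact solution decays as exp(-D*pi^2*t), so the scheme's second-order accuracy in both time and space is easy to check; the same implicit-solve-per-step structure carries over to the linearized tumor-growth system.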

9. Current control design for three-phase grid-connected inverters using a pole placement technique based on numerical models

OpenAIRE

Citro, Costantino; Gavriluta, Catalin; Nizak Md, H. K.; Beltran, H.

2012-01-01

This paper presents a design procedure for linear current controllers of three-phase grid-connected inverters. The proposed method consists of deriving a numerical model of the converter from software simulations and applying the pole placement technique to design a controller with the desired performance. A clear example of how to apply the technique is provided. The effectiveness of the proposed design procedure has been verified through the experimental results obtained with ...
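The converter model and controller structure are not detailed in the excerpt. The following sketch illustrates the bare idea of pole placement on a hypothetical first-order discrete-time plant of the kind that could be identified from simulation data (all numbers are made up):

```python
# First-order discrete plant x[k+1] = a*x[k] + b*u[k]; in the paper's setting
# a and b would come from the numerically derived converter model.
a, b = 0.95, 0.5
p_desired = 0.6                  # desired closed-loop pole (|p| < 1: stable)
K = (a - p_desired)/b            # feedback u = -K*x gives x[k+1] = p_desired*x[k]

x, hist = 1.0, []
for _ in range(30):
    u = -K*x
    x = a*x + b*u                # closed-loop dynamics
    hist.append(x)
```

Choosing the pole location directly fixes the closed-loop settling behavior; the state decays exactly as p_desired**k, which is the essence of the design procedure described.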

10. Numerical techniques for lattice gauge theories

International Nuclear Information System (INIS)

Creutz, M.

1981-01-01

The motivation for formulating gauge theories on a lattice is reviewed. Monte Carlo simulation techniques are then discussed for these systems. Finally, the Monte Carlo methods are combined with renormalization group analysis to give strong numerical evidence for confinement of quarks by non-Abelian gauge fields

11. A numerical technique for reactor subchannel analysis

International Nuclear Information System (INIS)

Fath, Hassan E.S.

1983-01-01

A numerical technique is developed for the solution of the transient boundary layer equations with a moving liquid-vapour interface boundary. The technique uses the finite difference method with the velocity components defined over an Eulerian mesh. A system of interface massless markers is defined where the markers move with the flow field according to a simple kinematic relation between the interface geometry and the fluid velocity. Different applications of nuclear engineering interest are reported with some available results. The present technique is capable of predicting the interface profile near the wall which is important in the reactor subchannel analysis
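As a toy illustration of the kinematic marker update described above (the actual boundary-layer solver is far more involved), the sketch below advects massless interface markers with a prescribed rotational velocity field using explicit Euler steps:

```python
import math

def velocity(x, y):
    # Hypothetical solenoidal flow: rigid rotation about the origin.
    return (-y, x)

# Interface represented by massless markers, initially on the unit circle.
markers = [(math.cos(2*math.pi*k/36), math.sin(2*math.pi*k/36))
           for k in range(36)]
dt, steps = 0.001, 1000
for _ in range(steps):
    new = []
    for x, y in markers:
        u, v = velocity(x, y)
        new.append((x + u*dt, y + v*dt))  # simple kinematic relation
    markers = new
```

After 1000 steps the interface has rotated by about one radian while remaining (to first order in dt) a unit circle, which shows why such marker schemes can track an interface profile without a mesh attached to it.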

12. A review of numerical techniques approaching microstructures of crystalline rocks

Science.gov (United States)

Zhang, Yahui; Wong, Louis Ngai Yuen

2018-06-01

The macro-mechanical behavior of crystalline rocks, including strength, deformability, and failure pattern, is dominantly influenced by their grain-scale structures. Numerical techniques are commonly used to help understand the complicated mechanisms from a microscopic perspective, and each numerical method has its respective strengths and limitations. This review paper elucidates how numerical techniques take geometrical aspects of the grains into consideration. Four categories of numerical methods are examined: particle-based, block-based, grain-based, and node-based methods. Focusing on grain-scale characteristics, specific relevant issues, including the increasing complexity of the micro-structure, the deformation and breakage of model elements, and the fracturing and fragmentation process, are described in more detail. The intrinsic capabilities and limitations of the different numerical approaches in accounting for the micro-mechanics of crystalline rocks and their macroscopic mechanical behavior are thus explicitly presented.

13. A correction scheme for thermal conductivity measurement using the comparative cut-bar technique based on 3D numerical simulation

International Nuclear Information System (INIS)

Xing, Changhu; Folsom, Charles; Jensen, Colby; Ban, Heng; Marshall, Douglas W

2014-01-01

As an important factor affecting the accuracy of thermal conductivity measurement, systematic (bias) error in the guarded comparative axial heat flow (cut-bar) method has mostly been neglected in previous research. This bias is primarily due to the thermal conductivity mismatch between the sample and the meter bars (reference), which is common for a sample of unknown thermal conductivity. A correction scheme, based on finite element simulation of the measurement system, is proposed to reduce the magnitude of the overall measurement uncertainty. The scheme was experimentally validated by applying corrections to four types of sample measurements in which the specimen thermal conductivity is much smaller than, slightly smaller than, equal to, and much larger than that of the meter bar. As an alternative to the optimum guarding technique proposed previously, the correction scheme can be used to minimize the uncertainty contribution of the measurement system under non-optimal guarding conditions. It is especially necessary for large thermal conductivity mismatches between sample and meter bars. (paper)

14. Numerical evaluation of droplet sizing based on the ratio of fluorescent and scattered light intensities (LIF/Mie technique)

International Nuclear Information System (INIS)

Charalampous, Georgios; Hardalupas, Yannis

2011-01-01

The dependence of fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index, and the dye concentration in the liquid (which determines the imaginary part of the refractive index). The assumption that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets, on which accurate sizing relies, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Unfavorable to the sizing accuracy are oscillations of the scattered light intensity with droplet diameter, which are pronounced in the side-scatter direction (90 deg.) and for droplets with refractive indices around 1.4.
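In the idealized geometric-optics limit that the abstract calls into question, fluorescence scales with droplet volume and scattering with surface area, so the LIF/Mie ratio is linear in diameter. The sketch below (arbitrary instrument constants, no Mie oscillations) shows the resulting one-point calibration:

```python
# Idealized LIF/Mie sizing: fluorescence ~ d^3 (volume), scattering ~ d^2
# (surface area), so the ratio is proportional to diameter d. Real Mie
# intensities oscillate with d, which is exactly the paper's caveat.
c_lif, c_mie = 2.0e-3, 5.0e-2     # hypothetical instrument constants

def lif(d):
    return c_lif * d**3

def mie(d):
    return c_mie * d**2

# Calibrate the proportionality constant on one droplet of known size...
d_cal = 50.0
k = d_cal / (lif(d_cal)/mie(d_cal))

# ...then size an unknown droplet from its measured intensity ratio.
d_est = k * lif(80.0)/mie(80.0)
```

Under the ideal d^3/d^2 assumption the estimate is exact; the paper's Mie-theory analysis quantifies how far real droplets deviate from this picture.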

15. On performing of interference technique based on self-adjusting Zernike filters (SA-AVT method) to investigate flows and validate 3D flow numerical simulations

Science.gov (United States)

Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.

2017-10-01

We present a method for, and results of, determining the field of integral density in the flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of the flow were performed using an interference technique based on self-adjusting Zernike filters (SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of the probing radiation, in one direction of 3D flow transillumination in the region of Mach interaction of shock waves, were obtained for the first time.

16. Numerical and physical testing of upscaling techniques for constitutive properties

International Nuclear Information System (INIS)

McKenna, S.A.; Tidwell, V.C.

1995-01-01

This paper evaluates upscaling techniques for hydraulic conductivity measurements based on accuracy and practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from the different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow-model grid-block scale. Initial results indicate that analytical techniques are adequate for upscaling constitutive properties from the point measurement scale to the flow-model grid-block scale; however, no single analytical technique proves adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy depends on knowledge of the local flow regime at every grid block.
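Classical analytical upscaling rules are averages of the point-scale values, each exact for a particular flow configuration. The sketch below (synthetic lognormal conductivities, a common assumption for this quantity) computes the three standard means, which always satisfy harmonic <= geometric <= arithmetic:

```python
import math, random

random.seed(1)
# Hypothetical point-scale hydraulic conductivities within one grid block,
# drawn from a lognormal distribution.
k = [math.exp(random.gauss(0.0, 1.0)) for _ in range(1000)]
n = len(k)

k_arith = sum(k)/n                                  # exact for flow parallel to layering
k_harm  = n/sum(1.0/v for v in k)                   # exact for flow across layering
k_geom  = math.exp(sum(math.log(v) for v in k)/n)   # common heuristic for 2D flow
```

The spread between the harmonic and arithmetic means brackets any admissible effective conductivity of the block, which is why no single analytical rule works for all flow regimes, as the abstract concludes.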

17. Visualization techniques in plasma numerical simulations

International Nuclear Information System (INIS)

Kulhanek, P.; Smetana, M.

2004-01-01

Numerical simulations of plasma processes usually yield a huge amount of raw numerical data, typically information about electric and magnetic fields and particle positions and velocities. There are two major ways of elaborating these data. The first is plasma diagnostics: we can calculate average values, variances, correlations of variables, etc. These results may be directly comparable with experiments and serve as the typical quantitative output of plasma simulations. The second possibility is plasma visualization. The results are qualitative only, but serve as vivid displays of the phenomena in the plasma under study. Experience with visualizing electric and magnetic fields via the Line Integral Convolution (LIC) method is described in the first part of the paper. The LIC method serves to visualize vector fields in a two-dimensional section of the three-dimensional plasma, where the field values are known only at the points of a three-dimensional grid. The second part of the paper is devoted to visualization techniques for charged particle motion. Color tint can be used to represent particle temperature, and the motion can be visualized by a trace fading away with distance from the particle. In this manner, impressive animations of particle motion can be achieved. (author)

18. Numerical modeling techniques for flood analysis

Science.gov (United States)

Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

2016-12-01

Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and possible improvements through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly concerning floodplain elevation differences and vertical roughness in grids, were identified, and these can be improved through a 3D model; a 3D model is therefore more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open channel flows but not for floodplains. Hence, it is suggested that a 3D floodplain model be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to improve understanding of the causes and effects of flooding.

19. Implementation of a revised numerical integration technique into QAD

International Nuclear Information System (INIS)

De Gangi, N.L.

1983-01-01

A technique for numerical integration through a uniform volume source is developed and applied to gamma radiation transport shielding problems. The method is based on performing a numerical angular and ray point-kernel integration and is incorporated into the QAD-CG computer code (as QAD-UE). Several test problems are analyzed with this technique and its convergence properties are examined. Gamma dose rates from a large tank and post-LOCA dose rates inside a containment building are evaluated; the results are consistent with data from other methods. The new technique provides several advantages: user setup requirements for large-volume-source problems are reduced relative to standard point-kernel requirements, and calculational efficiency is improved, with an order-of-magnitude improvement seen in a test problem.
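The QAD geometry and kernels are not reproduced in the abstract. As a generic illustration of point-kernel integration over a volume source, the sketch below applies a midpoint rule to the unshielded kernel exp(-mu*r)/(4*pi*r^2) over a cube source (hypothetical dimensions, no buildup factor):

```python
import math

def point_kernel_dose(mu, n=20, half=1.0, detector=(0.0, 0.0, 5.0), S=1.0):
    """Midpoint-rule integration of the point kernel S*exp(-mu*r)/(4*pi*r^2)
    over a uniform cube source [-half, half]^3 (illustrative geometry only)."""
    h = 2*half/n
    dv = h**3
    dx, dy, dz = detector
    total = 0.0
    for i in range(n):
        x = -half + (i + 0.5)*h
        for j in range(n):
            y = -half + (j + 0.5)*h
            for m in range(n):
                z = -half + (m + 0.5)*h
                r = math.sqrt((x - dx)**2 + (y - dy)**2 + (z - dz)**2)
                total += S*math.exp(-mu*r)/(4*math.pi*r*r)*dv
    return total

d0 = point_kernel_dose(mu=0.0)   # vacuum: pure geometric attenuation
d1 = point_kernel_dose(mu=0.5)   # with exponential material attenuation
```

Refining n is the convergence study the abstract refers to; the attenuated result is necessarily smaller than the vacuum one, and for this detector distance the vacuum dose is close to V/(4*pi*R^2) with R the source-detector distance.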

20. A Generalized Technique in Numerical Integration

Science.gov (United States)

Safouhi, Hassan

2018-02-01

Integration by parts is one of the most popular techniques in the analysis of integrals and is one of the simplest methods for generating asymptotic expansions of integral representations. The product of the technique is usually a divergent series formed from evaluating boundary terms; sometimes the remaining integral is also evaluated. Due to the successive differentiation and anti-differentiation required to form the series or the remaining integral, the technique is difficult to apply to all but the simplest problems. In this contribution, we explore a generalized and formalized integration by parts to create equivalent representations of some challenging integrals. As demonstrative archetypes, we examine Bessel integrals, Fresnel integrals, and Airy functions.
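As a concrete instance of the mechanism described, repeated integration by parts of the exponential integral E1(x) yields its classical divergent asymptotic series. The sketch below compares a few terms of that series against a brute-force quadrature reference:

```python
import math

def e1_quadrature(x, upper=60.0, n=20000):
    # Reference value of E1(x) = integral from x to infinity of exp(-t)/t dt,
    # via the composite trapezoidal rule (the tail beyond `upper` is
    # negligible for x = 10).
    h = (upper - x)/n
    f = lambda t: math.exp(-t)/t
    s = 0.5*(f(x) + f(upper)) + sum(f(x + i*h) for i in range(1, n))
    return s*h

def e1_asymptotic(x, terms=4):
    # Each integration by parts produces one boundary term, giving the
    # divergent series E1(x) ~ (exp(-x)/x) * sum_k (-1)^k k! / x^k.
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(k + 1)/x
    return math.exp(-x)/x * s

x = 10.0
ref = e1_quadrature(x)
approx = e1_asymptotic(x, terms=4)
```

Although the full series diverges for every fixed x, truncating it after a few terms gives sub-percent accuracy at x = 10, which is the characteristic trade-off of asymptotic expansions produced by integration by parts.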

1. Advanced experimental and numerical techniques for cavitation erosion prediction

CERN Document Server

Chahine, Georges; Franc, Jean-Pierre; Karimi, Ayat

2014-01-01

This book provides a comprehensive treatment of the cavitation erosion phenomenon and state-of-the-art research in the field. It is divided into two parts. Part 1 consists of seven chapters, offering a wide range of computational and experimental approaches to cavitation erosion. It includes a general introduction to cavitation and cavitation erosion, a detailed description of facilities and measurement techniques commonly used in cavitation erosion studies, an extensive presentation of various stages of cavitation damage (including incubation and mass loss), and insights into the contribution of computational methods to the analysis of both fluid and material behavior. The proposed approach is based on a detailed description of impact loads generated by collapsing cavitation bubbles and a physical analysis of the material response to these loads. Part 2 is devoted to a selection of nine papers presented at the International Workshop on Advanced Experimental and Numerical Techniques for Cavitation Erosion (Gr...

2. On the theories, techniques, and computer codes used in numerical reactor criticality and burnup calculations

International Nuclear Information System (INIS)

El-Osery, I.A.

1981-01-01

The purpose of this paper is to discuss the theories, techniques, and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can in principle be obtained as a solution of the Boltzmann transport equation. Numerical methods for solving transport equations are discussed, with emphasis on numerical techniques based on multigroup diffusion theory, including nodal, modal, and finite difference techniques. The most commonly known computer codes utilizing these techniques are reviewed, and some of the main computer codes related to numerical reactor criticality and burnup calculations already developed at the Reactors Department are presented.
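As a minimal example of the multigroup-diffusion techniques mentioned (reduced here to one energy group and one dimension, with made-up cross sections), the sketch below computes k_eff of a bare slab by finite differences and power iteration:

```python
import math

# One-group, 1D slab reactor: -D u'' + Sa u = (1/k) nSf u, u = 0 at both faces.
D, Sa, nSf = 1.0, 0.07, 0.08     # diffusion coefficient and cross sections (made up)
L, N = 100.0, 200                # slab width (cm) and number of interior nodes
h = L/(N + 1)

def solve_tridiag(diag, off, rhs):
    # Thomas algorithm for a constant-coefficient tridiagonal system.
    n = len(rhs)
    cp = [0.0]*n; dp = [0.0]*n
    cp[0] = off/diag; dp[0] = rhs[0]/diag
    for i in range(1, n):
        m = diag - off*cp[i-1]
        cp[i] = off/m
        dp[i] = (rhs[i] - off*dp[i-1])/m
    u = [0.0]*n
    u[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        u[i] = dp[i] - cp[i]*u[i+1]
    return u

diag = 2*D/h**2 + Sa             # finite-difference diffusion operator A
off = -D/h**2
u = [1.0]*N                      # initial flux guess
k = 1.0
for _ in range(400):             # power iteration on A^{-1} F
    v = solve_tridiag(diag, off, [nSf*ui for ui in u])
    k = sum(v)/sum(u)            # eigenvalue (k_eff) estimate
    s = sum(v)
    u = [vi/s for vi in v]       # renormalize the flux
```

For this bare slab the analytic one-group result is k = nSf/(Sa + D*B^2) with buckling B = pi/L, so the finite-difference answer can be checked directly.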

3. Numerical model updating technique for structures using firefly algorithm

Science.gov (United States)

Sai Kubair, K.; Mohan, S. C.

2018-03-01

Numerical model updating is a technique used for updating the existing experimental models for any structures related to civil, mechanical, automobiles, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match with experimental data obtained from real or prototype test structures. The present work involves the development of numerical model using MATLAB as a computational tool and with mathematical equations that define the experimental model. Firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame has been analyzed for its natural frequencies. Both the models are updated with their respective response values obtained from experimental results. The numerical results after updating show that there is a close relationship that can be brought between the experimental and the numerical models.
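The paper's beam and frame models are not given in the abstract. The sketch below updates a single parameter (the Young's modulus of a hypothetical cantilever, with all values made up) using a textbook firefly algorithm so that the model's tip deflection matches a "measured" value:

```python
import math, random

random.seed(0)
# Cantilever tip deflection delta = P*L^3/(3*E*I); the "experiment" is
# synthesized from a known E_true that the update should recover.
P, Lb, Ib = 1000.0, 2.0, 8.0e-6
E_true = 210e9
delta_meas = P*Lb**3/(3*E_true*Ib)

def objective(E):
    # Mismatch between numerical model response and measured response.
    return abs(P*Lb**3/(3*E*Ib) - delta_meas)

lo_b, hi_b = 100e9, 300e9        # search bounds for E
n, iters = 20, 150
beta0 = 1.0
gamma = 1.0/(hi_b - lo_b)**2     # attractiveness decay over the search range
alpha = 0.05*(hi_b - lo_b)       # random-walk amplitude, decayed each iteration
x = [random.uniform(lo_b, hi_b) for _ in range(n)]
for _ in range(iters):
    f = [objective(xi) for xi in x]
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:      # firefly j is brighter: move i toward j
                beta = beta0*math.exp(-gamma*(x[i] - x[j])**2)
                x[i] += beta*(x[j] - x[i]) + alpha*(random.random() - 0.5)
                x[i] = min(hi_b, max(lo_b, x[i]))
    alpha *= 0.97
E_best = min(x, key=objective)
```

With a unimodal mismatch function the swarm contracts onto the measured response, recovering the stiffness parameter; the paper applies the same loop to tip deflection and natural-frequency residuals.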

4. A Numerical Study of Quantization-Based Integrators

Directory of Open Access Journals (Sweden)

Barros Fernando

2014-01-01

Adaptive step size solvers are nowadays considered fundamental to achieving efficient ODE integration. While, traditionally, ODE solvers have been designed based on discrete-time machines, new approaches based on discrete event systems have been proposed. Quantization provides an efficient integration technique based on signal threshold crossing, leading to independent and modular solvers communicating through discrete events. These solvers can benefit from the large body of knowledge on discrete event simulation techniques, such as parallelization, to obtain efficient numerical integration. In this paper we introduce new solvers based on quantization and adaptive sampling techniques. Preliminary numerical results comparing these solvers are presented.
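A first-order quantized-state (QSS1) integrator can be sketched in a few lines. Here it is applied to dx/dt = -x: the state advances by a fixed quantum dq per event, and each event's duration follows from the current slope (quantum size and time horizon are arbitrary choices):

```python
import math

# QSS1 integration of dx/dt = -x, x(0) = 1. Instead of stepping in time,
# the solver steps in state: each event moves x by one quantum dq, and the
# elapsed time is derived from the slope. These events are the discrete
# events that quantized solvers exchange.
dq = 0.001
x, q, t, t_end = 1.0, 1.0, 0.0, 1.0
while t < t_end:
    dxdt = -q                    # derivative evaluated at the quantized state
    dt = dq/abs(dxdt)            # time until x drifts one quantum from q
    if t + dt > t_end:
        x += dxdt*(t_end - t)    # partial step to land exactly on t_end
        t = t_end
    else:
        x += dxdt*dt             # x moves by exactly one quantum
        t += dt
        q = x                    # re-quantize: emit an event
```

For this stable linear system the global error is bounded by the quantum, so the result tracks exp(-t) to within about dq, while the event times adapt automatically: they lengthen as the solution flattens.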

5. Computational techniques for inelastic analysis and numerical experiments

International Nuclear Information System (INIS)

1977-01-01

A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns with the time independent behavior, the numerical techniques based on the finite element method have been well exploited and computations have become a routine work. With respect to the problems in which the time dependent behavior is significant, it is desirable to incorporate a procedure which is workable on the mechanical model formulation as well as the method of equation of state proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at the increasingly high temperature for a long period of time. Special considerations are crucial if the analysis is to be extended to large strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development by taking into account the various requisites stated above. (Auth.)

6. Applying recursive numerical integration techniques for solving high dimensional integrals

International Nuclear Information System (INIS)

Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

2016-11-01

The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
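The lattice applications aside, the recursive structure of RNI can be illustrated on a separable test integrand over the unit hypercube (chosen here so the exact answer is known), applying the same 5-point Gauss-Legendre rule dimension by dimension:

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
         0.5384693101056831, 0.9061798459386640]
WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
           0.4786286704993665, 0.2369268850561891]

def rni(f, dim, args=()):
    """Recursive numerical integration of f over [0, 1]^dim: the same 1D
    quadrature rule is applied iteratively, one dimension at a time."""
    if dim == 0:
        return f(*args)
    total = 0.0
    for xi, wi in zip(NODES, WEIGHTS):
        x01 = 0.5*(xi + 1.0)                 # map node from [-1,1] to [0,1]
        total += 0.5*wi*rni(f, dim - 1, args + (x01,))
    return total

d = 5
val = rni(lambda *xs: math.exp(sum(xs)), d)  # integrand exp(x1 + ... + xd)
exact = (math.e - 1.0)**d
```

Because the Gaussian rule converges exponentially in the number of points for smooth integrands, the tensor-product error here is far below anything reachable with 5^5 Monte Carlo samples, which is the contrast with the 1/√(N) MCMC scaling that the paper exploits.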

7. Applying recursive numerical integration techniques for solving high dimensional integrals

Energy Technology Data Exchange (ETDEWEB)

Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

2016-11-15

The error scaling for Markov-chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-consuming to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

8. Numerical Computational Technique for Scattering from Underwater Objects

OpenAIRE

T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar

2013-01-01

This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scatter has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...

9. Do Mitochondrial Replacement Techniques Affect Qualitative or Numerical Identity?

Science.gov (United States)

Liao, S Matthew

2017-01-01

Mitochondrial replacement techniques (MRTs), known in the popular media as 'three-parent' or 'three-person' IVFs, have the potential to enable women with mitochondrial diseases to have children who are genetically related to them but without such diseases. In the debate regarding whether MRTs should be made available, an issue that has garnered considerable attention is whether MRTs affect the characteristics of an existing individual or whether they result in the creation of a new individual, given that MRTs involve the genetic manipulation of the germline. In other words, do MRTs affect the qualitative identity or the numerical identity of the resulting child? For instance, a group of panelists on behalf of the UK Human Fertilisation and Embryology Authority (HFEA) has claimed that MRTs affect only the qualitative identity of the resulting child, while the Working Group of the Nuffield Council on Bioethics (NCOB) has argued that MRTs would create a numerically distinct individual. In this article, I shall argue that MRTs do create a new and numerically distinct individual. Since my explanation is different from the NCOB's explanation, I shall also offer reasons why my explanation is preferable to the NCOB's explanation. © 2016 John Wiley & Sons Ltd.

10. Application of finite element numerical technique to nuclear reactor geometries

Energy Technology Data Exchange (ETDEWEB)

Rouai, N M [Nuclear engineering department faculty of engineering Al-fateh universty, Tripoli (Libyan Arab Jamahiriya)

1995-10-01

Determination of the temperature distribution in nuclear fuel elements is of utmost importance to ensure that the temperature stays within safe limits during reactor operation. This paper discusses the use of the finite element (FE) numerical technique for the solution of the two-dimensional heat conduction equation in geometries related to nuclear reactor cores. The FE solution starts with variational calculus, which transforms the heat conduction equation into an equivalent integral form and seeks the function that minimizes this integral, thereby giving the solution to the heat conduction equation. In this paper FE theory as applied to heat conduction is briefly outlined, and a 2-D program is used to apply the theory to simple shapes and to two gas-cooled reactor fuel elements. Good results are obtained in both cases with a reasonable number of elements. 7 figs.
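The variational FE procedure outlined above can be sketched in one dimension. This is a minimal illustration under assumed conditions (steady conduction -k u'' = q on [0, 1] with fixed zero end temperatures and a uniform source; all values invented), not the paper's 2-D program: piecewise-linear trial functions minimize the integral I(u) = ∫ (k/2) u'² - q u dx, which leads to the assembled linear system below.

```python
import numpy as np

k, q = 1.0, 1.0           # conductivity and uniform heat source (illustrative)
n_el = 8                  # number of linear elements
h = 1.0 / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
F = np.zeros(n_nodes)              # global load vector
ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
fe = (q * h / 2.0) * np.array([1.0, 1.0])             # element load

for e in range(n_el):              # assembly over elements
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    F[idx] += fe

# Apply u = 0 Dirichlet conditions at both ends and solve the interior system
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

# For this problem the exact solution is u(x) = q x (1 - x) / (2 k), and
# linear elements reproduce it exactly at the nodes.
x = np.linspace(0.0, 1.0, n_nodes)
exact = q * x * (1 - x) / (2 * k)
```

The same minimization-and-assembly pattern extends to 2-D triangular or quadrilateral elements, which is what the abstract's program applies to reactor core geometries.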

11. Recent developments in numerical simulation techniques of thermal recovery processes

Energy Technology Data Exchange (ETDEWEB)

Tamim, M. [Bangladesh University of Engineering and Technology, Bangladesh (Bangladesh); Abou-Kassem, J.H. [Chemical and Petroleum Engineering Department, UAE University, Al-Ain 17555 (United Arab Emirates); Farouq Ali, S.M. [University of Alberta, Alberta (Canada)

2000-05-01

Numerical simulation of thermal processes (steam flooding, steam stimulation, SAGD, in-situ combustion, electrical heating, etc.) is an integral part of a thermal project design. The general tendency in the last 10 years has been to use commercial simulators. During the last decade, only a few new models have been reported in the literature. More work has been done to modify and refine solutions to existing problems to improve the efficiency of simulators. The paper discusses some of the recent developments in simulation techniques of thermal processes such as grid refinement, grid orientation, effect of temperature on relative permeability, mathematical models, and solution methods. The various aspects of simulation discussed here promote better understanding of the problems encountered in the simulation of thermal processes and will be of value to both simulator users and developers.

12. Application of finite element numerical technique to nuclear reactor geometries

International Nuclear Information System (INIS)

Rouai, N. M.

1995-01-01

Determination of the temperature distribution in nuclear fuel elements is of utmost importance to ensure that the temperature stays within safe limits during reactor operation. This paper discusses the use of the finite element (FE) numerical technique for the solution of the two-dimensional heat conduction equation in geometries related to nuclear reactor cores. The FE solution starts with variational calculus, which transforms the heat conduction equation into an equivalent integral form and seeks the function that minimizes this integral, thereby giving the solution to the heat conduction equation. In this paper FE theory as applied to heat conduction is briefly outlined, and a 2-D program is used to apply the theory to simple shapes and to two gas-cooled reactor fuel elements. Good results are obtained in both cases with a reasonable number of elements. 7 figs

13. A new numerical technique to design satellite energetic electron detectors

CERN Document Server

Tuszewski, M G; Ingraham, J C

2002-01-01

Energetic charged particles trapped in the magnetosphere are routinely detected by satellite instruments. However, it is generally difficult to extract quantitative energy and angular information from such measurements because the interaction of energetic electrons with matter is rather complex. Beam calibrations and Monte-Carlo (MC) simulations are often used to evaluate a flight instrument once it is built. However, rules of thumb and past experience are common tools to design the instrument in the first place. Hence, we have developed a simple numerical procedure, based on analytical probabilities, suitable for instrumental design and evaluation. In addition to the geometrical response, the contributions of surface backscattering, edge penetration, and bremsstrahlung radiation are estimated. The new results are benchmarked against MC calculations for a simple test case. Complicated effects, such as the contribution of the satellite to the instrumental response, can be estimated with the new formalism.

14. A numerical technique for enhanced efficiency and stability for the solution of the nuclear reactor equation

International Nuclear Information System (INIS)

Khotylev, V.A.; Hoogenboom, J.E.

1996-01-01

The paper presents new techniques for the solution of the nuclear reactor equation in the diffusion approximation, with enhanced efficiency and stability. The code system based on the new technique solves a number of steady-state and/or transient problems with coupled thermal hydraulics in one-, two-, or three-dimensional geometry with reduced CPU time compared to similar code systems of previous generations, provided well-posed neutronics problems are considered. Automated detection of ill-posed problems and selection of the appropriate numerical method make the new code system capable of yielding a correct solution for a wider range of problems without user intervention. (author)

15. A numerical technique for enhanced efficiency and stability for the solution of the nuclear reactor equation

Energy Technology Data Exchange (ETDEWEB)

Khotylev, V.A.; Hoogenboom, J.E. [Delft Univ. of Technology, Interfaculty Reactor Inst., Delft (Netherlands)

1996-07-01

The paper presents new techniques for the solution of the nuclear reactor equation in the diffusion approximation, with enhanced efficiency and stability. The code system based on the new technique solves a number of steady-state and/or transient problems with coupled thermal hydraulics in one-, two-, or three-dimensional geometry with reduced CPU time compared to similar code systems of previous generations, provided well-posed neutronics problems are considered. Automated detection of ill-posed problems and selection of the appropriate numerical method make the new code system capable of yielding a correct solution for a wider range of problems without user intervention. (author)

16. On numerical-analytic techniques for boundary value problems

Czech Academy of Sciences Publication Activity Database

Rontó, András; Rontó, M.; Shchobak, N.

2012-01-01

Roč. 12, č. 3 (2012), s. 5-10 ISSN 1335-8243 Institutional support: RVO:67985840 Keywords : numerical-analytic method * periodic successive approximations * Lyapunov-Schmidt method Subject RIV: BA - General Mathematics http://www.degruyter.com/view/j/aeei.2012.12.issue-3/v10198-012-0035-1/v10198-012-0035-1.xml?format=INT

17. Wave propagation in fluids models and numerical techniques

CERN Document Server

Guinot, Vincent

2012-01-01

This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

18. SEM-based characterization techniques

International Nuclear Information System (INIS)

Russell, P.E.

1986-01-01

The scanning electron microscope is now a common instrument in materials characterization laboratories. The basic role of the SEM as a topographic imaging system has steadily expanded to include a variety of SEM-based analytical techniques. These techniques cover the range from basic semiconductor materials characterization to live-time device characterization of operating LSI or VLSI devices. This paper introduces many of the more commonly used techniques, describes the modifications or additions to a conventional SEM required to utilize them, and gives examples of their use. First, the types of signals available from a sample being irradiated by an electron beam are reviewed. Then, where applicable, the types of spectroscopy or microscopy which have evolved to utilize the various signal types are described. This is followed by specific examples of the use of such techniques to solve problems related to semiconductor technology. Techniques emphasized include: x-ray fluorescence spectroscopy, electron beam induced current (EBIC), stroboscopic voltage analysis, cathodoluminescence and electron beam IC metrology. Current and future trends of some of these techniques, as related to the semiconductor industry, are discussed

19. Finger-Based Numerical Skills Link Fine Motor Skills to Numerical Development in Preschoolers.

Science.gov (United States)

Suggate, Sebastian; Stoeger, Heidrun; Fischer, Ursula

2017-12-01

Previous studies investigating the association between fine-motor skills (FMS) and mathematical skills have lacked specificity. In this study, we test whether an FMS link to numerical skills is due to the involvement of finger representations in early mathematics. We gave 81 pre-schoolers (mean age of 4 years, 9 months) a set of FMS measures and numerical tasks with and without a specific finger focus. Additionally, we used receptive vocabulary and chronological age as control measures. FMS linked more closely to finger-based than to nonfinger-based numerical skills even after accounting for the control variables. Moreover, the relationship between FMS and numerical skill was entirely mediated by finger-based numerical skills. We concluded that FMS are closely related to early numerical skill development through finger-based numerical counting that aids the acquisition of mathematical mental representations.

20. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

Science.gov (United States)

Berger, B. S.; Duangudom, S.

1973-01-01

A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists of reducing a given initial value problem defined over some interval to a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
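The decomposition idea above, restating one long initial value problem as a chain of short ones, each started from the previous subinterval's endpoint, can be sketched as follows. This is only an illustration of the subinterval chaining: a plain RK4 step stands in for the per-subinterval numerical Laplace inversion described in the abstract, and the test equation and parameters are invented.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def solve_by_subintervals(f, y0, t0, t1, n_sub, steps_per_sub):
    """Chain n_sub short IVPs: each starts from the previous endpoint."""
    y, t = y0, t0
    width = (t1 - t0) / n_sub
    for _ in range(n_sub):          # each subinterval is its own IVP
        h = width / steps_per_sub
        for _ in range(steps_per_sub):
            y = rk4_step(f, t, y, h)
            t += h
    return y

# Oscillatory test problem: y'' + y = 0, y(0) = 1, y'(0) = 0, i.e. y = cos(t),
# followed over many cycles (T = 20*pi, ten full oscillations).
f = lambda t, y: [y[1], -y[0]]
T = 20 * math.pi
y_end = solve_by_subintervals(f, [1.0, 0.0], 0.0, T, n_sub=40, steps_per_sub=50)
```

The point of the decomposition is that each short solve only needs to stay accurate over a fraction of a cycle, so the accumulated approximation remains useful over many cycles of the oscillation.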

1. Numerical and modeling techniques used in the EPIC code

International Nuclear Information System (INIS)

Pizzica, P.A.; Abramson, P.B.

1977-01-01

EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique

2. Numerical techniques for large cosmological N-body simulations

International Nuclear Information System (INIS)

Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

1985-01-01

We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P³M codes, in which a higher-resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing-mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic-range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important
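The PM force calculation described above can be sketched in one dimension. This is a minimal illustration with invented parameters, not the paper's code: cloud-in-cell (CIC) mass assignment onto a periodic mesh, an FFT solution of the Poisson equation, and interpolation of the mesh force back to the particles with the same CIC weights.

```python
import numpy as np

n_mesh, box = 64, 1.0            # mesh size and box length (illustrative)
dx = box / n_mesh
rng = np.random.default_rng(0)
pos = rng.random(16) * box       # particle positions (unit masses assumed)

# CIC assignment: each particle spreads its mass over its two nearest cells
rho = np.zeros(n_mesh)
cell = np.floor(pos / dx).astype(int)
frac = pos / dx - cell
np.add.at(rho, cell % n_mesh, 1.0 - frac)
np.add.at(rho, (cell + 1) % n_mesh, frac)
rho = rho / dx - pos.size / box  # density contrast about the mean

# Solve the Poisson equation (units with the 4*pi*G factor absorbed):
# d^2 phi / dx^2 = rho  ->  phi_k = -rho_k / k^2 in Fourier space
k = 2 * np.pi * np.fft.fftfreq(n_mesh, d=dx)
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
nonzero = k != 0                 # the k = 0 mode is fixed by periodicity
phi_k[nonzero] = -rho_k[nonzero] / k[nonzero] ** 2

# Mesh force f = -d phi / dx, differentiated spectrally
f_mesh = np.real(np.fft.ifft(-1j * k * phi_k))

# Interpolate the force back to the particles with the same CIC weights
f_part = (1.0 - frac) * f_mesh[cell % n_mesh] \
       + frac * f_mesh[(cell + 1) % n_mesh]
```

Using the same assignment and interpolation kernel on both legs is what keeps the scheme's self-forces small; a P³M code would add a direct short-range sum over neighboring particles on top of this mesh force.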

3. Two dimensional numerical simulation of gas discharges: comparison between particle-in-cell and FCT techniques

Energy Technology Data Exchange (ETDEWEB)

Soria-Hoyo, C; Castellanos, A [Departamento de Electronica y Electromagnetismo, Facultad de Fisica, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain); Pontiga, F [Departamento de Fisica Aplicada II, EUAT, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: cshoyo@us.es

2008-10-21

Two different numerical techniques have been applied to the numerical integration of equations modelling gas discharges: a finite-difference flux corrected transport (FD-FCT) technique and a particle-in-cell (PIC) technique. The PIC technique here implemented has been specifically designed for the simulation of 2D electrical discharges using cylindrical coordinates. The development and propagation of a streamer between two parallel electrodes has been used as a convenient test to compare the performance of both techniques. In particular, the phase velocity of the cathode-directed streamer has been used to check the internal consistency of the numerical simulations. The results obtained from the two techniques are in reasonable agreement with each other, and both techniques have proved their ability to follow the high gradients of charge density and electric field present in problems of this type. Moreover, the streamer velocities predicted by the simulation are in accordance with typical experimental values.
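The FCT idea referred to above, a diffusive low-order transport step corrected by limited antidiffusive fluxes so that no new extrema are created, can be sketched for 1-D linear advection. This is an illustrative Boris-Book-style sketch with invented grid parameters, not the paper's 2-D discharge solver.

```python
import numpy as np

def fct_step(u, c):
    """One FCT update of u_t + a u_x = 0 on a periodic grid, Courant number c."""
    utd = u - c * (u - np.roll(u, 1))             # low-order (upwind) step
    utd_p = np.roll(utd, -1)                      # utd_{i+1}
    raw = 0.5 * c * (1.0 - c) * (utd_p - utd)     # raw antidiffusive flux i+1/2
    s = np.sign(raw)
    limited = s * np.maximum(0.0, np.minimum.reduce([
        np.abs(raw),
        s * (np.roll(utd, -2) - utd_p),           # utd_{i+2} - utd_{i+1}
        s * (utd - np.roll(utd, 1)),              # utd_i   - utd_{i-1}
    ]))                                           # Boris-Book style limiter
    return utd - (limited - np.roll(limited, 1))  # conservative correction

# Advect a square wave (a stand-in for a steep streamer front) for one full
# period: mass is conserved and the limiter creates no new extrema.
n = 100
x = np.arange(n)
u0 = np.where((x > 20) & (x < 40), 1.0, 0.0)
u = u0.copy()
for _ in range(2 * n):                            # c = 0.5 -> 2n steps/period
    u = fct_step(u, 0.5)
```

The limiter is what lets the scheme follow the steep charge-density and field gradients mentioned in the abstract without the spurious oscillations a plain high-order scheme would produce.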

4. Two dimensional numerical simulation of gas discharges: comparison between particle-in-cell and FCT techniques

International Nuclear Information System (INIS)

Soria-Hoyo, C; Castellanos, A; Pontiga, F

2008-01-01

Two different numerical techniques have been applied to the numerical integration of equations modelling gas discharges: a finite-difference flux corrected transport (FD-FCT) technique and a particle-in-cell (PIC) technique. The PIC technique here implemented has been specifically designed for the simulation of 2D electrical discharges using cylindrical coordinates. The development and propagation of a streamer between two parallel electrodes has been used as a convenient test to compare the performance of both techniques. In particular, the phase velocity of the cathode-directed streamer has been used to check the internal consistency of the numerical simulations. The results obtained from the two techniques are in reasonable agreement with each other, and both techniques have proved their ability to follow the high gradients of charge density and electric field present in problems of this type. Moreover, the streamer velocities predicted by the simulation are in accordance with typical experimental values.

5. The Healthy Development of Yazd Province in 2013; using the Techniques of Numerical Taxonomy

Directory of Open Access Journals (Sweden)

2016-03-01

Full Text Available Introduction: Since the early 1990s, the concept of human development has been proposed as one of the criteria for evaluating development; improving community health, an essential component of this development, has become a growing challenge for governments. This study was conducted to determine the level of health development of Yazd province in 2013, using numerical taxonomy techniques. Methods: This descriptive study assessed health indicators in the 10 townships of Yazd province in 2013. The required data were collected based on expert opinion and by referring to the deputies of Hygiene, Treatment, Management and Resource Development, the Food and Drug Administration of Shahid Sadoughi University of Medical Sciences, the Yazd Province Health Center, the Yazd Province Statistics Center, and the Welfare Organization of Yazd province, and were analyzed with AHP and numerical taxonomy techniques. Results: Mehriz and Abarkooh were the most developed and the most deprived townships, with degrees of development of 0.474 and 0.987, respectively; Bafgh, Yazd, Ardakan, Meybod, Taft, Bahabad, Saduq and Khatam fell between them, in that order. Conclusion: There are differences and gaps in health development between the townships of Yazd province; it is hoped that national and provincial authorities will plan and allocate health facilities to each township based on its rate of development.
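The numerical-taxonomy development degree used in studies of this kind can be sketched as follows. The data and indicator choices below are entirely made up for illustration (the study's actual indicators and values are not given in the abstract): indicators are standardized, each unit's Euclidean distance to an ideal pattern is computed, and the distance is scaled by a development norm; smaller values mean more developed, matching the ranking convention in the abstract.

```python
import numpy as np

# Rows = townships, columns = health indicators (all values invented;
# here larger raw values are assumed to be better for every indicator).
X = np.array([[3.0, 40.0, 0.8],
              [1.5, 25.0, 0.4],
              [2.2, 33.0, 0.6],
              [2.8, 38.0, 0.7]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each indicator
ideal = Z.max(axis=0)                          # ideal (best observed) pattern
d = np.sqrt(((Z - ideal) ** 2).sum(axis=1))    # distance of each unit to ideal
d0 = d.mean() + 2 * d.std()                    # norm of development
f = d / d0                                     # development degree:
                                               # smaller f = more developed
```

Ranking the townships by f then yields the kind of ordering reported in the Results section.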

6. Structural Analysis of Composite Laminates using Analytical and Numerical Techniques

Directory of Open Access Journals (Sweden)

Sanghi Divya

2016-01-01

Full Text Available A laminated composite material consists of different layers of matrix and fibres. Its properties can vary greatly with each layer's (or ply's) orientation, material properties and the number of layers itself. The present paper focuses on a novel approach in which an analytical method is used to arrive at a preliminary ply layup order for a composite laminate, which acts as feeder data for the further detailed analysis done with FEA tools. The equations used in our MATLAB code are based on analytical study and supply results that are remarkably close, with high probability, to the final optimized layup found through extensive FEA analysis. This saves significant computing time and considerable FEA processing, yielding efficient results quickly. The method also provides the user with conditions that predict the successive failure sequence of the composite plies, a result option not available even in popular FEM tools. The predicted results are further verified by testing the laminates in the laboratory, and the results are found to be in good agreement.
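The analytical treatment of a ply layup referred to above is conventionally based on classical lamination theory. The following is a minimal sketch of that theory, not the paper's MATLAB code: the ply properties, thickness, and stacking sequence are assumed illustrative values, and only the in-plane stiffness matrix A is assembled.

```python
import numpy as np

# Assumed unidirectional ply properties (illustrative, Pa)
E1, E2, G12, v12 = 140e9, 10e9, 5e9, 0.3
v21 = v12 * E2 / E1
denom = 1 - v12 * v21
Q = np.array([[E1 / denom, v12 * E2 / denom, 0.0],
              [v12 * E2 / denom, E2 / denom, 0.0],
              [0.0, 0.0, G12]])          # reduced stiffness in ply axes

def Qbar(theta):
    """Reduced stiffness rotated to the laminate axes (angle in degrees)."""
    m, n = np.cos(np.radians(theta)), np.sin(np.radians(theta))
    T = np.array([[m * m, n * n, 2 * m * n],
                  [n * n, m * m, -2 * m * n],
                  [-m * n, m * n, m * m - n * n]])
    R = np.diag([1.0, 1.0, 2.0])         # engineering-strain correction
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

layup = [0, 45, -45, 90, 90, -45, 45, 0]  # example symmetric stack (degrees)
t_ply = 0.125e-3                          # ply thickness (m, assumed)
A = sum(Qbar(th) for th in layup) * t_ply # in-plane laminate stiffness matrix
```

Sweeping candidate stacking sequences through a calculation like this and screening them against stiffness or failure criteria is the kind of preliminary-layup step the abstract describes feeding into detailed FEA.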

7. Advanced Numerical Integration Techniques for High-Fidelity SDE Spacecraft Simulation

Data.gov (United States)

National Aeronautics and Space Administration — Classic numerical integration techniques, such as the ones at the heart of several NASA GSFC analysis tools, are known to work well for deterministic differential...

8. Numerical simulation of 3D unsteady flow in a rotating pump by dynamic mesh technique

International Nuclear Information System (INIS)

Huang, S; Guo, J; Yang, F X

2013-01-01

In this paper, numerical simulations of unsteady flow in three typical rotating pumps, a Roots blower, a roto-jet pump and a centrifugal pump, were performed using the three-dimensional dynamic mesh technique. In the unsteady simulations, all the computational domains were set as stationary in one inertial reference frame. The motions of the solid boundaries were defined by a profile file in the FLUENT commercial code, in which the rotational orientation and speed of the rotors were specified. Three methods (spring-based smoothing, dynamic layering and local re-meshing) were used to achieve mesh deformation and re-meshing. The unsteady solutions of the flow field and pressure distribution were obtained. After a start-up stage, the flow parameters exhibit time-periodic behaviour corresponding to the blade passing frequency of the rotor. This work shows that the dynamic mesh technique can achieve numerical simulation of three-dimensional unsteady flow fields in various kinds of rotating pumps and has strong versatility and broad application prospects

9. Bases of technique of sprinting

Directory of Open Access Journals (Sweden)

Valeriy Druz

2015-06-01

Full Text Available Purpose: to determine the biomechanical patterns of body movement that provide the highest speed in sprinting. Material and Methods: analysis of the scientific and methodical literature on the problem, anthropometric characteristics of the surveyed sportsmen, and analysis of high-speed footage of the world's leading runners. Results: the biomechanical basis of sprinting technique is the acceleration and movement of the general centre of body mass along a parabolic curve in the start phase, taking into account its initial height in the low-start pose. Its further movement follows a cycloidal trajectory formed by the pendulum movement of the extremities, which creates lift so that the flight phase of a running step lasts longer than the support phase. Conclusions: the biomechanical regularities of sprinting technique obtained allow the efficiency of sprint training to be increased.

10. Integration of finite element analysis and numerical optimization techniques for RAM transport package design

International Nuclear Information System (INIS)

Harding, D.C.; Eldred, M.S.; Witkowski, W.R.

1995-01-01

Type B radioactive material transport packages must meet strict Nuclear Regulatory Commission (NRC) regulations specified in 10 CFR 71. Type B containers include impact limiters, radiation or thermal shielding layers, and one or more containment vessels. In the past, each component was typically designed separately based on its driving constraint and the expertise of the designer. The components were subsequently assembled and the design modified iteratively until all of the design criteria were met. This approach neglects the fact that components may serve secondary purposes as well as primary ones. For example, an impact limiter's primary purpose is to act as an energy absorber and protect the contents of the package, but can also act as a heat dissipater or insulator. Designing the component to maximize its performance with respect to both objectives can be accomplished using numerical optimization techniques

11. SMD-based numerical stochastic perturbation theory

Energy Technology Data Exchange (ETDEWEB)

Dalla Brida, Mattia [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano-Bicocca (Italy); Luescher, Martin [CERN, Theoretical Physics Department, Geneva (Switzerland); AEC, Institute for Theoretical Physics, University of Bern (Switzerland)

2017-05-15

The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schroedinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit. (orig.)

12. SMD-based numerical stochastic perturbation theory

International Nuclear Information System (INIS)

Dalla Brida, Mattia; Luescher, Martin

2017-01-01

The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schroedinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit. (orig.)

13. SMD-based numerical stochastic perturbation theory

Science.gov (United States)

Dalla Brida, Mattia; Lüscher, Martin

2017-05-01

The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.

14. Numerical study on air turbines with enhanced techniques for OWC wave energy conversion

Science.gov (United States)

Cui, Ying; Hyun, Beom-Soo; Kim, Kilwon

2017-10-01

In recent years, the oscillating water column (OWC) wave energy converter, which can capture wave energy from the ocean, has been widely applied all over the world. As the essential part of the OWC system, the impulse and Wells turbines are capable of converting the low-pressure pneumatic energy into mechanical shaft power. As an enhanced technique, the design of an endplate or ring attached to the blade tip is investigated numerically in this paper. 3D numerical models based on the CFD software FLUENT 12.0 are established and validated against the corresponding experimental results from the reports of Setoguchi et al. (2004) and Takao et al. (2001). The flow fields and non-dimensional evaluation coefficients are then calculated and analyzed under steady conditions. Results show that the efficiency of the impulse turbine with ring can reach up to 0.49 when ϕ=1, which is 4% higher than in the endplate-type and original cases. The ring-type Wells turbine with fixed guide vanes shows the best performance, with a maximal efficiency of 0.55, which is 22% higher than that of the original one. In addition, a quasi-steady analysis is used to calculate the mean efficiency and output work over a wave cycle under sinusoidal flow conditions. Taken together, this study provides support for the structural optimization of impulse and Wells turbines in the future.

15. Delamination of plasters applied to historical masonry walls: analysis by acoustic emission technique and numerical model

Science.gov (United States)

Grazzini, A.; Lacidogna, G.; Valente, S.; Accornero, F.

2018-06-01

Masonry walls of historical buildings are subject to rising damp due to capillarity or rain infiltration, which over time produces decay and delamination of historical plasters. In the restoration of masonry buildings, plaster detachment frequently occurs because of mechanical incompatibility of the repair mortar. An innovative laboratory procedure is described for testing the mechanical adhesion of new repair mortars. Static compression tests were carried out on composite stone block-repair mortar specimens, whose specific geometry allows the de-bonding process of mortar adhering to a stone masonry structure to be tested. The acoustic emission (AE) technique was employed to estimate the amount of energy released by fracture propagation at the adherence surface between mortar and stone. A numerical simulation based on the cohesive crack model was developed. The evolution of the mortar detachment process in a coupled stone brick-mortar system was analysed by triangulation of AE signals, which can improve the numerical model and predict the type of failure at the adhesion surface of the repair plaster. Through the cohesive crack model, the de-bonding phenomena occurring at the interface between stone block and mortar could be interpreted theoretically, and the mechanical behaviour of the interface thereby characterized.

16. GPU based numerical simulation of core shooting process

Directory of Open Access Journals (Sweden)

Yi-zhong Zhang

2017-11-01

Full Text Available The core shooting process is the most widely used technique to make sand cores, and it plays an important role in their quality. Although numerical simulation can hopefully optimize the core shooting process, research on its numerical simulation is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process was developed and achieved good agreement with in-situ experiments. To meet the needs of engineering applications, a graphics processing unit (GPU) was also used to improve the calculation efficiency. The parallel algorithm, based on the Compute Unified Device Architecture (CUDA) platform, can significantly decrease computing time through multi-threaded GPU execution. The accuracy of the calculations was ensured by comparison with in-situ experimental results photographed by a high-speed camera, and the design and optimization of the parallel algorithm were discussed. The simulation of a sand core test-piece confirmed the improvement in calculation efficiency by GPU. The program was further validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system: the computing time of the parallel program was reduced by nearly 95% while the simulation results remained quite consistent with the experimental data. The GPU parallelization method thus solves the problem of low computational efficiency of the 3D sand shooting simulation program, making the developed program appropriate for engineering applications.

17. Numerical Model based Reliability Estimation of Selective Laser Melting Process

DEFF Research Database (Denmark)

Mohanty, Sankhya; Hattel, Jesper Henri

2014-01-01

Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

18. Improved numerical grid generation techniques for the B2 edge plasma code

International Nuclear Information System (INIS)

Stotler, D.P.; Coster, D.P.

1992-06-01

Techniques used to generate grids for edge fluid codes such as B2 from numerically computed equilibria are discussed. Fully orthogonal, numerically derived grids closely resembling analytically prescribed meshes can be obtained. However, the details of the poloidal field can vary, yielding significantly different plasma parameters in the simulations. The magnitude of these differences is consistent with the predictions of an analytic model of the scrape-off layer. Both numerical and analytic grids are insensitive to changes in their defining parameters. Methods for implementing nonorthogonal boundaries in these meshes are also presented; they differ slightly from those required for fully orthogonal grids

19. Base Oils Biodegradability Prediction with Data Mining Techniques

Directory of Open Access Journals (Sweden)

Malika Trabelsi

2010-02-01

Full Text Available In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy: with the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. The Decision Trees technique, however, helped uncover the most significant predictor, and a simple classification rule derived from this predictor resulted in good classification accuracy. Applying this rule enables efficient classification of base oils into low- or high-biodegradability classes with high accuracy. For the latter, a higher-precision biodegradability prediction can be obtained using continuous modeling techniques.
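The single-predictor rule described above can be sketched as a simple threshold classifier. The predictor name and the threshold value below are illustrative assumptions, not the ones derived in the study:

```python
# Hypothetical single-predictor rule for classifying base oils as high/low
# biodegradability. Predictor and threshold are illustrative, not the study's.
def classify_oil(predictor_value: float, threshold: float = 95.0) -> str:
    """Return 'high' if the (hypothetical) predictor exceeds the threshold."""
    return "high" if predictor_value > threshold else "low"

# Evaluate the rule on a small synthetic sample (second field = ground truth).
samples = [(120.0, "high"), (80.0, "low"), (101.0, "high"), (60.0, "low")]
accuracy = sum(classify_oil(x) == y for x, y in samples) / len(samples)
print(accuracy)  # 1.0 on this synthetic set
```

A rule this simple is attractive precisely because it can be audited by hand, which is the point the abstract makes about Decision Trees as a feature-selection aid.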

20. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

Science.gov (United States)

Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

2012-01-01

Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved
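The core idea of integrating over an exactly parameterized curved element, rather than a planar approximation, can be illustrated on the simplest curved surface. The sketch below (a composite Simpson rule over a parameterized octant of the unit sphere, not the paper's quadrature scheme) recovers the exact patch area:

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def sphere_octant_area():
    # Exact parameterization of the curved patch: dA = sin(theta) dtheta dphi
    # over theta, phi in [0, pi/2]; the phi integral is trivial.
    inner = simpson(math.sin, 0.0, math.pi / 2)
    return inner * (math.pi / 2)

print(sphere_octant_area())  # ~ pi/2, the exact octant area of the unit sphere
```

Because the parameterization is exact, the only error is the quadrature error, which is the separation of concerns the paper's curved-element approach exploits.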

1. Numerical modelling techniques of soft soil improvement via stone columns: A brief review

Science.gov (United States)

Zukri, Azhani; Nazir, Ramli

2018-04-01

There are a number of numerical studies of stone column systems in the literature. Most involve two-dimensional analysis of stone column behaviour, while only a few use three-dimensional analysis. The most popular software in those studies was Plaxis 2D and 3D. Other software used for numerical analysis includes DIANA, EXAMINE, ZSoil, ABAQUS, ANSYS, NISA, GEOSTUDIO, CRISP, TOCHNOG, CESAR, GEOFEM (2D & 3D), FLAC, and FLAC3D. This paper reviews the methodological approaches for modelling stone columns numerically, in both two-dimensional and three-dimensional analyses. The numerical techniques and suitable constitutive models used in the studies are also discussed, and the validation methods used to verify the numerical analyses are presented. This review also serves as a guide for junior engineers through the applicable procedures and considerations when constructing and running a two- or three-dimensional numerical analysis, while also citing numerous relevant references.

2. Case-based reasoning diagnostic technique based on multi-attribute similarity

Energy Technology Data Exchange (ETDEWEB)

Makoto, Takahashi [Tohoku University, Miyagi (Japan); Akio, Gofuku [Okayama University, Okayama (Japan)

2014-08-15

A case-based diagnostic technique has been developed based on multi-attribute similarity. A specific feature of the developed system is the use of multiple attributes of process signals for similarity evaluation to retrieve a similar case stored in a case base. The technique has been applied to measurement data from Monju with some simulated anomalies. The results of numerical experiments showed that it can be utilized as one of the methods in a hybrid-type diagnosis system.
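Multi-attribute similarity retrieval can be sketched as a weighted distance over per-signal attributes. The attribute names, weights, and case base below are illustrative assumptions, not those of the Monju system:

```python
import math

# Sketch of case retrieval by multi-attribute similarity. Each stored case
# holds a few signal attributes (mean, variance, trend); names, weights and
# values are illustrative only.
def similarity(query, case, weights):
    """Weighted inverse-distance similarity across attributes, in (0, 1]."""
    d = sum(w * (query[k] - case[k]) ** 2 for k, w in weights.items())
    return 1.0 / (1.0 + math.sqrt(d))

case_base = {
    "pump_degradation": {"mean": 0.8, "variance": 0.10, "trend": -0.02},
    "sensor_drift":     {"mean": 0.5, "variance": 0.01, "trend": 0.05},
}
weights = {"mean": 1.0, "variance": 1.0, "trend": 1.0}
query = {"mean": 0.78, "variance": 0.12, "trend": -0.01}

best = max(case_base, key=lambda name: similarity(query, case_base[name], weights))
print(best)  # pump_degradation
```

Using several attributes per signal, rather than raw values alone, is what lets retrieval distinguish cases whose signal means coincide but whose dynamics differ.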

3. Polynomial chaos methods for hyperbolic partial differential equations numerical techniques for fluid dynamics problems in the presence of uncertainties

CERN Document Server

2015-01-01

This monograph presents computational techniques and numerical analysis to study conservation laws under uncertainty using the stochastic Galerkin formulation. With the continual growth of computer power, these methods are becoming increasingly popular as an alternative to more classical sampling-based techniques. The approach described in the text takes advantage of stochastic Galerkin projections applied to the original conservation laws to produce a large system of modified partial differential equations, the solutions to which directly provide a full statistical characterization of the effect of uncertainties. Polynomial Chaos Methods of Hyperbolic Partial Differential Equations focuses on the analysis of stochastic Galerkin systems obtained for linear and non-linear convection-diffusion equations and for systems of conservation laws; a detailed well-posedness and accuracy analysis is presented to enable the design of robust and stable numerical methods. The exposition is restricted to one spatial dime...

4. Simulation of white light generation and near light bullets using a novel numerical technique

Science.gov (United States)

Zia, Haider

2018-01-01

An accurate and efficient simulation has been devised, employing a new numerical technique to simulate the derivative generalised non-linear Schrödinger equation in all three spatial dimensions and time. The simulation models all pertinent effects, such as self-steepening and plasma, for the non-linear propagation of ultrafast optical radiation in bulk material. Simulation results are compared to published experimental spectral data of an example yttrium aluminium garnet (YAG) system at 3.1 μm radiation and fit to within a factor of 5. The simulation shows that there is a stability point near the end of the 2 mm crystal where a quasi-light bullet (spatio-temporal soliton) is present. Within this region, the pulse is collimated at a reduced diameter (a factor of ∼2) and there exists a near temporal soliton at the spatial center. The temporal intensity within this stable region is compressed by a factor of ∼4 compared to the input. This study shows that the simulation highlights new physical phenomena, based on the interplay of various linear, non-linear and plasma effects, that go beyond the experiment and is thus integral to achieving accurate designs of white light generation systems for optical applications. An adaptive error reduction algorithm tailor-made for this simulation is also presented in the appendix.
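The paper's solver handles the full 3D+1 derivative generalised NLSE with self-steepening and plasma terms; as a much simpler illustration of the operator-splitting idea used in such propagation codes, here is a standard split-step Fourier sketch for the basic 1D nonlinear Schrödinger equation (this is not the paper's method):

```python
import numpy as np

# Split-step Fourier sketch for i u_t = -u_xx - |u|^2 u (Strang splitting):
# dispersion is advanced in Fourier space, the Kerr nonlinearity in real space.
def split_step(u, dt, steps, dx):
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    half_linear = np.exp(-1j * k**2 * dt / 2)       # half step of dispersion
    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))
        u = u * np.exp(1j * np.abs(u)**2 * dt)      # full nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))
    return u

x = np.linspace(-20, 20, 512, endpoint=False)
dx = x[1] - x[0]
u0 = (1 / np.cosh(x)).astype(complex)               # sech input pulse
u1 = split_step(u0, dt=1e-3, steps=200, dx=dx)
# Both substeps are pure phase rotations, so pulse energy is conserved.
print(abs(np.sum(np.abs(u1)**2) - np.sum(np.abs(u0)**2)) * dx < 1e-8)
```

Energy conservation of the split scheme is a cheap sanity check of exactly the kind an adaptive error-reduction algorithm would monitor.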

5. Thermal radiation characteristics of nonisothermal cylindrical enclosures using a numerical ray tracing technique

Science.gov (United States)

Baumeister, Joseph F.

1990-01-01

Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.
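For the isothermal limiting case mentioned above, a simple diffuse-wall approximation already captures the dependence of apparent emissivity on the length-to-diameter ratio. The sketch below uses the textbook formula e_a = e / (e + (1 - e)·Ao/Ac), where Ao is the aperture area and Ac the total internal surface area; this is a hand formula, not the NEVADA ray-tracing result:

```python
import math

# Apparent emissivity of an isothermal cylindrical cavity, open at one end,
# from the diffuse-wall approximation. eps is the wall surface emissivity.
def apparent_emissivity(eps: float, length: float, diameter: float) -> float:
    a_open = math.pi * diameter**2 / 4
    a_cavity = math.pi * diameter * length + math.pi * diameter**2 / 4
    return eps / (eps + (1 - eps) * a_open / a_cavity)

# Deeper cavities act more like a blackbody (apparent emissivity -> 1).
print(round(apparent_emissivity(0.5, length=1.0, diameter=1.0), 3))  # 0.833
```

The monotonic rise of apparent emissivity with L/D is the same qualitative trend the ray-tracing study reports, there extended to nonisothermal walls.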

6. Numerical

Directory of Open Access Journals (Sweden)

M. Boumaza

2015-07-01

Full Text Available Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on the transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.

7. Review of the phenomenon of fluidization and its numerical modelling techniques

Directory of Open Access Journals (Sweden)

H Khawaja

2016-10-01

Full Text Available The paper introduces the phenomenon of fluidization as a process. Fluidization occurs when a fluid (liquid or gas) is pushed upwards through a bed of granular material. This may make the granular material behave like a liquid and, for example, keep a level meniscus in a tilted container, or make a lighter object float on top and a heavier object sink to the bottom. The behavior of the granular material, when fluidized, depends on the superficial gas velocity, particle size, particle density, and fluid properties, resulting in various regimes of fluidization. These regimes are discussed in detail in the paper, as are the applications of fluidized beds from the early Winkler coal gasifier to the more recent manufacturing of carbon nanotubes. In addition, the Geldart grouping based on particle size ranges is discussed. The minimum fluidization condition is defined, and it is demonstrated that it may register slightly differently when particles are being fluidized or de-fluidized. The paper presents three numerical modelling techniques: the two-fluid model, the unresolved fluid-particle model, and the resolved fluid-particle model. The two-fluid model, often referred to as the Eulerian-Eulerian method, treats both the particles and the fluid as continua. The unresolved and resolved fluid-particle models are based on the Eulerian-Lagrangian method; the key difference between them is whether a drag correlation is used or the boundary layer around the particles is resolved. The paper ends with a discussion of the applicability of these models.
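The minimum fluidization condition mentioned above is commonly estimated from the Ergun equation, solved as a quadratic in the particle Reynolds number. The voidage and sphericity values in the sketch below are illustrative assumptions:

```python
import math

# Minimum fluidization velocity from the Ergun equation,
#   150*(1-e)/(e^3*phi^2)*Re + 1.75/(e^3*phi)*Re^2 = Ar,
# solved for the positive root Re_mf. Default gas properties are air at
# ambient conditions; eps (voidage) and phi (sphericity) are illustrative.
def u_mf(d_p, rho_p, rho_g=1.2, mu=1.8e-5, eps=0.45, phi=1.0, g=9.81):
    ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2      # Archimedes number
    a = 1.75 / (eps**3 * phi)
    b = 150 * (1 - eps) / (eps**3 * phi**2)
    re_mf = (-b + math.sqrt(b**2 + 4 * a * ar)) / (2 * a)  # positive root
    return re_mf * mu / (rho_g * d_p)

# 500-micron sand-like particles (Geldart group B) fluidized by air:
print(u_mf(d_p=500e-6, rho_p=2600.0))  # superficial velocity in m/s
```

Larger or denser particles raise the Archimedes number and hence the minimum fluidization velocity, which is why Geldart grouping by particle size and density is predictive of fluidization behaviour.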

8. Microprocessor based techniques at CESR

International Nuclear Information System (INIS)

Giannini, G.; Cornell Univ., Ithaca, NY

1981-01-01

Microprocessor based systems successfully used in connection with the High Energy Physics experimental program at the Cornell Electron Storage Ring are described. The multiprocessor calibration system for the CUSB calorimeter is analyzed in view of present and future applications. (orig.)

9. Bases en technique du vide

CERN Document Server

Rommel, Guy

2017-01-01

This second edition, 20 years after the first, should continue to help technicians build their vacuum systems. Vacuum technology is now used in many fields that differ greatly from one another, with highly reliable equipment. Yet it is often given little study, and it is a discipline in which know-how takes on its full meaning. Unfortunately, its transmission by experienced engineers and technicians no longer happens, or happens too quickly. Vacuum technology draws on physics, chemistry, mechanics, metallurgy, industrial drawing, electronics, thermal engineering, and so on. The discipline therefore requires mastering techniques from very diverse fields, and that is no easy task. Each installation is a special case in itself, with its own needs, its own way of treating materials and of using equipment. Vacuum systems are sometimes copied from one laboratory to another and the...

10. Numerical methods in finance and economics a MATLAB-based introduction

CERN Document Server

Brandimarte, Paolo

2006-01-01

A state-of-the-art introduction to the powerful mathematical and statistical tools used in the field of finance. The use of mathematical models and numerical techniques is a practice employed by a growing number of applied mathematicians working on applications in finance. Reflecting this development, Numerical Methods in Finance and Economics: A MATLAB-Based Introduction, Second Edition bridges the gap between financial theory and computational practice while showing readers how to utilize MATLAB, the powerful numerical computing environment, for financial applications. The author provides an essential foundation in finance and numerical analysis in addition to background material for students from both engineering and economics perspectives. A wide range of topics is covered, including standard numerical analysis methods, Monte Carlo methods to simulate systems affected by significant uncertainty, and optimization methods to find an optimal set of decisions. Among this book's most outstanding features is the...
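Two of the staple techniques the book covers, closed-form Black-Scholes pricing and Monte Carlo simulation, can be sketched side by side (here in Python rather than the book's MATLAB):

```python
import math
import random

# European call price, Black-Scholes closed form.
def bs_call(s0, k, r, sigma, t):
    d1 = (math.log(s0 / k) + (r + sigma**2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return s0 * n(d1) - k * math.exp(-r * t) * n(d2)

# Same price by Monte Carlo simulation of geometric Brownian motion.
def mc_call(s0, k, r, sigma, t, n_paths=200_000, seed=42):
    rng = random.Random(seed)
    drift = (r - sigma**2 / 2) * t
    vol = sigma * math.sqrt(t)
    payoff = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0, 1))
        payoff += max(st - k, 0.0)
    return math.exp(-r * t) * payoff / n_paths

exact = bs_call(100, 100, 0.05, 0.2, 1.0)  # ~10.45
print(abs(mc_call(100, 100, 0.05, 0.2, 1.0) - exact) < 0.2)
```

The Monte Carlo estimate converges to the closed form at the usual 1/sqrt(N) rate, which is the trade-off the book develops when closed forms are unavailable.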

11. To Examine effect of Flow Zone Generation Techniques for Numerical Flow Analysis in Hydraulic Turbine

International Nuclear Information System (INIS)

Hussain, M.; Khan, J.A.

2004-01-01

A numerical study of the flow in the distributor of a Francis turbine is carried out using two different techniques of flow zone generation. The distributor of the GAMM Francis turbine is used for the present calculation. The flow is assumed to be periodic around the distributor under steady-state conditions, so the computational domain consists of only one blade channel (one stay vane and one guide vane). The distributor computational domain is bounded upstream by cylindrical and downstream by conical patches: the first corresponds to the spiral casing outflow section, while the second is the distributor outlet, i.e. the runner inlet. Upper and lower surfaces are generated by revolution of the hub and shroud edges. Single-connected and multiple-connected techniques are considered to generate the distributor flow zone for numerical flow analysis of the GAMM Francis turbine. Tetrahedral meshes are generated in both flow zones, and the same boundary conditions are applied to both equivalent zones. A three-dimensional laminar flow analysis of both distributor flow zones of the GAMM Francis turbine operating at the best efficiency point is performed. Gambit and G-Turbo are used as preprocessors, while the calculations are done using Fluent. Finally, the numerical results obtained at the distributor outlet are compared with the available experimental data to validate the two methodologies and examine their accuracy. (author)

12. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

NARCIS (Netherlands)

Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.

2007-01-01

We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the

13. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

Science.gov (United States)

Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

2015-01-01

Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using a self-financing strategy and duplication, we derived a Random Partial Differential Equation (RPDE) whose solutions describe the evolution of the debt and equity values of a corporate in the last delay period interval in the accompanying paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. This comparison suggests that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.

14. Integration of artificial intelligence and numerical optimization techniques for the design of complex aerospace systems

International Nuclear Information System (INIS)

Tong, S.S.; Powell, D.; Goel, S.

1992-02-01

A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes, on average a 10-to-1 reduction of turnaround time, and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs

15. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

DEFF Research Database (Denmark)

Andersen, C.E.

2001-01-01

Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined...... diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensional and time dependency. Advanced features include: soil heterogeneity, anisotropy......, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure...

16. Comparison of GPU-Based Numerous Particles Simulation and Experiment

International Nuclear Information System (INIS)

Park, Sang Wook; Jun, Chul Woong; Sohn, Jeong Hyun; Lee, Jae Wook

2014-01-01

The dynamic behavior of numerous grains interacting with each other can be easily observed. In this study, this dynamic behavior was analyzed based on the contact between numerous grains. The discrete element method was used for analyzing the dynamic behavior of each particle and the neighboring-cell algorithm was employed for detecting their contact. The Hertzian and tangential sliding friction contact models were used for calculating the contact force acting between the particles. A GPU-based parallel program was developed for conducting the computer simulation and calculating the numerous contacts. The dam break experiment was performed to verify the simulation results. The reliability of the program was verified by comparing the results of the simulation with those of the experiment
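The two ingredients named above, neighboring-cell contact detection and a Hertzian normal force, can be sketched compactly. The material constant and particle data below are illustrative assumptions:

```python
import math
from collections import defaultdict

# Neighboring-cell (cell list) contact detection for equal-radius spheres:
# bin particles into cells of size ~2r, then test only particles in the
# 27 surrounding cells instead of all pairs.
def find_contacts(positions, radius, cell=None):
    cell = cell or 2 * radius
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j and math.dist(positions[i], positions[j]) < 2 * radius:
                                pairs.add((i, j))
    return pairs

# Hertzian normal force F = (4/3) * E* * sqrt(R_eff) * overlap^(3/2);
# the effective modulus e_star is an illustrative value.
def hertz_force(overlap, radius, e_star=1e7):
    r_eff = radius / 2  # equal spheres: 1/R_eff = 1/R + 1/R
    return (4 / 3) * e_star * math.sqrt(r_eff) * overlap**1.5

pos = [(0.0, 0.0, 0.0), (0.009, 0.0, 0.0), (0.1, 0.1, 0.1)]  # metres
print(find_contacts(pos, radius=0.005))  # {(0, 1)}: only the first two touch
```

On a GPU the same cell-list idea maps each particle (or cell) to a thread, which is what makes the contact search in the abstract's simulation parallelizable.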

17. Numeric treatment of nonlinear second order multi-point boundary value problems using ANN, GAs and sequential quadratic programming technique

Directory of Open Access Journals (Sweden)

Zulqurnain Sabir

2014-06-01

Full Text Available In this paper, computational intelligence techniques are presented for solving multi-point nonlinear boundary value problems, based on artificial neural networks, an evolutionary computing approach, and the active-set technique. The neural network provides a convenient method for obtaining a useful model based on the unsupervised error of the differential equations. The motivation for this work is to introduce a reliable framework that combines the powerful features of ANNs, optimized with soft computing frameworks, to cope with such challenging systems. The applicability and reliability of these methods have been examined thoroughly for various boundary value problems arising in science, engineering and biotechnology. Comprehensive numerical experiments have been performed to validate the accuracy, convergence, and robustness of the designed scheme, and comparative studies have been made with the available standard solution to analyze the correctness of the proposed scheme.

18. Numerical differentiation methods for the logarithmic derivative technique used in dielectric spectroscopy

Directory of Open Access Journals (Sweden)

Henrik Haspel

2010-06-01

Full Text Available In dielectric relaxation spectroscopy the conduction contribution often hampers the evaluation of dielectric spectra, especially in the low-frequency regime. To overcome this, the logarithmic derivative technique can be used, which requires calculating the logarithmic derivative of the real part of the complex permittivity function. Since broadband dielectric measurement provides a discrete permittivity function, numerical differentiation has to be used. The applicability of the Savitzky-Golay convolution method in the derivative analysis is examined, and a detailed investigation of the influential parameters (frequency, spectrum resolution, peak shape) is presented on synthetic dielectric data.
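The Savitzky-Golay derivative is a convolution with tabulated weights; for a quadratic fit over a 5-point window the first-derivative weights are (-2, -1, 0, 1, 2)/(10h). A minimal sketch (in the dielectric application, x would be log10(frequency) and y the real permittivity):

```python
# 5-point quadratic Savitzky-Golay first derivative on an equally spaced grid.
def savgol_derivative(y, h):
    """First derivative of y at interior points (5-point quadratic SG)."""
    return [(-2 * y[i - 2] - y[i - 1] + y[i + 1] + 2 * y[i + 2]) / (10 * h)
            for i in range(2, len(y) - 2)]

# SG is exact on polynomials up to the fit order: y = x^2 gives dy/dx = 2x.
h = 0.1
xs = [i * h for i in range(11)]
dy = savgol_derivative([x * x for x in xs], h)
print(all(abs(d - 2 * x) < 1e-9 for d, x in zip(dy, xs[2:-2])))  # True
```

The smoothing built into the convolution is what makes the logarithmic derivative usable on noisy measured spectra, at the cost of the resolution effects the abstract investigates.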

19. Time dependent AN neutron transport calculations in finite media using a numerical inverse Laplace transform technique

International Nuclear Information System (INIS)

Ganapol, B.D.; Sumini, M.

1990-01-01

The time dependent, space second-order discrete form of the monokinetic transport equation is given an analytical solution within the Laplace transform domain. The A_N dynamic model is presented and the general resolution procedure is worked out. The solution in the time domain is then obtained through the application of a numerical transform inversion technique. The justification for the research lies in the need to produce reliable and physically meaningful transport benchmarks for dynamic calculations. The paper concludes with a few results followed by some physical comments

20. Application of a numerical Laplace transform inversion technique to a problem in reactor dynamics

International Nuclear Information System (INIS)

Ganapol, B.D.; Sumini, M.

1990-01-01

A newly developed numerical technique for the Laplace transform inversion is applied to a classical time-dependent problem of reactor physics. The dynamic behaviour of a multiplying system has been analyzed through a continuous slowing down model, taking into account a finite slowing down time, the presence of several groups of neutron precursors and simplifying the spatial analysis using the space asymptotic approximation. The results presented, show complete agreement with analytical ones previously obtained and allow a deeper understanding of the model features. (author)
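The paper uses its own inversion technique; as a simple, widely known alternative for smooth transforms, numerical Laplace inversion can be sketched with the Gaver-Stehfest algorithm:

```python
import math

# Gaver-Stehfest numerical inversion of a Laplace transform F(s).
# Works well for smooth, non-oscillatory f(t); n must be even.
def stehfest_weights(n):
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)
                  / (math.factorial(n // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (k + n // 2) * s)
    return v

def invert(f_laplace, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    v = stehfest_weights(n)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(v[k - 1] * f_laplace(k * ln2_t) for k in range(1, n + 1))

# F(s) = 1/(s+1) is the transform of f(t) = exp(-t).
print(abs(invert(lambda s: 1 / (s + 1), 1.0) - math.exp(-1)) < 1e-4)
```

Benchmarking an inversion routine against transforms with known originals, as done here, mirrors the paper's comparison of numerical results with previously obtained analytical ones.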

1. Numerical analysis of a polysilicon-based resistive memory device

KAUST Repository

Berco, Dan; Chand, Umesh

2018-01-01

This study investigates a conductive bridge resistive memory device based on a Cu top electrode, a 10-nm polysilicon resistive switching layer and a TiN bottom electrode, by numerical analysis for 10^3 programming and erase simulation cycles.

2. Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids

DEFF Research Database (Denmark)

Creixell Mediante, Ester

. In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts....... Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge...... is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly...

3. Some considerations on displacement assumed finite elements with the reduced numerical integration technique

International Nuclear Information System (INIS)

Takeda, H.; Isha, H.

1981-01-01

The paper is concerned with displacement-assumed finite elements using the reduced numerical integration technique in structural problems. The first part is a general consideration of the technique. Its purpose is to examine a variational interpretation of the finite element displacement formulation with reduced integration in structural problems. The formulation is critically studied from the standpoint of the natural stiffness approach. It is shown that these types of elements are equivalent to a certain type of mixed element with assumed displacement and stress. The rank deficiency of the stiffness matrix of these elements is interpreted as a problem in the transformation from the natural system to a Cartesian system. It is shown that the variational basis of the equivalent mixed formulation is closely related to the Hellinger-Reissner functional. For simple elements, e.g. bilinear quadrilateral plane stress and plate bending, there are corresponding mixed elements derived from the functional; relatively complex elements of these types are shown to be equivalent to localized mixed elements from the Hellinger-Reissner functional. In the second part, typical finite elements with the reduced integration technique are studied to demonstrate this equivalence. A bilinear displacement and rotation assumed shear beam element, a bilinear displacement assumed quadrilateral plane stress element, and a bilinear deflection and rotation assumed quadrilateral plate bending element are examined to present equivalent mixed elements. Not only is the theoretical consideration presented, but numerical studies are also shown to demonstrate the effectiveness of these elements in practical analysis. (orig.)

4. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

Science.gov (United States)

2011-02-01

The integration of the Geographic Information System (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques to partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development and for integration with remote sensing and numerical groundwater flow modeling capabilities. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The Arcview GIS software is used as an additional tool to develop supportive data for numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated and used to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to external influencing factors. The developed model provides an effective tool for evaluating better management options for monitoring future groundwater development in the study area.

5. Study of flow characteristics in a secondary clarifier by numerical simulation and radioisotope tracer technique

International Nuclear Information System (INIS)

Kim, H.S.; Shin, M.S.; Jang, D.S.; Jung, S.H.; Jin, J.H.

2005-01-01

Numerical simulation in 2-D rectangular coordinates and an experimental study have been performed to characterize the flow and concentration distribution of a large-scale rectangular final clarifier in a wastewater treatment facility located in Busan, South Korea. The purpose of the numerical calculation is to verify the data measured experimentally by the radioisotope tracer technique and, further, to understand the important physical features occurring in a large-scale clarifier, which in many cases cannot be obtained from a limited number of experimental data. To this end, a comprehensive computer program was constructed based on the SIMPLE algorithm of Patankar, with special emphasis on the parametric evaluation of the various phenomenological models. Calculation results are successfully evaluated against experimental data obtained by the radioisotope tracer method. A detailed comparison is made of the calculated residence time distribution (RTD) curves with measurements inside the clarifier as well as at the exhaust. Furthermore, the calculation results reproduce well the known characteristics of clarifier flow, such as the waterfall phenomenon at the front end of the clarifier, the bottom density current in the settling zone, and the upward flow in the withdrawal zone. It is thus believed that the flow calculation program and the radioisotope measurement data incorporation technique employed in this study show high potential as a complementary tool to experiment in this area
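The RTD analysis mentioned above reduces to moment integrals of the outlet tracer curve. A minimal sketch, using a synthetic ideal stirred-tank response (assumed data; the study used radioisotope tracer measurements):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def rtd_moments(t, c):
    """Normalize an outlet tracer curve c(t) into an RTD E(t); return E,
    the mean residence time, and the dimensionless variance."""
    E = c / trapz(c, t)                       # E(t) integrates to 1
    t_mean = trapz(t * E, t)                  # mean residence time
    var = trapz((t - t_mean) ** 2 * E, t)     # variance of the RTD
    return E, t_mean, var / t_mean ** 2

# synthetic pulse response of an ideal stirred tank with tau = 5 (assumed units)
t = np.linspace(0.0, 60.0, 2001)
c = np.exp(-t / 5.0)                          # arbitrary concentration units
E, t_mean, s2 = rtd_moments(t, c)
```

For an ideal stirred tank the mean residence time equals the time constant and the dimensionless variance is 1; deviations of measured curves from these values indicate dead zones or short-circuiting in the clarifier.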

6. Application of numerical analysis technique to make up for pipe wall thinning prediction program

International Nuclear Information System (INIS)

Hwang, Kyeong Mo; Jin, Tae Eun; Park, Won; Oh, Dong Hoon

2009-01-01

Flow Accelerated Corrosion (FAC) leads to wall thinning of steel piping exposed to flowing water or wet steam. Experience has shown that FAC damage to piping at fossil and nuclear plants can lead to costly outages and repairs and can affect plant reliability and safety. CHECWORKS has been utilized in domestic nuclear plants as a predictive tool to assist FAC engineers in planning inspections and evaluating the inspection data to prevent piping failures caused by FAC. However, CHECWORKS may occasionally overlook locally susceptible portions, because it predicts FAC damage by pipeline group after constructing a database for all secondary-side piping in a nuclear plant. This paper describes methodologies that can complement CHECWORKS and verifies the CHECWORKS prediction results by numerical analysis. FAC-susceptible locations based on CHECWORKS for two pipeline groups of a nuclear plant were compared with those of numerical analysis based on FLUENT.

7. Line impedance estimation using model based identification technique

DEFF Research Database (Denmark)

Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

2011-01-01

The estimation of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
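The model-based identification idea can be sketched as a least-squares fit of the series line model v = R·i + L·di/dt to sampled waveforms. The values below are assumed, noise-free synthetic data; the paper's quasi-passive measurement aspect is not modeled.

```python
import numpy as np

# synthetic grid-side measurements (assumed values: R = 0.5 ohm, L = 2 mH)
R_true, L_true, f = 0.5, 2e-3, 50.0
t = np.arange(0.0, 0.1, 1e-5)
i = 10.0 * np.sin(2 * np.pi * f * t) + 1.0 * np.sin(2 * np.pi * 5 * f * t)
didt = np.gradient(i, t)                 # numerical derivative of the current
v = R_true * i + L_true * didt           # voltage drop across the line model

# model-based identification: regress v on [i, di/dt] in least squares
A = np.column_stack([i, didt])
(R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
```

With a non-sinusoidal current (here a fifth harmonic is added) the two regressors are independent, so R and L are identified uniquely; in practice measurement noise and filtering would dominate the estimation error.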

8. Supplementation of Flow Accelerated Corrosion Prediction Program Using Numerical Analysis Technique

International Nuclear Information System (INIS)

Hwang, Kyeong Mo; Jin, Tae Eun; Park, Won; Oh, Dong Hoon

2010-01-01

Flow-accelerated corrosion (FAC) leads to thinning of steel pipe walls that are exposed to flowing water or wet steam. From experience, it is seen that FAC damage to piping at fossil and nuclear plants can result in outages that require expensive repairs and can affect plant reliability and safety. CHECWORKS has been utilized in domestic nuclear plants as a predictive tool to assist FAC engineers in planning inspections and evaluating the inspection data so that piping failures caused by FAC can be prevented. However, CHECWORKS may occasionally ignore locally susceptible portions when predicting FAC damage in a group of pipelines after constructing a database for all the secondary-side piping in a nuclear plant. This paper describes the methodologies that can complement CHECWORKS and the verification of CHECWORKS prediction results using numerical analysis. FAC-susceptible locations determined using CHECWORKS for two pipeline groups of a nuclear plant were compared with those determined using numerical analysis based on FLUENT

9. Implementation of visual programming methods for numerical techniques used in electromagnetic field theory

Directory of Open Access Journals (Sweden)

Metin Varan

2017-08-01

Full Text Available Field theory is one of the two sub-fields in electrical and electronics engineering that create difficulties for undergraduate students. At the undergraduate level, field theory is taught as the theory of electromagnetic fields, which is described using partial differential equations and integral methods. Analytical methods for the solution of field problems on the basis of a mathematical model may cause understanding difficulties for undergraduate students because of the mathematical and physical background they require. Analytical methods that can be applied to simple models lose their applicability for more complex models; in such cases, numerical methods are used to solve the equations. In this study, web-based graphical user interfaces for numerical-method applications in field theory were prepared, with the aim of increasing the learning levels of undergraduate and graduate students working on field theory problems while taking their computer programming capabilities into account.
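A typical numerical method behind such teaching tools is the finite-difference solution of Laplace's equation for an electrostatic potential. A minimal sketch with assumed boundary values (top edge held at 1 V, the others grounded), solved by Jacobi iteration:

```python
import numpy as np

def solve_laplace(n=41, tol=1e-6, max_iter=20000):
    """Jacobi iteration for Laplace's equation on the unit square.
    Assumed boundary: top edge at 1 V, the other three edges at 0 V."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                            # top boundary condition
    for _ in range(max_iter):
        u_new = u.copy()
        # five-point stencil: each interior node becomes the neighbor average
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

u = solve_laplace()
```

By superposition of the four rotated boundary-value problems, the potential at the center of the square is exactly 0.25 V, which gives a convenient check on the iteration.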

10. Numerical solution of large nonlinear boundary value problems by quadratic minimization techniques

International Nuclear Information System (INIS)

Glowinski, R.; Le Tallec, P.

1984-01-01

The objective of this paper is to describe the numerical treatment of large, highly nonlinear two- or three-dimensional boundary value problems by quadratic minimization techniques. In all the different situations where these techniques were applied, the methodology remains the same and is organized as follows: 1) derive a variational formulation of the original boundary value problem, and approximate it by Galerkin methods; 2) transform this variational formulation into a quadratic minimization problem (least squares methods) or into a sequence of quadratic minimization problems (augmented Lagrangian decomposition); 3) solve each quadratic minimization problem by a conjugate gradient method with preconditioning, the preconditioning matrix being sparse, positive definite, and fixed once and for all in the iterative process. This paper illustrates the above methodology on two different examples: the description of least squares solution methods and their application to the solution of the unsteady Navier-Stokes equations for incompressible viscous fluids; and the description of augmented Lagrangian decomposition techniques and their application to the solution of equilibrium problems in finite elasticity
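Step 3 of the methodology can be sketched with a preconditioned conjugate gradient solver. Here a simple fixed Jacobi (diagonal) preconditioner is used on a 1D Poisson matrix, an assumed stand-in for the sparse positive definite Galerkin systems of the paper:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Conjugate gradients with a fixed diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # Fletcher-Reeves style update
        rz = rz_new
    return x

# 1D Poisson matrix (tridiagonal, symmetric positive definite)
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

The preconditioner is computed once and reused at every iteration, mirroring the "fixed once and for all" strategy described in the abstract.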

11. Stability analysis of resistive MHD modes via a new numerical matching technique

International Nuclear Information System (INIS)

Furukawa, M.; Tokuda, S.; Zheng, L.-J.

2009-01-01

Full text: The asymptotic matching technique is one of the principal methods for calculating the linear stability of resistive magnetohydrodynamic (MHD) modes such as tearing modes. In applying the asymptotic method, the plasma region is divided into two regions: a thin inner layer around the mode-resonant surface and ideal MHD regions outside the layer. If we try to solve this asymptotic matching problem numerically, we meet practical difficulties. Firstly, the inertia-less ideal MHD equation, or the Newcomb equation, has a regular singular point at the mode-resonant surface, leading to the so-called big and small solutions. Since the big solution is not square-integrable, it needs sophisticated treatment. Even if such a treatment is applied, numerical experiments have revealed that the matching data, or the ratio of the small solution to the big one, is sensitive to the accuracy of the local MHD equilibrium and to the grid structure at the mode-resonant surface. Secondly, one of the independent solutions in the inner layer, which should be matched onto the ideal MHD solution, is not square-integrable; the response formalism has been adopted to resolve this problem. In the present paper, we propose a new method for computing the linear stability of resistive MHD modes via a matching technique in which the plasma region is divided into ideal MHD regions and an inner region of finite width. The matching technique using an inner region of finite width was recently developed for ideal MHD modes in cylindrical geometry, and good performance was shown. Our method extends this idea to resistive MHD modes. In the inner region, the low-beta reduced MHD equations are solved, and the solution is matched onto the solution of the Newcomb equation by using boundary conditions such that the parallel electric field vanishes properly as the computational boundaries are approached. With an inner region of finite width, the practical difficulties raised above are avoided from the beginning.

12. NUMERICAL TECHNIQUES TO SOLVE CONDENSATIONAL AND DISSOLUTIONAL GROWTH EQUATIONS WHEN GROWTH IS COUPLED TO REVERSIBLE REACTIONS (R823186)

Science.gov (United States)

Noniterative, unconditionally stable numerical techniques for solving condensational anddissolutional growth equations are given. Growth solutions are compared to Gear-code solutions forthree cases when growth is coupled to reversible equilibrium chemistry. In all cases, ...

13. An evaluation of directional analysis techniques for multidirectional, partially reflected waves .1. numerical investigations

DEFF Research Database (Denmark)

Ilic, C; Chadwick, A; Helm-Petersen, Jacob

2000-01-01

Recent studies of advanced directional analysis techniques have mainly centred on incident wave fields. In the study of coastal structures, however, partially reflective wave fields are commonly present. In the near structure field, phase locked methods can be successfully applied. In the far field, non-phased locked methods are more appropriate. In this paper, the accuracy of two non-phased locked methods of directional analysis, the maximum likelihood method (MLM) and the Bayesian directional method (BDM), has been quantitatively evaluated using numerical simulations for the case of multidirectional waves with partial reflections. It is shown that the results are influenced by the ratio of distance from the reflector (L) to the length of the time series (S) used in the spectral analysis. Both methods are found to be capable of determining the incident and reflective wave fields when L/S > 0......

14. Application of numerical analysis techniques to eddy current testing for steam generator tubes

International Nuclear Information System (INIS)

Morimoto, Kazuo; Satake, Koji; Araki, Yasui; Morimura, Koichi; Tanaka, Michio; Shimizu, Naoya; Iwahashi, Yoichi

1994-01-01

This paper describes the application of numerical analysis to eddy current testing (ECT) for steam generator tubes. A symmetrical and three-dimensional sinusoidal steady-state eddy current analysis code was developed. The code is formulated by finite element method-boundary element method coupling techniques, so that the mesh data in the tube domain need not be regenerated at every movement of the probe. The calculations were carried out under various conditions, including various probe types, defect orientations and so on. Comparison with experimental data showed that it is feasible to apply this code in actual use. Furthermore, we have developed a total eddy current analysis system which consists of an ECT calculation code, an automatic mesh generator for analysis, a database, and display software for calculated results. ((orig.))

15. A numerical approach to the time dependent neutron flux using the Laplace transform technique

International Nuclear Information System (INIS)

El-Demerdash, A; Beynon, T.D.

1979-01-01

In this study a time dependent transport problem, in which an isotopic neutron source emits a pulse of neutrons into a finite sphere, has been solved by a numerical Laplace transform technique. The object has been to investigate the time behaviour of the neutron field in the moderators at times shortly after the neutron source initiation, that is, in the nanosecond time period. The basis of the solution is a numerical evaluation of the Laplace transform of the flux in the linear Boltzmann equation, using a modified version of a steady-state, energy multi-group, spatially dependent code. The explicit or direct inversion of the Laplace-transformed flux is difficult to perform numerically because the resulting matrix is ill-conditioned. The suggested method of solution depends on the choice of a function that satisfies the physical conditions known from the neutron behaviour and whose Laplace inversion is analytically amenable. By employing a least squares fitting procedure, the function is adjusted to minimize the error in the Laplace-transformed values and hence in the time dependent solution. The method has been applied and compares satisfactorily with analytical and experimental results
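The inversion idea above can be sketched with a simple assumed example: fit Laplace-domain samples by a sum of simple poles, each with a known analytic inverse, so the time-domain solution follows directly from the fitted coefficients. The target transform and pole grid below are illustrative choices, not the paper's flux data.

```python
import numpy as np

# Target transform F(s) = 1/((s+1)(s+2)); its exact inverse is e^-t - e^-2t.
s = np.linspace(0.1, 10.0, 200)
F = 1.0 / ((s + 1.0) * (s + 2.0))

# Represent F(s) by a sum a_k / (s + b_k) with decay rates b_k fixed on a grid;
# each basis term has the analytic inverse a_k * exp(-b_k * t), so a linear
# least squares fit in the Laplace domain yields the time-domain solution.
b = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
A = 1.0 / (s[:, None] + b[None, :])
a, *_ = np.linalg.lstsq(A, F, rcond=None)

def f_approx(t):
    """Time-domain function reconstructed from the fitted basis."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.exp(-np.outer(t, b)) @ a
```

Because the basis happens to contain the true poles, the fit is essentially exact here; in the paper's setting the basis is chosen from the known physical behaviour of the neutron field and the fit minimizes the Laplace-domain error.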

16. Analytical and numerical techniques for predicting the interfacial stresses of wavy carbon nanotube/polymer composites

NARCIS (Netherlands)

Yazdchi, K.; Salehi, M.; Shokrieh, M.M.

2009-01-01

By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study the stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio,

17. Composite Techniques Based Color Image Compression

Directory of Open Access Journals (Sweden)

Zainab Ibrahim Abood

2017-03-01

Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives objects a pleasing and natural appearance. Three composite techniques for color image compression are therefore implemented to achieve high compression, no loss of the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

18. Numeral eddy current sensor modelling based on genetic neural network

International Nuclear Information System (INIS)

Yu Along

2008-01-01

This paper presents a method for modelling a numeral eddy current sensor based on a genetic neural network, in order to address the sensor's nonlinearity. The principle and algorithms of the genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by a genetic neural network (GNN) according to measurement data. The method thus retains both the global searching ability of the genetic algorithm and the good local searching ability of the neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using the GNN, compared with 0.075% for the least squares method
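The genetic part of such a hybrid can be sketched with a minimal real-coded genetic algorithm fitting the parameters of an assumed nonlinear sensor curve (a saturating model chosen for illustration; the paper's GNN combines this global search with neural-network local refinement).

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic calibration data for an assumed nonlinear sensor curve y = a*x/(b+x)
x = np.linspace(0.0, 5.0, 40)
y = 2.0 * x / (0.5 + x)

def sse(p):
    """Sum of squared errors of candidate parameters p = (a, b)."""
    a, b = p
    return float(np.sum((a * x / (b + x) - y) ** 2))

def ga(pop_size=60, gens=150, lo=0.01, hi=5.0):
    """Minimal real-coded genetic algorithm with elitism."""
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(gens):
        fit = np.array([sse(p) for p in pop])
        pop = pop[np.argsort(fit)]
        elite = pop[: pop_size // 4]                  # keep the best quarter
        # arithmetic crossover of random elite pairs plus Gaussian mutation
        n_child = pop_size - len(elite)
        pairs = rng.integers(0, len(elite), size=(n_child, 2))
        w = rng.uniform(size=(n_child, 1))
        children = w * elite[pairs[:, 0]] + (1 - w) * elite[pairs[:, 1]]
        children += rng.normal(0.0, 0.05, children.shape)
        pop = np.vstack([elite, np.clip(children, lo, hi)])
    fit = np.array([sse(p) for p in pop])
    return pop[np.argmin(fit)]

best = ga()   # should land near the true parameters (2.0, 0.5)
```

Selection pressure comes from keeping the best quarter unchanged (elitism), while crossover and mutation keep exploring; in the GNN the surviving candidates would seed neural-network training rather than terminate the search.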

19. Numerical solution of modified differential equations based on symmetry preservation.

Science.gov (United States)

Ozbenli, Ersin; Vedula, Prakash

2017-12-01

In this paper, we propose a method to construct invariant finite-difference schemes for the solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore selection of appropriate moving frames that result in improvement in accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one and two dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes, not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.
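For reference, the linear advection test equation u_t + c u_x = 0 is commonly discretized with a baseline first-order upwind scheme like the sketch below (a standard non-invariant scheme, not the authors' symmetry-preserving construction; grid and initial profile are assumed).

```python
import numpy as np

def upwind_advection(u0, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c u_x = 0 (c > 0), periodic domain."""
    u = u0.copy()
    nu = c * dt / dx                      # CFL number, must satisfy nu <= 1
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))  # backward difference for c > 0
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-100 * (x - 0.3) ** 2)        # Gaussian pulse
u = upwind_advection(u0, c=1.0, dx=1 / n, dt=0.5 / n, steps=400)  # one period
```

After one full period on the periodic domain the pulse returns to its starting position, broadened by the scheme's numerical diffusion; the total "mass" is conserved exactly, which makes a convenient sanity check.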

20. CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION

Directory of Open Access Journals (Sweden)

2017-12-01

Full Text Available This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part's feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and of the feeding system optimization in the casting system of the Pelton turbine bucket.
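One of the listed metaheuristics, PSO, can be sketched in a few lines. The objective below is an assumed stand-in (a simple sphere function) for the feeder-geometry cost that the paper evaluates by casting simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer with a global-best topology."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

# hypothetical 3-parameter feeder geometry cost, minimized at (1.2, 1.2, 1.2)
best, val = pso(lambda p: float(np.sum((p - 1.2) ** 2)), dim=3, lo=-5.0, hi=5.0)
```

In the paper's workflow each objective evaluation would be a casting simulation, so the swarm size and iteration count become the dominant computational cost.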

1. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

Science.gov (United States)

Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

2012-07-02

Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated by the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of

2. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers.

Science.gov (United States)

White, M J; Nellis, G F; Klein, S A; Zhu, W; Gianchandani, Y

2010-11-01

Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid.

3. Numerical simulation of multi-dimensional two-phase flow based on flux vector splitting

Energy Technology Data Exchange (ETDEWEB)

Staedtke, H.; Franchello, G.; Worth, B. [Joint Research Centre - Ispra Establishment (Italy)

1995-09-01

This paper describes a new approach to the numerical simulation of transient, multidimensional two-phase flow. The development is based on a fully hyperbolic two-fluid model of two-phase flow using separated conservation equations for the two phases. Features of the new model include the existence of real eigenvalues, and a complete set of independent eigenvectors which can be expressed algebraically in terms of the major dependent flow parameters. This facilitates the application of numerical techniques specifically developed for high speed single-phase gas flows which combine signal propagation along characteristic lines with the conservation property with respect to mass, momentum and energy. Advantages of the new model for the numerical simulation of one- and two- dimensional two-phase flow are discussed.
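The upwinding idea behind flux vector splitting can be sketched in a scalar analogue (the paper's two-fluid system with its full eigenstructure is considerably more involved): the flux is split into forward- and backward-travelling parts, each differenced in its own upwind direction.

```python
import numpy as np

def fvs_advection(u0, c, dx, dt, steps):
    """Flux vector splitting for u_t + (c u)_x = 0 on a periodic grid: the
    flux is split by characteristic speed sign, each part upwind-differenced."""
    u = u0.copy()
    cp, cm = max(c, 0.0), min(c, 0.0)     # split characteristic speeds
    for _ in range(steps):
        fp = cp * u                        # right-going flux part
        fm = cm * u                        # left-going flux part
        u = u - dt / dx * ((fp - np.roll(fp, 1)) + (np.roll(fm, -1) - fm))
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200 * (x - 0.5) ** 2)
u = fvs_advection(u0, c=-1.0, dx=1 / n, dt=0.5 / n, steps=100)   # t = 0.25
```

Because the splitting picks the correct upwind direction automatically, the same update handles left- and right-going signals; here a pulse advected with c = -1 moves from x = 0.5 to x = 0.25 while the discrete mass is conserved exactly.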

4. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

International Nuclear Information System (INIS)

Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

2007-01-01

We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic ''lateral'' force from the walls

5. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

Institute of Scientific and Technical Information of China (English)

纳瑟; 刘重庆

2002-01-01

A method that incorporates an edge detection technique, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation result is obtained based on the K-means clustering technique and the minimum distance. The region process is then modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is applied. The DIS value is calculated for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge of the likely region segmentation for the next step (MRF), which gives an image that contains all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.
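A gradient-magnitude map of the kind used as a "difference in strength" indicator can be sketched with Sobel filters (an assumed, generic choice; the paper does not specify its edge operator). On a synthetic step-edge image, the response is large only along the edge columns.

```python
import numpy as np

def sobel_dis(img):
    """Gradient-magnitude map via Sobel filters (edge-replicated borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):                    # correlate with both 3x3 kernels
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# two flat regions separated by a vertical step edge (synthetic test image)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
dis = sobel_dis(img)
```

Flat regions give zero response while the two columns adjacent to the step give the maximum magnitude, which is exactly the strong/weak edge information the DIS map encodes.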

6. Numerical tilting compensation in microscopy based on wavefront sensing using transport of intensity equation method

Science.gov (United States)

Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu

2018-03-01

Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilting error induced by the oblique location of the image recorder or the sample, as well as the inclination of the optical path, often deteriorates the imaging quality. In order to eliminate tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the provided numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates the tilting error to improve imaging quality even in large tilting cases. Considering its simple system and operation, as well as its image quality improvement capability, it is believed the proposed method can be applied for tilting compensation in optical microscopy.

7. Analytical research using synchrotron radiation based techniques

International Nuclear Information System (INIS)

Jha, Shambhu Nath

2015-01-01

There are many Synchrotron Radiation (SR) based techniques, such as X-ray Absorption Spectroscopy (XAS), X-ray Fluorescence Analysis (XRF), SR Fourier-transform Infrared (SR-FTIR) spectroscopy, and Hard X-ray Photoelectron Spectroscopy (HAXPS), which are increasingly being employed worldwide in analytical research. With the advent of modern synchrotron sources, these analytical techniques have been further revitalized, paving the way for new techniques such as microprobe XRF and XAS, FTIR microscopy, and HAXPS. The talk will cover mainly two techniques illustrating their capability in analytical research, namely XRF and XAS. XRF spectroscopy: XRF spectroscopy is an analytical technique which involves the detection of emitted characteristic X-rays following excitation of the elements within the sample. While electron, particle (proton or alpha particle), or X-ray beams can be employed as the exciting source for this analysis, the use of X-ray beams from a synchrotron source has been instrumental in the advancement of the technique in the areas of microprobe XRF imaging and trace-level compositional characterisation of samples. Synchrotron radiation induced X-ray emission spectroscopy has become competitive with the earlier microprobe and nanoprobe techniques following the advancements in manipulating and detecting these X-rays. There are two important features that contribute to the superb elemental sensitivities of microprobe SR-induced XRF: (i) the absence of the continuum (Bremsstrahlung) background radiation that is a feature of spectra obtained with charged particle beams, and (ii) the increased X-ray flux on the sample associated with the use of tunable third-generation synchrotron facilities. Detection sensitivities have been reported in the ppb range, with values of 10^-17 g to 10^-14 g (depending on the particular element and matrix). Keeping its demand in mind, a microprobe XRF beamline has been set up by RRCAT at the Indus-2 synchrotron

8. Model-checking techniques based on cumulative residuals.

Science.gov (United States)

Lin, D Y; Wei, L J; Ying, Z

2002-03-01

Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
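The cumulative-residual idea above can be sketched in a few lines. The data, the misspecified model, and the re-signing null below are all invented for illustration; the re-signed residuals are only a crude stand-in for the zero-mean Gaussian-process approximation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y is quadratic in x, but we fit a straight line, so the
# cumulative-residual process should drift systematically.
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), x])            # misspecified linear model
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

order = np.argsort(x)                           # cumulate residuals along x
W_obs = np.cumsum(resid[order]) / np.sqrt(n)

# Null reference: randomly re-signed residuals stand in for the zero-mean
# Gaussian process; compare the observed supremum with the null suprema.
sup_null = np.array([
    np.abs(np.cumsum(resid[order] * rng.choice([-1.0, 1.0], n)) / np.sqrt(n)).max()
    for _ in range(500)
])
p_value = float((sup_null >= np.abs(W_obs).max()).mean())
print(f"sup|W| = {np.abs(W_obs).max():.2f}, p ≈ {p_value:.3f}")
```

A large observed supremum relative to the null realizations signals that the drift in the residual plot is model misspecification rather than natural variation.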

9. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

Science.gov (United States)

2017-08-01

While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analyses by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in truly representing the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
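The MCS benchmark mentioned above can be illustrated with a toy planar-sliding limit state. The geometry and parameter statistics below are invented for illustration and are not the Sumela Monastery wedge data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy planar-sliding limit state g = c*A + W*cos(psi)*tan(phi) - W*sin(psi);
# failure when g < 0. All values are illustrative only.
A, W, psi = 200.0, 50_000.0, np.radians(35.0)  # area m^2, block weight kN, dip
n = 200_000
c = rng.normal(40.0, 8.0, n)                   # cohesion, kPa (uncertain)
phi = np.radians(rng.normal(30.0, 3.0, n))     # friction angle, deg (uncertain)

g = c * A + W * np.cos(psi) * np.tan(phi) - W * np.sin(psi)
pf = float(np.mean(g < 0))                     # Monte Carlo failure probability
print(f"P_f ≈ {pf:.4f} from {n:,} samples")
```

The response-surface/FORM route in the abstract replaces the expensive numerical model with an explicit surrogate g, which is exactly the quantity this brute-force sampler evaluates directly.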

10. Rapid Late Holocene glacier fluctuations reconstructed from South Georgia lake sediments using novel analytical and numerical techniques

Science.gov (United States)

van der Bilt, Willem; Bakke, Jostein; Werner, Johannes; Paasche, Øyvind; Rosqvist, Gunhild

2016-04-01

The collapse of ice shelves, rapidly retreating glaciers and a dramatic recent temperature increase show that Southern Ocean climate is rapidly shifting. Instrumental and modelling data also demonstrate transient interactions between oceanic and atmospheric forcings as well as climatic teleconnections with lower-latitude regions. Yet beyond the instrumental period, a lack of proxy climate time series impedes our understanding of Southern Ocean climate. Moreover, available records often lack the resolution and chronological control required to resolve rapid climate shifts like those observed at present. Alpine glaciers are found on most Southern Ocean islands and respond quickly to shifts in climate through changes in mass balance. Attendant changes in glacier size drive variations in the production of rock flour, the suspended product of glacial erosion. This climate response may be captured by downstream distal glacier-fed lakes, which continuously record glacier history. Sediment records from such lakes are considered prime sources for paleoclimate reconstructions. Here, we present the first reconstruction of Late Holocene glacier variability from the island of South Georgia. Using a toolbox of advanced physical, geochemical (XRF) and magnetic proxies, in combination with state-of-the-art numerical techniques, we fingerprinted a glacier signal from glacier-fed lake sediments. This lacustrine sediment signal was subsequently calibrated against mapped glacier extent with the help of geomorphological moraine evidence and remote sensing techniques. The outlined approach enabled us to robustly resolve variations of a complex glacier at sub-centennial timescales, while constraining the sedimentological imprint of other geomorphic catchment processes. From a paleoclimate perspective, our reconstruction reveals a dynamic Late Holocene climate, modulated by long-term shifts in regional circulation patterns. We also find evidence for rapid medieval glacier retreat as well as a

11. Joining of polymer-metal lightweight structures using self-piercing riveting (SPR) technique: Numerical approach and simulation results

Science.gov (United States)

2018-05-01

Restrictions on pollutant emissions dictated at the European Commission level in the past few years have urged mass-production car manufacturers to rapidly engage several strategies to significantly reduce the energy consumption of their vehicles. One of the most significant actions taken is the light-weighting of body-in-white (BIW) structures, most visibly through the increased introduction of polymer-based composite materials reinforced by carbon or glass fibers. However, the design and manufacturing of such "hybrid" structures limits the use of conventional assembly techniques like resistance spot welding (RSW), which are not directly transferable to polymer-metal joining. This research aims at developing a joining technique that would eventually enable the assembly of a sheet molding compound (SMC) polyester thermoset component on a structure composed of several high-strength steel grades. A state-of-the-art review of polymer-metal joining techniques highlighted the few candidates potentially able to meet the industrial challenge: structural bonding, self-piercing riveting (SPR), direct laser joining and friction spot welding (FSpW). In this study, the promising SPR technique is investigated. Modelling of the SPR process in the case of polymer-metal joining was performed by building a 2D axisymmetric FE model using the commercial code Abaqus CAE 6.10-1. Details of the numerical approach are presented, with particular attention to the composite sheet, for which Mori-Tanaka's homogenization method is used to estimate the overall mechanical properties. Large deformations induced by the riveting process are handled using a mixed arbitrary Lagrangian-Eulerian (ALE) finite element formulation. FE model predictions are compared with experimental data, followed by a discussion.

12. Numerical analysis of a polysilicon-based resistive memory device

KAUST Repository

Berco, Dan

2018-03-08

This study investigates a conductive bridge resistive memory device based on a Cu top electrode, a 10-nm polysilicon resistive switching layer (RSL) and a TiN bottom electrode, by numerical analysis over 10³ programming and erase simulation cycles. The low and high resistive state values in each cycle are calculated, and the analysis shows that the structure has excellent retention reliability properties. The presented Cu species density plot indicates that Cu insertion occurs almost exclusively along grain boundaries, resulting in a confined isomorphic conductive filament that maintains its overall shape and electric properties during cycling. The superior reliability of this structure may thus be attributed to the relatively low amount of Cu migrating into the RSL during initial formation. In addition, the results show a good match with, and help to confirm, experimental measurements of a previously demonstrated device.

13. Operational Numerical Weather Prediction systems based on Linux cluster architectures

International Nuclear Information System (INIS)

Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

2005-01-01

The progress of weather forecasting and atmospheric science has always been closely linked to improvements in computing technology. In order to obtain more accurate weather forecasts and climate predictions, more powerful computing resources are needed, in addition to more complex and better-performing numerical models. To meet such a large computing demand, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and have become popular, representing genuine high-performance, low-cost systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and practical tips and performance are analysed

14. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

Science.gov (United States)

Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

2018-06-01

The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

16. Graph based techniques for tag cloud generation

DEFF Research Database (Denmark)

Leginus, Martin; Dolog, Peter; Lage, Ricardo Gomes

2013-01-01

A tag cloud is one of the navigation aids for exploring documents. Tag clouds also link documents through user-defined terms. We explore various graph-based techniques to improve tag cloud generation. Moreover, we introduce relevance measures based on underlying data such as ratings...... or citation counts for improved measurement of the relevance of tag clouds. We show that, on the given data sets, our approach outperforms the state-of-the-art baseline methods with respect to such relevance by 41% on the Movielens dataset and by 11% on the Bibsonomy data set....

17. Adaptive differential correspondence imaging based on sorting technique

Directory of Open Access Journals (Sweden)

Heng Wu

2017-04-01

We develop an adaptive differential correspondence imaging (CI) method using a sorting technique. Different from conventional CI schemes, the bucket detector signals (BDS) are first processed by a differential technique and then sorted in descending (or ascending) order. Subsequently, according to the front and last several frames of the sorted BDS, the positive and negative subsets (PNS) are created by selecting the corresponding frames from the reference detector signals. Finally, the object image is recovered from the PNS. In addition, an adaptive method based on a two-step iteration is designed to select the optimum number of frames. To verify the proposed method, a single-detector computational ghost imaging (GI) setup is constructed. We experimentally and numerically compare the performance of the proposed method with different GI algorithms. The results show that our method can improve the reconstruction quality and reduce the computation cost by using fewer measurement data.
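The sort-and-select idea can be sketched with synthetic speckle patterns. Everything below (object shape, pattern count, the 10% selection fraction) is an arbitrary illustration choice, and the adaptive frame-number step described in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sort-and-select correspondence imaging: random speckle patterns light a
# T-shaped object; the bucket detector only records the total transmitted light.
obj = np.zeros((8, 8))
obj[1, 1:7] = 1.0                                 # horizontal bar of the "T"
obj[1:7, 3:5] = 1.0                               # vertical bar of the "T"
m = 4000
patterns = rng.random((m, 8, 8))                  # reference detector signals
bucket = (patterns * obj).sum(axis=(1, 2))        # bucket detector signals (BDS)

d = bucket - bucket.mean()                        # differential step
order = np.argsort(d)                             # ascending sort of the BDS

k = m // 10                                       # front/last 10% of sorted frames
negative = patterns[order[:k]].mean(axis=0)       # negative subset average
positive = patterns[order[-k:]].mean(axis=0)      # positive subset average
image = positive - negative                       # recovered object image

corr = float(np.corrcoef(image.ravel(), obj.ravel())[0, 1])
print(f"correlation between reconstruction and object: {corr:.2f}")
```

Frames with the largest (smallest) bucket signals are, on average, brighter (dimmer) over the object pixels, so the difference of the two subset averages reveals the object from a single-pixel detector.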

18. The Technique for the Numerical Tolerances Estimations in the Construction of Compensated Accelerating Structures

CERN Document Server

Paramonov, V V

2004-01-01

The requirements on cell manufacturing precision and tuning in the construction of multi-cell accelerating structures come from the required accelerating field uniformity, which is based on beam dynamics demands. The standard deviation of the field distribution depends on the deviations of the accelerating and coupling mode frequencies, the stop-band width and the coupling coefficient. These deviations can be determined from the 3D field distributions of the accelerating and coupling modes and the displacements of the cell surface. With modern software this can be done separately for every specified part of the cell surface. Finally, the cell surface displacements are derived from the deviations of the cell dimensions. This technique allows one both to qualitatively identify the critical regions and to quantitatively optimize the tolerance definition.

19. Spotted star light curve numerical modeling technique and its application to HII 1883 surface imaging

Science.gov (United States)

Kolbin, A. I.; Shimansky, V. V.

2014-04-01

We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.

20. NUMERICAL RESEARCH TECHNIQUES OF MAGNETIC FIELDS GENERATED BY INDUCTION CURRENTS IN A MASSIVE CONDUCTOR

OpenAIRE

Tchernykh A. G.

2015-01-01

We consider the application of numerical methods in the physics educational process, using the example of a study of the magnetic field induced by induction currents in a cylindrical conductor in a quasi-stationary magnetic field. We give the numerical calculation of the real and imaginary parts of the Bessel functions of complex argument, together with the listing of a program that draws graphs of the radial dependence of the amplitude and phase shift of the inductive currents fie...

1. Artificial Intelligence based technique for BTS placement

Science.gov (United States)

Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.

2013-12-01

The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory constraints into consideration while determining cell sites. Its application will lead to a quantitatively unbiased, evaluated decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of a GA with a neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.
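A compact sketch of GA-style site selection under a minimum-spacing ("neighbour") constraint follows. The grid, coverage radius, spacing limit, and GA settings are all invented for illustration and are not the paper's algorithm or data.

```python
import random

random.seed(3)

# Candidate BTS sites on a 10x10 grid; random demand points; values illustrative.
CAND = [(x, y) for x in range(10) for y in range(10)]
DEMAND = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(200)]
K, RADIUS, MIN_SPACING = 4, 2.5, 3.0

def covered(sites):
    # Number of demand points within RADIUS of at least one chosen site.
    return sum(any((dx - sx) ** 2 + (dy - sy) ** 2 <= RADIUS ** 2
                   for sx, sy in sites) for dx, dy in DEMAND)

def fitness(sites):
    # Hard neighbour constraint: zero fitness if any pair of sites is too close.
    ok = all((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 >= MIN_SPACING ** 2
             for i, a in enumerate(sites) for b in sites[i + 1:])
    return covered(sites) if ok else 0

pop = [random.sample(CAND, K) for _ in range(60)]
for _ in range(80):                          # generations
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:20], []         # elitist selection
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        pool = list(dict.fromkeys(a + b))    # union of parent sites, order-stable
        if len(pool) < K:
            pool = CAND                      # degenerate parents: fall back to grid
        child = random.sample(pool, K)       # crossover: pick K sites from union
        if random.random() < 0.3:            # mutation: replace one site
            child[random.randrange(K)] = random.choice(CAND)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"best feasible coverage: {fitness(best)} of {len(DEMAND)} demand points")
```

Encoding the regulatory spacing rule directly into the fitness function is what makes the search "neighbour constrained": infeasible layouts simply never survive selection.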

2. Artificial Intelligence based technique for BTS placement

International Nuclear Information System (INIS)

Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M. (Department of Telecommunications Engineering, Federal University of Technology, Minna (Nigeria))

2013-01-01

The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory constraints into consideration while determining cell sites. Its application will lead to a quantitatively unbiased, evaluated decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of a GA with a neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out

3. Numerical Solution of Piecewise Constant Delay Systems Based on a Hybrid Framework

Directory of Open Access Journals (Sweden)

H. R. Marzban

2016-01-01

An efficient numerical scheme for solving delay differential equations with a piecewise constant delay function is developed in this paper. The proposed approach is based on a hybrid of block-pulse functions and Taylor's polynomials. The operational matrix of delay corresponding to the proposed hybrid functions is introduced. The sparsity of this matrix significantly reduces the computation time and memory requirements. The operational matrices of integration, delay, and product are employed to transform the problem under consideration into a system of algebraic equations. It is shown that the developed approach is also applicable to a special class of nonlinear piecewise constant delay differential equations. Several numerical experiments are examined to verify the validity and applicability of the presented technique.
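For intuition about the class of equations involved, here is a forward-Euler stand-in (not the operational-matrix scheme of the paper) applied to a toy equation with a piecewise constant delay argument, x'(t) = -x(t) + 0.5·x(⌊t⌋), x(0) = 1:

```python
import numpy as np

# On each interval [n, n+1] the delayed term x(floor(t)) is a constant, so the
# equation is a linear ODE there and the exact solution advances by the factor
# 0.5*(1 + exp(-1)) per unit of time; Euler should land close to that.
steps = 1000                                  # Euler steps per unit interval
h = 1.0 / steps
x = np.empty(3 * steps + 1)
x[0] = 1.0
for i in range(3 * steps):
    x_floor = x[(i // steps) * steps]         # x at the most recent integer time
    x[i + 1] = x[i] + h * (-x[i] + 0.5 * x_floor)

exact = (0.5 * (1.0 + np.exp(-1.0))) ** 3     # closed-form x(3)
print(f"Euler x(3) = {x[-1]:.5f}, exact x(3) = {exact:.5f}")
```

Indexing the delayed value with integer arithmetic (`(i // steps) * steps`) avoids the floating-point pitfalls of computing ⌊t⌋ from an accumulated time variable.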

4. Ground-based PIV and numerical flow visualization results from the Surface Tension Driven Convection Experiment

Science.gov (United States)

Pline, Alexander D.; Werner, Mark P.; Hsieh, Kwang-Chung

1991-01-01

The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the United States Microgravity Laboratory-1 (USML-1) Spacelab mission planned for June 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all-electric, two-dimensional Particle Image Velocimetry (PIV) technique called Particle Displacement Tracking (PDT), which uses a simple space-domain particle tracking algorithm. Results using the ground-based STDCE hardware, with a radiant flux heating mode, and the PDT system are compared to numerical solutions obtained by solving the axisymmetric Navier-Stokes equations with a deformable free surface. The PDT technique is successful in producing a velocity vector field and corresponding stream function from the raw video data which satisfactorily represents the physical flow. A numerical program is used to compute the velocity field and corresponding stream function under identical conditions. Both the PDT system and numerical results were compared to a streak photograph, used as a benchmark, with good correlation.

5. NUMERICAL SIMULATION OF ELECTRICAL IMPEDANCE TOMOGRAPHY PROBLEM AND STUDY OF APPROACH BASED ON FINITE VOLUME METHOD

Directory of Open Access Journals (Sweden)

Ye. S. Sherina

2014-01-01

This research studies the peculiarities that arise in numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration is given to reducing the approximation error, which originates from drawbacks of the numerical implementation. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison, a forward solver based on the finite element method (FEM), which has gained the most popularity among researchers, was implemented. The calculated potential distribution with the assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of problem simulation by means of the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation and resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltage and the calculated boundary voltage on the electrodes. A classical modified Newton-type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation. Numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues for medical applications of EIT.

6. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

Directory of Open Access Journals (Sweden)

Hai Shen

2014-01-01

Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; furthermore, the species gains a stronger ability to adapt to the environment and continues to evolve. LSO simulates the biological lifecycle through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases, demonstrating optimal performance and stability, while the mechanical design problems test the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

7. Application of numerical optimization techniques to control system design for nonlinear dynamic models of aircraft

Science.gov (United States)

Lan, C. Edward; Ge, Fuying

1989-01-01

Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function related to the desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models of F-5A and F-16 configurations are used to design dampers that satisfy specifications on flying qualities and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.

8. A numerical technique for solving fractional optimal control problems and fractional Riccati differential equations

Directory of Open Access Journals (Sweden)

F. Ghomanjani

2016-10-01

In the present paper, we apply the Bezier curves method to solving fractional optimal control problems (OCPs) and fractional Riccati differential equations. The main advantage of this method is that it can reduce the error of the approximate solutions. Hence, the solutions obtained using the Bezier curve method give good approximations. Some numerical examples are provided to confirm the accuracy of the proposed method. All of the numerical computations have been performed on a PC using several programs written in MAPLE 13.

9. FDTD technique based crosstalk analysis of bundled SWCNT interconnects

International Nuclear Information System (INIS)

Duksh, Yograj Singh; Kaushik, Brajesh Kumar; Agarwal, Rajendra P.

2015-01-01

The equivalent electrical circuit model of bundled single-walled carbon nanotube based distributed RLC interconnects is employed for the crosstalk analysis. Accurate time domain analysis of the crosstalk effect in VLSI interconnects has emerged as an essential design criterion. This paper presents a brief description of the numerical finite difference time domain (FDTD) technique, which is intended for the estimation of voltages and currents on coupled transmission lines. For the FDTD implementation, the stability of the proposed model is strictly restricted by the Courant condition. This method is used for the estimation of crosstalk-induced propagation delay and peak voltage in lossy RLC interconnects. Both functional and dynamic crosstalk effects are analyzed in the coupled transmission line. The effect of line resistance on crosstalk-induced delay and peak voltage under dynamic and functional crosstalk is also evaluated. The FDTD analysis and the SPICE simulations are carried out at the 32 nm technology node for global interconnects. It is observed that the analytical results obtained using the FDTD technique are in good agreement with the SPICE simulation results. The crosstalk-induced delay, propagation delay, and peak voltage obtained using the FDTD technique show average errors of 4.9%, 3.4% and 0.46%, respectively, in comparison to SPICE. (paper)
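A minimal leapfrog FDTD sketch for a single lossy RLC line illustrates the update scheme and the Courant restriction mentioned above. This is not the paper's coupled SWCNT bundle model, and the per-unit-length values are invented.

```python
import numpy as np

# One lossy RLC transmission line, leapfrog FDTD: V on integer nodes, I on
# half nodes. Per-unit-length values below are illustrative only.
R, L, C = 10.0, 2.5e-7, 1.0e-10           # ohm/m, H/m, F/m
length, nz = 0.01, 100                     # 1 cm line split into 100 cells
dz = length / nz
dt = 0.9 * dz * np.sqrt(L * C)             # time step kept under the Courant limit

V = np.zeros(nz + 1)                       # node voltages
I = np.zeros(nz)                           # branch currents
a = (L / dt - R / 2) / (L / dt + R / 2)    # semi-implicit loss factor
b = 1.0 / ((L / dt + R / 2) * dz)

for _ in range(400):
    V[1:nz] -= dt / (C * dz) * (I[1:] - I[:-1])   # update interior voltages
    V[0] = 1.0                                     # ideal 1 V step source
    V[nz] = V[nz - 1]                              # crude open-end condition
    I = a * I - b * (V[1:] - V[:-1])               # update currents with loss

print(f"far-end voltage after 400 steps: {V[nz]:.3f} V")
```

Choosing dt below dz·√(LC) is exactly the Courant condition the abstract refers to; a coupled-line version would carry one V and I array per line plus mutual L and C terms.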

10. Numerical analysis of high-power broad-area laser diode with improved heat sinking structure using epitaxial liftoff technique

Science.gov (United States)

Kim, Younghyun; Sung, Yunsu; Yang, Jung-Tack; Choi, Woo-Young

2018-02-01

The characteristics of high-power broad-area laser diodes with an improved heat-sinking structure are numerically analyzed by a technology computer-aided design (TCAD) based self-consistent electro-thermal-optical simulation. The high-power laser diodes consist of a separate confinement heterostructure with a compressively strained InGaAsP quantum well and GaInP optical cavity layers, a 100-μm-wide rib and a 2000-μm-long cavity. In order to overcome the performance deterioration of high-power laser diodes caused by self-heating, such as thermal rollover and thermal blooming, we propose a high-power broad-area laser diode with an improved heat-sinking structure, in which an additional effective heat-sinking path toward the substrate side is added by removing the bulk substrate. This can be obtained by removing the 400-μm-thick GaAs substrate with an AlAs sacrificial layer, utilizing well-known epitaxial liftoff techniques. In this study, we present the performance improvement of the high-power laser diode with the heat-sinking structure by suppressing thermal effects. It is found that both the lateral far-field angle and the quantum well temperature, which govern beam quality and optical output power respectively, are expected to be improved by the proposed heat-sinking structure.

11. Inversion of calcite twin data for paleostress (1) : improved Etchecopar technique tested on numerically-generated and natural data

Science.gov (United States)

Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie

2015-04-01

Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain the results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern, among others, the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user in defining the best stress tensor solution. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the different parameters, the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution that explains the maximum number of twinned planes and the whole set of untwinned planes is reached. This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence to the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning, as well as to evaluate the

12. Numerical Methods Are Feasible for Assessing Surgical Techniques: Application to Astigmatic Keratotomy

Energy Technology Data Exchange (ETDEWEB)

Ariza-Gracia, M.A.; Ortilles, A.; Cristobal, J.A.; Rodriguez, J.F.; Calvo, B.

2016-07-01

The present study proposes an experimental-numerical protocol whose novelty relies on using both the inflation and the indentation experiments simultaneously to obtain a set of material parameters which accounts for both deformation modes of the cornea: the physiological (biaxial tension) and the non-physiological (bending). The experimental protocol characterizes the corneal geometry and the mechanical response of the cornea when subjected to the experimental tests using an animal model (New Zealand rabbit's cornea). The numerical protocol reproduces the experimental tests by means of an inverse finite element methodology to obtain the set of material properties that minimizes both mechanical responses at the same time. To validate the methodology, an Astigmatic Keratotomy refractive surgery is performed on 4 New Zealand rabbit corneas. The pre- and post-surgical topographies of the anterior corneal surface were measured using a MODI topographer (CSO, Italy) to control the total change in astigmatism. Afterwards, the surgery is numerically reproduced to predict the overall change of the cornea. Results showed an acceptable numerical prediction, close to the average experimental correction, validating the material parameters obtained with the proposed protocol. (Author)

13. The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology

NARCIS (Netherlands)

Brown, Nicholas J. L.; Heathers, James A. J.

2017-01-01

We present a simple mathematical technique that we call granularity-related inconsistency of means (GRIM) for verifying the summary statistics of research reports in psychology. This technique evaluates whether the reported means of integer data such as Likert-type scales are consistent with the
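The GRIM check itself is simple enough to sketch: a mean reported to d decimal places is consistent with n integer-valued responses only if some integer total rounds back to the reported mean. A minimal version (illustrative, not the authors' code):

```python
import math

def grim_consistent(mean, n, decimals=2):
    """GRIM check: can a mean reported to `decimals` places arise from
    n integer-valued responses? True if some integer total rounds back
    to the reported mean."""
    total = mean * n
    for t in (math.floor(total), math.ceil(total)):
        if round(t / n, decimals) == round(mean, decimals):
            return True
    return False
```

For example, a reported mean of 2.57 with n = 7 is achievable (total 18 gives 2.571...), while 5.19 with n = 28 is not: no integer total yields a mean that rounds to 5.19.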

14. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

DEFF Research Database (Denmark)

Johansen, Søren; Swensen, Anders Rygh

Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

16. On the solution of two-point linear differential eigenvalue problems. [numerical technique with application to Orr-Sommerfeld equation]

Science.gov (United States)

Antar, B. N.

1976-01-01

A numerical technique is presented for locating the eigenvalues of two-point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane can be scanned and the eigenvalues within it, if any, located. As an application of the method, the eigenvalues of the Orr-Sommerfeld equation of plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
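A minimal sketch of this kind of complex-plane search, shown here on a small toy matrix rather than the discretized Orr-Sommerfeld operator: scan a window of the complex plane for the smallest |det(A - λI)|, then polish the candidate with Newton's method:

```python
import numpy as np

def char_det(lam, a):
    """Characteristic determinant det(A - lam*I)."""
    return np.linalg.det(a - lam * np.eye(a.shape[0], dtype=complex))

def scan_and_polish(a, re_pts, im_pts, h=1e-6, iters=50):
    """Coarse scan of a complex-plane window for the smallest |det|,
    then Newton refinement using a numerical derivative."""
    grid = [complex(x, y) for x in re_pts for y in im_pts]
    lam = min(grid, key=lambda z: abs(char_det(z, a)))
    for _ in range(iters):
        f = char_det(lam, a)
        df = (char_det(lam + h, a) - char_det(lam - h, a)) / (2 * h)
        step = f / df
        lam -= step
        if abs(step) < 1e-12:
            break
    return lam
```

The toy matrix [[0, 1], [-2, 2]] has eigenvalues 1 ± i; scanning the upper half of the window [0, 2] x [0.1, 2] locates 1 + i.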

17. Depth-profiling by confocal Raman microscopy (CRM): data correction by numerical techniques.

Science.gov (United States)

Tomba, J Pablo; Eliçabe, Guillermo E; Miguel, María de la Paz; Perez, Claudio J

2011-03-01

The data obtained in confocal Raman microscopy (CRM) depth profiling experiments with dry optics are subject to significant distortions, including an artificial compression of the depth scale, due to the combined influence of diffraction, refraction, and instrumental effects that operate on the measurement. This work explores the use of (1) regularized deconvolution and (2) the application of simple rescaling of the depth scale as methodologies to obtain an improved, more precise confocal response. The deconvolution scheme is based on a simple predictive model for depth resolution and the use of regularization techniques to minimize the dramatic oscillations in the recovered response typical of such inverse problems. That scheme is first evaluated using computer simulations on situations that reproduce smooth and sharp sample transitions between two materials, and finally it is applied to correct genuine experimental data, obtained in this case from a sharp transition (planar interface) between two polymeric materials. It is shown that the methodology recovers very well most of the lost profile features in all the analyzed situations. The use of simple rescaling appears to be only useful for correcting smooth transitions, particularly those extended over distances larger than those spanned by the operative depth resolution, which limits the strategy to the study of profiles near the sample surface. However, through computer simulations, it is shown that the use of water immersion objectives may help to reduce optical distortions and to expand the application window of this simple methodology, which could be useful, for instance, to safely monitor Fickian sorption/desorption of penetrants in polymer films/coatings in a nearly noninvasive way.
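The regularized-deconvolution idea can be sketched in a few lines, assuming a known instrument response (kernel) and a first-difference Tikhonov penalty to damp the oscillations of the naive inverse; the paper's actual depth-resolution model is not reproduced here:

```python
import numpy as np

def tikhonov_deconvolve(y, kernel, alpha):
    """Solve min ||K x - y||^2 + alpha * ||D x||^2, where K is the
    zero-padded 'same'-size convolution matrix of `kernel` (odd length)
    and D is a first-difference roughness penalty."""
    n, m = len(y), len(kernel)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = i - j + m // 2
            if 0 <= k < m:
                K[i, j] = kernel[k]
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    return np.linalg.solve(K.T @ K + alpha * D.T @ D, K.T @ y)
```

The regularization weight `alpha` trades fidelity against smoothness and would be tuned to the noise level; a sharp interface blurred by a Gaussian response is recovered well in the noiseless case.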

18. High-precision numerical simulation with autoadaptative grid technique in nonlinear thermal diffusion

International Nuclear Information System (INIS)

Chambarel, A.; Pumborios, M.

1992-01-01

This paper reports that many engineering problems concern the determination of a steady-state solution in cases with strong thermal gradients, and that results obtained using the finite-element technique are sometimes inaccurate, particularly for nonlinear problems with unadapted meshes. Building on previous results in linear problems, we propose an autoadaptive technique for nonlinear cases that uses quasi-Newtonian iterations to reevaluate an interpolation error estimation. The authors perfected an automatic refinement technique to solve the nonlinear thermal problem of temperature calculation in a cast-iron cylinder head of a diesel engine

19. Multi-band effective mass approximations advanced mathematical models and numerical techniques

CERN Document Server

Koprucki, Thomas

2014-01-01

This book addresses several mathematical models from the most relevant class of kp-Schrödinger systems. Both mathematical models and state-of-the-art numerical methods for adequately solving the arising systems of differential equations are presented. The operational principle of modern semiconductor nano structures, such as quantum wells, quantum wires or quantum dots, relies on quantum mechanical effects. The goal of numerical simulations using quantum mechanical models in the development of semiconductor nano structures is threefold: First they are needed for a deeper understanding of experimental data and of the operational principle. Secondly, they allow us to predict and optimize in advance the qualitative and quantitative properties of new devices in order to minimize the number of prototypes needed. Semiconductor nano structures are embedded as an active region in semiconductor devices. Thirdly and finally, the results of quantum mechanical simulations of semiconductor nano structures can be used wit...

20. Development of physical and numerical techniques of Alanine/EPR dosimetry in radiotherapy

International Nuclear Information System (INIS)

Castro, F.; Ponte, F.; Pereira, L.

2006-01-01

In this work, a set of 50 alanine dosimeters has been used in a radiotherapy context, simulating a two-dimensional treatment in a non-overlapping dosimeter configuration. The dose is reconstructed from physical and numerical simulation of the electron paramagnetic resonance signal, calculating the spin density. Thus, it can be used to better adjust the error in the calibration curve to give a final accuracy of <0.03 Gy. A complete set of experimental test parameters has been used with a standard dosimeter in order to obtain the best analysis configuration. These results indicate that for a conventional treatment of some hundreds of mGy, this method can be useful with a correct signal validation. Numerical test and fitting software has been developed. The general use of alanine/electron paramagnetic resonance dosimetry in a radiotherapy context is discussed. (authors)

1. Modeling seismic wave propagation across the European plate: structural models and numerical techniques, state-of-the-art and prospects

Science.gov (United States)

Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria

2010-05-01

beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches and modeling just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus, needed to fully exploit new data, will be widely accessible. To maximize this development, for instance to enable prompt modeling of ground shaking after a major earthquake, we will also need a better coordination framework that will enable us to share and amalgamate the abundant local information on earth structure - most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure and of best practices to model wave propagation can by all means be considered an enabling technology for further geophysical progress.
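The record does not reproduce the proposed JSON exchange format itself; as a hypothetical illustration of why JSON suits model interchange, a toy tomographic model (field names invented for this sketch, not the authors' standard) round-trips losslessly with any standard JSON library:

```python
import json

# Hypothetical minimal exchange format for a tomographic model;
# field names are illustrative only.
model = {
    "name": "toy-european-model",
    "parameter": "vs",          # shear-wave speed
    "units": "km/s",
    "grid": {
        "lon": [5.0, 10.0, 15.0],
        "lat": [40.0, 45.0, 50.0],
        "depth_km": [10.0, 50.0],
    },
    # values[i_depth][i_lat][i_lon]
    "values": [[[3.2, 3.3, 3.1], [3.4, 3.5, 3.3], [3.2, 3.4, 3.2]],
               [[4.3, 4.4, 4.2], [4.5, 4.6, 4.4], [4.3, 4.5, 4.3]]],
}
text = json.dumps(model, indent=2)   # human-readable interchange text
restored = json.loads(text)          # identical structure in any JSON parser
```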

2. An efficient numerical technique for solving navier-stokes equations for rotating flows

International Nuclear Information System (INIS)

Haroon, T.; Shah, T.M.

2000-01-01

This paper simulates an industrial problem by solving the compressible Navier-Stokes equations. The time-consuming triangularization of a large banded matrix is performed by the memory-economical Frontal Technique. This scheme successfully reduces the time for I/O operations, even for a matrix as large as 40,000 x 40,000. Previously, this industrial problem could be solved by using a modified Newton's method with the Gaussian elimination technique for the large matrix. In the present paper, the proposed Frontal Technique is successfully used, together with Newton's method, to solve the compressible Navier-Stokes equations for rotating cylinders. With the Frontal Technique, the method gives the solution within reasonably acceptable computational time. Results are compared with earlier work and found to be computationally very efficient. Some features of the solution are reported here for the rotating machines. (author)

3. Seismic qualification of nuclear control board by using base isolation technique

International Nuclear Information System (INIS)

Koizumi, T.; Tsujiuchi, N.; Fujita, T.

1987-01-01

The purpose is to adopt the base isolation technique as a new approach to seismic qualification of a nuclear control board. The basic concept of the base isolation technique is presented. The isolation device includes a two-dimensional linear motion mechanism with pre-tensioned coil springs and dampers. The control board is regarded as a lumped-mass system with inertia moment. The fundamental movement of the device and control board is calculated as a non-linear response problem. In addition to the fundamental analysis and numerical estimation, an experimental investigation has been undertaken using an actual-size control board. Sufficient agreement was recognized between the experimental results and the numerical estimation. (orig./HP)

4. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

DEFF Research Database (Denmark)

Quéau, Yvain; Durix, Bastien; Wu, Tao

2018-01-01

We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in pr...

5. An Energy Based Numerical Approach to Phase Change Problems

DEFF Research Database (Denmark)

Hauggaard-Nielsen, Anders Boe; Damkilde, Lars; Krenk, Steen

1996-01-01

Phase change problems, occurring e.g. in melting, casting and freezing processes, are often characterized by a very narrow transition zone with very large changes in heat capacity and conductivity. This leads to problems in numerical procedures, where the transition zone propagates through a mesh...

6. Detection and sizing of cracks using potential drop techniques based on electromagnetic induction

International Nuclear Information System (INIS)

Sato, Yasumoto; Kim, Hoon

2011-01-01

The potential drop techniques based on electromagnetic induction are classified into induced current focused potential drop (ICFPD) technique and remotely induced current potential drop (RICPD) technique. The possibility of numerical simulation of the techniques is investigated and the applicability of these techniques to the measurement of defects in conductive materials is presented. Finite element analysis (FEA) for the RICPD measurements on the plate specimen containing back wall slits is performed and calculated results by FEA show good agreement with experimental results. Detection limit of the RICPD technique in depth of back wall slits can also be estimated by FEA. Detection and sizing of artificial defects in parent and welded materials are successfully performed by the ICFPD technique. Applicability of these techniques to detection of cracks in field components is investigated, and most of the cracks in the components investigated are successfully detected by the ICFPD and RICPD techniques. (author)

7. Indepth diagnosis of a secondary clarifier by the application of radiotracer technique and numerical modeling.

Science.gov (United States)

Kim, H S; Shin, M S; Jang, D S; Jung, S H

2006-01-01

To make an in-depth diagnosis of a full-scale rectangular secondary clarifier, an experimental and numerical study has been performed in a wastewater treatment facility. Calculation results by the numerical model with the adoption of the SIMPLE algorithm of Patankar are validated with radiotracer experiments. Emphasis is given to the prediction of residence time distribution (RTD) curves. The predicted RTD profiles are in good agreement with the experimental RTD curves at the upstream and center sections except for the withdrawal zone of the complex effluent weir structure. The simulation results successfully predict the well-known flow characteristics of each stage such as the waterfall phenomenon at the front of the clarifier, the bottom density current and the surface return flow in the settling zone, and the upward flow in the exit zone. The detailed effects of density current are thoroughly investigated in terms of high SS loading and temperature difference between influent and ambient fluid. The program developed in this study shows the high potential to assist in the design and determination of optimal operating conditions to improve effluent quality in a full-scale secondary clarifier.
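RTD curves such as those measured here are typically summarized by their first two moments, the mean residence time and the variance; a minimal sketch using trapezoidal quadrature (illustrative, not the authors' code):

```python
def rtd_moments(t, c):
    """Mean residence time and variance of a tracer response curve C(t):
    t_mean = int(t*C)/int(C), var = int((t - t_mean)^2 * C)/int(C),
    evaluated with the trapezoidal rule."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2.0
                   for i in range(len(t) - 1))
    area = trapz(c)
    t_mean = trapz([ti * ci for ti, ci in zip(t, c)]) / area
    var = trapz([(ti - t_mean) ** 2 * ci for ti, ci in zip(t, c)]) / area
    return t_mean, var
```

For an ideal mixed tank, C(t) = exp(-t/tau)/tau, the mean residence time is tau and the variance tau squared, which makes a convenient sanity check.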

8. The numerical solution of thawing process in phase change slab using variable space grid technique

Directory of Open Access Journals (Sweden)

Serttikul, C.

2007-09-01

This paper focuses on the numerical analysis of the melting process in a phase change material, treating the moving boundary as the main parameter. In this study, a pure ice slab and a saturated porous packed bed are considered as the phase change material. The formulation consists of a heat conduction equation in each phase and a moving boundary equation (Stefan condition). The variable space grid method is then applied to these equations. The transient heat conduction equations and the Stefan condition are solved using the finite difference method. A one-dimensional melting model is then validated against the available analytical solution. The effect of a constant temperature heat source on the melting rate and the location of the melting front at various times is studied in detail. It is found that the nonlinearity of the melting rate occurs for a short time. The successful comparison between the numerical and analytical solutions should give confidence in the proposed mathematical treatment, and encourage the acceptance of this method as a useful tool for exploring practical problems such as material forming processes, ice melting, food preservation and tissue preservation.
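The variable space grid idea (nodes x_i = i*s(t)/N that stretch with the moving front, often attributed to Murray and Landis) can be sketched for a one-phase melting slab in dimensionless form; the parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def melt_front(St=0.5, N=10, dt=1e-5, t_end=1.0, s0=0.05):
    """One-phase Stefan melting with a variable space grid.

    Nodes x_i = i*s/N move with the front s(t); T[0] = 1 (hot wall),
    T[N] = 0 (melting front). Dimensionless units; St is the Stefan number.
    The moving grid adds a convective term xi * (ds/dt) * dT/dx."""
    T = np.linspace(1.0, 0.0, N + 1)
    s = s0
    xi = np.arange(1, N) / N                      # relative node positions
    for _ in range(int(round(t_end / dt))):
        dx = s / N
        dsdt = -St * (T[N] - T[N - 1]) / dx       # Stefan condition (one-sided)
        dTdx = (T[2:] - T[:-2]) / (2 * dx)
        d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
        T[1:-1] += dt * (xi * dsdt * dTdx + d2T)  # grid motion + diffusion
        s += dt * dsdt
    return s
```

Once the initial transient decays, the front should follow the Neumann similarity law s proportional to sqrt(t), so quadrupling the simulated time roughly doubles the front position.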

9. Comparison of groundwater residence time using isotope techniques and numerical groundwater flow model in Gneissic Terrain, Korea

International Nuclear Information System (INIS)

Bae, D.S.; Kim, C.S.; Koh, Y.K.; Kim, K.S.; Song, M.Y.

1997-01-01

The prediction of groundwater flow affecting the migration of radionuclides is an important component of the performance assessment of radioactive waste disposal. Groundwater flow in fractured rock mass is controlled by fracture networks, transmissivity and hydraulic gradient. Furthermore, the scale-dependent and anisotropic properties of hydraulic parameters result mainly from irregular patterns of the fracture system, which are very complex to evaluate properly with the current techniques available. For the purpose of characterizing groundwater flow in fractured rock mass, the discrete fracture network (DFN) concept is available, based on the assumptions that groundwater flows only along fractures and that flowpaths in the rock mass are formed by interconnected fractures. To increase the reliability of assessment of groundwater flow phenomena, a numerical groundwater flow model and isotopic techniques were applied. Fracture mapping and borehole acoustic scanning were performed to identify conductive fractures in the gneissic terrane. Tracer techniques using deuterium, oxygen-18 and tritium were applied to evaluate the recharge area and groundwater residence time

10. Multilevel techniques lead to accurate numerical upscaling and scalable robust solvers for reservoir simulation

DEFF Research Database (Denmark)

Christensen, Max la Cour; Villa, Umberto; Vassilevski, Panayot

2015-01-01

approach is well suited for the solution of large problems coming from finite element discretizations of systems of partial differential equations. The AMGe technique from 10,9 allows for the construction of operator-dependent coarse (upscaled) models and guarantees approximation properties of the coarse...... implementation of the reservoir simulator is demonstrated....

11. Numerical Relativity for Space-Based Gravitational Wave Astronomy

Science.gov (United States)

Baker, John G.

2011-01-01

In the next decade, gravitational wave instruments in space may provide high-precision measurements of gravitational-wave signals from strong sources, such as black holes. Currently variations on the original Laser Interferometer Space Antenna mission concepts are under study in the hope of reducing costs. Even the observations of a reduced instrument may place strong demands on numerical relativity capabilities. Possible advances in the coming years may fuel a new generation of codes ready to confront these challenges.

12. Analysis of control rod behavior based on numerical simulation

International Nuclear Information System (INIS)

Ha, D. G.; Park, J. K.; Park, N. G.; Suh, J. M.; Jeon, K. L.

2010-01-01

The main function of a control rod is to control core reactivity change during operation associated with changes in power, coolant temperature, and dissolved boron concentration by the insertion and withdrawal of control rods from the fuel assemblies. In a scram, the control rod assemblies are released from the CRDMs (Control Rod Drive Mechanisms) and, due to gravity, drop rapidly into the fuel assemblies. The control rod insertion time during a scram must be within the time limits established by the overall core safety analysis. To assure the control rod operational functions, the guide thimbles shall not obstruct the insertion and withdrawal of the control rods or cause any damage to the fuel assembly. When fuel assembly bow occurs, it can affect both the operating performance and the core safety. In this study, the drag forces of the control rod are estimated by a numerical simulation to evaluate the guide tube bow effect on control rod withdrawal. The contact condition effects are also considered. A full scale 3D model is developed for the evaluation, and ANSYS - commercial numerical analysis code - is used for this numerical simulation. (authors)

13. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

Directory of Open Access Journals (Sweden)

Olga L. Quintero

Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in process variables due to the influence of inhibition dynamics (rate of ethanol concentration) on biomass, substrate, and product concentrations. In this work a new solution for the control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m and without the use of complex calculations.

14. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

Science.gov (United States)

Harmon, Michael; Gamba, Irene M.; Ren, Kui

2016-12-01

This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
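The paper's mixed finite element / local discontinuous Galerkin discretization is not reproduced here, but the implicit-explicit (IMEX) time-stepping idea it uses can be illustrated on a 1D reaction-diffusion toy problem: the stiff diffusion operator is treated implicitly (backward Euler), the nonlinear reaction explicitly:

```python
import numpy as np

def imex_step(u, D, dt, dx, f):
    """One IMEX step for u_t = D*u_xx + f(u) with homogeneous Dirichlet
    boundaries: backward-Euler diffusion (implicit), explicit reaction."""
    n = len(u)
    r = D * dt / dx ** 2
    # tridiagonal backward-Euler diffusion matrix (I - dt*D*Laplacian)
    A = ((1 + 2 * r) * np.eye(n)
         - r * np.eye(n, k=1)
         - r * np.eye(n, k=-1))
    return np.linalg.solve(A, u + dt * f(u))
```

With f(u) = u*(1 - u) this sketch steps a Fisher-KPP-type equation; with f = 0 it reduces to the heat equation, whose known exponential decay of a sine mode provides a convenient accuracy check.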

15. Development of numerical solution techniques in the KIKO3D code

International Nuclear Information System (INIS)

Panka, Istvan; Kereszturi, Andras; Hegedus, Csaba

2005-01-01

The paper describes the numerical methods applied in the KIKO3D three-dimensional reactor dynamics code and presents a new, more effective method (Bi-CGSTAB) for accelerating the solution of the large sparse matrix equations. The convergence characteristics were investigated in a given macro time step of a Control Rod Ejection transient. The results obtained by the old GMRES and the new Bi-CGSTAB methods are compared. It is concluded that the real relative errors of the solutions obtained by the GMRES and Bi-CGSTAB algorithms are in fact closer together than the estimated relative errors. The KIKO3D-Bi-CGSTAB method converges safely and is 7-12% faster than the old KIKO3D-GMRES solution (Authors)
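An unpreconditioned BiCGSTAB iteration (van der Vorst's algorithm, sketched here independently of the KIKO3D implementation) fits in a few lines and shows why it suits large nonsymmetric systems: it needs only matrix-vector products with A, never a factorization:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned BiCGSTAB (van der Vorst) for nonsymmetric systems."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                  # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v             # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)     # stabilization step
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x
```

Unlike GMRES, which stores a growing Krylov basis (or restarts), BiCGSTAB uses a short recurrence with constant memory per iteration, which is one reason it can be the faster choice on large sparse reactor-dynamics systems.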

16. Numerical Analysis of the Cavity Flow subjected to Passive Controls Techniques

Science.gov (United States)

Melih Guleren, Kursad; Turk, Seyfettin; Mirza Demircan, Osman; Demir, Oguzhan

2018-03-01

Open-source flow solvers are becoming more and more popular for the analysis of challenging flow problems in aeronautical and mechanical engineering applications. They are offered under the GNU General Public License and can be run, examined, shared and modified according to the user's requirements. SU2 and OpenFOAM are the two most popular open-source solvers in the Computational Fluid Dynamics (CFD) community. In the present study, some passive control methods for high-speed cavity flows are numerically simulated using these open-source flow solvers along with one commercial flow solver, ANSYS/Fluent. The results are compared with the available experimental data. SU2 is seen to predict the mean streamwise velocity satisfactorily, but not the turbulent kinetic energy or the overall averaged sound pressure level (OASPL), whereas OpenFOAM predicts all these parameters at nearly the same levels as ANSYS/Fluent.

17. Kinetic calculations for miniature neutron source reactor using analytical and numerical techniques

International Nuclear Information System (INIS)

Ampomah-Amoako, E.

2008-06-01

The analytical methods, step change in reactivity and ramp change in reactivity, as well as the numerical methods, fixed point iteration and Runge-Kutta-Gill, were used to simulate the initial build-up of neutrons in a miniature neutron source reactor with and without the temperature feedback effect. The methods were modified to include the photoneutron concentration. PARET 7.3 was used to simulate the transient behaviour of Ghana Research Reactor-1. The PARET code was capable of simulating the transients for 2.1 mk and 4 mk insertions of reactivity with peak powers of 49.87 kW and 92.34 kW, respectively. The PARET code however failed to simulate the 6.71 mk insertion of reactivity which was predicted by Akaho et al. through TEMPFED. (au)
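The underlying point-kinetics model with one delayed-neutron group can be integrated with a hand-rolled Runge-Kutta scheme; this is a hedged sketch with illustrative constants, not GHARR-1 data:

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, Lam=1e-4, t_end=1.0, dt=1e-4):
    """One-delayed-group point kinetics under a step reactivity rho:
        dn/dt = ((rho - beta)/Lam) * n + lam * C
        dC/dt = (beta/Lam) * n - lam * C
    integrated with classical RK4, starting from equilibrium at n = 1.
    Constants (beta, lam, Lam) are illustrative only."""
    n, C = 1.0, beta / (lam * Lam)        # equilibrium precursor level
    def f(n, C):
        dn = (rho - beta) / Lam * n + lam * C
        dC = beta / Lam * n - lam * C
        return dn, dC
    for _ in range(int(round(t_end / dt))):
        k1 = f(n, C)
        k2 = f(n + dt / 2 * k1[0], C + dt / 2 * k1[1])
        k3 = f(n + dt / 2 * k2[0], C + dt / 2 * k2[1])
        k4 = f(n + dt * k3[0], C + dt * k3[1])
        n += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        C += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return n
```

For a step insertion below prompt critical (rho < beta), the neutron level shows the expected prompt jump of roughly beta/(beta - rho) followed by a slow rise on the stable period.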

18. Aperture Array Photonic Metamaterials: Theoretical approaches, numerical techniques and a novel application

Science.gov (United States)

Lansey, Eli

Optical or photonic metamaterials that operate in the infrared and visible frequency regimes show tremendous promise for solving problems in renewable energy, infrared imaging, and telecommunications. However, many of the theoretical and simulation techniques used at lower frequencies are not applicable to this higher-frequency regime. Furthermore, technological and financial limitations of photonic metamaterial fabrication increase the importance of reliable theoretical models and computational techniques for predicting the optical response of photonic metamaterials. This thesis focuses on aperture array metamaterials. That is, a rectangular, circular, or other shaped cavity or hole embedded in, or penetrating through, a metal film. The research in the first portion of this dissertation reflects our interest in developing a fundamental, theoretical understanding of the behavior of light's interaction with these aperture arrays, specifically regarding enhanced optical transmission. We develop an approximate boundary condition for metals at optical frequencies, and a comprehensive, analytical explanation of the physics underlying this effect. These theoretical analyses are augmented by computational techniques in the second portion of this thesis, used both for verification of the theoretical work, and solving more complicated structures. Finally, the last portion of this thesis discusses the results from designing, fabricating and characterizing a light-splitting metamaterial.

19. DCT-based cyber defense techniques

Science.gov (United States)

Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

2015-09-01

With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content, in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain, and are applicable to JPEG images and H.264 I-Frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.

20. Validation of a numerical algorithm based on transformed equations

International Nuclear Information System (INIS)

Xu, H.; Barron, R.M.; Zhang, C.

2003-01-01

Generally, a typical equation governing a physical process, such as fluid flow or heat transfer, has three types of terms that involve partial derivatives, namely, the transient term, the convective terms and the diffusion terms. The major difficulty in obtaining numerical solutions of these partial differential equations is the discretization of the convective terms. The transient term is usually discretized using the first-order forward or backward differencing scheme. The diffusion terms are usually discretized using the central differencing scheme and no difficulty arises since these terms involve second-order spatial derivatives of the flow variables. The convective terms are non-linear and contain first-order spatial derivatives. The main difference between various numerical algorithms is the discretization of the convective terms. In the present study, an alternative approach to discretizing the governing equations is presented. In this algorithm, the governing equations are first transformed by introducing an exponential function to eliminate the convective terms in the equations. The proposed algorithm is applied to simulate some fluid flows with exact solutions to validate the proposed algorithm. The fluid flows used in this study are a self-designed quasi-fluid flow problem, stagnation in plane flow (Hiemenz flow), and flow between two concentric cylinders. The comparisons with the power-law scheme indicate that the proposed scheme exhibits better performance. (author)
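The paper's exponential transformation itself is not spelled out in this record; a closely related standard construction is the exponentially fitted ("exact") scheme for 1D steady convection-diffusion, which the power-law scheme used above for comparison approximates, and which reproduces the exact nodal solution for constant coefficients:

```python
import numpy as np

def solve_conv_diff(Pe, n=11):
    """Steady 1D convection-diffusion u*phi' = G*phi'' on [0, 1] with
    phi(0) = 0, phi(1) = 1, using the exponentially fitted ('exact')
    scheme. Pe is the global Peclet number u*L/G."""
    pe = Pe / (n - 1)                     # cell Peclet number
    aE = pe / (np.exp(pe) - 1.0)          # downstream coefficient
    aW = aE * np.exp(pe)                  # upstream coefficient
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[-1] = 1.0                           # Dirichlet boundary values
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aW + aE, -aE
    return np.linalg.solve(A, b)
```

Because the exponential weights are built from the analytical solution of the constant-coefficient problem, the discrete solution matches phi(x) = (exp(Pe*x) - 1)/(exp(Pe) - 1) at the nodes to machine precision, for any cell Peclet number.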

1. Systematic study of the effects of scaling techniques in numerical simulations with application to enhanced geothermal systems

Science.gov (United States)

Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

2016-04-01

Numerical modeling is a well-established tool in rock mechanics studies investigating a wide range of problems. Especially for estimating the seismic risk of geothermal energy plants, a realistic rock-mechanical model is needed. To simulate a time-evolving system, two different approaches need to be separated: implicit methods for solving linear equations are unconditionally stable, while explicit methods are limited by the time step. However, explicit methods are often preferred because of their limited memory demand, their scalability in parallel computing, and simple implementation of complex boundary conditions. In numerical modeling of explicit elastoplastic dynamics the time step is limited by the rock density. Mass scaling techniques, which increase the rock density artificially by several orders of magnitude, can be used to overcome this limit and significantly reduce computation time. In the context of geothermal energy this is of great interest because in a coupled hydro-mechanical model the time step of the mechanical part is significantly smaller than for the fluid flow. Mass scaling can also be combined with time scaling, which increases the rate of physical processes, assuming that processes are rate independent. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically performed by trial and error. Scaling techniques are also often used in commercial software packages, hidden from the untrained user. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this work, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models
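The time-step limit behind mass scaling follows from a CFL-type condition: dt <= dx / c with elastic wave speed c = sqrt(E / rho) (a 1D estimate; production codes use element-wise variants). Scaling the density by a factor m therefore stretches the stable step by sqrt(m). A sketch with illustrative material values:

```python
import math

def stable_dt(dx, rho, E):
    """CFL-type stable time step for explicit elastodynamics:
    dt <= dx / c with 1D wave speed c = sqrt(E / rho)."""
    return dx / math.sqrt(E / rho)

dx, E, rho = 0.01, 50e9, 2700.0            # 1 cm elements, stiff rock (illustrative)
dt_real = stable_dt(dx, rho, E)            # physical density
dt_scaled = stable_dt(dx, rho * 1e4, E)    # mass scaling: density x 10^4
# dt grows by sqrt(10^4) = 100, cutting the number of explicit steps by 100x
```

The flip side, as the record stresses, is that artificially heavy material carries spurious inertia, so the scaled dynamics must be checked against the unscaled physics rather than trusted blindly.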

2. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

International Nuclear Information System (INIS)

Wulff, W.; Cheng, H.S.; Mallen, A.N.

1987-01-01

Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation, with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that combining the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses.

3. Nasal base narrowing: the combined alar base excision technique.

Science.gov (United States)

Foda, Hossam M T

2007-01-01

To evaluate the role of the combined alar base excision technique in narrowing the nasal base and correcting excessive alar flare. The study included 60 cases presenting with a wide nasal base and excessive alar flaring. The surgical procedure combined an external alar wedge resection with an internal vestibular floor excision. All cases were followed up for a mean of 32 (range, 12-144) months. Nasal tip modification and correction of any preexisting caudal septal deformities were always completed before the nasal base narrowing. The mean width of the external alar wedge excised was 7.2 (range, 4-11) mm, whereas the mean width of the sill excision was 3.1 (range, 2-7) mm. Completing the internal excision first resulted in a more conservative external resection, thus avoiding any blunting of the alar-facial crease. No cases of postoperative bleeding, infection, or keloid formation were encountered, and the external alar wedge excision healed with an inconspicuous scar that was well hidden in the depth of the alar-facial crease. Finally, the risk of notching of the alar rim, which can occur at the junction of the external and internal excisions, was significantly reduced by adopting a 2-layered closure of the vestibular floor (P = .01). The combined alar base excision resulted in effective narrowing of the nasal base with elimination of excessive alar flare. Commonly feared complications, such as blunting of the alar-facial crease or notching of the alar rim, were avoided by using simple modifications in the technique of excision and closure.

4. Robust and adaptive techniques for numerical simulation of nonlinear partial differential equations of fractional order

Science.gov (United States)

2017-03-01

In this paper, some nonlinear space-fractional-order reaction-diffusion equations (SFORDE) on a finite but large spatial domain x ∈ [0, L], x = (x, y, z), and t ∈ [0, T] are considered. The standard reaction-diffusion system with boundary conditions is generalized by replacing the second-order spatial derivatives with Riemann-Liouville space-fractional derivatives of order α, 0 < α ≤ 2. A Fourier spectral method is introduced as a better alternative to existing low-order schemes for the integration of fractional-in-space reaction-diffusion problems, in conjunction with an adaptive exponential time differencing method, and a range of one-, two- and three-component SFORDE are solved numerically to obtain patterns in one and two dimensions, with a straightforward extension to three spatial dimensions, in the sub-diffusive (0 < α < 1) reaction-diffusion case. With application to models in biology and physics, different spatiotemporal dynamics are observed and displayed.
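
The core of a Fourier spectral treatment of a space-fractional operator is that, on a periodic domain, the fractional Laplacian becomes multiplication by |k|^α in Fourier space. A minimal sketch (not the authors' code; the grid size and α below are arbitrary choices, and periodicity is an assumption of the sketch):

```python
import numpy as np

def fractional_laplacian(u, alpha, L=2*np.pi):
    """Spectral evaluation of (-Laplacian)^(alpha/2) of a periodic sample u."""
    n = u.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)   # angular wavenumbers
    return np.real(np.fft.ifft((np.abs(k)**alpha) * np.fft.fft(u)))

n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
alpha = 1.5
u = np.sin(3*x)
v = fractional_laplacian(u, alpha)
# Analytically, (-d^2/dx^2)^(alpha/2) sin(3x) = 3**alpha * sin(3x).
```

In a full scheme this diagonal operator is what makes exponential time differencing attractive: the stiff linear part can be integrated exactly in Fourier space.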

5. Development of Numerical Technique to Analyze the Flow Characteristics of Porous Media Using Lattice Boltzmann Method

Energy Technology Data Exchange (ETDEWEB)

Kim, Hyung Min [Kyonggi Univ., Suwon (Korea, Republic of)

2016-11-15

The performance of proton exchange membrane fuel cells (PEMFC) is strongly related to the water flow and accumulation in the gas diffusion layer (GDL) and catalyst layer. Understanding the behavior of the fluid from the characteristics of the media is crucial for improving the performance and design of the GDL. In this paper, a numerical method is proposed to calculate the design parameters of the GDL, i.e., permeability, tortuosity, and effective diffusivity. The fluid flow in a channel filled with randomly packed hard spheres is simulated to validate the method. The flow simulation was performed by the lattice Boltzmann method with a bounce-back condition for the solid volume fraction in the porous media, for different values of porosity. Permeability, which affects the flow, was calculated from the average pressure drop and the velocity in the porous media. Tortuosity, calculated as the ratio of the average path length of randomly injected massless particles to the thickness of the porous media, and the resultant effective diffusivity were in good agreement with the theoretical model. The suggested method can be used to calculate the parameters of a real GDL accurately without any modification.
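
Once the flow fields are available, each design parameter reduces to a small formula: Darcy's law for permeability, the mean-path ratio for tortuosity, and (as one common estimate, not necessarily the paper's model) D_eff = D·ε/τ for effective diffusivity. A sketch with made-up values:

```python
import numpy as np

def darcy_permeability(mu_pa_s, superficial_velocity_m_s, length_m, dp_pa):
    """Darcy's law rearranged: k = mu * u * L / dp."""
    return mu_pa_s * superficial_velocity_m_s * length_m / dp_pa

def tortuosity(path_lengths_m, thickness_m):
    """Ratio of the mean traversal path length to the layer thickness."""
    return np.mean(path_lengths_m) / thickness_m

def effective_diffusivity(d_bulk, porosity, tau):
    """One common estimate, D_eff = D * eps / tau (models vary)."""
    return d_bulk * porosity / tau

# Hypothetical numbers for a thin porous layer (illustrative only).
k = darcy_permeability(1.0e-3, 1.0e-4, 2.0e-4, 2.0e3)          # -> 1e-14 m^2
tau = tortuosity(np.array([2.6e-4, 2.4e-4, 2.5e-4]), 2.0e-4)   # -> 1.25
d_eff = effective_diffusivity(2.0e-5, 0.7, tau)
```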

6. Problems with numerical techniques: Application to mid-loop operation transients

Energy Technology Data Exchange (ETDEWEB)

Bryce, W.M.; Lillington, J.N.

1997-07-01

There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) assessment of codes against experiments; (2) plant studies specifically for Sizewell B; and (3) detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. The authors believe that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes for the conditions present in mid-loop transients; thus, as far as possible, the problems and their solutions are presented in generic terms. The areas addressed include: condensables at low pressure, poor time-step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems. These have been very much concerned with improving existing models rather than formulating a completely new approach, and they have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.

7. Elementary mechanics using Matlab a modern course combining analytical and numerical techniques

CERN Document Server

Malthe-Sørenssen, Anders

2015-01-01

This book – specifically developed as a novel textbook on elementary classical mechanics – shows how analytical and numerical methods can be seamlessly integrated to solve physics problems. This approach allows students to solve more advanced and applied problems at an earlier stage and equips them to deal with real-world examples well beyond the typical special cases treated in standard textbooks. Another advantage of this approach is that students are brought closer to the way physics is actually discovered and applied, as they are introduced right from the start to a more exploratory way of understanding phenomena and of developing their physical concepts. While not a requirement, it is advantageous for the reader to have some prior knowledge of scientific programming with a scripting-type language. This edition of the book uses Matlab, and a chapter devoted to the basics of scientific programming with Matlab is included. A parallel edition using Python instead of Matlab is also available. Last but not...

8. Elementary mechanics using Python a modern course combining analytical and numerical techniques

CERN Document Server

Malthe-Sørenssen, Anders

2015-01-01

This book – specifically developed as a novel textbook on elementary classical mechanics – shows how analytical and numerical methods can be seamlessly integrated to solve physics problems. This approach allows students to solve more advanced and applied problems at an earlier stage and equips them to deal with real-world examples well beyond the typical special cases treated in standard textbooks. Another advantage of this approach is that students are brought closer to the way physics is actually discovered and applied, as they are introduced right from the start to a more exploratory way of understanding phenomena and of developing their physical concepts. While not a requirement, it is advantageous for the reader to have some prior knowledge of scientific programming with a scripting-type language. This edition of the book uses Python, and a chapter devoted to the basics of scientific programming with Python is included. A parallel edition using Matlab instead of Python is also available. Last but not...

9. A Numerical Investigation of the Time Reversal Mirror Technique for Trans-skull Brain Cancer Ultrasound Surgery

Directory of Open Access Journals (Sweden)

H. Zahedmanesh

2007-06-01

Full Text Available Introduction: The medical applications of ultrasound on the human brain are highly limited by the phase and amplitude aberrations induced by the heterogeneities of the skull. However, it has been shown that time reversal coupled with amplitude compensation can overcome these aberrations. In this work, a model for 2D simulation of the time reversal mirror technique is proposed to study the possibility of targeting any point within the brain without the need for craniotomy, and to calculate the acoustic pressure field and the resulting temperature distribution within the skull and brain during High Intensity Focused Ultrasound (HIFU) transcranial therapy. Materials and Methods: To account for the sensitivity of the wave pattern to the heterogeneous geometry of the skull, a real MRI-derived 2D model is constructed. The model includes the real geometry of the brain and skull, as well as the couplant medium that couples the transducer to the skull so that the ultrasound can penetrate; clinically, the couplant used is water. The acoustic and thermal parameters are taken from the literature. Next, the wave propagation through the skull is computed based on the Helmholtz equation, with a 2D finite element analysis. The acoustic simulation is combined with a 2D thermal diffusion analysis based on the Pennes bioheat equation, and the temperature elevation inside the skull and brain is computed. The numerical simulations were performed using the FEMLAB 3.2 software on a PC with 8 GB RAM and a 2.4 GHz dual CPU. Results: The ultrasonic waves are focused exactly at the location where the hydrophone had previously been implanted. There is no penetration into the sinuses, and the waves are reflected from their surface because of the large discrepancy between the speed of sound in bone and in air.  Under the focal pressure of 2.5 MPa and after 4 seconds of sonication the temperature at the focus
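
The thermal half of such a simulation rests on the Pennes bioheat equation, ρc ∂T/∂t = k∇²T + w_b·c_b·(T_a − T) + Q. A minimal 1-D explicit finite-difference sketch (the constants and the focal heat source below are illustrative assumptions, not the tissue parameters of the paper, which used a 2D finite element solver):

```python
import numpy as np

# Illustrative tissue-like constants (not validated values).
rho_c = 3.6e6      # volumetric heat capacity [J/(m^3 K)]
k_t   = 0.5        # thermal conductivity [W/(m K)]
w_cb  = 2.0e3      # blood perfusion term w_b * c_b [W/(m^3 K)]
T_a   = 37.0       # arterial temperature [deg C]

n, dx = 101, 1.0e-3
dt = 0.2 * rho_c * dx**2 / k_t          # well under the explicit stability limit
T = np.full(n, 37.0)
Q = np.zeros(n); Q[45:56] = 1.0e6       # hypothetical focal heat deposition [W/m^3]

for _ in range(200):
    lap = (np.roll(T, 1) - 2*T + np.roll(T, -1)) / dx**2
    T += dt / rho_c * (k_t * lap + w_cb * (T_a - T) + Q)
    T[0] = T[-1] = 37.0                  # fixed boundary temperature

# The hottest point sits inside the heated (focal) zone.
```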

10. Thermal transport in phosphorene and phosphorene-based materials: A review on numerical studies

Science.gov (United States)

Hong, Yang; Zhang, Jingchao; Zeng, Xiao Cheng

2018-03-01

The recently discovered two-dimensional (2D) layered material phosphorene has attracted considerable interest as a promising p-type semiconducting material. In this article, we review the recent advances in numerical studies of the thermal properties of monolayer phosphorene and phosphorene-based heterostructures. We first briefly review the commonly used first-principles and molecular dynamics (MD) approaches to evaluate the thermal conductivity and interfacial thermal resistance of 2D phosphorene. Principles of different steady-state and transient MD techniques have been elaborated on in detail. Next, we discuss the anisotropic thermal transport of phosphorene in zigzag and armchair chiral directions. Subsequently, the in-plane and cross-plane thermal transport in phosphorene-based heterostructures such as phosphorene/silicon and phosphorene/graphene is summarized. Finally, the numerical research in the field of thermal transport in 2D phosphorene is highlighted along with our perspective of potentials and opportunities of 2D phosphorenes in electronic applications such as photodetectors, field-effect transistors, lithium ion batteries, sodium ion batteries, and thermoelectric devices.

11. Solving Linear Equations by Classical Jacobi-SR Based Hybrid Evolutionary Algorithm with Uniform Adaptation Technique

OpenAIRE

Jamali, R. M. Jalal Uddin; Hashem, M. M. A.; Hasan, M. Mahfuz; Rahman, Md. Bazlar

2013-01-01

Solving a set of simultaneous linear equations is probably the most important topic in numerical methods. For solving linear equations, iterative methods are preferred over direct methods, especially when the coefficient matrix is sparse. The rate of convergence of an iterative method can be increased by using the Successive Relaxation (SR) technique, but SR is very sensitive to the relaxation factor ω. Recently, hybridization of classical Gauss-Seidel based successive relaxation t...
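
The relaxation idea can be illustrated on the Jacobi side of the family: a weighted update with factor ω (sometimes called JOR). A small sketch on a diagonally dominant system (the matrix and the choice ω = 0.9 are arbitrary, not taken from the paper):

```python
import numpy as np

def jacobi_sr(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Jacobi iteration with relaxation factor omega (JOR):
    x_new = (1 - omega) * x + omega * D^{-1} (b - (A - D) x)."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - omega) * x + omega * (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant, so the iteration converges
b = np.array([5.0, 6.0, 5.0])
x = jacobi_sr(A, b, omega=0.9)
```

The sensitivity to ω that the abstract mentions is visible here: values far from the optimum slow convergence or diverge, which is what the hybrid evolutionary scheme adapts automatically.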

12. Village Level Tsunami Threat Maps for Tamil Nadu, SE Coast of India: Numerical Modeling Technique

Science.gov (United States)

MP, J.; Kulangara Madham Subrahmanian, D.; V, R. M.

2014-12-01

The Indian Ocean tsunami (IOT) devastated several countries of the North Indian Ocean. India was one of the worst affected countries after Indonesia and Sri Lanka; within India, Tamil Nadu suffered the most, with fatalities exceeding 8,000 people. Historical records show that tsunamis have invaded the shores of Tamil Nadu in the past, making people realize that the tsunami threat looms over Tamil Nadu and that it is necessary to evolve strategies for tsunami threat management. The IOT brought to light that tsunami inundation and runup varied within short distances, so for tsunami disaster management, large-scale maps identifying areas that are likely to be affected by a future tsunami are needed. Therefore, a threat assessment for six villages, including Mamallapuram (also called Mahabalipuram), famous for its rock-cut temples, in the northern part of the Tamil Nadu state of India has been carried out, and threat maps categorizing the coast into areas of different degrees of threat have been prepared. The threat was assessed by numerical modeling using the TUNAMI N2 code, considering different tsunamigenic sources along the Andaman-Sumatra trench. GEBCO and C-Map data were used for bathymetry, while land elevation data were generated by an RTK-GPS survey for a distance of 1 km from the shore and taken from SRTM for the inland areas. The model results show that, in addition to the Sumatra source which generated the IOT in 2004, earthquakes originating in Car Nicobar and North Andaman can inflict more damage. The North Andaman source can generate a massive tsunami, and an earthquake of magnitude greater than Mw 9 could affect not only Tamil Nadu but the entire south-east coast of India. The runup water level is used to demarcate the tsunami threat zones in the villages using GIS.

13. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

Science.gov (United States)

Daneshmend, L. K.; Pak, H. A.

1984-02-01

On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining; however, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
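
The classifier described above amounts to an empirical torque-to-wear map plus a threshold. A toy sketch with synthetic, exactly linear training data (the wear limit and all numbers are invented for illustration; the paper's empirical model and training cycle are more elaborate):

```python
import numpy as np

# Synthetic training data: cutting torque [N m] vs flank wear [mm], standing in
# for the automated training cycle with machine-vision inspection of the tool.
torque = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
wear   = np.array([0.05, 0.09, 0.13, 0.17, 0.21, 0.25])

# Empirical linear model wear = a * torque + c, fitted by least squares.
a, c = np.polyfit(torque, wear, 1)

WEAR_LIMIT_MM = 0.20   # hypothetical tool-replacement threshold

def tool_worn(measured_torque):
    """On-line classification from the torque signal alone."""
    return a * measured_torque + c > WEAR_LIMIT_MM
```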

14. Two dimensional fully nonlinear numerical wave tank based on the BEM

Science.gov (United States)

Sun, Zhe; Pang, Yongjie; Li, Hongwei

2012-12-01

The development of a two dimensional numerical wave tank (NWT) with a rocker or piston type wavemaker based on the high order boundary element method (BEM) and mixed Eulerian-Lagrangian (MEL) is examined. The cauchy principle value (CPV) integral is calculated by a special Gauss type quadrature and a change of variable. In addition the explicit truncated Taylor expansion formula is employed in the time-stepping process. A modified double nodes method is assumed to tackle the corner problem, as well as the damping zone technique is used to absorb the propagation of the free surface wave at the end of the tank. A variety of waves are generated by the NWT, for example; a monochromatic wave, solitary wave and irregular wave. The results confirm the NWT model is efficient and stable.

15. Exploring machine-learning-based control plane intrusion detection techniques in software defined optical networks

Science.gov (United States)

Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie

2017-12-01

In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats which compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON, with properly selected features such as bandwidth, route length, etc. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based techniques.
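
As an illustration of the general approach (not the authors' models or data), a logistic-regression detector can be trained on the two named features. The synthetic, deliberately well-separated request data below make the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic service requests; columns = (bandwidth, route length).
# The feature distributions are invented for illustration only.
normal = rng.normal([10.0, 3.0], 1.0, size=(200, 2))   # label 0: benign
attack = rng.normal([40.0, 9.0], 1.0, size=(200, 2))   # label 1: intrusion-like
X = np.vstack([normal, attack])
y = np.array([0]*200 + [1]*200)

# Standardize features, then fit logistic regression by plain gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5).astype(int) == y)
```

On real control-plane traces the classes overlap far more, which is why the paper reports 83% rather than the near-perfect separation this toy data allows.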

16. Numerical relativity

International Nuclear Information System (INIS)

Piran, T.

1982-01-01

There are many recent developments in numerical relativity, but there remain important unsolved theoretical and practical problems. The author reviews existing numerical approaches to solution of the exact Einstein equations. A framework for classification and comparison of different numerical schemes is presented. Recent numerical codes are compared using this framework. The discussion focuses on new developments and on currently open questions, excluding a review of numerical techniques. (Auth.)

17. Theoretical and numerical studies of TWR based on ESFR core design

International Nuclear Information System (INIS)

Zhang, Dalin; Chen, Xue-Nong; Flad, Michael; Rineiski, Andrei; Maschek, Werner

2013-01-01

Highlights: • The traveling wave reactor (TWR) is studied based on the core design of the European Sodium-cooled Fast Reactor (ESFR). • The conventional fuel shuffling technique is used to produce a continuous radial fuel movement. • A stationary, self-sustainable nuclear fission power can be established asymptotically by loading only natural or depleted uranium. • The multi-group deterministic neutronic code ERANOS is applied. - Abstract: This paper deals with the so-called traveling wave reactor (TWR) based on the core design of the European Sodium-cooled Fast Reactor (ESFR). The current TWR concept is to use the conventional radial fuel shuffling technique to produce a continuous radial fuel movement, so that a stationary, self-sustainable nuclear fission power can be established asymptotically by loading only fertile material consisting of natural or depleted uranium. The ESFR core design loaded with metallic uranium fuel, without considering the control mechanism, is used as a practical application example. The theoretical studies focus mainly on qualitative feasibility analyses, i.e. identifying the essential parameter dependences of this kind of reactor. The numerical studies are carried out more specifically on a certain core design. The multi-group deterministic neutronic code ERANOS with the JEFF3.1 data library is applied as a basic tool to perform the neutronics and burn-up calculations. The calculations are performed in a 2-D R-Z geometry, which is sufficient for the current core layout. Numerical results of radial fuel shuffling indicate that the asymptotic k_eff varies parabolically with the shuffling period, while the burn-up increases linearly. Typical shuffling periods investigated in this study are in the range of 300–1000 days. The important parameters, e.g. k_eff, the burn-up, the power peaking factor, and the safety coefficients, are calculated.

18. Development of a numerical experiment technique to solve inverse gamma-ray transport problems with application to nondestructive assay of nuclear waste barrels

International Nuclear Information System (INIS)

Chang, C.J.; Anghaie, S.

1998-01-01

A numerical experimental technique is presented to find an optimum solution to an underdetermined inverse gamma-ray transport problem involving the nondestructive assay of the radionuclide inventory in a nuclear waste drum. The method introduced is an optimization scheme based on performing a large number of numerical simulations that account for the counting statistics, the nonuniformity of the source distribution, and the heterogeneous density of the self-absorbing medium inside the waste drum. The simulation model uses forward projection and backward reconstruction algorithms. The forward projection algorithm uses a randomly selected source distribution and a first-flight kernel method to calculate external detector responses. The backward reconstruction algorithm uses the conjugate gradient method with a nonnegativity constraint, or the maximum-likelihood expectation-maximization (MLEM) method, to reconstruct the source distribution based on the calculated detector responses. Total source activity is determined by summing the reconstructed activity of each computational grid cell. By conducting 10,000 numerical simulations, the error bound and the associated confidence level for the prediction of total source activity are determined. The accuracy and reliability of the simulation model are verified by performing a series of experiments in a 208-liter waste barrel. Density heterogeneity is simulated by using different materials distributed in 37 egg-crate-type compartments simulating a vertical segment of the barrel. Four orthogonal detector positions are used to measure the emerging radiation field from the distributed source. Results of the performed experiments are in full agreement with the estimated error and confidence level predicted by the simulation model.
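
The MLEM update used in such reconstruction steps has a compact multiplicative form that keeps activities nonnegative by construction. A toy sketch with an invented 4-detector, 3-voxel response kernel and noiseless data (the real problem uses a first-flight kernel over many grid cells):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood expectation-maximization:
    x <- x * (A^T (y / (A x))) / (A^T 1).
    Multiplicative updates preserve nonnegativity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # per-voxel sensitivity
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # guard against division by zero
        x *= (A.T @ ratio) / sens
    return x

# Toy system matrix: 4 detector responses to 3 source voxels (made-up kernel).
A = np.array([[0.9, 0.3, 0.1],
              [0.4, 0.8, 0.4],
              [0.1, 0.3, 0.9],
              [0.5, 0.5, 0.5]])
x_true = np.array([2.0, 0.0, 5.0])
y = A @ x_true                                  # noiseless detector counts

x_rec = mlem(A, y)
total_activity = x_rec.sum()                    # the assay quantity of interest
```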

19. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

Science.gov (United States)

Gunzburger, M. D.; Nicolaides, R. A.

1986-01-01

Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.
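
Block Gauss elimination of a two-substructure system reduces to forming the Schur complement of the interior block. The sketch below assumes an invertible pivot block (shifted random matrices), so it does not exercise the paper's no-interchange handling of singular pivots; it only shows the basic elimination-and-back-substitution pattern on a nonsymmetric system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonsymmetric block system: interior unknowns x1 coupled to interface
# unknowns x2. The +5*I shifts keep the pivot block invertible (an assumption).
n1, n2 = 6, 3
A11 = rng.normal(size=(n1, n1)) + 5*np.eye(n1)
A12 = rng.normal(size=(n1, n2))
A21 = rng.normal(size=(n2, n1))
A22 = rng.normal(size=(n2, n2)) + 5*np.eye(n2)
b1, b2 = rng.normal(size=n1), rng.normal(size=n2)

# Block elimination: Schur complement S = A22 - A21 A11^{-1} A12,
# solve the interface problem first, then back-substitute for the interior.
S = A22 - A21 @ np.linalg.solve(A11, A12)
x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))
x1 = np.linalg.solve(A11, b1 - A12 @ x2)

# Reference: monolithic solve of the assembled system.
A = np.block([[A11, A12], [A21, A22]])
x_ref = np.linalg.solve(A, np.concatenate([b1, b2]))
```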

20. Effect of lithological variations of mine roof on chock shield support using numerical modeling technique

Energy Technology Data Exchange (ETDEWEB)

NONE

2006-09-15

Interaction between chock shield supports, the most popular powered supports in Indian longwall mines, and surrounding coal measure strata is analyzed using finite element models. Thickness and material properties of the main roof, the immediate roof and the coal seam are varied to simulate various geological conditions of Indian coal measure strata. Contact/gap elements are inserted in between the main roof and overburden layer to allow strata separation. Nonlinear material properties are applied with plastic corrections based on Drucker-Prager yield criterion. This paper illustrates effects of lithological variations on shield load, abutment stress, yield zone and longwall face convergence.

1. Teaching numerical methods with IPython notebooks and inquiry-based learning

KAUST Repository

Ketcheson, David I.

2014-01-01

A course in numerical methods should teach both the mathematical theory of numerical analysis and the craft of implementing numerical algorithms. The IPython notebook provides a single medium in which mathematics, explanations, executable code, and visualizations can be combined, and with which the student can interact in order to learn both the theory and the craft of numerical methods. The use of notebooks also lends itself naturally to inquiry-based learning methods. I discuss the motivation and practice of teaching a course based on the use of IPython notebooks and inquiry-based learning, including some specific practical aspects. The discussion is based on my experience teaching a Masters-level course in numerical analysis at King Abdullah University of Science and Technology (KAUST), but is intended to be useful for those who teach at other levels or in industry.

2. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

Directory of Open Access Journals (Sweden)

Takashi Tanaka

2014-06-01

Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.
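
For a one-dimensional coherent field, the Wigner function is the Fourier transform, over the separation s, of the mutual coherence E(x+s/2)·E*(x−s/2). A discrete periodic sketch (not the paper's implementation; the Gaussian test field is an arbitrary choice) that checks the intensity marginal:

```python
import numpy as np

def wigner(E):
    """Discrete Wigner function W(x, k) of a 1-D complex field E(x):
    FFT over the separation index of the mutual coherence, with
    periodic wrap-around of the sample indices."""
    n = E.size
    j = np.arange(n)
    # C[i, m] = E[i + m] * conj(E[i - m])   (indices taken modulo n)
    C = E[(j[:, None] + j[None, :]) % n] * np.conj(E[(j[:, None] - j[None, :]) % n])
    return np.real(np.fft.fft(C, axis=1))

n = 128
x = np.linspace(-8, 8, n, endpoint=False)
E = np.exp(-x**2).astype(complex)      # Gaussian test field
W = wigner(E)

# Summing W over k recovers the intensity |E(x)|^2 up to a constant factor,
# one of the standard sanity checks for a Wigner computation.
marginal = W.sum(axis=1)
```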

3. A calculation method for RF couplers design based on numerical simulation by microwave studio

International Nuclear Information System (INIS)

Wang Rong; Pei Yuanji; Jin Kai

2006-01-01

A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

4. Numerical Experiments Based on the Catastrophe Model of Solar Eruptions

Science.gov (United States)

Xie, X. Y.; Ziegler, U.; Mei, Z. X.; Wu, N.; Lin, J.

2017-11-01

On the basis of the catastrophe model developed by Isenberg et al., we use the NIRVANA code to perform magnetohydrodynamic (MHD) numerical experiments investigating various behaviors of a coronal magnetic configuration that includes a current-carrying flux rope used to model a prominence levitating in the corona. These behaviors include the evolution of the equilibrium heights of the flux rope versus the change in the background magnetic field, the corresponding internal equilibrium of the flux rope, the dynamic properties of the flux rope after the system loses equilibrium, and the impact of the referential radius on the equilibrium heights of the flux rope. In our calculations, an empirical model of the coronal density distribution given by Sittler & Guhathakurta is used, and physical diffusion is included. Our experiments show that a deviation of the simulated equilibrium heights from the theoretical results exists but is not pronounced, and the evolutionary features of the two results are similar. If the flux rope is initially located on the stable branch of the theoretical equilibrium curve, it quickly reaches the equilibrium position in the simulation after several rounds of oscillations resulting from the self-adjustment of the system; the flux rope loses equilibrium if its initial location is set at the critical point on the theoretical equilibrium curve. Correspondingly, the internal equilibrium of the flux rope can be reached as well, and the deviation from the theoretical results is somewhat more apparent, since the small-radius approximation for the flux rope is lifted in our experiments, but this deviation does not affect the global equilibrium of the system. The impact of the referential radius on the equilibrium heights of the flux rope is consistent with the prediction of the theory. Our calculations indicate that the motion of the flux rope after the loss of equilibrium is consistent with which

5. Theoretical and numerical studies on the transport of transverse beam quality in plasma-based accelerators

International Nuclear Information System (INIS)

Mehrling, Timon Johannes

2014-11-01

This work examines effects which impact the transverse quality of electron beams in plasma-based accelerators, by means of theoretical and numerical methods. Plasma-based acceleration is a promising candidate for future particle accelerator technologies. In plasma-based acceleration, highly intense laser beams or high-current relativistic particle beams are focused into a plasma to excite plasma waves with extreme transverse and longitudinal electric fields. These fields reach amplitudes of 10-100 GV/m, exceeding those in today's radio-frequency accelerators by several orders of magnitude and hence, in principle, allowing for accordingly shorter and cheaper accelerators based on plasma. Despite the tremendous progress of the recent decade, beams from plasma accelerators do not yet achieve the quality demanded by pivotal applications of relativistic electron beams, e.g. free-electron lasers (FELs). Studies within this work examine how the beam quality can be optimized during the production of the beams and preserved during the acceleration and transport to the interaction region. Such studies cannot be approached purely analytically but necessitate numerical methods, such as the particle-in-cell (PIC) method, which can model kinetic, electrodynamic, and relativistic plasma phenomena. However, this method is computationally too expensive for parameter scans in three-dimensional geometries. Hence, a quasi-static PIC code was developed in connection with this work, which is significantly more effective than the full PIC method for a class of problems in plasma-based acceleration. The evolution of the emittance of beams injected into plasma modules was studied in this work by means of theoretical methods and the above numerical methods. It was shown that the beam parameters need to be matched accurately into the focusing plasma channel in order to allow for beam-quality preservation. This suggests that new extraction and injection techniques are required in staged plasma

6. Tensile Split Hopkinson Bar Technique: Numerical Analysis of the Problem of Wave Disturbance and Specimen Geometry Selection

Directory of Open Access Journals (Sweden)

Panowicz Robert

2016-09-01

Full Text Available A method for tensile testing of materials under dynamic conditions, based on a slightly modified compressive split Hopkinson bar system using a shoulder, is described in this paper. The main goal was to solve, with the use of numerical modelling, the problem of wave disturbance resulting from the application of a shoulder, as well as the problem of selecting a specimen geometry that enables the study of high strain-rate failure in tension. It is shown that, in order to prevent any interference of the disturbance with the required strain signals at a given recording moment, the positions of the strain gages on the bars have to be chosen correctly for a given experimental setup. Besides, it is demonstrated that, on the basis of a simplified numerical analysis, an appropriate gage length and diameter of a material specimen for failure testing in tension can be estimated.

7. Nature Inspired Computational Technique for the Numerical Solution of Nonlinear Singular Boundary Value Problems Arising in Physiology

Directory of Open Access Journals (Sweden)

Suheel Abdullah Malik

2014-01-01

Full Text Available We present a hybrid heuristic computing method for the numerical solution of nonlinear singular boundary value problems arising in physiology. The approximate solution is expressed as a linear combination of log-sigmoid basis functions. A fitness function representing the sum of the mean square error of the given nonlinear ordinary differential equation (ODE) and of its boundary conditions is formulated. The optimization of the unknown adjustable parameters contained in the fitness function is performed by a hybrid heuristic computation algorithm based on the genetic algorithm (GA), the interior point algorithm (IPA), and the active set algorithm (ASA). The efficiency and viability of the proposed method are confirmed by solving three examples from physiology. The obtained approximate solutions are found to be in excellent agreement with the exact solutions as well as with some conventional numerical solutions.
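A minimal sketch of the approach, assuming an illustrative singular ODE y'' + (2/x)y' = y with y'(0) = 0, y(1) = 1 (not necessarily one of the paper's three physiology examples), and with SciPy's differential evolution standing in for the GA-IPA-ASA hybrid:

```python
import numpy as np
from scipy.optimize import differential_evolution

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def y_and_derivs(p, x):
    # p = [w1,a1,b1, w2,a2,b2, ...]; y(x) = sum_i w_i * sigmoid(a_i x + b_i)
    w, a, b = p[0::3], p[1::3], p[2::3]
    s = sig(np.outer(x, a) + b)
    y   = s @ w
    yp  = (s * (1 - s)) @ (w * a)                    # y'
    ypp = (s * (1 - s) * (1 - 2 * s)) @ (w * a**2)   # y''
    return y, yp, ypp

x = np.linspace(1e-2, 1.0, 40)  # collocation points; avoid the singularity at x = 0

def fitness(p):
    y, yp, ypp = y_and_derivs(p, x)
    resid = ypp + (2.0 / x) * yp - y                 # ODE residual
    _, yp0, _ = y_and_derivs(p, np.array([1e-6]))    # y'(0) ~ 0
    y1, _, _ = y_and_derivs(p, np.array([1.0]))      # y(1) = 1
    return np.mean(resid**2) + yp0[0]**2 + (y1[0] - 1.0)**2

# Three basis functions; global evolutionary search stands in for the hybrid.
bounds = [(-3, 3)] * 9
res = differential_evolution(fitness, bounds, maxiter=60, seed=0, tol=1e-8)
print("final fitness:", res.fun)
```

The zero-weight baseline has fitness 1.0 (the boundary-condition error alone), so any useful run must come in well below that.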

8. Numeric processor and text manipulator for the ''MASTER CONTROL'' data-base-management system

International Nuclear Information System (INIS)

Kuhn, R.W.

1976-01-01

The numeric and text processor of the MASTER CONTROL (MCP) data-base-management system permits the user to define fields and arrays that are functionally dependent on the data retained in a data base. This allows the storage of only the essential and unique information and data, and the calculation of derivable quantities as required. The derived quantity can be expressed as an arithmetic expression, that is, a functional relationship. Functions can be multiply subscripted and can be embedded within other functions at up to 58 levels. They can be stored either semi-permanently in a repertoire of functional relations, or they can be defined interactively from a terminal and used immediately for searching on the derived value. The processor also permits the conversion of literal strings into numbers, and vice versa. In addition, the user can define dictionaries that allow the expansion of keyed sentinels associated with records in the data base into fully descriptive expressions. This option can be used for cost-effective searching and data compaction. The functional definitions are reduced to Polish notation and stored in a disk file from which they are either retrieved on demand and evaluated according to the data of records specified or used in any given MASTER CONTROL command. The language used for the definitions of the numeric processor is essentially FORTRAN; most of the standard functions and over two dozen special functions are thus available. The functional processor provides a powerful technique for the integration of text and data for energy research and for scientific and technological work in general. MASTER CONTROL is operational at the Lawrence Livermore Laboratory (LLL) and at the Los Alamos Scientific Laboratory (LASL). 6 figures, 1 table
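The evaluation of a functional definition reduced to Polish (postfix) notation can be sketched as below; the token syntax and field names are illustrative, not MASTER CONTROL's actual language:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def eval_postfix(tokens, record):
    """Evaluate a postfix expression; identifiers are looked up in `record`,
    other tokens are converted from literal strings into numbers."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        elif tok in record:
            stack.append(record[tok])    # field retained in the data base
        else:
            stack.append(float(tok))     # literal-to-number conversion
    return stack.pop()

# Derived quantity computed on demand from stored fields: 0.5 * m * v^2.
rec = {'mass': 4.0, 'velocity': 3.0}
ke = eval_postfix('mass velocity velocity * * 2 /'.split(), rec)
print(ke)  # 18.0
```

Storing only `mass` and `velocity` and deriving the energy on retrieval is the data-compaction idea the abstract describes.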

9. Synchrotron radiation based analytical techniques (XAS and XRF)

International Nuclear Information System (INIS)

Jha, Shambhu Nath

2014-01-01

A brief description of the principles of X-ray absorption spectroscopy (XAS) and X-ray fluorescence (XRF) techniques is given in this article with emphasis on the advantages of using synchrotron radiation-based instrumentation/beamline. XAS technique is described in more detail to emphasize the strength of the technique as a local structural probe. (author)

10. Problem-Based Instructional Strategy and Numerical Ability as Determinants of Senior Secondary Achievement in Mathematics

Science.gov (United States)

2016-01-01

The study investigated problem-based instructional strategy and numerical ability as determinants of senior secondary achievement in mathematics. It used a 4 x 2 x 2 non-randomised control-group pretest-posttest quasi-experimental factorial design, with two independent variables (treatment and numerical ability) and one moderating…

11. A Numerical Matrix-Based method in Harmonic Studies in Wind Power Plants

DEFF Research Database (Denmark)

2016-01-01

In the low-frequency range, there are couplings between the positive- and negative-sequence small-signal impedances of the power converter due to nonlinear and low-bandwidth control loops such as the synchronization loop. In this paper, a new numerical method which also considers...... these couplings will be presented. The numerical data are advantageous compared with the parametric differential equations, because analysing the high-order and complex transfer functions is very difficult, so that one finally resorts to numerical evaluation methods. This paper proposes a numerical matrix-based method, which...

12. Numerical Multilevel Upscaling for Incompressible Flow in Reservoir Simulation: An Element-based Algebraic Multigrid (AMGe) Approach

DEFF Research Database (Denmark)

Christensen, Max la Cour; Villa, Umberto; Engsig-Karup, Allan Peter

2017-01-01

We study the application of a finite element numerical upscaling technique to the incompressible two-phase porous media total velocity formulation. Specifically, an element agglomeration based Algebraic Multigrid (AMGe) technique with improved approximation properties [37] is used, for the first...... associated with non-planar interfaces between agglomerates, the coarse velocity space has guaranteed approximation properties. The employed AMGe technique provides coarse spaces with desirable local mass conservation and stability properties analogous to the original pair of Raviart-Thomas and piecewise...... discontinuous polynomial spaces, resulting in strong mass conservation for the upscaled systems. Due to the guaranteed approximation properties and the generic nature of the AMGe method, recursive multilevel upscaling is automatically obtained. Furthermore, this technique works for both structured...

13. Structural reliability analysis based on the cokriging technique

International Nuclear Information System (INIS)

Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng

2010-01-01

Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit-state function. Recently, the kriging method, a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points must be generated to fill the design space, which can be very expensive and even impractical in engineering analysis. Therefore, in this paper a new method, cokriging, which is an extension of kriging, is proposed to calculate the structural reliability. Cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models with improved accuracy and efficiency for structural reliability problems and is a viable alternative to kriging.
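As a baseline for the comparison described above, ordinary kriging can be sketched in a few lines; cokriging would augment the correlation system with gradient observations, which is omitted here (data and kernel parameters are illustrative):

```python
import numpy as np

def corr(x1, x2, theta=10.0):
    """Gaussian correlation model between two sets of 1-D points."""
    return np.exp(-theta * (x1[:, None] - x2[None, :])**2)

x_s = np.linspace(0.0, 1.0, 8)       # sample points in the design space
y_s = np.sin(2 * np.pi * x_s)        # responses (stand-in for a limit-state function)

# Ordinary-kriging system: correlation matrix plus unbiasedness constraint.
n = len(x_s)
K = np.zeros((n + 1, n + 1))
K[:n, :n] = corr(x_s, x_s) + 1e-10 * np.eye(n)  # small nugget for conditioning
K[:n, n] = 1.0
K[n, :n] = 1.0

def predict(x0):
    rhs = np.append(corr(np.atleast_1d(x0), x_s).ravel(), 1.0)
    lam = np.linalg.solve(K, rhs)[:n]   # kriging weights
    return lam @ y_s

print(predict(0.3), np.sin(2 * np.pi * 0.3))
```

The predictor interpolates the samples exactly (up to the nugget); cokriging adds rows and columns for the sampled gradients, improving accuracy per sample at the cost of a larger system.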

14. Defect-based graphene nanoribbon photodetectors: A numerical study

Energy Technology Data Exchange (ETDEWEB)

Zarei, M. H.; Sharifi, M. J., E-mail: m-j-sharifi@sbu.ac.ir [Department of Electrical Engineering, Shahid Beheshti University, Tehran 1983963113 (Iran, Islamic Republic of)

2016-06-07

Recently, some photodetectors based on graphene have been proposed. In all of these works, current generation was carried out by separation of photo-excited carriers using an electric field, either internal or external. In this work, a new method of producing current which is based on different transmission coefficients for electrons and holes when they travel toward any of the two contacts is proposed. To this end, a single Stone–Wales defect close to one of the two contacts was used to break the channel symmetry. In order to confirm the idea, the non-equilibrium Green's function formalism in real space in conjunction with the tight binding method was used in simulations. In addition, to clarify the results, we present a classical model in which different diffusion constants are assumed for the left going and the right going carriers. Additional simulations for different positions of the defect, different lengths of the ribbon, and different bias voltages were performed, and the results are included in this study.

15. Energy-based numerical models for assessment of soil liquefaction

Directory of Open Access Journals (Sweden)

Amir Hossein Alavi

2012-07-01

Full Text Available This study presents promising variants of genetic programming (GP), namely linear genetic programming (LGP) and multi expression programming (MEP), to evaluate the liquefaction resistance of sandy soils. Generalized LGP- and MEP-based relationships were developed between the strain energy density required to trigger liquefaction (capacity energy) and the factors affecting the liquefaction characteristics of sands. The correlations were established on the basis of well-established and widely dispersed experimental results obtained from the literature. To verify the applicability of the derived models, they were employed to estimate the capacity energy values for the parts of the test results that were not included in the analysis. The external validation of the models was verified using statistical criteria recommended by researchers. Sensitivity and parametric analyses were performed for further verification of the correlations. The results indicate that the proposed correlations effectively capture the liquefaction resistance of a number of sandy soils and provide significantly better prediction performance than the models found in the literature. Furthermore, the best LGP and MEP models outperform the optimal traditional GP model. The verification phases confirm the efficiency of the derived correlations for general application to the assessment of the strain energy at the onset of liquefaction.

16. Elastic full waveform inversion based on the homogenization method: theoretical framework and 2-D numerical illustrations

Science.gov (United States)

Capdeville, Yann; Métivier, Ludovic

2018-05-01

Seismic imaging is an efficient tool to investigate the Earth's interior. Many of the imaging techniques currently used, including so-called full waveform inversion (FWI), are based on limited frequency-band data. Such data are not sensitive to the true Earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has recently been developed. With such an asymptotic theory, it is possible to compute an effective medium, valid for a given frequency band, such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency-band inversion, mainly FWI, and homogenization, and we establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We illustrate numerically, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom; in particular, inverting for the full elastic tensor was, in each of our tests, a good choice. We show how homogenization can help in understanding FWI behaviour and in improving its robustness and convergence by efficiently constraining the solution space of the inverse problem.
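The notion that band-limited waves see only a smooth effective medium can be illustrated by its simplest instance, a Backus-style long-wavelength average of a finely layered 1-D elastic model (the layer properties below are illustrative, not from the paper):

```python
import numpy as np

# Finely layered 1-D model: alternating soft/stiff layers of equal thickness.
idx = np.arange(1000)
rho = np.where(idx % 2 == 0, 2000.0, 2500.0)   # density [kg/m^3]
v = np.where(idx % 2 == 0, 2000.0, 3500.0)     # P velocity [m/s]
M = rho * v**2                                 # P-wave modulus

# Long-wavelength (Backus-type) effective medium for normal incidence:
# harmonic average of moduli, arithmetic average of density.
M_eff = 1.0 / np.mean(1.0 / M)
rho_eff = np.mean(rho)
v_eff = np.sqrt(M_eff / rho_eff)
print(v_eff, v.mean())  # the effective velocity is NOT the plain average
```

The band-limited wave propagates at `v_eff`, which is systematically slower than the arithmetic mean of the layer velocities; this is the kind of smooth model an FWI result converges to.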

17. Accelerator based techniques for contraband detection

Science.gov (United States)

Vourvopoulos, George

1994-05-01

It has been shown that narcotics, explosives, and other contraband materials, contain various chemical elements such as H, C, N, O, P, S, and Cl in quantities and ratios that differentiate them from each other and from other innocuous substances. Neutrons and γ-rays have the ability to penetrate through various materials at large depths. They are thus able, in a non-intrusive way, to interrogate volumes ranging from suitcases to Sea-Land containers, and have the ability to image the object with an appreciable degree of reliability. Neutron induced reactions such as (n, γ), (n, n') (n, p) or proton induced γ-resonance absorption are some of the reactions currently investigated for the identification of the chemical elements mentioned above. Various DC and pulsed techniques are discussed and their advantages, characteristics, and current progress are shown. Areas where use of these methods is currently under evaluation are detection of hidden explosives, illicit drug interdiction, chemical war agents identification, nuclear waste assay, nuclear weapons destruction and others.

18. Numerical Analysis of Modeling Based on Improved Elman Neural Network

Directory of Open Access Journals (Sweden)

Shao Jie

2014-01-01

Full Text Available A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with a two-tone signal and broadband signals as input, have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.
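The distinguishing ingredient of the model, a hidden layer built from Chebyshev orthogonal basis functions, can be sketched as follows; for brevity the output weights are fit by least squares to a memoryless stand-in nonlinearity rather than trained recurrently on CDPA data:

```python
import numpy as np

x = np.linspace(-1, 1, 200)
target = np.tanh(2 * x)   # illustrative amplifier-like saturation curve

# Hidden layer: Chebyshev polynomials T_0..T_7 evaluated at the inputs,
# playing the role that sigmoid activations play in a basic Elman network.
H = np.polynomial.chebyshev.chebvander(x, 7)

# Output layer fit by linear least squares on the basis responses.
w, *_ = np.linalg.lstsq(H, target, rcond=None)
err = np.max(np.abs(H @ w - target))
print(f"max fit error with 8 Chebyshev units: {err:.2e}")
```

Because the basis is orthogonal on [-1, 1], adding units refines the fit systematically, which is the practical appeal over generic sigmoid hidden units.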

19. Assessment of damage localization based on spatial filters using numerical crack propagation models

International Nuclear Information System (INIS)

Deraemaeker, Arnaud

2011-01-01

This paper is concerned with vibration-based structural health monitoring, with a focus on non-model-based damage localization. The type of damage investigated is cracking of concrete structures due to the loss of prestress. In previous works, an automated method based on spatial filtering techniques applied to large dynamic strain sensor networks was proposed and tested using data from numerical simulations. In those simulations, simplified representations of cracks (such as a reduced Young's modulus) were used. While this gives the general trend for global properties such as eigenfrequencies, the change of more local features, such as strains, is not adequately represented; instead, crack propagation models should be used. In this study, a first attempt is made in this direction for concrete structures (a quasi-brittle material with softening laws) using crack-band models implemented in the commercial software DIANA. The strategy consists in performing a non-linear computation which leads to cracking of the concrete, followed by a dynamic analysis. The dynamic response is then used as the input to the previously designed damage localization system in order to assess its performance. The approach is illustrated on a simply supported beam modeled with 2D plane-stress elements.

20. Effects of Compressibility on the Performance of a Wave-Energy Conversion Device with an Impulse Turbine Using a Numerical Simulation Technique

Directory of Open Access Journals (Sweden)

A. Thakker

2003-01-01

Full Text Available This article presents work carried out to predict the behavior of a 0.6 m impulse turbine with fixed guide vanes as compared with that of a 0.6 hub-to-tip ratio turbine under real sea conditions. In order to predict the true performance of the actual oscillating water column (OWC), the numerical technique was fine-tuned by incorporating the compressibility effect. Water surface elevation versus time history was used as the input data for this purpose. The effect of compressibility inside the air chamber and the turbine's performance under unsteady and irregular flow conditions were analyzed numerically. Under quasi-steady assumptions, the unidirectional steady-flow experimental data were used to simulate the turbine's characteristics under irregular unsteady flow conditions. The results showed that the performance of this type of turbine is quite stable and that the efficiency of the air chamber and the mean conversion efficiency are reduced by around 8% and 5%, respectively, as a result of the compressibility inside the air chamber. The mean efficiencies of the OWC device and the impulse turbine were predicted for 1 month, based on the Irish wave climate, and it was found that the total time period of wave data used is one of the important factors in the simulation technique.

1. An Authentication Technique Based on Classification

Institute of Scientific and Technical Information of China (English)

李钢; 杨杰

2004-01-01

We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, the different kinds of modification are treated as classes, and a classification algorithm is used to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

2. Laser-based techniques for combustion diagnostics

Energy Technology Data Exchange (ETDEWEB)

Georgiev, N.

1997-04-01

Two-photon-induced Degenerate Four-Wave Mixing, DFWM, was applied for the first time to the detection of CO and NH{sub 3} molecules. Measurements were performed in a cell and in atmospheric-pressure flames. In the cell measurements, the dependence of the signal on the pressure and on the laser beam intensity was studied, and the possibility of simultaneous detection of NH{sub 3} and OH was investigated. Carbon monoxide and ammonia were also detected employing two-photon-induced Polarization Spectroscopy, PS. In the measurements performed in a cold gas flow, the dependence of the signal strength on the laser intensity and on the polarization of the pump beam was investigated. An approach to improve the spatial resolution of the Amplified Stimulated Emission (ASE) technique was developed. In this approach, two laser beams at different frequencies are crossed in the sample; if the sum of the frequencies of the two laser beams matches a two-photon resonance of the investigated species, only the molecules in the intersection volume are excited. NH{sub 3} molecules and C atoms were studied. The potential of using two-photon LIF for two-dimensional imaging of combustion species was investigated. Although LIF is species specific, several species can be detected simultaneously by utilizing spectral coincidences. Combining one- and two-photon processes, OH, NO, and O were detected simultaneously, as well as OH, NO, and NH{sub 3}. Collisional quenching is the major source of uncertainty in quantitative applications of LIF. A technique for two-dimensional, absolute species concentration measurements, circumventing the problems associated with collisional quenching, was developed. By applying simple mathematics to the ratio of two LIF signals generated from two counterpropagating laser beams, the absolute species concentration could be obtained. 41 refs

3. An analysis of supersonic flows with low-Reynolds number compressible two-equation turbulence models using LU finite volume implicit numerical techniques

Science.gov (United States)

Lee, J.

1994-01-01

A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for the analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore the limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than the models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow; therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model with a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required, since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
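The payoff of an LU-type implicit technique is that a factorization done once can be reused cheaply for each subsequent solve; a minimal sketch with a generic diagonally dominant stand-in matrix (not the actual flow Jacobian):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 50
# Diagonally dominant stand-in system matrix (assumed, purely illustrative).
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

lu, piv = lu_factor(A)        # O(n^3) LU factorization, performed once
x = lu_solve((lu, piv), b)    # O(n^2) back-substitution per right-hand side
print("residual:", np.linalg.norm(A @ x - b))
```

In an implicit time-marching scheme the same pattern applies per iteration: the expensive decomposition is amortized over many cheap forward/backward substitutions.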

4. Numerical simulation of CICC design based on optimization of ratio of copper to superconductor

International Nuclear Information System (INIS)

Jiang Huawei; Li Yuan; Yan Shuailing

2007-01-01

For cable-in-conduit conductor (CICC) structural design, a numerical simulation of the conductor configuration based on optimization of the copper-to-superconductor ratio is proposed. The simulation outcome is in agreement with the engineering design. (authors)

5. Advanced numerical technique for analysis of surface and bulk acoustic waves in resonators using periodic metal gratings

Science.gov (United States)

Naumenko, Natalya F.

2014-09-01

A numerical technique characterized by a unified approach for the analysis of different types of acoustic waves utilized in resonators in which a periodic metal grating is used for excitation and reflection of such waves is described. The combination of the Finite Element Method analysis of the electrode domain with the Spectral Domain Analysis (SDA) applied to the adjacent upper and lower semi-infinite regions, which may be multilayered and include air as a special case of a dielectric material, enables rigorous simulation of the admittance in resonators using surface acoustic waves, Love waves, plate modes including Lamb waves, Stonely waves, and other waves propagating along the interface between two media, and waves with transient structure between the mentioned types. The matrix formalism with improved convergence incorporated into SDA provides fast and robust simulation for multilayered structures with arbitrary thickness of each layer. The described technique is illustrated by a few examples of its application to various combinations of LiNbO3, isotropic silicon dioxide and silicon with a periodic array of Cu electrodes. The wave characteristics extracted from the admittance functions change continuously with the variation of the film and plate thicknesses over wide ranges, even when the wave nature changes. The transformation of the wave nature with the variation of the layer thicknesses is illustrated by diagrams and contour plots of the displacements calculated at resonant frequencies.

6. Huffman-based code compression techniques for embedded processors

KAUST Repository

Bonny, Mohamed Talal

2010-09-01

The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes up a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format of a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of the decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications, applying each technique to two major embedded processor architectures
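The decoding-table concern above stems from plain Huffman coding; a minimal sketch of Huffman coding over a skewed stream of instruction "patterns" (here simply bytes of synthetic data, without the paper's splitting and re-encoding steps):

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Return {symbol: bitstring} built from a frequency table."""
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Synthetic, heavily skewed pattern stream (assumed, not real instruction data).
stream = b'\x01' * 60 + b'\x02' * 25 + b'\x03' * 10 + b'\x04' * 5
code = huffman_code(Counter(stream))
bits = sum(len(code[s]) for s in stream)
print(f"compressed {bits} bits vs {8 * len(stream)} bits "
      f"({bits / (8 * len(stream)):.0%} ratio)")
```

Frequent patterns get short codes (1 bit here for the dominant byte), which is exactly why splitting instructions into frequently recurring patterns before coding pays off.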

7. Scour Monitoring System for Subsea Pipeline Based on Active Thermometry: Numerical and Experimental Studies

Directory of Open Access Journals (Sweden)

Jun Du

2013-01-01

Full Text Available A scour monitoring system for subsea pipelines based on active thermometry is proposed in this paper. The temperature reading of the proposed system is based on a distributed Brillouin optical fiber sensing technique. A thermal cable acts as the main component of the system; it consists of a heating belt, armored optical fibers and heat-shrinkable tubes running parallel to the pipeline. The scour-induced free span can be monitored through the different heat transfer behaviors of the in-water and in-sediment scenarios during the heating and cooling processes. Two sets of experiments, exposing different lengths of the upper surface of the pipeline to water and creating free spans of various lengths, were carried out in the laboratory. In both cases, the scour condition was immediately detected by the proposed monitoring system, which confirmed that the system is robust and very sensitive. A numerical study of the method was also carried out using the finite element method (FEM) with ANSYS, resulting in reasonable agreement with the test data. This brand-new system provides a promising, low-cost, highly precise and flexible approach for scour monitoring of subsea pipelines.

8. Numerical simulation of residual stress in laser based additive manufacturing process

Science.gov (United States)

2018-03-01

Minimizing the residual stress build-up in metal-based additive manufacturing plays a pivotal role in selecting a particular material and technique for making an industrial part. In beam-based additive manufacturing, although a great deal of effort has been made to minimize the residual stresses, it is still unclear how to do so by simply optimizing processing parameters such as beam size, beam power, and scan speed. Among the different types of additive manufacturing processes, the Direct Metal Laser Sintering (DMLS) process uses a high-power laser to melt and sinter layers of metal powder. The rapid solidification and heat transfer on the powder bed lead to a high cooling rate, which causes the build-up of residual stresses that affect the mechanical properties of the built parts. In the present work, the authors develop a numerical thermo-mechanical model for the prediction of residual stress in AlSi10Mg build samples using the finite element method. The transient temperature distribution in the powder bed was assessed using a coupled thermal-structural model, and the residual stresses were subsequently estimated for varying laser power. From the simulation results, it was found that the melt-pool dimensions and the magnitude of the residual stresses in the built part increase with increasing laser power.

9. Numerical Calculation of Transport Based on the Drift-Kinetic Equation for Plasmas in General Toroidal Magnetic Geometry: Numerical Methods

International Nuclear Information System (INIS)

Reynolds, J. M.; Lopez-Bruna, D.

2009-01-01

In this report we continue the description of a newly developed numerical method to solve the drift-kinetic equation for ions and electrons in toroidal plasmas. Several numerical aspects, already outlined in a previous report [Informes Tecnicos Ciemat 1165, mayo 2009], will now be treated in more detail. Aside from discussing the method in the context of other existing codes, various aspects will be explained from the viewpoint of numerical methods: the way convection equations are solved, the adopted boundary conditions, the real-space meshing procedures along with new software developed to build them, and some additional questions related to the parallelization and the numerical integration. (Author) 16 refs
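One of the numerical aspects mentioned, the solution of convection equations, can be illustrated by the simplest monotone building block, a first-order upwind step for 1-D advection with periodic boundaries (the scheme choice here is ours, not necessarily the report's):

```python
import numpy as np

# Solve u_t + a u_x = 0 on a periodic grid with first-order upwinding.
nx, a, L = 200, 1.0, 1.0
dx = L / nx
dt = 0.5 * dx / a                     # CFL number 0.5, well inside stability
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.5)**2)       # initial Gaussian pulse

for _ in range(200):
    # For a > 0 the upwind direction is to the left: backward difference.
    u = u - a * dt / dx * (u - np.roll(u, 1))

print("min/max after transport:", u.min(), u.max())
```

Being monotone, the scheme conserves the total "mass" exactly on a periodic grid and creates no new extrema, at the price of numerical diffusion (the pulse peak decays).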

10. Experimental evaluation of a quasi-modal parameter based rotor foundation identification technique

Science.gov (United States)

Yu, Minli; Liu, Jike; Feng, Ningsheng; Hahn, Eric J.

2017-12-01

Correct modelling of the foundation of rotating machinery is an invaluable asset in model-based rotor dynamic study. One attractive approach for this purpose is to identify the relevant modal parameters of an equivalent foundation using motion measurements of the rotor and foundation at the bearing supports. Previous research showed that a complex quasi-modal parameter based system identification technique could be feasible for this purpose; however, the technique was only validated by identifying simple structures under harmonic excitation. In this paper, the identification technique is extended and evaluated by identifying the foundation of a numerical rotor-bearing-foundation system and of an experimental rotor rig, respectively. In the identification of a rotor foundation with multiple bearing supports, all application points of the excitation forces transmitted through the bearings need to be included; however, the assumed vibration modes far outside the rotor operating speed cannot, or need not, be identified. The extended identification technique allows one to correctly identify an equivalent foundation with fewer modes than the assumed number of degrees of freedom, essentially by generalising the technique to handle rectangular complex modal matrices. The extended technique proved robust in numerical and experimental validation and is therefore likely to be applicable in the field.

11. Array-based techniques for fingerprinting medicinal herbs

Directory of Open Access Journals (Sweden)

Xue Charlie

2011-05-01

Full Text Available Abstract Poor quality control of medicinal herbs has led to instances of toxicity, poisoning and even deaths. The fundamental step in quality control of herbal medicine is accurate identification of herbs. Array-based techniques have recently been adapted to authenticate or identify herbal plants. This article reviews the current array-based techniques, e.g. oligonucleotide microarrays, gene-based probe microarrays, Suppression Subtractive Hybridization (SSH)-based arrays, Diversity Array Technology (DArT) and Subtracted Diversity Array (SDA). We further compare these techniques according to important parameters such as markers, polymorphism rates, restriction enzymes and sample type. The applicability of the array-based methods for fingerprinting depends on the availability of genomic and genetic information for the species to be fingerprinted. For species with little genome sequence information but high polymorphism rates, SDA techniques are particularly recommended because they require less labour and lower material cost.

12. Active Vibration damping of Smart composite beams based on system identification technique

Science.gov (United States)

Bendine, Kouider; Satla, Zouaoui; Boukhoulda, Farouk Benallel; Nouari, Mohammed

2018-03-01

In the present paper, the active vibration control of a composite beam using a piezoelectric actuator is investigated. The state-space equation is determined using a system identification technique based on the structure's input-output response, provided by the ANSYS APDL finite element package. A Linear Quadratic Gaussian (LQG) control law is designed and integrated into ANSYS APDL to perform closed-loop simulations. Numerical examples for different types of excitation loads are presented to test the efficiency and the accuracy of the proposed model.
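
As a rough illustration of the linear-quadratic design step, the sketch below iterates the discrete algebraic Riccati equation to a steady-state feedback gain for a toy two-state model; the matrices are assumed for illustration and are unrelated to the paper's identified beam model.

```python
import numpy as np

# Toy discrete-time double-integrator-like model (hypothetical)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # input weighting

# Fixed-point iteration of the discrete algebraic Riccati equation
P = np.eye(2)
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P = Q + A.T @ P @ (A - B @ K)

# All closed-loop eigenvalues should lie inside the unit circle
cl_eigs = np.abs(np.linalg.eigvals(A - B @ K))
```

The same gain computation is what a packaged LQR/LQG routine performs internally; here it is written out so the Riccati recursion is visible.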

13. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

Science.gov (United States)

Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

2017-08-01

Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies wherein the search spaces comprise both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, caused by arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

14. Discussion on numerical simulation techniques for patterns of water vapor rise and droplet deposition at NPP cooling tower

International Nuclear Information System (INIS)

Guo Dongpeng; Yao Rentai

2010-01-01

Based on the working principle of the cooling tower, the advantages and disadvantages of the numerical simulation models that predict the rise and droplet deposition pattern of cooling tower water vapor, such as ORFAD, KUMULUS, ISCST:A, ANL/UI and CFD, are analyzed and compared. The results show that the CFD model, which uses three-dimensional Reynolds-averaged fluid flow equations to predict the rise and droplet deposition pattern of cooling tower water vapor, is currently the better model. The other models do not consider the deviation of the plume trajectory or the speed change during plume rise, and they cannot be used to predict particle rise and droplet deposition when larger particles are present or large buildings lie in the direction of the cooling tower. (authors)

15. Improvement of digital image watermarking techniques based on FPGA implementation

International Nuclear Information System (INIS)

2006-01-01

Digital watermarking proves the ownership of a piece of digital data by marking the considered data invisibly or visibly. This can be used to protect several types of multimedia objects such as audio, text, image and video. This thesis demonstrates the different types of watermarking techniques, such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT), and their characteristics. It then classifies these techniques, declaring their advantages and disadvantages. An improved technique with distinguished features, such as peak signal-to-noise ratio (PSNR) and similarity ratio (SR), is introduced. The modified technique has been compared with the other techniques by measuring their robustness against different attacks. Finally, a field programmable gate array (FPGA)-based implementation and comparison for the proposed watermarking technique are presented and discussed
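
A minimal sketch of DCT-domain watermarking and the PSNR metric mentioned above, using a hand-rolled orthonormal DCT on a single 8x8 block; the coefficient position, the strength `alpha` and the sign-based extraction are illustrative choices, not the thesis's actual scheme.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II basis matrix (row k = frequency k)
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0, :] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in 8x8 image block
C = dct_matrix(8)
coeffs = C @ img @ C.T                              # forward 2-D DCT

# Embed one bit by forcing the sign of a mid-frequency coefficient
bit, alpha = 1, 20.0
coeffs[3, 4] = alpha if bit else -alpha
marked = C.T @ coeffs @ C                           # inverse 2-D DCT

# Extraction: the bit is the sign of that coefficient in the marked block
recovered = int((C @ marked @ C.T)[3, 4] > 0)

# PSNR of the marked block against the original
mse = np.mean((img - marked) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
```

Because the DCT is orthonormal, the distortion energy equals the change in the single embedded coefficient, which keeps the PSNR high while the bit survives the round trip.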

16. Synchronization of uncertain time-varying network based on sliding mode control technique

Science.gov (United States)

Lü, Ling; Li, Chengren; Bai, Suyuan; Li, Gang; Rong, Tingting; Gao, Yan; Yan, Zhe

2017-09-01

We study synchronization of an uncertain time-varying network based on the sliding mode control technique. The sliding mode control technique is first modified so that it can be applied to network synchronization. Further, by choosing an appropriate sliding surface, the identification law for the uncertain parameters, the adaptive law for the time-varying coupling matrix elements and the control input of the network are designed, ensuring that the uncertain time-varying network synchronizes effectively with the synchronization target. Finally, we perform numerical simulations to demonstrate the effectiveness of the proposed results.
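
The flavor of such a design can be sketched on a scalar toy system with one unknown parameter: a sliding surface, a switching control and a parameter identification law, integrated with Euler steps. Everything below (system, gains, target trajectory) is assumed for illustration and is far simpler than the paper's network setting.

```python
import numpy as np

dt, T = 1e-3, 10.0
theta = 2.0          # true (unknown) plant parameter
theta_hat = 0.0      # adaptive estimate
k, gamma = 5.0, 5.0  # switching gain, adaptation rate
x = 0.5              # plant state

for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot = np.sin(t), np.cos(t)        # synchronization target
    s = x - xd                               # sliding surface
    u = xd_dot - theta_hat * x - k * np.sign(s)   # sliding mode control
    x += dt * (theta * x + u)                # plant: x' = theta*x + u
    theta_hat += dt * gamma * x * s          # identification law

track_err = abs(x - np.sin(T))               # tracking error after 10 s
```

The Lyapunov function V = s^2/2 + (theta - theta_hat)^2/(2*gamma) decreases along trajectories, so the state reaches the surface in finite time and thereafter chatters within a band set by the Euler step size.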

17. Rapid analysis of steels using laser-based techniques

International Nuclear Information System (INIS)

Cremers, D.A.; Archuleta, F.L.; Dilworth, H.C.

1985-01-01

Based on the data obtained by this study, we conclude that laser-based techniques can be used to provide at least semi-quantitative information about the elemental composition of molten steel. Of the two techniques investigated here, the Sample-Only method appears preferable to the LIBS (laser-induced breakdown spectroscopy) method because of its superior analytical performance. In addition, the Sample-Only method would probably be easier to incorporate into a steel plant environment. However, before either technique can be applied to steel monitoring, additional research is needed

18. SLIM-MAUD - a computer based technique for human reliability assessment

International Nuclear Information System (INIS)

Embrey, D.E.

1985-01-01

The Success Likelihood Index Methodology (SLIM) is a widely applicable technique which can be used to assess human error probabilities in both proceduralized and cognitive tasks (i.e. those involving decision making, problem solving, etc.). It assumes that expert assessors are able to evaluate the relative importance (or weights) of different factors called Performance Shaping Factors (PSFs), in determining the likelihood of error for the situations being assessed. Typical PSFs are the extent to which good procedures are available, operators are adequately trained, the man-machine interface is well designed, etc. If numerical ratings are made of the PSFs for the specific tasks being evaluated, these can be combined with the weights to give a numerical index, called the Success Likelihood Index (SLI). The SLI represents, in numerical form, the overall assessment of the experts of the likelihood of task success. The SLI can be subsequently transformed to a corresponding human error probability (HEP) estimate. The latest form of the SLIM technique is implemented using a microcomputer based system called MAUD (Multi-Attribute Utility Decomposition), the resulting technique being called SLIM-MAUD. A detailed description of the SLIM-MAUD technique and case studies of applications are available. An illustrative example of the application of SLIM-MAUD in probabilistic risk assessment is given
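
The SLI computation and its transformation to an HEP can be written out directly; the PSF names, weights, ratings and calibration constants below are invented for illustration, using the usual log-linear calibration log10(HEP) = a*SLI + b anchored in practice by tasks of known error probability.

```python
# Illustrative Performance Shaping Factors (weights sum to 1)
psf_weights = {"procedures": 0.40, "training": 0.35, "interface": 0.25}
psf_ratings = {"procedures": 7.0, "training": 5.0, "interface": 8.0}  # 1-9 scale

# Success Likelihood Index: weighted sum of the PSF ratings
sli = sum(psf_weights[p] * psf_ratings[p] for p in psf_weights)

# Log-linear calibration to a human error probability; (a, b) would be
# fixed from at least two tasks with known HEPs (values assumed here)
a, b = -0.5, 0.5
hep = 10 ** (a * sli + b)
```

With these assumed numbers the SLI is 6.55 and the calibrated HEP is about 10^-2.8; the point of the calibration is that a higher SLI (better conditions) maps to a lower error probability.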

19. Numerical Simulation of a Grinding Process Model for the Spatial Work-pieces: Development of Modeling Techniques

Directory of Open Access Journals (Sweden)

S. A. Voronov

2015-01-01

Full Text Available The article presents a literature review on simulation of grinding processes. It takes into consideration the statistical, energy-based, and imitation approaches to simulation of grinding forces. Main stages of interaction between abrasive grains and the machined surface are shown. The article describes the main approaches to geometric modeling of the new surfaces formed when grinding. Approaches to numerical modeling of chip formation and the pile-up effect are reviewed. Advantages and disadvantages of modeling grain-to-surface interaction by means of the finite element method and the molecular dynamics method are considered. The article points out that it is necessary to take into consideration the system dynamics and its effect on the finished surface. From the literature review, a structure is proposed for a complex imitation model of grinding process dynamics for flexible work-pieces with spatial surface geometry. The proposed model of spatial grinding includes a model of work-piece dynamics, a model of grinding wheel dynamics, and a phenomenological model of grinding forces based on a 3D geometry modeling algorithm. The model gives the following results for the spatial grinding process: vibration of the machined part and grinding wheel, machined surface geometry, static deflection of the surface, and grinding forces under various cutting conditions.

20. Investigation of the Rock Fragmentation Process by a Single TBM Cutter Using a Voronoi Element-Based Numerical Manifold Method

Science.gov (United States)

Liu, Quansheng; Jiang, Yalong; Wu, Zhijun; Xu, Xiangyu; Liu, Qi

2018-04-01

In this study, a two-dimensional Voronoi element-based numerical manifold method (VE-NMM) is developed to analyze the granite fragmentation process under a single tunnel boring machine (TBM) cutter at different confining stresses. A Voronoi tessellation technique is adopted to generate a polygonal grain assemblage approximating the microstructure of a granite sample from the Gubei colliery of the Huainan mining area in China. A modified interface contact model with cohesion and tensile strength is embedded into the numerical manifold method (NMM) to represent the interactions between the rock grains. Numerical uniaxial compression and Brazilian splitting tests are first conducted to calibrate and validate the VE-NMM models against laboratory experiment results using a trial-and-error method. On this basis, numerical simulations of rock fragmentation by a single TBM cutter are conducted. The simulated crack initiation and propagation process, as well as the indentation load-penetration depth behavior, agree well with the laboratory indentation test results. The influence of confining stress on rock fragmentation is also investigated. Simulation results show that radial tensile cracks are more likely to be generated under low confining stress, eventually coalescing into a major fracture along the loading axis. With increasing confining stress, however, more side cracks initiate and coalesce, resulting in the formation of rock chips at the upper surface of the model. The peak indentation load also increases with confining stress, indicating that a higher thrust force is usually needed during TBM boring in deep tunnels.

1. Numerical solution of the unsteady diffusion-convection-reaction equation based on improved spectral Galerkin method

Science.gov (United States)

Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye

2018-04-01

The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it gives explicit eigenvalues and eigenvectors, based on the time-space separation method and boundary-condition analysis. With the help of Fourier series and Galerkin truncation, we obtain the finite-dimensional ordinary differential equations, which facilitate system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method. It is shown that the proposed method is effective.
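
For a concrete instance of the eigenvalue-based reduction, consider the 1-D diffusion-reaction special case on (0,1) with homogeneous Dirichlet boundaries, where the sine modes diagonalize the spatial operator and each modal ODE integrates in closed form. The coefficients and initial condition below are illustrative, not taken from the paper.

```python
import numpy as np

# u_t = D u_xx - r u on (0,1), u(0,t) = u(1,t) = 0 (illustrative D, r)
D, r, N, T = 0.1, 1.0, 16, 0.5
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]

u0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # initial condition
k = np.arange(1, N + 1)
modes = np.sin(np.outer(k, np.pi * x))                  # sine eigenfunctions

# Galerkin projection: a_k(0) = 2 * <u0, sin(k pi x)> (quadrature on the grid)
coef = 2.0 * (modes @ u0) * dx

# Explicit eigenvalues of the spatial operator; each modal ODE
# a_k' = lam_k a_k has the closed-form solution a_k(T) = a_k(0) exp(lam_k T)
lam = -(D * (k * np.pi) ** 2 + r)
uT = (coef * np.exp(lam * T)) @ modes

# Exact PDE solution for this two-mode initial condition
exact = (np.exp(lam[0] * T) * np.sin(np.pi * x)
         + 0.3 * np.exp(lam[2] * T) * np.sin(3 * np.pi * x))
err = np.max(np.abs(uT - exact))
```

A convection term would couple the sine modes and make the truncated ODE system non-diagonal, which is where the Galerkin truncation of the paper earns its keep; the diffusion-reaction case keeps the eigenstructure fully explicit.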

2. A natural approach to convey numerical digits using hand activity recognition based on hand shape features

Science.gov (United States)

Chidananda, H.; Reddy, T. Hanumantha

2017-06-01

This paper presents a natural representation of numerical digits using hand activity analysis, based on the number of fingers outstretched for each digit in a sequence extracted from a video. The analysis determines a set of six features from a hand image. The most important features used from each frame are the first fingertip from the top, the palm line, the palm center, and the valley points between the fingers that lie above the palm line. With this approach a user can naturally convey any number of digits, each ranging from 0 to 9, using the right hand, the left hand or both. The hand(s) used to convey digits can be recognized accurately from the valley points, and from this recognition whether the user is right- or left-handed can also be inferred. In this work, the hand(s) and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. The hand(s) are then analyzed to recognize the activity that represents a series of numerical digits in the video. The work uses a pixel-continuity algorithm in a 2D coordinate geometry system and does not rely on calculus, contours, convex hulls or datasets.
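
The YCbCr skin-detection step can be sketched as follows; the conversion coefficients are the standard BT.601 ones, and the Cb/Cr thresholds are common literature values, not necessarily those used by the authors.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask from an RGB float array (BT.601 YCbCr conversion)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Typical literature thresholds for skin chrominance
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

# A typical skin tone vs a pure blue pixel
pix = np.array([[[200.0, 150.0, 120.0]],
                [[0.0, 0.0, 255.0]]])
mask = skin_mask(pix)
```

Luminance (Y) is deliberately ignored so the mask is reasonably robust to lighting changes, which is the usual motivation for chrominance-only skin detection.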

3. Power system stabilizers based on modern control techniques

Energy Technology Data Exchange (ETDEWEB)

Malik, O P; Chen, G P; Zhang, Y; El-Metwally, K [Calgary Univ., AB (Canada). Dept. of Electrical and Computer Engineering

1994-12-31

Developments in digital technology have made it feasible to develop and implement improved controllers based on sophisticated control techniques. Power system stabilizers based on adaptive control, fuzzy logic and artificial neural networks are being developed. Each of these control techniques possesses unique features and strengths. In this paper, the relative performance of power system stabilizers based on adaptive control, fuzzy logic and neural networks, both in simulation studies and in real-time tests on a physical model of a power system, is presented and compared to that of a fixed-parameter conventional power system stabilizer. (author) 16 refs., 45 figs., 3 tabs.

4. Simulation-based optimization parametric optimization techniques and reinforcement learning

CERN Document Server

Gosavi, Abhijit

2003-01-01

Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
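
In the spirit of the book's reinforcement-learning part, a minimal tabular Q-learning loop on a hypothetical two-state chain MDP (all parameters invented for illustration) looks like:

```python
import random

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1   # step size, discount, exploration rate

def step(s, a):
    # Toy dynamics: action 1 moves to state 1, which pays reward 1;
    # action 0 moves to state 0, which pays nothing.
    s2 = 1 if a == 1 else 0
    return s2, (1.0 if s2 == 1 else 0.0)

s = 0
for _ in range(5000):
    # epsilon-greedy action selection from the current Q-table
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda i: Q[s][i])
    s2, rwd = step(s, a)
    # Q-learning update toward the bootstrapped target
    Q[s][a] += alpha * (rwd + gamma * max(Q[s2]) - Q[s][a])
    s = s2
```

After training, the greedy policy prefers action 1 in both states, matching the optimal values Q*(s,1) = 10 and Q*(s,0) = 9 for this discount factor.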

5. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

OpenAIRE

Hetangi D. Mehta*, Daxa Vekariya, Pratixa Badelia

2017-01-01

Image segmentation is the classification of an image into different groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done on different types of clustering methods used for image segmentation. Also a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...
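
Many of the reviewed methods build on k-means-style clustering; a minimal sketch on synthetic 1-D pixel intensities (two well-separated grey-level populations, assumed data) is:

```python
import numpy as np

# Synthetic image intensities: a dark and a bright population
rng = np.random.default_rng(1)
dark = rng.normal(50, 5, 500)
bright = rng.normal(200, 5, 500)
pixels = np.concatenate([dark, bright])

# Lloyd's algorithm with k = 2: assign, then recompute centers
centers = np.array([0.0, 255.0])
for _ in range(20):
    labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([pixels[labels == j].mean() for j in range(2)])
```

The same assign/update loop generalizes to color or feature vectors by replacing the absolute difference with a Euclidean distance, which is how it is typically used for image segmentation.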

6. Optimal design of a composite space shield based on numerical simulations

International Nuclear Information System (INIS)

Son, Byung Jin; Yoo, Jeong Hoon; Lee, Min Hyung

2015-01-01

In this study, an optimal design of a stuffed Whipple shield is proposed by using numerical simulations and a new penetration criterion. The target model was selected based on the shield model used in the Columbus module of the International Space Station. Because experimental results can be obtained only in the low-velocity region below 7 km/s, the ballistic limit curve (BLC) in the high-velocity region above 7 km/s must be derived by numerical simulation. AUTODYN-2D, the commercial hydro-code package, was used for the nonlinear transient analysis of the hypervelocity impact. The smoothed particle hydrodynamics (SPH) method was applied to the projectile and bumper modeling to represent the debris cloud generated after the impact. The numerical simulation model and the selected material properties were validated through a quantitative comparison between numerical and experimental results. A new criterion to determine whether penetration occurs is proposed from kinetic energy analysis by numerical simulation in the velocity region over 7 km/s. A parameter optimization was performed to improve the protection ability at a specific condition through the design of experiments (DOE) method and response surface methodology (RSM). The performance of the proposed optimal design was numerically verified.
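
The RSM step can be sketched as an ordinary least-squares fit of a full second-order model to sampled input-response data; the synthetic response below has known coefficients so the fit can be checked, and bears no relation to the actual shield data.

```python
import numpy as np

# Synthetic design points in coded units (two inputs, e.g. two shield
# parameters), with a known quadratic response and no noise
rng = np.random.default_rng(2)
v = rng.uniform(-1, 1, 40)
t = rng.uniform(-1, 1, 40)
y = 1.0 + 2.0 * v - 1.5 * t + 0.5 * v * t + 3.0 * v**2 + 0.8 * t**2

# Design matrix of the full second-order (response surface) model
X = np.column_stack([np.ones_like(v), v, t, v * t, v**2, t**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Once the surface coefficients are known, the optimum of the fitted quadratic can be located analytically or by a grid search, which is the role RSM plays in the optimization loop.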

7. An Image Registration Based Technique for Noninvasive Vascular Elastography

OpenAIRE

2018-01-01

Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate the tissue elasticity by measuring the displacement of the Carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measure the strain exerted in the wall vessel by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in th...

8. Memory Based Machine Intelligence Techniques in VLSI hardware

OpenAIRE

James, Alex Pappachen

2012-01-01

We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

9. The Research of Histogram Enhancement Technique Based on Matlab Software

Directory of Open Access Journals (Sweden)

Li Kai

2014-08-01

Full Text Available Histogram enhancement has been widely applied as a typical technique in digital image processing. Based on Matlab software, the paper applies the two approaches of histogram equalization and histogram specification to darker images, using partial equalization and histogram mapping to transform the original histograms and thereby enhance the image information. The results show that both techniques can significantly improve image quality and enhance image features.
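
The equalization mapping itself is just the scaled cumulative histogram; a numpy sketch on a synthetic dark, low-contrast image (the paper works in Matlab) is:

```python
import numpy as np

# Synthetic dark image: grey levels squeezed into [40, 90)
rng = np.random.default_rng(3)
img = rng.integers(40, 90, (64, 64)).astype(np.uint8)

# Histogram equalization: map each grey level through the scaled CDF
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum() / hist.sum()
equalized = np.round(255 * cdf[img]).astype(np.uint8)
```

Because the CDF is monotone, the pixel ordering is preserved while the occupied grey levels are stretched across the full [0, 255] range, which is exactly the contrast gain reported for dark images.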

10. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

Science.gov (United States)

Leube, P.; Nowak, W.; Sanchez-Vila, X.

2013-12-01

High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of
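
The temporal-moment matching can be sketched on a synthetic breakthrough curve: the zeroth moment carries the mass, and the normalized first and second central moments carry the mean arrival time and spread that the MRMT model is fitted to. The curve below is an arbitrary gamma-shaped stand-in, not FPM data.

```python
import numpy as np

# Stand-in breakthrough (arrival-time) curve c(t) = t * exp(-t/2),
# a Gamma(2, 2) shape with known moments: mass 4, mean 4, variance 8
t = np.linspace(0.0, 50.0, 2001)
c = t * np.exp(-t / 2.0)
dt = t[1] - t[0]

m0 = np.sum(c) * dt                                   # zeroth moment (mass)
mean_arrival = np.sum(t * c) * dt / m0                # normalized first moment
variance = np.sum((t - mean_arrival) ** 2 * c) * dt / m0   # second central moment
```

In the upscaling workflow these block-wise moments of particle arrival times are what the coarse-scale MRMT parameters are calibrated against, avoiding a full fine-scale transport solve.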

11. Numerical Problems and Agent-Based Models for a Mass Transfer Course

Science.gov (United States)

Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

2009-01-01

Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

12. Simulation of Supersonic Base Flows: Numerical Investigations Using DNS, LES, and URANS

Science.gov (United States)

2006-10-01

global instabilities were found for a two-dimensional bluff body with a blunt base by Hannemann & Oertel (1989). Oertel (1990) found that the...

13. Numerical Analysis of an All-optical Logic XOR gate based on an active MZ interferometer

DEFF Research Database (Denmark)

Nielsen, Mads Lønstrup; Mørk, Jesper; Fjelde, T.

2002-01-01

are investigated numerically for a Mach-Zehnder interferometer (MZI) based XOR gate. For bit-rates up to 40 Gb/s, the synchronization tolerance of a MZI XOR gate is determined by the pulse width for RZ format. For the NRZ format, the tolerance decreases as the rise/fall-time approaches the timeslot. The gate...

14. Structure of unilamellar vesicles: Numerical analysis based on small-angle neutron scattering data

International Nuclear Information System (INIS)

Zemlyanaya, E. V.; Kiselev, M. A.; Zbytovska, J.; Almasy, L.; Aswal, V. K.; Strunz, P.; Wartewig, S.; Neubert, R.

2006-01-01

The structure of polydispersed populations of unilamellar vesicles is studied by small-angle neutron scattering for three types of lipid systems, namely, single-, two- and four-component vesicular systems. Results of the numerical analysis based on the separated-form-factor model are reported

15. Laser-based direct-write techniques for cell printing

Energy Technology Data Exchange (ETDEWEB)

Schiele, Nathan R; Corr, David T [Biomedical Engineering Department, Rensselaer Polytechnic Institute, Troy, NY (United States); Huang Yong [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Raof, Nurazhani Abdul; Xie Yubing [College of Nanoscale Science and Engineering, University at Albany, SUNY, Albany, NY (United States); Chrisey, Douglas B, E-mail: schien@rpi.ed, E-mail: chrisd@rpi.ed [Material Science and Engineering Department, Rensselaer Polytechnic Institute, Troy, NY (United States)

2010-09-15

Fabrication of cellular constructs with spatial control of cell location (±5 μm) is essential to the advancement of a wide range of applications including tissue engineering, stem cell and cancer research. Precise cell placement, especially of multiple cell types in co- or multi-cultures and in three dimensions, can enable research possibilities otherwise impossible, such as the cell-by-cell assembly of complex cellular constructs. Laser-based direct writing, a printing technique first utilized in electronics applications, has been adapted to transfer living cells and other biological materials (e.g., enzymes, proteins and bioceramics). Many different cell types have been printed using laser-based direct writing, and this technique offers significant improvements when compared to conventional cell patterning techniques. The predominance of work to date has not been in application of the technique, but rather focused on demonstrating the ability of direct writing to pattern living cells, in a spatially precise manner, while maintaining cellular viability. This paper reviews laser-based additive direct-write techniques for cell printing, and the various cell types successfully laser direct-written that have applications in tissue engineering, stem cell and cancer research are highlighted. A particular focus is paid to process dynamics modeling and process-induced cell injury during laser-based cell direct writing. (topical review)

16. Laser-based direct-write techniques for cell printing

International Nuclear Information System (INIS)

Schiele, Nathan R; Corr, David T; Huang Yong; Raof, Nurazhani Abdul; Xie Yubing; Chrisey, Douglas B

2010-01-01

Fabrication of cellular constructs with spatial control of cell location (±5 μm) is essential to the advancement of a wide range of applications including tissue engineering, stem cell and cancer research. Precise cell placement, especially of multiple cell types in co- or multi-cultures and in three dimensions, can enable research possibilities otherwise impossible, such as the cell-by-cell assembly of complex cellular constructs. Laser-based direct writing, a printing technique first utilized in electronics applications, has been adapted to transfer living cells and other biological materials (e.g., enzymes, proteins and bioceramics). Many different cell types have been printed using laser-based direct writing, and this technique offers significant improvements when compared to conventional cell patterning techniques. The predominance of work to date has not been in application of the technique, but rather focused on demonstrating the ability of direct writing to pattern living cells, in a spatially precise manner, while maintaining cellular viability. This paper reviews laser-based additive direct-write techniques for cell printing, and the various cell types successfully laser direct-written that have applications in tissue engineering, stem cell and cancer research are highlighted. A particular focus is paid to process dynamics modeling and process-induced cell injury during laser-based cell direct writing. (topical review)

17. Advanced numerical simulation based on a non-local micromorphic model for metal forming processes

Directory of Open Access Journals (Sweden)

Diamantopoulou Evangelia

2016-01-01

Full Text Available An advanced numerical methodology is developed for metal forming simulation based on thermodynamically-consistent nonlocal constitutive equations accounting for various fully coupled mechanical phenomena under finite strain in the framework of micromorphic continua. The numerical implementation into ABAQUS/Explicit is made for 2D quadrangular elements via the VUEL user subroutine. Simple examples involving a damaged area are presented in order to show the ability of the proposed methodology to describe the independence of the solution from the space discretization.

18. A Novel Machine Learning Strategy Based on Two-Dimensional Numerical Models in Financial Engineering

Directory of Open Access Journals (Sweden)

Qingzhen Xu

2013-01-01

Full Text Available Machine learning is the most commonly used technique to address larger and more complex tasks by analyzing the most relevant information already present in databases. In order to better predict the future trend of the index, this paper proposes a two-dimensional numerical model for machine learning to simulate a major U.S. stock market index, and uses a nonlinear implicit finite-difference method to find numerical solutions of the two-dimensional simulation model. The proposed machine learning method uses partial differential equations to predict the stock market and can be extensively used to accelerate large-scale data processing on the history database. The experimental results show that the proposed algorithm reduces the prediction error and improves forecasting precision.
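
As a stand-alone illustration of an implicit finite-difference solve (here backward Euler on the linear 1-D heat equation, much simpler than the paper's nonlinear 2-D model), the scheme can be verified against the exact decay of a sine mode:

```python
import numpy as np

# Backward-Euler FD for u_t = D u_xx on (0,1), u = 0 at both boundaries
N, dt, steps, D = 101, 1e-3, 200, 1.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                    # initial condition

# Implicit system: (I - dt*D*L) u_new = u_old, with L the tridiagonal Laplacian
r = D * dt / dx**2
A = (np.diag((1 + 2 * r) * np.ones(N))
     + np.diag(-r * np.ones(N - 1), 1)
     + np.diag(-r * np.ones(N - 1), -1))
A[0, :], A[-1, :] = 0, 0
A[0, 0] = A[-1, -1] = 1                  # Dirichlet boundary rows

for _ in range(steps):
    u = np.linalg.solve(A, u)            # one implicit (backward Euler) step
    u[0] = u[-1] = 0

# Exact solution: the first sine mode decays as exp(-pi^2 * D * t)
exact = np.exp(-np.pi**2 * D * steps * dt) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

The implicit step requires a linear solve per time step but is unconditionally stable, which is the usual reason implicit schemes are chosen for stiff PDE models like the one in the paper.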

19. Solution of AntiSeepage for Mengxi River Based on Numerical Simulation of Unsaturated Seepage

Science.gov (United States)

Ji, Youjun; Zhang, Linzhi; Yue, Jiannan

2014-01-01

Lessening the leakage of surface water can reduce the waste of water resources and groundwater pollution. To solve the problem that the Mengxi River could not store water enduringly, geological investigation, theoretical analysis, experimental research, and numerical simulation analysis were carried out. Firstly, the seepage mathematical model was established based on unsaturated seepage theory; secondly, experimental equipment for testing the hydraulic conductivity of unsaturated soil was developed to obtain the two-phase flow curve. The numerical simulation of leakage under natural conditions confirms the previous inference and the leakage mechanism of the river. At last, the seepage control capacities of different impervious materials were compared by numerical simulations, and the impervious material was selected according to the engineering actuality. The impervious measure in this paper has been proved effective by hydrogeological research today. PMID:24707199
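
The unsaturated characteristics feeding such a seepage model are commonly parameterized with the van Genuchten retention curve; the sketch below uses generic parameter values, not the Mengxi River soils.

```python
# van Genuchten water-retention curve: water content as a function of
# suction head h (m). Parameter values are generic illustrations.
def van_genuchten(h, theta_r=0.05, theta_s=0.40, alpha=1.5, n=2.0):
    """Volumetric water content at suction head h >= 0."""
    m = 1 - 1 / n
    se = (1 + (alpha * h) ** n) ** (-m)      # effective saturation in (0, 1]
    return theta_r + (theta_s - theta_r) * se

# Water content at saturation (h = 0) down to strong suction
theta = [van_genuchten(h) for h in (0.0, 0.5, 2.0, 10.0)]
```

At zero suction the curve returns the saturated content and it decays monotonically toward the residual content, which is the behavior the measured two-phase flow curve is fitted to.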

20. Solution of AntiSeepage for Mengxi River Based on Numerical Simulation of Unsaturated Seepage

Directory of Open Access Journals (Sweden)

Youjun Ji

2014-01-01

Full Text Available Lessening the leakage of surface water can reduce the waste of water resources and groundwater pollution. To solve the problem that the Mengxi River could not store water enduringly, geological investigation, theoretical analysis, experimental research, and numerical simulation analysis were carried out. Firstly, the seepage mathematical model was established based on unsaturated seepage theory; secondly, experimental equipment for testing the hydraulic conductivity of unsaturated soil was developed to obtain the two-phase flow curve. The numerical simulation of leakage under natural conditions confirms the previous inference and the leakage mechanism of the river. At last, the seepage control capacities of different impervious materials were compared by numerical simulations, and the impervious material was selected according to the engineering actuality. The impervious measure in this paper has been proved effective by hydrogeological research today.

1. GIS-based two-dimensional numerical simulation of rainfall-induced debris flow

Directory of Open Access Journals (Sweden)

C. Wang

2008-02-01

Full Text Available This paper presents a numerical method to simulate the propagation and deposition of debris flow across three-dimensional complex terrain. A depth-averaged two-dimensional numerical model is developed, in which the debris and water mixture is treated as a continuous, incompressible, unsteady flow. The model is based on the continuity equations and Navier-Stokes equations. Raster grid networks of a digital elevation model in GIS provide a uniform grid system to describe the complex topography. As the raster grid can be used directly as the finite difference mesh, the continuity and momentum equations are solved numerically using the finite difference method. The numerical model is applied to simulate the rainfall-induced debris flow that occurred on 20 July 2003 in Minamata City, southern Kyushu, Japan. The simulation reproduces the propagation and deposition, and the results are in good agreement with the field investigation. The synthesis of the numerical method and GIS makes it possible to simulate debris flow over realistic terrain, to estimate the flow range, and to delineate potentially hazardous areas for homes and road sections.
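The finite-difference solution of the depth-averaged equations on a raster grid can be illustrated with a minimal sketch. The snippet below advances only the continuity (mass conservation) part of such a model with a first-order upwind scheme on a small uniform grid; the velocity field, grid spacing, and initial debris mound are illustrative assumptions, not values from the Minamata simulation.

```python
# Sketch: explicit finite-difference update of the depth-averaged
# continuity equation dh/dt + d(hu)/dx + d(hv)/dy = 0 on a raster grid.
# Uniform velocities and a point-mound initial condition are assumed
# purely for illustration.

def continuity_step(h, u, v, dx, dy, dt):
    """One first-order upwind step of mass conservation (assumes u, v >= 0)."""
    ny, nx = len(h), len(h[0])
    h_new = [row[:] for row in h]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dhu_dx = (h[j][i] * u - h[j][i - 1] * u) / dx
            dhv_dy = (h[j][i] * v - h[j - 1][i] * v) / dy
            h_new[j][i] = h[j][i] - dt * (dhu_dx + dhv_dy)
    return h_new

# A small mound of material advected by a uniform velocity field.
h = [[0.0] * 20 for _ in range(20)]
h[10][5] = 1.0
for _ in range(10):
    h = continuity_step(h, u=1.0, v=0.0, dx=1.0, dy=1.0, dt=0.5)

total = sum(map(sum, h))
print(round(total, 6))  # -> 1.0 (mass conserved away from the boundaries)
```

The time step satisfies the CFL condition (u*dt/dx = 0.5), and because the flux form telescopes, total mass is conserved as long as the material stays away from the grid boundary.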

2. Final Progress Report: Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

Energy Technology Data Exchange (ETDEWEB)

Fox-Rabinovitz, M; Cote, J

2009-06-05

The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it preserves the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phases 1 and 2) based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. Collaborations with the other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities toward regional and sub-regional assessments and applications important for U.S. and Canadian public, business, and policy decision makers, as well as for international collaborations on regional and especially climate-related issues.

3. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

DEFF Research Database (Denmark)

Wang, Yong; Cai, Zixing; Zhou, Yuren

2009-01-01

A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two...... mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions...... and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...
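The two ingredients named above can be sketched in simplified form: a simplex-style crossover that recombines three parents about their centroid, and a feasibility-based comparison rule standing in for the paper's three-situation adaptive scheme. The toy constrained problem (minimize x0^2 + x1^2 subject to x0 + x1 >= 1) is illustrative, not one of the 13 benchmarks.

```python
import random

# Simplified sketch: simplex-style crossover plus a feasibility rule.
# The constraint handling here is a plain feasibility comparison, not
# the paper's adaptive three-situation mechanism.

random.seed(1)

def objective(x):
    return sum(xi * xi for xi in x)

def violation(x):
    return max(0.0, 1.0 - (x[0] + x[1]))          # constraint: x0 + x1 >= 1

def better(a, b):
    """Feasibility rule: less violation wins; feasible ties break on f."""
    va, vb = violation(a), violation(b)
    if va == vb == 0.0:
        return objective(a) < objective(b)
    return va < vb

def simplex_crossover(parents, eps=1.5):
    """Simplified SPX: expand a random convex mix of parents about the centroid."""
    n = len(parents[0])
    center = [sum(p[i] for p in parents) / len(parents) for i in range(n)]
    w = [random.random() for _ in parents]
    s = sum(w)
    w = [wi / s for wi in w]
    mix = [sum(w[k] * parents[k][i] for k in range(len(parents)))
           for i in range(n)]
    return [center[i] + eps * (mix[i] - center[i]) for i in range(n)]

pop = [[random.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(20)]
for _ in range(400):
    child = simplex_crossover(random.sample(pop, 3))
    worst = max(range(len(pop)),
                key=lambda k: (violation(pop[k]), objective(pop[k])))
    if better(child, pop[worst]):
        pop[worst] = child

best = min(pop, key=lambda x: (violation(x), objective(x)))
print(round(violation(best), 6), round(objective(best), 3))
```

The true constrained optimum of this toy problem is f = 0.5 at (0.5, 0.5); the feasibility rule drives the population into the feasible half-plane before minimizing the objective there.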

4. A technique for measuring oxygen saturation in biological tissues based on diffuse optical spectroscopy

Science.gov (United States)

Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya

2017-07-01

A new approach to the optical measurement of blood oxygen saturation was developed and implemented. The technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and validation of the proposed technique on a biological phantom have shown high reconstruction accuracy and the possibility of correctly calculating hemoglobin oxygenation in the presence of additive noise and calibration errors. The results obtained in animal studies agree with previously published results of other research groups and demonstrate the possibility of applying the developed technique to monitoring oxygen saturation in tumor tissue.
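The linear-unmixing step at the core of such a reconstruction can be sketched with the modified Beer-Lambert picture: attenuation at each wavelength is a linear combination of chromophore concentrations, so two wavelengths suffice to solve for oxy- and deoxyhemoglobin. The extinction coefficients below are illustrative numbers, not tabulated values, and the three-stage algorithm itself is not reproduced.

```python
# Sketch: two-wavelength Beer-Lambert unmixing and oxygen saturation.
# Extinction coefficients and concentrations are illustrative only.

def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Rows: wavelengths; columns: (eps_HbO2, eps_Hb), illustrative units.
eps = [(1.05, 0.30),
       (0.45, 0.80)]

# Forward model: synthesize attenuations for known concentrations.
c_hbo2_true, c_hb_true = 0.9, 0.3
att = [e1 * c_hbo2_true + e2 * c_hb_true for e1, e2 in eps]

# Inverse step: recover concentrations and compute saturation.
c_hbo2, c_hb = solve2x2(eps[0][0], eps[0][1], eps[1][0], eps[1][1],
                        att[0], att[1])
sto2 = c_hbo2 / (c_hbo2 + c_hb)
print(round(sto2, 3))  # -> 0.75
```

Because the forward model is linear and noise-free here, the recovery is exact; the paper's contribution lies in making this step robust to noise and calibration errors.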

5. Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement

Directory of Open Access Journals (Sweden)

Esmail Mahmoodi

2015-10-01

Full Text Available In this paper, a generalized Actuator Disc (AD) is used to model the wind turbine rotor of the MEXICO experiment, a collaborative European wind turbine project. The AD model, a combination of a CFD technique and User Defined Function codes (UDF), the so-called UDF/AD model, is used to simulate the loads and performance of the rotor in three different wind speed tests. The modeling focuses on the distributed force on the blade and the thrust and power production of the rotor, which are important design parameters for wind turbine rotors. Results from a code based on Blade Element Momentum (BEM) theory and from a full rotor simulation, both taken from the literature, are included for comparison and discussion. The output of all techniques is compared to detailed measurements for validation, leading to the final conclusions.
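The momentum-theory relations underlying the actuator disc approach can be sketched numerically. Assuming ideal 1-D momentum theory (not the UDF/AD implementation), thrust and power coefficients follow from the axial induction factor a via CT = 4a(1-a) and CP = 4a(1-a)^2, and recovering a for a given thrust is a tiny fixed-point problem.

```python
# Sketch: 1-D momentum (actuator disc) theory. Solving CT = 4a(1-a)
# for the physical (smaller) root by fixed-point iteration.

def induction_from_ct(ct, tol=1e-12):
    """Axial induction factor a from thrust coefficient CT = 4a(1-a)."""
    a = 0.0
    for _ in range(1000):
        a_new = ct / (4.0 * (1.0 - a))
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

a = induction_from_ct(8.0 / 9.0)        # Betz-optimal loading
cp = 4.0 * a * (1.0 - a) ** 2
print(round(a, 6), round(cp, 6))        # a -> 1/3, CP -> 16/27 (Betz limit)
```

The iteration converges to the smaller root a = 1/3, which reproduces the Betz power limit CP = 16/27; the BEM code mentioned in the abstract applies the same balance blade element by blade element.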

6. Estimate-Merge-Technique-based algorithms to track an underwater ...

D V A N Ravi Kumar

2017-07-04

Jul 4, 2017 ... In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in .... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high.

7. GIS-Based bivariate statistical techniques for groundwater potential ...


This study shows the potency of two GIS-based data-driven bivariate techniques namely ... In view of these weaknesses, there is a strong requirement for reassessment of .... West Bengal (India) using remote sensing, geographical information system and multi-.

8. Learning Physics through Project-Based Learning Game Techniques

Science.gov (United States)

2018-01-01

The aim of the present study, in which Project and game techniques are used together, is to examine the impact of project-based learning games on students' physics achievement. Participants of the study consist of 34 9th grade students (N = 34). The data were collected using achievement tests and a questionnaire. Throughout the applications, the…

9. Field-based dynamic light scattering microscopy: theory and numerical analysis.

Science.gov (United States)

Joo, Chulmin; de Boer, Johannes F

2013-11-01

We present a theoretical framework for field-based dynamic light scattering microscopy based on a spectral-domain optical coherence phase microscopy (SD-OCPM) platform. SD-OCPM is an interferometric microscope capable of quantitative measurement of the amplitude and phase of scattered light with high phase stability. Field-based dynamic light scattering (F-DLS) analysis allows for direct evaluation of the complex-valued field autocorrelation function and measurement of localized diffusive and directional dynamic properties of biological and material samples with high spatial resolution. To gain insight into the information provided by F-DLS microscopy, theoretical and numerical analyses are performed to evaluate the effect of the numerical aperture of the imaging optics. We demonstrate that sharp focusing of fields affects the measured diffusive and transport velocities, leading to smaller values for the dynamic properties of the sample. An approach for accurately determining the dynamic properties of the samples is discussed.
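The central quantity of F-DLS, the normalized complex field autocorrelation, can be sketched on synthetic data. The phase-diffusion signal below merely stands in for SD-OCPM measurements; its fluctuation strength is an assumed illustrative parameter.

```python
import cmath
import random

# Sketch: normalized complex field autocorrelation g1(tau) from a
# time series of complex fields. The synthetic phase-diffusion signal
# is a stand-in for measured SD-OCPM fields.

random.seed(7)

def g1(fields, lag):
    """Normalized field autocorrelation <E*(t) E(t+lag)> / <|E|^2>."""
    n = len(fields) - lag
    num = sum(fields[t].conjugate() * fields[t + lag] for t in range(n)) / n
    den = (sum(e.conjugate() * e for e in fields) / len(fields)).real
    return num / den

# Phase diffusion: E(t) = exp(i * phi_t) with phi a Gaussian random walk,
# which gives |g1(tau)| decaying roughly exponentially with lag.
sigma = 0.2
phi, fields = 0.0, []
for _ in range(20000):
    phi += random.gauss(0.0, sigma)
    fields.append(cmath.exp(1j * phi))

print(abs(g1(fields, 0)))               # -> 1.0 by normalization
print(round(abs(g1(fields, 10)), 2))    # decayed, but well above the noise floor
```

Fitting the decay of |g1(tau)| (or the phase slope of g1 for directed motion) is what yields the localized diffusive and transport parameters discussed in the abstract.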

Science.gov (United States)

2016-02-26

bandwidths, and with it receiver noise floors, are unavoidable. Figure 1. SNR of a thermally limited receiver based on the Friis equation showing the... techniques for RF and photonic integration based on liquid crystal polymer substrates were pursued that would aid in the realization of potential imaging... These models assumed that sufficient LNA gain was used on the antenna to set the noise floor of the imaging receiver, which necessitated physical

11. Optical supervised filtering technique based on Hopfield neural network

Science.gov (United States)

Bal, Abdullah

2004-11-01

The Hopfield neural network is commonly preferred for optimization problems. In image segmentation, conventional Hopfield neural networks (HNN) are formulated as a cost-function-minimization problem to perform gray-level thresholding on the image histogram or on the pixels' gray levels arranged in a one-dimensional array [R. Sammouda, N. Niki, H. Nishitani, Pattern Rec. 30 (1997) 921-927; K.S. Cheng, J.S. Lin, C.W. Mao, IEEE Trans. Med. Imag. 15 (1996) 560-567; C. Chang, P. Chung, Image and Vision Comp. 19 (2001) 669-678]. In this paper, a new high-speed supervised filtering technique is proposed for image feature extraction and enhancement problems by modifying the conventional HNN. The essential improvement in this technique is the use of a 2D convolution operation instead of weight-matrix multiplication. The resulting neural-network-based filtering technique requires only a 3 × 3 filter mask matrix instead of a large weight-coefficient matrix. Optical implementation of the proposed filtering technique is easily realized using a joint transform correlator. The non-negative data required for optical implementation are obtained by a bias technique that converts bipolar data to non-negative data. Simulation results of the proposed optical supervised filtering technique are reported for various feature extraction problems such as edge detection, corner detection, horizontal and vertical line extraction, and fingerprint enhancement.
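The core substitution described above, a 3 × 3 convolution in place of weight-matrix multiplication, can be sketched directly. The Laplacian edge-detection mask is a standard illustrative choice, not a mask from the paper.

```python
# Sketch: valid-mode 2-D convolution with a 3 x 3 mask, the operation
# that replaces the large HNN weight-coefficient matrix. The Laplacian
# mask below is a common edge detector chosen for illustration.

LAPLACIAN = [[0, -1, 0],
             [-1, 4, -1],
             [0, -1, 0]]

def conv2d_3x3(image, mask):
    """Valid-mode 2-D convolution of an image with a 3 x 3 filter mask."""
    ny, nx = len(image), len(image[0])
    out = [[0] * (nx - 2) for _ in range(ny - 2)]
    for j in range(ny - 2):
        for i in range(nx - 2):
            out[j][i] = sum(image[j + dj][i + di] * mask[dj][di]
                            for dj in range(3) for di in range(3))
    return out

# A flat image with a bright square: the Laplacian responds only at edges.
img = [[1 if 2 <= j <= 5 and 2 <= i <= 5 else 0 for i in range(8)]
       for j in range(8)]
edges = conv2d_3x3(img, LAPLACIAN)
for row in edges:
    print(row)
```

Inside the square and in the flat background the response is zero; nonzero values appear only along the square's boundary, which is exactly the feature-extraction behavior the filtering technique targets.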

12. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

Directory of Open Access Journals (Sweden)

Mohamed M. Ibrahim

2014-01-01

Full Text Available Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

13. Video multiple watermarking technique based on image interlacing using DWT.

Science.gov (United States)

Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

2014-01-01

Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

14. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

Science.gov (United States)

Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

2018-05-01

A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate performance against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) over the concentration range from 300 to 2000 ppmv, and a minimum detection limit (MDL) of 630 ppbv.

15. Efficient techniques for wave-based sound propagation in interactive applications

Science.gov (United States)

Mehra, Ravish

-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to the parallel processing capabilities of graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.

16. An analytically based numerical method for computing view factors in real urban environments

Science.gov (United States)

Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

2018-01-01

A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of them provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; it is derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against analytical sky-view factor estimates for ideal street canyon geometries, showing good accuracy with errors of less than 0.2 %. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable to determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
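The analytical formulation such methods build on can be sketched numerically: the view factor between two finite surfaces is the double area integral F_12 = (1/A_1) ∬ cos(t1) cos(t2) / (π S²) dA_1 dA_2. Below it is evaluated by a midpoint rule for two directly opposed, parallel unit squares one unit apart, a textbook configuration with a tabulated reference value near 0.1998; the grid resolution is an illustrative choice.

```python
import math

# Sketch: midpoint-rule evaluation of the view-factor double integral
# for two coaxial, parallel unit squares separated by distance h.
# For parallel surfaces cos(t1) = cos(t2) = h / S, so the integrand
# reduces to h^2 / (pi * S^4).

def view_factor_parallel_squares(n=20, h=1.0):
    """Midpoint-rule estimate of F_12 for coaxial parallel unit squares."""
    da = (1.0 / n) ** 2                      # sub-patch area on each square
    f = 0.0
    for i1 in range(n):
        for j1 in range(n):
            x1, y1 = (i1 + 0.5) / n, (j1 + 0.5) / n
            for i2 in range(n):
                for j2 in range(n):
                    x2, y2 = (i2 + 0.5) / n, (j2 + 0.5) / n
                    s2 = (x2 - x1) ** 2 + (y2 - y1) ** 2 + h * h
                    f += (h * h / (s2 * s2)) / math.pi * da * da
    return f                                  # A1 = 1, so no division needed

f = view_factor_parallel_squares()
print(round(f, 3))
```

The same patch-pair summation generalizes to arbitrarily oriented building facets, which is essentially how a complete set of sky- and wall-view factors is assembled over a 3-D building database.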

17. Biometric image enhancement using decision rule based image fusion techniques

Science.gov (United States)

Sagayee, G. Mary Amirtha; Arumugam, S.

2010-02-01

Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work addresses how image quality can be improved by introducing an image fusion technique at the sensor level. The images produced after applying the decision-rule-based image fusion technique are evaluated and analyzed in terms of entropy and root-mean-square error.

18. Electromagnetism based atmospheric ice sensing technique - A conceptual review

Directory of Open Access Journals (Sweden)

U Mughal

2016-09-01

Full Text Available Electromagnetic and vibrational properties of ice can be used to measure certain parameters such as ice thickness, type, and icing rate. In this paper we present a review of dielectric-based measurement techniques for matter and the dielectric/spectroscopic properties of ice. Atmospheric ice is a complex material with a variable dielectric constant, but precise calculation of this constant may form the basis for measuring its other properties, such as thickness and strength, using electromagnetic methods. Using time-domain or frequency-domain spectroscopic techniques, by measuring both the reflection and transmission characteristics of atmospheric ice in a particular frequency range, the desired parameters can be determined.

19. Proposing a Wiki-Based Technique for Collaborative Essay Writing

Directory of Open Access Journals (Sweden)

Mabel Ortiz Navarrete

2014-10-01

Full Text Available This paper proposes a technique for students learning English as a foreign language to collaboratively write an argumentative essay in a wiki environment. A wiki environment and collaborative work both play an important role within the academic writing task; nevertheless, an appropriate and systematic work assignment is required in order to make use of them. The technique proposed here for writing a collaborative essay mainly attempts to provide an effective way to enhance equal participation among group members, taking computer-mediated collaboration as its base. Within this context, the students' role is clearly defined and the individual and collaborative tasks are explained.

20. Knowledge based systems advanced concepts, techniques and applications

CERN Document Server

1997-01-01

The field of knowledge-based systems (KBS) has expanded enormously during the last years, and many important techniques and tools are currently available. Applications of KBS range from medicine to engineering and aerospace.This book provides a selected set of state-of-the-art contributions that present advanced techniques, tools and applications. These contributions have been prepared by a group of eminent researchers and professionals in the field.The theoretical topics covered include: knowledge acquisition, machine learning, genetic algorithms, knowledge management and processing under unc

1. An Observed Voting System Based On Biometric Technique

Directory of Open Access Journals (Sweden)

B. Devikiruba

2015-08-01

Full Text Available This article describes a computational framework, which can run on almost every computer connected to an IP-based network, for studying biometric techniques. A system protecting confidential information puts strong security demands on identification. Biometry provides a user-friendly method for this identification and is becoming a competitor to current identification mechanisms. The experimentation section focuses on biometric verification based specifically on fingerprints. This article should be read as a warning to those thinking of using methods of identification without first examining the technical opportunities for compromising these mechanisms and the associated legal consequences. The development is based on the Java language, which makes it easy to extend the software package in order to test new control techniques.

2. Microrheometric upconversion-based techniques for intracellular viscosity measurements

Science.gov (United States)

Rodríguez-Sevilla, Paloma; Zhang, Yuhai; de Sousa, Nuno; Marqués, Manuel I.; Sanz-Rodríguez, Francisco; Jaque, Daniel; Liu, Xiaogang; Haro-González, Patricia

2017-08-01

Rheological parameters (viscosity, creep compliance and elasticity) play an important role in cell function and viability, and different strategies have been developed for their study. In this work, two new microrheometric techniques are presented. Both methods take advantage of the analysis of the polarized emission of an upconverting particle to determine its orientation inside an optical trap. Upconverting particles are optical materials that convert infrared radiation into visible light. Their usefulness has been further boosted by the recent demonstration of their three-dimensional control and tracking by single-beam infrared optical traps. In this work it is demonstrated that optical torques are responsible for the stable orientation of the upconverting particle inside the trap. Moreover, numerical calculations and experimental data allowed the rotation dynamics of the optically trapped upconverting particle to be used for environmental sensing. In particular, the cytoplasm viscosity could be measured from the rotation time and thermal fluctuations of an intracellular optically trapped upconverting particle by means of the two microrheometric techniques mentioned above.
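The viscosity retrieval from rotation dynamics can be sketched with the Stokes-Einstein-Debye relation, assuming a spherical particle and taking the rank-2 rotational relaxation time as the measured quantity. The radius, temperature, and relaxation time below are illustrative values, not data from the paper.

```python
import math

# Sketch: passive rotational microrheometry via Stokes-Einstein-Debye,
# D_r = k_B * T / (8 * pi * eta * a^3) for a sphere of radius a.
# All numerical inputs are illustrative assumptions.

K_B = 1.380649e-23          # Boltzmann constant, J/K

def viscosity_from_rotation(tau_r, radius, temp):
    """Viscosity from the rank-2 rotational relaxation time tau_r = 1/(6 D_r)."""
    d_r = 1.0 / (6.0 * tau_r)                     # rotational diffusion, 1/s
    return K_B * temp / (8.0 * math.pi * d_r * radius ** 3)

# Illustrative numbers: 0.1 um radius particle, 5 ms relaxation at 310 K.
eta = viscosity_from_rotation(tau_r=5e-3, radius=0.1e-6, temp=310.0)
print(f"{eta * 1e3:.2f} mPa*s")
```

With these inputs the result lands in the few-mPa·s range often quoted for cytoplasm, illustrating why rotation times of a trapped particle are a usable viscosity probe.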

3. Experimental and numerical studies on laser-based powder deposition of slurry erosion resistant materials

Science.gov (United States)

Balu, Prabu

cracking issue, and 3) the effect of the composition and composition gradient of Ni and WC on the slurry erosion resistance over a wide range of erosion conditions. This thesis presents a set of numerical and experimental methods to address the challenges mentioned above. A three-dimensional (3-D) computational fluid dynamics (CFD) based powder flow model and three vision-based techniques were developed to visualize the process of feeding the Ni-WC powder in the LBPD process. The results provide guidelines for efficiently feeding the Ni-WC composite powder into the laser-formed molten pool. Experimentally verified 3-D finite element (FE) thermal and thermo-mechanical models are developed to understand the thermal and stress evolution in the Ni-WC composite material during the LBPD process. The models address the effect of the process variables, preheating temperature, and different mass fractions of WC in Ni on thermal cycles and stress distributions within the deposited material. The slurry erosion behavior of single and multilayered deposits of Ni-WC composite material produced by the LBPD process is investigated using an accelerated slurry erosion testing machine and a 3-D FE dynamic model. The verified model is used to identify the composition and composition gradient of Ni-WC composite material required to achieve erosion resistance over a wide range of erosion conditions.

4. Current STR-based techniques in forensic science

Directory of Open Access Journals (Sweden)

2013-01-01

Full Text Available DNA analysis in forensic science is mainly based on short tandem repeat (STR genotyping. The conventional analysis is a three-step process of DNA extraction, amplification and detection. An overview of various techniques that are currently in use and are being actively researched for STR typing is presented. The techniques are separated into STR amplification and detection. New techniques for forensic STR analysis focus on increasing sensitivity, resolution and discrimination power for suboptimal samples. These are achieved by shifting primer-binding sites, using high-fidelity and tolerant polymerases and applying novel methods to STR detection. Examples in which STRs are used in criminal investigations are provided and future research directions are discussed.

5. Benchmarking state-of-the-art numerical simulation techniques for analyzing large photonic crystal membrane line defect cavities

DEFF Research Database (Denmark)

Gregersen, Niels; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

2018-01-01

In this work, we perform numerical studies of two photonic crystal membrane microcavities, a short line-defect L5 cavity with relatively low quality (Q) factor and a longer L9 cavity with high Q. We compute the cavity Q factor and the resonance wavelength λ of the fundamental M1 mode in the two...

6. MEMS-Based Power Generation Techniques for Implantable Biosensing Applications

Directory of Open Access Journals (Sweden)

Jonathan Lueke

2011-01-01

Full Text Available Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical systems (MEMS) based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components, such as computational and wireless components, to be added to the biosensing module, improving the functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability to implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would make the implanted biosensor less invasive, increasing the quality of care received by the patient.

7. MEMS-based power generation techniques for implantable biosensing applications.

Science.gov (United States)

Lueke, Jonathan; Moussa, Walied A

2011-01-01

Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components, such as computational and wireless components, to be added to the biosensing module, improving the functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability to implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would make the implanted biosensor less invasive, increasing the quality of care received by the patient.

8. Power system dynamic state estimation using prediction based evolutionary technique

International Nuclear Information System (INIS)

Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

2016-01-01

In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad-data rejection property and is less sensitive to bad measurements. In the proposed approach, Brown's double exponential smoothing technique is utilised for its reliable performance at the prediction step. At the filtering step, the state estimation problem is solved as an optimisation problem using jDE, a self-adaptive differential evolution, with a prediction-based population re-initialisation technique. This new stochastic search technique is seeded with different state scenarios derived from the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of a transmission line, on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation. - Highlights: • To estimate the states of the power system under a dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
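The prediction step can be sketched with Brown's double exponential smoothing, which maintains two smoothed series and extrapolates level plus trend one step ahead. The smoothing constant and the noiseless ramp signal below are illustrative; the paper applies the idea to predicted power-system states, not to this toy series.

```python
# Sketch: Brown's double exponential smoothing as a one-step-ahead
# predictor. Alpha and the test signal are illustrative choices.

def brown_forecasts(series, alpha=0.5):
    """One-step-ahead forecasts via Brown's double exponential smoothing."""
    s1 = s2 = series[0]
    forecasts = []
    for x in series:
        s1 = alpha * x + (1.0 - alpha) * s1        # first smoothing
        s2 = alpha * s1 + (1.0 - alpha) * s2       # second smoothing
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        forecasts.append(level + trend)            # forecast of x at t+1
    return forecasts

# On a noiseless linear ramp the forecasts converge to the true values.
ramp = [2.0 * t + 1.0 for t in range(60)]
pred = brown_forecasts(ramp)
print(round(pred[-1], 4), ramp[-1] + 2.0)   # last forecast vs. true next value
```

The startup transient decays geometrically with factor (1 - alpha), so after a few dozen samples of a linear trend the forecast is exact to machine precision, which is the property that makes the smoother a cheap, reliable predictor for slowly varying states.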

9. Fractal Image Compression Based on High Entropy Values Technique

Directory of Open Access Journals (Sweden)

Douaa Younis Abbaas

2018-04-01

Full Text Available Many attempts have been made to improve the encoding stage of FIC because it is time-consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block. The results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the better results, such as reduced encoding time with acceptable values of both compression quality parameters, C.R (Compression Ratio) and PSNR (image quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality at acceptable levels.
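
The pool-reduction idea can be illustrated with a small sketch: compute the Shannon entropy of each block and keep only the domain blocks whose entropy is close to that of the range block. The tolerance and matching rule below are illustrative assumptions, not the paper's exact criterion:

```python
import math

def block_entropy(block):
    """Shannon entropy (bits/pixel) of a flat list of 8-bit pixel values."""
    n = len(block)
    counts = {}
    for p in block:
        counts[p] = counts.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def prune_domain_pool(range_block, domain_pool, tol=0.5):
    """Keep only domain blocks whose entropy is within `tol` bits of the
    range block's entropy -- shrinking the search pool before full matching."""
    e_r = block_entropy(range_block)
    return [d for d in domain_pool if abs(block_entropy(d) - e_r) <= tol]
```

A flat block has entropy 0 and a block of four distinct values has entropy 2 bits, so flat range blocks are only compared against similarly flat domain blocks.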

10. Characterization techniques for graphene-based materials in catalysis

Directory of Open Access Journals (Sweden)

Maocong Hu

2017-06-01

Full Text Available Graphene-based materials have been studied in a wide range of applications including catalysis due to their outstanding electronic, thermal, and mechanical properties. The unprecedented features of graphene-based catalysts, which are believed to be responsible for their superior performance, have been characterized by many techniques. In this article, we comprehensively summarized the characterization methods covering bulk and surface structure analysis, chemisorption ability determination, and reaction mechanism investigation. We reviewed the advantages/disadvantages of different techniques including Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR) and Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), X-ray diffraction (XRD), X-ray absorption near edge structure (XANES) and X-ray absorption fine structure (XAFS), atomic force microscopy (AFM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), ultraviolet-visible spectroscopy (UV-vis), X-ray fluorescence (XRF), inductively coupled plasma mass spectrometry (ICP), thermogravimetric analysis (TGA), Brunauer–Emmett–Teller (BET) analysis, and scanning tunneling microscopy (STM). The application of temperature-programmed reduction (TPR), CO chemisorption, and NH3/CO2 temperature-programmed desorption (TPD) was also briefly introduced. Finally, we discussed the challenges and provided possible suggestions on choosing characterization techniques. This review provides key information to the catalysis community to adopt suitable characterization techniques for their research.

11. On-line diagnostic techniques for air-operated control valves based on time series analysis

International Nuclear Information System (INIS)

Ito, Kenji; Matsuoka, Yoshinori; Minamikawa, Shigeru; Komatsu, Yasuki; Satoh, Takeshi.

1996-01-01

The objective of this research is to study the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves - numerous valves of this type are used in PWR plants. Generally, these techniques can detect anomalies caused by failures at an early stage, when detection through conventional surveillance of directly measured process parameters is difficult. However, the effectiveness of these techniques depends on the system being diagnosed. The difficulties in applying diagnostic techniques to air-operated control valves seem to come from the reduced sensitivity of their response as compared with hydraulic control systems, as well as the need to identify anomalies in low-level signals that fluctuate only slightly but continuously. In this research, simulation tests were performed by setting various kinds of failure modes for a test valve with the same specifications as those of a valve actually used in the plants. Actual control signals recorded from an operating plant were then used as input signals for the simulation. The results of the tests confirmed the feasibility of applying on-line diagnostic techniques based on time series analysis to air-operated control valves. (author)

12. A numerical method for the solution of three-dimensional incompressible viscous flow using the boundary-fitted curvilinear coordinate transformation and domain decomposition technique

International Nuclear Information System (INIS)

Umegaki, Kikuo; Miki, Kazuyoshi

1990-01-01

A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. A modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in a semicircular curved pipe, a 90° elbow, and a T-shaped branched pipe. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10∼1/20 of that with a scalar processor. (author)
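
The QUICK discretization of the convective terms mentioned above can be illustrated on a 1-D periodic advection problem. This is a generic sketch of the scheme, not the authors' 3-D SMAC solver:

```python
import numpy as np

def quick_face(phi):
    """QUICK face value phi[i+1/2] on a periodic grid with flow in +x:
    quadratic upstream-weighted interpolation 3/8*D + 6/8*C - 1/8*U."""
    phi_u = np.roll(phi, 1)     # far-upstream node i-1
    phi_c = phi                 # upstream node i
    phi_d = np.roll(phi, -1)    # downstream node i+1
    return 0.375 * phi_d + 0.75 * phi_c - 0.125 * phi_u

def advect_quick(phi, c):
    """One explicit Euler step of dphi/dt + u*dphi/dx = 0 at Courant number c."""
    f = quick_face(phi)                   # f[i]: flux at the face between i and i+1
    return phi - c * (f - np.roll(f, 1))  # conservative flux difference
```

Because the update is written in flux form, the discrete sum of phi is conserved exactly on the periodic grid, and a uniform field is transported unchanged.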

13. "Physically-based" numerical experiment to determine the dominant hillslope processes during floods?

Science.gov (United States)

Gaume, Eric; Esclaffer, Thomas; Dangla, Patrick; Payrastre, Olivier

2016-04-01

To study the dynamics of hillslope responses during flood events, a fully coupled "physically-based" model for the combined numerical simulation of surface runoff and underground flows has been developed. Particular attention has been given to the selection of appropriate numerical schemes for the modelling of both processes and of their coupling. Surprisingly, the most difficult question to solve, from a numerical point of view, was not related to the coupling of two processes with contrasting kinetics such as surface and underground flows, but to the high-gradient infiltration fronts appearing in soils, a source of numerical diffusion, instabilities and sometimes divergence. Once elaborated, the model was successfully tested against the results of high-quality experiments conducted on a laboratory sandy slope in the early eighties, which is still considered a reference hillslope experimental setting (Abdul & Guilham). The model proved able to accurately simulate the pore pressure distributions observed in this 1.5 meter deep and wide laboratory hillslope, as well as its outflow hydrograph shapes and the measured respective contributions of direct runoff and groundwater to these outflow hydrographs. Based on this success, the same model has been used to simulate the response of a theoretical 100-meter wide and 10% sloped hillslope, with a 2 meter deep pervious soil over impervious bedrock. Three rain events have been tested: a 100 millimeter rainfall spread over 10 days, over 1 day, or over one hour. The simulated responses are not hydrologically realistic; in particular, the fast component of the response, which is generally observed in the real world and explains flood events, is almost absent from the simulated response. On reflection, the simulation results appear entirely logical given the proposed model. The simulated response, in fact a recession hydrograph, corresponds to a piston flow of a relatively uniformly

14. IoT Security Techniques Based on Machine Learning

OpenAIRE

Xiao, Liang; Wan, Xiaoyue; Lu, Xiaozhen; Zhang, Yanyong; Wu, Di

2018-01-01

Internet of things (IoT) that integrate a variety of devices into networks to provide advanced and intelligent services have to protect user privacy and address attacks such as spoofing attacks, denial of service attacks, jamming and eavesdropping. In this article, we investigate the attack model for IoT systems, and review the IoT security solutions based on machine learning techniques including supervised learning, unsupervised learning and reinforcement learning. We focus on the machine le...

15. Numerical Calculation of Secondary Flow in Pump Volute and Circular Casings using 3D Viscous Flow Techniques

Directory of Open Access Journals (Sweden)

K. Majidi

2000-01-01

Full Text Available The flow field in volute and circular casings interacting with a centrifugal impeller is obtained by numerical analysis. In the present study, effects of the volute and circular casings on the flow pattern have been investigated by successively combining a volute casing and a circular casing with a single centrifugal impeller. The numerical calculations are carried out with a multiple frame of reference to predict the flow field inside the entire impeller and casings. The impeller flow field is solved in a rotating frame and the flow field in the casings in a stationary frame. The static pressure and velocity in the casing and impeller, and the static pressures and secondary velocity vectors at several cross-sectional planes of the casings are calculated. The calculations show that the curvature of the casings creates pressure gradients that cause vortices at cross-sectional planes of the casings.

16. Effects of geometry discretization aspects on the numerical solution of the bioheat transfer equation with the FDTD technique

Energy Technology Data Exchange (ETDEWEB)

Samaras, T; Christ, A; Kuster, N [Department of Physics, Aristotle University of Thessaloniki, GR-54124 Thessaloniki (Greece); Foundation for Research on Information Technologies in Society (IT' IS Foundation), Swiss Federal Institute of Technology (ETH), CH-8004 Zurich (Switzerland)

2006-06-07

In this work, we highlight two issues that have to be taken into consideration for accurate thermal modelling with the finite-difference time-domain (FDTD) method, namely the tissue interfaces and the staircasing effect. The former appears less critical in the overall accuracy of the results, whereas the latter may have an influence on the worst-case approach used in numerical dosimetry of non-ionizing radiation. (note)

17. Effects of geometry discretization aspects on the numerical solution of the bioheat transfer equation with the FDTD technique

International Nuclear Information System (INIS)

Samaras, T; Christ, A; Kuster, N

2006-01-01

In this work, we highlight two issues that have to be taken into consideration for accurate thermal modelling with the finite-difference time-domain (FDTD) method, namely the tissue interfaces and the staircasing effect. The former appears less critical in the overall accuracy of the results, whereas the latter may have an influence on the worst-case approach used in numerical dosimetry of non-ionizing radiation. (note)

18. Quantity estimation based on numerical cues in the mealworm beetle (Tenebrio molitor)

Directory of Open Access Journals (Sweden)

Pau eCarazo

2012-11-01

Full Text Available In this study, we used a biologically relevant experimental procedure to ask whether mealworm beetles (Tenebrio molitor) are spontaneously capable of assessing quantities based on numerical cues. Like other insect species, mealworm beetles adjust their reproductive behaviour (i.e. investment in mate guarding) according to the perceived risk of sperm competition (i.e. the probability that a female will mate with another male). To test whether males have the ability to estimate numerosity based on numerical cues, we staged matings between virgin females and virgin males in which we varied the number of rival males the experimental male had access to immediately preceding mating as a cue to sperm competition risk (from 1 to 4). Rival males were presented sequentially, and we controlled for continuous cues by ensuring that males in all treatments were exposed to the same amount of male-male contact. Males exhibited a marked increase in the time they devoted to mate guarding in response to an increase in the number of different rival males they were exposed to. Since males could not rely on continuous cues, we conclude that they kept a running tally of the number of individuals they encountered serially, which meets the requirements of the basic ordinality and cardinality principles of proto-counting. Our results thus offer good evidence of 'true' numerosity estimation or quantity estimation and, along with recent studies in honey-bees, suggest that vertebrates and invertebrates share similar core systems of non-verbal numerical representation.

19. Online Monitoring System of Air Distribution in Pulverized Coal-Fired Boiler Based on Numerical Modeling

Science.gov (United States)

Żymełka, Piotr; Nabagło, Daniel; Janda, Tomasz; Madejski, Paweł

2017-12-01

Balanced distribution of air in a coal-fired boiler is one of the most important factors in the combustion process and is strongly connected to the overall system efficiency. Reliable and continuous information about combustion airflow and fuel rate is essential for achieving an optimal stoichiometric ratio as well as efficient and safe operation of a boiler. Imbalances in air distribution result in reduced boiler efficiency, increased gas pollutant emission and operating problems, such as corrosion, slagging or fouling. Monitoring of air flow trends in the boiler is an effective method for further analysis and can help to identify important dependencies and start optimization actions. Accurate real-time monitoring of the air distribution in the boiler can bring economic, environmental and operational benefits. The paper presents a novel concept for an online monitoring system of air distribution in a coal-fired boiler based on real-time numerical calculations. The proposed mathematical model allows for identification of the mass flow rates of secondary air to individual burners and to overfire air (OFA) nozzles. Numerical models of the air and flue gas system were developed using software for power plant simulation. The correctness of the developed model was verified and validated against the reference measurement values. The presented numerical model for real-time monitoring of air distribution is capable of giving continuous determination of the complete air flows based on available digital communication system (DCS) data.

20. Online Monitoring System of Air Distribution in Pulverized Coal-Fired Boiler Based on Numerical Modeling

Directory of Open Access Journals (Sweden)

Żymełka Piotr

2017-12-01

Full Text Available Balanced distribution of air in a coal-fired boiler is one of the most important factors in the combustion process and is strongly connected to the overall system efficiency. Reliable and continuous information about combustion airflow and fuel rate is essential for achieving an optimal stoichiometric ratio as well as efficient and safe operation of a boiler. Imbalances in air distribution result in reduced boiler efficiency, increased gas pollutant emission and operating problems, such as corrosion, slagging or fouling. Monitoring of air flow trends in the boiler is an effective method for further analysis and can help to identify important dependencies and start optimization actions. Accurate real-time monitoring of the air distribution in the boiler can bring economic, environmental and operational benefits. The paper presents a novel concept for an online monitoring system of air distribution in a coal-fired boiler based on real-time numerical calculations. The proposed mathematical model allows for identification of the mass flow rates of secondary air to individual burners and to overfire air (OFA) nozzles. Numerical models of the air and flue gas system were developed using software for power plant simulation. The correctness of the developed model was verified and validated against the reference measurement values. The presented numerical model for real-time monitoring of air distribution is capable of giving continuous determination of the complete air flows based on available digital communication system (DCS) data.

1. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

Science.gov (United States)

Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

2013-01-01

Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

2. Comparing Four Touch-Based Interaction Techniques for an Image-Based Audience Response System

NARCIS (Netherlands)

Jorritsma, Wiard; Prins, Jonatan T.; van Ooijen, Peter M. A.

2015-01-01

This study aimed to determine the most appropriate touch-based interaction technique for I2Vote, an image-based audience response system for radiology education in which users need to accurately mark a target on a medical image. Four plausible techniques were identified: land-on, take-off,

3. A DIFFERENT WEB-BASED GEOCODING SERVICE USING FUZZY TECHNIQUES

Directory of Open Access Journals (Sweden)

P. Pahlavani

2015-12-01

Full Text Available Geocoding – the process of finding a position based on descriptive data such as an address or postal code – is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps present geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use available online geocoding services. In existing geocoding services, the concept of proximity and nearness is not modelled appropriately, and these services search only by address matching based on descriptive data. In addition, there are also some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides different capabilities for users, such as the ability to search multi-part addresses, searching for places based on their location, non-point representation of results, as well as displaying search results based on their priority.
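
The fuzzy-membership-plus-overlay idea can be sketched in a few lines: a "nearness" membership function turns each distance into a degree in [0, 1], and an AND-style overlay (minimum operator) combines several distance maps. The membership shape, half-distance, and the candidate distances below are illustrative assumptions, not the paper's calibration:

```python
def nearness(dist_m, d_half=500.0):
    """Fuzzy 'near' membership: 1 at the place, 0.5 at d_half metres, -> 0 far away."""
    return 1.0 / (1.0 + (dist_m / d_half) ** 2)

def fuzzy_overlay(memberships):
    """AND-style overlay of several fuzzy distance maps (minimum operator)."""
    return min(memberships)

# Rank hypothetical candidate points by simultaneous nearness to a school and a park.
candidates = {"A": (200.0, 800.0), "B": (600.0, 300.0)}   # distances in metres
scores = {k: fuzzy_overlay([nearness(d) for d in dists])
          for k, dists in candidates.items()}
best = max(scores, key=scores.get)
```

Point B wins here even though A is closest to one place, because the minimum operator rewards being reasonably near both places at once.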

4. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

Science.gov (United States)

da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

2018-04-01

A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

5. An Effective Way to Control Numerical Instability of a Nonordinary State-Based Peridynamic Elastic Model

Directory of Open Access Journals (Sweden)

Xin Gu

2017-01-01

Full Text Available The constitutive modeling and numerical implementation of a nonordinary state-based peridynamic (NOSB-PD) model corresponding to the classical elastic model are presented. In addition, the numerical instability problem of the NOSB-PD model is analyzed, and a penalty method involving an hourglass force is proposed to control the instabilities. Further, two benchmark problems, the static elastic deformation of a simply supported beam and elastic wave propagation in a two-dimensional rod, are discussed with the present method. The results show that the penalty instability control method is effective in suppressing displacement oscillations and improving the accuracy of the calculated stress fields given a proper hourglass force coefficient, and that the NOSB-PD approach with instability control can analyze structural deformation and elastic wave propagation problems well.

6. Numerical simulation of terahertz generation and detection based on ultrafast photoconductive antennas

Science.gov (United States)

Chen, Long-chao; Fan, Wen-hui

2011-08-01

The numerical simulation of terahertz generation and detection in the interaction between femtosecond laser pulse and photoconductive material has been reported in this paper. The simulation model based on the Drude-Lorentz theory is used, and takes into account the phenomena that photo-generated electrons and holes are separated by the external bias field, which is screened by the space-charge field simultaneously. According to the numerical calculation, the terahertz time-domain waveforms and their Fourier-transformed spectra are presented under different conditions. The simulation results indicate that terahertz generation and detection properties of photoconductive antennas are largely influenced by three major factors, including photo-carriers' lifetime, laser pulse width and pump laser power. Finally, a simple model has been applied to simulate the detected terahertz pulses by photoconductive antennas with various photo-carriers' lifetimes, and the results show that the detected terahertz spectra are very different from the spectra radiated from the emitter.
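
The Drude-Lorentz-style photocurrent picture described above can be sketched with a toy model: carriers generated by a Gaussian pump pulse decay with a lifetime, drift in the bias field with a momentum scattering time, and the radiated terahertz far field is taken proportional to dJ/dt. All parameter values here are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Simplified Drude-Lorentz photoconductive-antenna model (illustrative parameters).
dt = 5e-15                        # time step: 5 fs
t = np.arange(0.0, 3e-12, dt)     # 3 ps simulation window
tau_c = 0.5e-12                   # carrier lifetime
tau_s = 30e-15                    # momentum scattering time
pulse = np.exp(-((t - 0.3e-12) / 50e-15) ** 2)   # ~50 fs pump envelope

n = np.zeros_like(t)              # carrier density (arb. units)
v = np.zeros_like(t)              # drift velocity (arb. units)
accel = 1.0                       # e*E_bias/m*, normalised
for i in range(len(t) - 1):
    n[i + 1] = n[i] + dt * (pulse[i] - n[i] / tau_c)   # generation + recombination
    v[i + 1] = v[i] + dt * (accel - v[i] / tau_s)      # bias drive + scattering
j = n * v                         # photocurrent J(t)
e_thz = np.gradient(j, dt)        # radiated far field ~ dJ/dt
```

The rise and subsequent decay of J(t) give the characteristic bipolar terahertz transient, and lengthening `tau_c` or the pump width visibly narrows the radiated spectrum.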

7. Applications of Kalman filters based on non-linear functions to numerical weather predictions

Directory of Open Access Journals (Sweden)

G. Galanis

2006-10-01

Full Text Available This paper investigates the use of non-linear functions in classical Kalman filter algorithms to improve regional weather forecasts. The main aim is the implementation of non-linear polynomial mappings in a usual linear Kalman filter in order to better simulate non-linear problems in numerical weather prediction. In addition, the optimal order of the polynomials applied in such a filter is identified. This work is based on observations and corresponding numerical weather predictions of two meteorological parameters characterized by essential differences in their evolution in time, namely air temperature and wind speed. It is shown that in both cases a polynomial of low order is adequate for eliminating any systematic error, while higher-order functions lead to instabilities in the filtered results while making, at the same time, a trivial contribution to the sensitivity of the filter. It is further demonstrated that the filter is independent of the time period and the geographic location of application.
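
The idea of feeding polynomial functions of the forecast into a linear Kalman filter can be sketched as a bias-correction filter: the state is the coefficient vector of a polynomial in the forecast value, propagated as a random walk. The noise levels and initialisation are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def polynomial_kalman(forecasts, observations, order=2, q=1e-4, r=1.0):
    """Track systematic forecast bias as a polynomial in the forecast value.

    State = polynomial coefficients (random-walk dynamics); the observation is
    the forecast error. Returns the bias-corrected forecasts."""
    n = order + 1
    x = np.zeros(n)                  # coefficient estimate
    P = np.eye(n)                    # its covariance
    Q = q * np.eye(n)
    corrected = []
    for f, y in zip(forecasts, observations):
        h = np.array([f ** k for k in range(n)])   # [1, f, f^2, ...]
        corrected.append(f + h @ x)                # correct with current estimate
        P = P + Q                                  # predict (random-walk state)
        err = (y - f) - h @ x                      # innovation on the observed error
        s = h @ P @ h + r
        k_gain = P @ h / s
        x = x + k_gain * err                       # update coefficients
        P = P - np.outer(k_gain, h) @ P            # update covariance
    return np.array(corrected)
```

With a constant bias the correction converges quickly; higher polynomial orders can be tried exactly as the abstract suggests, at the cost of the instabilities it warns about.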

9. Developing group investigation-based book on numerical analysis to increase critical thinking student’s ability

Science.gov (United States)

Maharani, S.; Suprapto, E.

2018-03-01

Critical thinking is very important in mathematics; it helps students understand mathematical concepts more deeply. Critical thinking is also needed in numerical analysis, yet existing numerical analysis textbooks do not address it. This research aims to develop a group-investigation-based book on numerical analysis to increase students' critical thinking ability, and to establish whether the book is valid, practical, and effective. The research method is Research and Development (R&D); the subjects were 30 students of the Department of Mathematics Education at Universitas PGRI Madiun. The development model used is the 4-D model, modified to 3-D up to the development stage. The data used are descriptive and qualitative. The instruments used were validation sheets, tests, and questionnaires. The development results indicate that the book is valid, with a score of 84.25%. Students' responses to the book were very positive, so the book is categorised as practical, at 86.00%. Use of the book met the classical learning completeness criterion at 84.32%. Based on these results, it is concluded that the group-investigation-based book on numerical analysis is feasible because it meets the validity, practicality, and effectiveness criteria, so the book can be used by mathematics academics. Future research can examine group-investigation-based books in other subjects.

10. SKILLS-BASED ECLECTIC TECHNIQUES MATRIX FOR ELT MICROTEACHINGS

Directory of Open Access Journals (Sweden)

İskender Hakkı Sarıgöz

2016-10-01

Full Text Available Foreign language teaching undergoes constant change due to methodological improvements. This progress may be examined in two parts: the methods era and the post-methods era. It is not pragmatic today to propose a particular language teaching method and its techniques for all purposes. The holistic inflexibility of mid-century methods is long gone. In the present day, constructivist foreign language teaching trends attempt to see the learner as a whole person and an individual who may differ from the other students in many respects. At the same time, individual differences should not keep learners away from group harmony. For this reason, current teacher training programs require eclectic teaching matrices for unit design that take mixed-ability student groups into account. These matrices can be prepared in a multidimensional fashion, because there are many functional techniques in different methods, along with new techniques that instructors can freely create in accordance with the teaching aims. The hypothesis in this argument is that the collection of foreign language teaching techniques compiled in ELT microteachings for a particular group of learners has to be arranged eclectically in order to update the teaching process. Nevertheless, designing a teaching format of this sort is a demanding and highly criticized task. This study briefly discusses eclecticism in the language-skills-based methodological debate from the perspective of ELT teacher education.

11. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

Institute of Scientific and Technical Information of China (English)

纳瑟; 刘重庆

2002-01-01

This paper presents a method that incorporates Markov Random Fields (MRF), watershed segmentation and merging techniques for performing image segmentation and edge detection tasks. MRF is used to obtain an initial estimate of the regions in the image under process, where in the MRF model the gray level x at pixel location i in an image X depends on the gray levels of neighboring pixels. The process needs an initial segmented result. An initial segmentation is obtained using the K-means clustering technique and the minimum distance; the region process is then modelled by MRF to obtain an image containing different intensity regions. Starting from this, we calculate the gradient values of that image and then employ a watershed technique. The MRF method yields an image that has different intensity regions and carries all the edge and region information; the watershed algorithm then improves the segmentation result by superimposing a closed, accurate boundary on each region. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merge process based on averaged mean values is employed. The final segmentation and edge detection result is one closed boundary per actual region in the image.
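
The K-means initial-segmentation step can be sketched on gray levels alone. This is a minimal stand-in for the first stage of the pipeline; the MRF refinement and watershed steps are not reproduced here, and the deterministic seeding is an assumption of this sketch:

```python
import numpy as np

def kmeans_gray(image, k=3, iters=20):
    """1-D K-means on gray levels: assign each pixel to its nearest cluster
    center by minimum distance, then recompute centers until convergence."""
    pix = image.ravel().astype(float)
    centers = np.linspace(pix.min(), pix.max(), k)   # deterministic seeding
    for _ in range(iters):
        # minimum-distance assignment of every pixel to a gray-level center
        labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
        for j in range(k):                           # recompute cluster means
            if np.any(labels == j):
                centers[j] = pix[labels == j].mean()
    return labels.reshape(image.shape), centers
```

The resulting label map is exactly the kind of "image containing different intensity regions" that the MRF model and watershed stage would then refine.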

12. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

Science.gov (United States)

Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

2018-04-01

Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Examination of different drug-drug interactions can be done via the drug synergy score. This needs efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques individually do not provide significant accuracy in the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy are selected to develop the ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS) and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. adding more weight to the model with a higher prediction score) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
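
The biased weighted aggregation described above can be sketched as an accuracy-weighted average of model outputs. The models themselves are omitted; the predictions and accuracy scores below are hypothetical values, not the paper's data:

```python
import numpy as np

def weighted_ensemble(predictions, accuracies):
    """Biased weighted aggregation: each model's prediction is weighted by its
    normalised accuracy score, so better models contribute more."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                                   # normalise weights
    return np.asarray(predictions, dtype=float).T @ w # per-sample weighted mean

# Hypothetical synergy-score predictions from four models for two drug pairs
# (rows = models, columns = samples), with hypothetical validation accuracies.
preds = [[0.8, 0.1], [0.6, 0.2], [0.7, 0.3], [0.9, 0.0]]
acc = [0.90, 0.70, 0.80, 0.95]
ens = weighted_ensemble(preds, acc)
```

With equal accuracies this reduces to a plain average; unequal accuracies bias each ensemble score toward the stronger models, which is the mechanism the abstract exploits.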

13. Numerical simulation of nanofluids based on power-law fluids with flow and heat transfer

Science.gov (United States)

Li, Lin; Jiang, Yongyue; Chen, Aixin

2017-04-01

In this paper, we investigate the heat transfer of nanofluids based on power-law fluids and the movement of nanoparticles under the effect of thermophoresis in a rotating circular groove. The rotating velocity of the circular groove is constant, and the wall temperature is kept at zero at all times, which differs from the initial temperature of the nanofluids. The effects of thermophoresis and Brownian diffusion are considered in the temperature and concentration equations, and the thermal conductivity of the nanofluids is assumed to be a function of the nanoparticle concentration. Based on the numerical results, it can be found that nanofluids enhance heat transfer compared with the base fluids in a rotating circular groove. The enhancement of heat transfer increases as the power-law index of the base fluids decreases.

14. Extremum-Seeking Control and Applications A Numerical Optimization-Based Approach

CERN Document Server

Zhang, Chunlei

2012-01-01

Extremum seeking control tracks a varying maximum or minimum in a performance function such as a cost. It attempts to determine the optimal performance of a control system as it operates, thereby reducing downtime and the need for system analysis. Extremum Seeking Control and Applications is divided into two parts. In the first, the authors review existing analog optimization based extremum seeking control including gradient, perturbation and sliding mode based control designs. They then propose a novel numerical optimization based extremum seeking control based on optimization algorithms and state regulation. This control design is developed for simple linear time-invariant systems and then extended for a class of feedback linearizable nonlinear systems. The two main optimization algorithms – line search and trust region methods – are analyzed for robustness. Finite-time and asymptotic state regulators are put forward for linear and nonlinear systems respectively. Further design flexibility is achieved u...

15. The numerical evaluation on non-radiative multiphonon transition rate from different electronic bases

International Nuclear Information System (INIS)

Zhu Bangfen.

1985-10-01

A numerical calculation of the non-radiative multiphonon transition probability based on the adiabatic approximation (AA) and the static approximation (SA) has been accomplished in a model of two electronic levels coupled to one phonon mode. The numerical results indicate that the spectra based on the different approximations are generally different, apart from those vibrational levels which are far below the classical crossing point. For a large electron-phonon coupling constant, the calculated transition rates based on AA are more reliable; on the other hand, for small coupling the transition rates near or beyond the crossing region are quite different for the two approximations. In addition to the diagonal non-adiabatic potential, the mixing and splitting of the original static potential sheets are responsible for the deviation of the transition rates based on the different approximations. The relationship between the transition matrix element and the vibrational level shift, the Huang-Rhys factor, the separation of the electronic levels and the electron-phonon coupling is analysed and discussed. (author)

16. The numerical computation of seismic fragility of base-isolated Nuclear Power Plants buildings

International Nuclear Information System (INIS)

Perotti, Federico; Domaneschi, Marco; De Grandis, Silvia

2013-01-01

Highlights: • Seismic fragility of structural components in base-isolated NPPs is computed. • Dynamic integration, Response Surface, FORM and Monte Carlo simulation are adopted. • A refined approach for modeling the non-linear behavior of the isolators is proposed. • Beyond-design conditions are addressed. • The preliminary design of the isolated IRIS is the application of the procedure. -- Abstract: The research work described here is devoted to the development of a numerical procedure for the computation of seismic fragilities for equipment and structural components in Nuclear Power Plants; in particular, reference is made, in the present paper, to the case of isolated buildings. The proposed procedure for fragility computation makes use of the Response Surface Methodology to model the influence of the random variables on the dynamic response. To account for stochastic loading, the latter is computed by means of a simulation procedure. Given the Response Surface, the Monte Carlo method is used to compute the failure probability. The procedure is applied here to the preliminary design of the Nuclear Power Plant reactor building within the International Reactor Innovative and Secure international project; the building is equipped with a base isolation system based on the introduction of High Damping Rubber Bearing elements showing a markedly non-linear mechanical behavior. The fragility analysis is performed assuming that the isolation devices become the critical elements in terms of seismic risk and that, once base isolation is introduced, the dynamic behavior of the building can be captured by low-dimensional numerical models.

17. Wear Detection of Drill Bit by Image-based Technique

Science.gov (United States)

Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul

2018-03-01

Image processing for computer vision function plays an essential aspect in the manufacturing industries for the tool condition monitoring. This study proposes a dependable direct measurement method to measure the tool wear using image-based analysis. Segmentation and thresholding technique were used as the means to filter and convert the colour image to binary datasets. Then, the edge detection method was applied to characterize the edge of the drill bit. By using cross-correlation method, the edges of original and worn drill bits were correlated to each other. Cross-correlation graphs were able to detect the difference of the worn edge despite small difference between the graphs. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
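The cross-correlation comparison of edge profiles can be illustrated in one dimension. This is a hypothetical sketch: the study works on 2-D drill-bit images, which are here reduced to 1-D step profiles for clarity.

```python
import numpy as np

def edge_shift(profile_a, profile_b):
    """Cross-correlate two 1-D edge profiles (e.g. extracted drill-bit
    contours) and return the lag at which correlation is highest."""
    a = profile_a - np.mean(profile_a)   # remove the mean so the peak
    b = profile_b - np.mean(profile_b)   # reflects shape, not offset
    corr = np.correlate(a, b, mode="full")
    # numpy stores lag k at index k + len(b) - 1
    return int(np.argmax(corr)) - (len(b) - 1)

# a sharp edge, and the "worn" edge whose transition occurs two samples later
original = np.concatenate([np.zeros(20), np.ones(20)])
worn = np.concatenate([np.zeros(22), np.ones(18)])
lag = edge_shift(original, worn)   # negative lag: worn edge is displaced
```

Even though the two correlation curves look similar overall, the peak location exposes the small displacement of the worn edge, mirroring the paper's observation that small differences between the graphs still reveal wear.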

18. Underwater Time Service and Synchronization Based on Time Reversal Technique

Science.gov (United States)

Lu, Hao; Wang, Hai-bin; Aissa-El-Bey, Abdeldjalil; Pyndiah, Ramesh

2010-09-01

Real-time service and synchronization are very important to many underwater systems, but existing time service and synchronization methods do not work well because of the multi-path propagation and random phase fluctuation of signals in the ocean channel. The time reversal mirror (TRM) technique can realize energy concentration through self-matching of the ocean channel and has very good spatial and temporal focusing properties. Based on the TRM technique, we present the Time Reversal Mirror Real Time service and synchronization (TRMRT) method, which can bypass multi-path processing on the server side and reduce multi-path contamination on the client side, so TRMRT can improve the accuracy of time service. Furthermore, as an efficient and precise method of time service, TRMRT could be widely used in underwater exploration activities and in underwater navigation and positioning systems.
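The focusing property that time reversal relies on can be demonstrated with a toy discrete channel (illustrative numbers only): retransmitting the time-reversed channel impulse response makes the effective channel the autocorrelation of the channel, which concentrates energy at a single instant despite multipath.

```python
import numpy as np

# toy multipath channel: three arrivals with different delays and gains
h = np.zeros(50)
h[[5, 12, 30]] = [1.0, 0.6, 0.3]

# time reversal: sending h reversed back through the same channel gives
# the autocorrelation of h, which peaks sharply at a single sample
effective = np.convolve(h, h[::-1])
peak = int(np.argmax(effective))      # temporal focusing instant
```

The peak sits exactly at lag zero of the autocorrelation (index `len(h) - 1`), and its height is the total multipath energy, which is what allows a sharp timing reference despite the spread arrivals.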

19. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

Directory of Open Access Journals (Sweden)

José R. Casar

2011-09-01

Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling.
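A minimal sketch of the circular (range-based) weighted least squares idea: the quadratic range equations are linearized against a reference anchor and solved with a diagonal weight matrix. The anchor layout, the weights, and the simple per-equation weight indexing are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def wls_position(anchors, dists, weights):
    """Weighted least-squares circular multilateration in 2-D.
    Subtracting the first anchor's range equation from the others gives
    a linear system A p = b, solved as p = (A^T W A)^-1 A^T W b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    x1, y1 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x1 ** 2 - y1 ** 2)
    W = np.diag(np.asarray(weights, dtype=float)[1:])  # simplistic weighting
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(np.array(a) - true_pos)) for a in anchors]
est = wls_position(anchors, dists, np.ones(4))
```

With noise-free ranges the estimate recovers the true position exactly; with noisy ranges, larger weights on the more reliable measurements pull the solution toward them, which is the robustness gain the paper describes.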

20. Prediction of slope stability based on numerical modeling of stress–strain state of rocks

Science.gov (United States)

Kozhogulov, KCh; Nifadyev, VI; Usmanov, SF

2018-03-01

The paper presents the developed technique for estimating rock mass stability based on finite element modeling of the stress–strain state of rocks. The modeling results for a pit wall landslide, treated as a flow of particles along a sloped surface, are described.

1. Adaptive, Small-Rotation-Based, Corotational Technique for Analysis of 2D Nonlinear Elastic Frames

Directory of Open Access Journals (Sweden)

Jaroon Rungamornrat

2014-01-01

Full Text Available This paper presents an efficient and accurate numerical technique for the analysis of two-dimensional frames, accounting for both geometric nonlinearity and nonlinear elastic material behavior. An adaptive remeshing scheme is utilized to optimally discretize a structure into a set of elements where the total displacement can be decomposed into a rigid body movement and a part possessing small rotations. This, therefore, allows the force-deformation relationship for the latter part to be established based on small-rotation-based kinematics. The nonlinear elastic material model is integrated into this relation via the prescribed nonlinear moment-curvature relationship. The global force-displacement relation for each element can subsequently be derived using corotational formulations. A final system of nonlinear algebraic equations, along with its associated gradient matrix, for the whole structure is obtained by a standard assembly procedure and then solved numerically by the Newton-Raphson algorithm. A selected set of results is then reported to demonstrate and discuss the computational performance, including the accuracy and convergence, of the proposed technique.
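The final solve stage, a Newton-Raphson iteration on the assembled nonlinear equations with their gradient (tangent) matrix, can be sketched generically. The two-equation system below is a toy stand-in for an assembled frame, not the paper's formulation.

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson driver: repeatedly solve J(x) dx = -R(x)
    until the residual norm falls below the tolerance."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), r)
    return x

# toy 2-DOF "stiffness" system with a cubic hardening term (illustrative)
R = lambda u: np.array([2 * u[0] + u[0] ** 3 - 1.0, 3 * u[1] - u[0]])
J = lambda u: np.array([[2 + 3 * u[0] ** 2, 0.0], [-1.0, 3.0]])
u = newton_raphson(R, J, [0.0, 0.0])
```

Supplying the analytic gradient matrix, as the paper does for the assembled structure, is what gives the iteration its quadratic convergence near the solution.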

2. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

Science.gov (United States)

2015-01-01

This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account the tie-line reliability of interconnected systems. CBM is the imperative information used as a reference by load-serving entities (LSE) to estimate a certain margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) in various conditions. Eventually, the power-transfer-based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79, and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate CBM solution simultaneously for all areas.
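The Pareto-optimal front from which the heuristic ranking index picks a CBM solution can be extracted with a simple non-dominance filter; minimization on every objective is assumed, and the objective values below (e.g. LOLE of two areas per candidate CBM) are made up for illustration.

```python
def pareto_front(points):
    """Return the non-dominated points of a multiobjective candidate set.
    q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical two-objective values for five candidate CBM settings
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = pareto_front(pts)
```

The ranking index then only has to compare the surviving trade-off points, which is what lets the operator pick one CBM solution per area from the front.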

3. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

Directory of Open Access Journals (Sweden)

2015-01-01

Full Text Available This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account the tie-line reliability of interconnected systems. CBM is the imperative information used as a reference by load-serving entities (LSE) to estimate a certain margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) in various conditions. Eventually, the power-transfer-based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79, and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate CBM solution simultaneously for all areas.

4. The Behaviour of Fracture Growth in Sedimentary Rocks: A Numerical Study Based on Hydraulic Fracturing Processes

Directory of Open Access Journals (Sweden)

Lianchong Li

2016-03-01

Full Text Available To capture the hydraulic fractures in heterogeneous and layered rocks, a numerical code that can consider the coupled effects of fluid flow, damage, and the stress field in rocks is presented. Based on the characteristics of a typical thin, inter-bedded sedimentary reservoir in China, a series of simulations of hydraulic fracturing is performed. In the simulations, three factors, i.e., (1) the confining stresses, representing the effect of in situ stresses, (2) the strength of the interfaces, and (3) the material properties of the layers on either side of the interface, are crucial in fracturing across interfaces between two adjacent rock layers. Numerical results show that hydrofracture propagation within a layered sequence of sedimentary rocks is controlled by the in situ stresses, interface properties, and lithologies. The path of the hydraulic fracture is characterized by numerous deflections, branchings, and terminations. Four types of potential interaction, i.e., penetration, arrest, T-shaped branching, and offset, between a hydrofracture and an interface within the layered rocks are observed. Discontinuous composite fracture segments resulting from out-of-plane growth of fractures provide a less permeable path for fluids, gas, and oil than a continuous planar composite fracture, which is one of the sources of the high treating pressures and reduced fracture volume.

5. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

KAUST Repository

Gao, Kai

2015-06-05

The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. Therefore, we have proposed a numerical homogenization algorithm based on multiscale finite-element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that was similar to the rotated staggered-grid finite-difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity in which the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.

6. Vision based techniques for rotorcraft low altitude flight

Science.gov (United States)

Sridhar, Banavar; Suorsa, Ray; Smith, Philip

1991-01-01

An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image database for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the FOE; closer to the FOE, the error in range increases since the magnitude of the disparity gets smaller, resulting in a low SNR.

7. Numerical simulation of the heat extraction in EGS with thermal-hydraulic-mechanical coupling method based on discrete fractures model

International Nuclear Information System (INIS)

Sun, Zhi-xue; Zhang, Xu; Xu, Yi; Yao, Jun; Wang, Hao-xuan; Lv, Shuhuan; Sun, Zhi-lei; Huang, Yong; Cai, Ming-yu; Huang, Xiaoxue

2017-01-01

The Enhanced Geothermal System (EGS) creates an artificial geothermal reservoir by hydraulic fracturing, which allows heat transmission through the fractures by the circulating fluids as they extract heat from Hot Dry Rock (HDR). The technique involves a complex thermal–hydraulic–mechanical (THM) coupling process. A numerical approach is presented in this paper to simulate and analyze the heat extraction process in EGS. The reservoir is regarded as fractured porous media consisting of rock matrix blocks and discrete fracture networks. Based on thermal non-equilibrium theory, a mathematical model of the THM coupling process in fractured rock mass is used. The proposed model is validated by comparing it with several analytical solutions. An EGS case from the Cooper Basin, Australia, is simulated with a 2D stochastically generated fracture model to study the characteristics of fluid flow, heat transfer and mechanical response in the geothermal reservoir. The main parameters controlling the outlet temperature of the EGS are also studied by sensitivity analysis. The results show the significance of taking the THM coupling effects into account when investigating the efficiency and performance of EGS. - Highlights: • An EGS reservoir comprising discrete fracture networks and matrix rock is modeled. • A THM coupling model is proposed for simulating the heat extraction in EGS. • The numerical model is validated by comparison with several analytical solutions. • A case study is presented for understanding the main characteristics of EGS. • The THM coupling effects are shown to be significant factors in the EGS's running performance.

8. Numerical Simulation of the Interaction between Phosphorus and Sediment Based on the Modified Langmuir Equation

Directory of Open Access Journals (Sweden)

Pengjie Hu

2018-06-01

Full Text Available Phosphorus is the primary factor that limits eutrophication of surface waters in aquatic environments. Sediment particles have a strong affinity for phosphorus due to their high specific surface areas and surface active sites. In this paper, a numerical model containing hydrodynamics, sediment, and phosphorus modules, based on the modified Langmuir equation, is established in which the processes of adsorption and desorption are considered. Through statistical analysis of the physical experiment data, fitting formulas are obtained for two important parameters in the Langmuir equation: the adsorption coefficient, ka, and the ratio k between the adsorption coefficient and the desorption coefficient. In order to simulate the experimental flume and obtain a constant and uniform water flow, a periodic numerical flume is built by adding a streamwise body force, Fx. The phosphorus adsorbed by sediment and the phosphorus dissolved in the water are separately added to the advection-diffusion equation as source terms to simulate the interaction between them. The result of the numerical model matches that of the physical experiment well and can thus provide the basis for further analysis. The numerical model is then applied to new, related cases and conclusions are drawn from the subsequent analysis: the concentration of dissolved phosphorus proves to be unevenly distributed along the depth, and the maximum value appears at approximately 3/4 of the water depth, because both the high velocity in the top layer and the high turbulence intensity in the bottom layer promote sediment adsorption of phosphorus.
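The adsorption-desorption balance behind the Langmuir formulation can be sketched with a simple explicit time integration. The rate constants and capacity below are illustrative values, not the fitted ka and k from the paper.

```python
# Langmuir-type adsorption-desorption kinetics:
#   dS/dt = ka * C * (Smax - S) - kd * S
# with C held constant (large water volume), integrated with explicit Euler.
ka, kd, Smax, C = 0.5, 0.1, 1.0, 2.0   # illustrative constants
S, dt = 0.0, 0.01                      # start with clean sediment
for _ in range(5000):
    S += dt * (ka * C * (Smax - S) - kd * S)

# analytic equilibrium: S_eq = Smax * k*C / (1 + k*C) with k = ka/kd,
# equivalently Smax * ka*C / (ka*C + kd)
S_eq = ka * C * Smax / (ka * C + kd)
```

The integration settles onto the Langmuir isotherm value, which is the equilibrium that the source terms in the advection-diffusion equation drive the dissolved and adsorbed phosphorus toward.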

9. Combination Base64 Algorithm and EOF Technique for Steganography

Science.gov (United States)

Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Saleh Ahmar, Ansari; Siregar, Dodi; Putera Utama Siahaan, Andysah; Faisal, Ilham; Rahman, Sayuti; Suita, Diana; Zamsuri, Ahmad; Abdullah, Dahlan; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Sriadhi, S.

2018-04-01

The steganography process combines mathematics and computer science. Steganography consists of a set of methods and techniques for embedding data into another medium so that the contents are unreadable to anyone who does not have the authority to read them. The main objective of using the Base64 method is to convert any file in order to achieve privacy. This paper discusses a steganography and encoding method using Base64, a set of encoding schemes that convert binary data into a series of ASCII codes; the EoF (End of File) technique is then used to embed the text encoded by Base64. As an example of the mechanism, a file is used to represent the text, and using the two methods together increases the security level for protecting the data. This research aims to secure many types of files in a particular medium with good security, without damaging the stored files or the cover media used.
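A minimal sketch of combining Base64 with the EoF technique: the Base64-encoded secret is appended after the cover file's last byte, so most viewers still render the cover unchanged. The marker string and cover bytes are hypothetical stand-ins, not values from the paper.

```python
import base64

MARKER = b"--STEGO-EOF--"   # hypothetical separator marking the payload start

def embed(cover: bytes, secret: str) -> bytes:
    """Append the Base64-encoded secret after the cover file's EOF."""
    return cover + MARKER + base64.b64encode(secret.encode("utf-8"))

def extract(stego: bytes) -> str:
    """Split at the marker and decode the Base64 payload."""
    payload = stego.rsplit(MARKER, 1)[1]
    return base64.b64decode(payload).decode("utf-8")

cover = b"\x89PNG...imagedata..."   # stands in for a real image file's bytes
stego = embed(cover, "hidden message")
recovered = extract(stego)
```

Because nothing before the marker is touched, the cover file stays intact, matching the paper's goal of not damaging the stored files or the cover media.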

10. Free Software Development. 3. Numerical Description of Soft Acid with Soft Base Titration

OpenAIRE

Lorentz JÄNTSCHI; Horea Iustin NAŞCU

2002-01-01

The analytical methods of qualitative and quantitative determination of ions in solution are very amenable to automation. The present work focuses on modeling the process of titration and presents a numerical simulation of acid-base titration. A PHP program was built that computes all iterations of the titration process, solving a third-degree equation to find the value of the pH; it is available through the HTTP protocol at the address: http://vl.academicdirect.org/molecular_dynamics/ab_titra...
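In the spirit of the program described, the third-degree equation in [H+] arises from the charge balance of the titrated solution. The sketch below (Python rather than PHP, with illustrative concentrations) solves it by log-scale bisection instead of a closed-form cubic formula.

```python
import math

def titration_ph(Ca, Va, Cb, Vb, Ka, Kw=1e-14):
    """pH of a weak acid (conc. Ca, volume Va, constant Ka) titrated with
    a strong base (conc. Cb, added volume Vb).  The charge balance
        na + h = ca*Ka/(Ka + h) + Kw/h
    expands to a third-degree polynomial in h; f(h) is monotone in h,
    so bisection on a log scale finds the single physical root."""
    na = Cb * Vb / (Va + Vb)          # diluted Na+ from the base
    ca = Ca * Va / (Va + Vb)          # diluted total acid
    f = lambda h: na + h - ca * Ka / (Ka + h) - Kw / h
    lo, hi = 1e-14, 1.0               # bracket for [H+]
    for _ in range(100):
        mid = math.sqrt(lo * hi)      # geometric midpoint
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return -math.log10(math.sqrt(lo * hi))

# 50 mL of 0.1 M weak acid (Ka = 1.8e-5) titrated with 0.1 M strong base
half = titration_ph(0.1, 50, 0.1, 25, 1.8e-5)   # half-equivalence point
equiv = titration_ph(0.1, 50, 0.1, 50, 1.8e-5)  # equivalence point
```

At half-equivalence the computed pH sits essentially at pKa, and at equivalence it lies above 7, as expected for a weak acid titrated with a strong base.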

11. Numerical simulation and impact assessment of a groundwater pollution based on MODFLOW

International Nuclear Information System (INIS)

Liu Dongxu; Si Gaohua; Zheng Junfang; Yu Jing; Liu Yong; Chen Jianjie; Ma Jinzhu

2013-01-01

Based on MODFLOW, SRTM3 DEM data and GIS tools, a saturated-zone groundwater flow and radionuclide transport numerical model of a research area was developed to evaluate the migration trend and environmental impact. The results showed that 3H was transported quickly with the groundwater, with a pulse concentration that cannot be reduced to an acceptable level within a short time; this may cause groundwater pollution in the downstream region. However, 90Sr was transported slowly with the groundwater and may only cause a polluted area of about 200 m around the source. (authors)

12. Numerical methods and inversion algorithms in reservoir simulation based on front tracking

Energy Technology Data Exchange (ETDEWEB)

Haugse, Vidar

1999-04-01

This thesis uses front tracking to analyse laboratory experiments on multiphase flow in porous media. New methods for parameter estimation for two- and three-phase relative permeability experiments have been developed. Upscaling of heterogeneous and stochastic porous media is analysed. Numerical methods based on front tracking are developed and analysed; such methods are efficient for problems involving steep changes in the physical quantities. Multi-dimensional problems are solved by combining front tracking with dimensional splitting. A method for adaptive grid refinement is also developed.

13. Response of multiferroic composites inferred from a fast-Fourier-transform-based numerical scheme

International Nuclear Information System (INIS)

Brenner, Renald; Bravo-Castillero, Julián

2010-01-01

The effective response and the local fields within periodic magneto-electric multiferroic composites are investigated by means of a numerical scheme based on fast Fourier transforms. This computational framework relies on the iterative resolution of coupled series expansions for the magnetic, electric and strain fields. By using an augmented Lagrangian formulation, a simple and robust procedure which makes use of the uncoupled Green operators for the elastic, electrostatics and magnetostatics problems is proposed. Its accuracy is assessed in the cases of laminated and fibrous two-phase composites for which analytical solutions exist

14. Research on numerical control system based on S3C2410 and MCX314AL

Science.gov (United States)

Ren, Qiang; Jiang, Tingbiao

2008-10-01

With the rapid development of micro-computer technology, embedded systems, CNC technology and integrated circuits, a numerical control system with powerful functions can be realized with a few high-speed CPU chips and RISC (Reduced Instruction Set Computing) chips of small size and strong stability. In addition, real-time operating systems also make embedded implementations attainable. Developing an NC system based on embedded technology can overcome some shortcomings of common PC-based CNC systems, such as the waste of resources, low control precision, low frequency and low integration. This paper discusses a hardware platform for an ENC (Embedded Numerical Control) system based on the embedded processor chip ARM (Advanced RISC Machines) S3C2410 and the DSP (Digital Signal Processor) MCX314AL, and introduces the process of developing the ENC system software. Finally, the MCX314AL driver is written for the embedded Linux operating system. Embedded Linux handles multitasking well and satisfies the real-time and reliability requirements of motion control. With embedded technology, the NC system makes the best use of resources and remains compact. It provides a wealth of functions and superior performance at a lower cost. ENC is certain to be the direction of future development.

15. Full-duplex MIMO system based on antenna cancellation technique

DEFF Research Database (Denmark)

Foroozanfard, Ehsan; Franek, Ondrej; Tatomirescu, Alexandru

2014-01-01

The performance of an antenna cancellation technique for a multiple-input multiple-output (MIMO) full-duplex system that is based on null-steering beamforming and antenna polarization diversity is investigated. A practical implementation of a symmetric antenna topology comprising three dual-polarized patch antennas operating at 2.4 GHz is described. The measurement results show an average of 60 dB self-interference cancellation over a 200 MHz bandwidth. Moreover, a decoupling level of up to 22 dB is achieved for MIMO multiplexing using antenna polarization diversity. The performance evaluation...

16. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

Directory of Open Access Journals (Sweden)

ISLAM, M. R.

2009-02-01

Full Text Available An energy-efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. A selected number of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected to a Network Capable Application Processor (NCAP). The energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and number of selected WTIMs. Optimized constellation parameters are used for evaluating the derived parameters. The results show that the selected MISO structure outperforms the unselected MISO structure and performs more energy-efficiently than a SISO structure beyond a certain distance.

17. Nitrous oxide-based techniques versus nitrous oxide-free techniques for general anaesthesia.

Science.gov (United States)

Sun, Rao; Jia, Wen Qin; Zhang, Peng; Yang, KeHu; Tian, Jin Hui; Ma, Bin; Liu, Yali; Jia, Run H; Luo, Xiao F; Kuriyama, Akira

2015-11-06

Nitrous oxide has been used for over 160 years for the induction and maintenance of general anaesthesia. It has been used as a sole agent but is most often employed as part of a technique using other anaesthetic gases, intravenous agents, or both. Its low tissue solubility (and therefore rapid kinetics), low cost, and low rate of cardiorespiratory complications have made nitrous oxide by far the most commonly used general anaesthetic. The accumulating evidence regarding adverse effects of nitrous oxide administration has led many anaesthetists to question its continued routine use in a variety of operating room settings. Adverse events may result from both the biological actions of nitrous oxide and the fact that to deliver an effective dose, nitrous oxide, which is a relatively weak anaesthetic agent, needs to be given in high concentrations that restrict oxygen delivery (for example, a common mixture is 30% oxygen with 70% nitrous oxide). As well as the risk of low blood oxygen levels, concerns have also been raised regarding the risk of compromising the immune system, impaired cognition, postoperative cardiovascular complications, bowel obstruction from distention, and possible respiratory compromise. To determine if nitrous oxide-based anaesthesia results in similar outcomes to nitrous oxide-free anaesthesia in adults undergoing surgery. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014 Issue 10); MEDLINE (1966 to 17 October 2014); EMBASE (1974 to 17 October 2014); and ISI Web of Science (1974 to 17 October 2014). We also searched the reference lists of relevant articles, conference proceedings, and ongoing trials up to 17 October 2014 on specific websites (http://clinicaltrials.gov/, http://controlled-trials.com/, and http://www.centerwatch.com). We included randomized controlled trials (RCTs) comparing general anaesthesia where nitrous oxide was part of the anaesthetic technique used for the induction or maintenance of general

18. A theoretical study using the multiphase numerical simulation technique for effective use of H2 as blast furnaces fuel

Directory of Open Access Journals (Sweden)

2017-07-01

Full Text Available We present a numerical simulation procedure for analyzing injections of hydrogen, oxygen, and carbon dioxide gases mixed with pulverized coals within the tuyeres of blast furnaces. Effective use of H2-rich gas in the steelmaking blast furnace is highly attractive, considering the possibility of increasing productivity and decreasing the specific emissions of carbon dioxide, making the process less carbon-intensive. However, mixed gas and coal injection is a complex technology, since significant changes in the inner temperature and gas flow patterns are expected, in addition to their effects on the chemical reactions and heat exchanges. Focusing on the evaluation of the inner furnace status under such complex operation, a comprehensive mathematical model has been developed using the multi-interaction multiple-phase theory. The BF, considered as a multiphase reactor, treats the lump solids (sinter, small coke, pellets, granular coke, and iron ores), gas, liquids (metal and slag), and pulverized coal phases. The governing conservation equations are formulated for momentum, mass, chemical species, and energy, and simultaneously discretized using the finite volume method. We verified the model against a reference operational condition using pulverized coal injection of 215 kg per ton of hot metal (kg thm−1). Combined injections of gaseous fuels with varying concentrations of H2, O2 and CO2 are then simulated with 220 kg thm−1 and 250 kg thm−1 coal injection. Theoretical analysis showed that stable operating conditions could be achieved with a productivity increase of 60%. Finally, we demonstrated that the net carbon utilization per ton of hot metal decreased by 12%.
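The conservation equations in the abstract above are discretized with the finite volume method. As a toy illustration of that discretization (not the multiphase blast-furnace model itself), the sketch below solves a steady one-dimensional convection-diffusion equation with finite volumes and central differencing; the flux, diffusivity, and boundary values are invented:

```python
import numpy as np

# Steady 1-D convection-diffusion, d(rho*u*phi)/dx = d(Gamma*dphi/dx)/dx,
# discretized with the finite-volume method and central differencing,
# with phi(0)=0 and phi(L)=1.
n, L = 50, 1.0
F, gamma = 1.0, 0.1              # convective flux rho*u and diffusivity (assumed)
phi_A, phi_B = 0.0, 1.0
dx = L / n
D = gamma / dx                   # diffusive conductance per face

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    aW = D + 0.5 * F if i > 0 else 0.0
    aE = D - 0.5 * F if i < n - 1 else 0.0
    sP = sU = 0.0
    if i == 0:                   # west boundary half-cell: conductance 2D, known phi_A
        sP, sU = -(2 * D + F), (2 * D + F) * phi_A
    if i == n - 1:               # east boundary half-cell: conductance 2D, known phi_B
        sP, sU = -(2 * D - F), (2 * D - F) * phi_B
    A[i, i] = aW + aE - sP
    if i > 0:
        A[i, i - 1] = -aW
    if i < n - 1:
        A[i, i + 1] = -aE
    b[i] = sU

phi = np.linalg.solve(A, b)
x = (np.arange(n) + 0.5) * dx    # cell centres
exact = (np.exp(F * x / gamma) - 1) / (np.exp(F * L / gamma) - 1)
max_err = np.abs(phi - exact).max()
```

With the cell Peclet number at 0.2, central differencing stays stable and the finite-volume solution tracks the exact exponential profile closely.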

19. Failure Mechanism of Rock Bridge Based on Acoustic Emission Technique

Directory of Open Access Journals (Sweden)

Guoqing Chen

2015-01-01

Full Text Available The acoustic emission (AE) technique is widely used in various fields as a reliable nondestructive examination technology. Two experimental tests were carried out in a rock mechanics laboratory: (1) small-scale direct shear tests of rock bridges with different lengths and (2) a large-scale landslide model with a locked section. The relationship between AE event count and record time was analyzed during the tests. AE source location was performed and compared with the actual failure mode. In both the small-scale tests and the large-scale landslide model test, the AE technique accurately located the AE source points, reflecting the generation and expansion of internal cracks in the rock samples. The large-scale landslide model test with a locked section showed that the rock bridge in a rocky slope has typical brittle failure behavior. The two tests based on the AE technique reveal the rock failure mechanism in rocky slopes and clarify the cause of high-speed, long-distance sliding of rocky slopes.
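The AE source location mentioned above is typically done from arrival times at several sensors. A minimal sketch (not the authors' algorithm; the sensor layout, wave speed, and source position are invented) locates a 2-D source by grid search over time-difference-of-arrival residuals:

```python
import numpy as np

# Toy 2-D acoustic-emission source location: minimize the misfit between
# observed and predicted arrival-time differences over a grid of
# candidate source positions.  All numbers are illustrative.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
v = 3000.0                                    # assumed wave speed, m/s
true_src = np.array([0.3, 0.7])

t_arr = np.linalg.norm(sensors - true_src, axis=1) / v   # synthetic arrivals
dt_obs = t_arr - t_arr[0]                     # differences relative to sensor 0

xs = np.linspace(0.0, 1.0, 201)               # 5 mm grid
best, best_res = None, np.inf
for gx in xs:
    for gy in xs:
        d = np.hypot(sensors[:, 0] - gx, sensors[:, 1] - gy) / v
        res = np.sum((d - d[0] - dt_obs) ** 2)
        if res < best_res:
            best_res, best = res, (gx, gy)
# `best` recovers the true source to within the grid spacing
```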

20. Radiation synthesized protein-based nanoparticles: A technique overview

International Nuclear Information System (INIS)

Varca, Gustavo H.C.; Perossi, Gabriela G.; Grasselli, Mariano; Lugão, Ademar B.

2014-01-01

Seeking alternative routes for protein engineering, a novel technique – radiation-induced synthesis of protein nanoparticles – to achieve size-controlled particles with preserved bioactivity has recently been reported. This work aimed to evaluate different process conditions to optimize and provide an overview of the technique using γ-irradiation. Papain was used as a model protease, and the samples were irradiated in a gamma-cell irradiator in phosphate buffer (pH = 7.0) containing ethanol (0–35%). The dose effect was evaluated by exposure to distinct γ-irradiation doses (2.5, 5, 7.5 and 10 kGy), and scale-up experiments involving distinct protein concentrations (12.5–50 mg mL −1 ) were also performed. Characterization involved size monitoring using dynamic light scattering. Bityrosine detection was performed using fluorescence measurements in order to provide experimental evidence of the mechanism involved. The best dose effects with regard to size were achieved at 10 kGy, and no relevant changes were observed as a function of papain concentration, highlighting a very broad operational concentration range. Bityrosine changes were identified in the samples as a function of the process, confirming that such linkages play an important role in nanoparticle formation. - Highlights: • Synthesis of protein-based nanoparticles by γ-irradiation. • Optimization of the technique. • Overview of the mechanism involved in nanoparticle formation. • Engineered papain nanoparticles for biomedical applications

1. Numerical investigations of the WASA pellet target operation and proposal of a new technique for the PANDA pellet target

Energy Technology Data Exchange (ETDEWEB)

Varentsov, Victor L., E-mail: v.varentsov@gsi.de [Institute for Theoretical and Experimental Physics, B. Cheremushkinskaya 25, 117218 Moscow (Russian Federation)

2011-08-01

The conventional nozzle vibration technique for hydrogen micro-droplet generation, which is intended to be used for internal pellet target production for the future PANDA experiment at the international FAIR facility in Darmstadt, is described. The operation of this technique has been investigated by means of detailed computer simulations. Results of calculations for the geometry and operating conditions of the WASA pellet generator are presented and discussed. We have found that for every given pellet size there is a set of operating parameters for which the efficiency of the WASA hydrogen pellet target operation is considerably increased. Moreover, the results of the presented computer simulations clearly show that the future PANDA pellet target setup can be realized with much smaller (and cheaper) vacuum pumps than those used at present in the WASA hydrogen pellet target. To qualitatively improve the PANDA hydrogen pellet target performance, we have proposed the use of the novel flow-focusing method of Ganan-Calvo and Barreto (1997, 1999) combined with a conventional vacuum injection capillary. The possibilities of this approach for PANDA pellet target production have also been explored by means of computer simulations. The results of these simulations show that this new approach looks very promising: in particular, there is no need to use expensive ultra-pure hydrogen to prevent nozzle clogging or freezing due to impurities, and it allows simple, fast, and smooth adjustment of pellet sizes over a wide range in accordance with the requirements of different experiments at the PANDA detector. In this article we also propose and describe the idea of a new technique to break up a liquid microjet into microdroplets using liquid jet evaporation under pulsed laser beam irradiation. This technique should be checked experimentally before it can be used in the design of the future PANDA pellet target setup.
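As background to the nozzle-vibration droplet generation described above, the classical Rayleigh capillary-instability result puts the most unstable breakup wavelength near 4.51 jet diameters, which sets the optimum drive frequency. A back-of-envelope sketch (the jet speed and nozzle size are assumed round numbers, not WASA parameters):

```python
import numpy as np

# Rayleigh breakup estimate for a vibrated liquid jet: the most unstable
# wavelength is about 4.51 jet diameters, so f_opt = v_jet / (4.51 d).
# Jet diameter and speed below are illustrative values only.
d = 12e-6                  # jet diameter, m (assumed)
v_jet = 60.0               # jet velocity, m/s (assumed)

lam_opt = 4.51 * d         # most unstable wavelength
f_opt = v_jet / lam_opt    # optimum excitation frequency, Hz

# mass conservation: one wavelength of jet collapses into one droplet,
# (pi/4) d^2 lam = (pi/6) d_drop^3
d_drop = (1.5 * d**2 * lam_opt) ** (1.0 / 3.0)
```

For these assumed numbers the drive frequency lands near 1.1 MHz, and the droplet diameter comes out at about 1.89 jet diameters, the familiar Rayleigh-breakup ratio.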

2. Numerical investigations of the WASA pellet target operation and proposal of a new technique for the PANDA pellet target

International Nuclear Information System (INIS)

Varentsov, Victor L.

2011-01-01

The conventional nozzle vibration technique for hydrogen micro-droplet generation, which is intended to be used for internal pellet target production for the future PANDA experiment at the international FAIR facility in Darmstadt, is described. The operation of this technique has been investigated by means of detailed computer simulations. Results of calculations for the geometry and operating conditions of the WASA pellet generator are presented and discussed. We have found that for every given pellet size there is a set of operating parameters for which the efficiency of the WASA hydrogen pellet target operation is considerably increased. Moreover, the results of the presented computer simulations clearly show that the future PANDA pellet target setup can be realized with much smaller (and cheaper) vacuum pumps than those used at present in the WASA hydrogen pellet target. To qualitatively improve the PANDA hydrogen pellet target performance, we have proposed the use of the novel flow-focusing method of Ganan-Calvo and Barreto (1997, 1999) combined with a conventional vacuum injection capillary. The possibilities of this approach for PANDA pellet target production have also been explored by means of computer simulations. The results of these simulations show that this new approach looks very promising: in particular, there is no need to use expensive ultra-pure hydrogen to prevent nozzle clogging or freezing due to impurities, and it allows simple, fast, and smooth adjustment of pellet sizes over a wide range in accordance with the requirements of different experiments at the PANDA detector. In this article we also propose and describe the idea of a new technique to break up a liquid microjet into microdroplets using liquid jet evaporation under pulsed laser beam irradiation. This technique should be checked experimentally before it can be used in the design of the future PANDA pellet target setup.

3. Numerical model of the influence function of deformable mirrors based on Bessel Fourier orthogonal functions

International Nuclear Information System (INIS)

Li Shun; Zhang Sijiong

2014-01-01

A numerical model is presented to simulate the influence function of deformable mirror actuators. The model is formed from Bessel Fourier orthogonal functions, which are constituted of Bessel orthogonal functions and a Fourier basis. A detailed comparison is presented between the new Bessel Fourier model, the Zernike model, the Gaussian influence function, and the modified Gaussian influence function. Numerical experiments indicate that the new model is easy to use and more accurate than the other numerical models. The new model can be used for describing deformable mirror performance and for numerical simulations of adaptive optics systems.
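As a sketch of the idea (not the authors' full model), the axisymmetric slice of a Bessel-Fourier basis is an order-zero Bessel series. The fragment below fits a Gaussian influence function on the unit disk with such a basis; the Gaussian width and basis size are arbitrary choices:

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Fit an assumed Gaussian-shaped, axisymmetric influence function on the
# unit disk with an order-zero Bessel basis (the m = 0 slice of a
# Bessel-Fourier model).  The zeros of J0 make each basis function vanish
# at the disk edge, as a clamped mirror surface does.
r = np.linspace(0.0, 1.0, 400)
target = np.exp(-(r / 0.3) ** 2)              # assumed Gaussian influence function

alphas = jn_zeros(0, 15)                      # first 15 zeros of J0
basis = j0(np.outer(r, alphas))               # basis[i, k] = J0(alpha_k * r_i)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
fit = basis @ coef
rms = np.sqrt(np.mean((fit - target) ** 2))   # quality of the 15-term series
```

Because the Gaussian is effectively zero at the disk edge, the series converges quickly and a modest number of terms reproduces the target closely.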

4. Structural level characterization of base oils using advanced analytical techniques

KAUST Repository

2015-05-21

Base oils, blended for finished lubricant formulations, are classified by the American Petroleum Institute into five groups, viz., groups I-V. Groups I-III consist of petroleum-based hydrocarbons, whereas groups IV and V are made of synthetic polymers. In the present study, five base oil samples belonging to groups I and III were extensively characterized using high performance liquid chromatography (HPLC), comprehensive two-dimensional gas chromatography (GC×GC), and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) equipped with atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) sources. First, the capabilities and limitations of each analytical technique were evaluated, and then the information obtained was combined to reveal compositional details of the base oil samples studied. HPLC showed the overwhelming predominance of saturated over aromatic compounds in all five base oils. A similar trend was further corroborated using GC×GC, which yielded semiquantitative information on the compound classes present in the samples and provided further details on the carbon number distributions within these classes. In addition to the chromatography methods, FT-ICR MS supplemented the compositional information on the base oil samples by resolving the aromatic compounds into alkyl- and naphtheno-substituted families. APCI proved more effective than APPI for the ionization of the highly saturated base oil components. FT-ICR MS also revealed the presence of saturated and aromatic sulfur species in all base oil samples. The results presented herein offer a unique perspective on the detailed molecular structure of base oils typically used to formulate lubricants. © 2015 American Chemical Society.

5. Characterization and Prediction of the Volume Flow Rate Aerating a Cross Ventilated Building by Means of Experimental Techniques and Numerical Approaches

DEFF Research Database (Denmark)

Larsen, Tine Steen; Nikolopoulos, N.; Nikolopoulos, A.

2011-01-01

... anemometers across the openings, whilst the numerical methodology is based on the time-dependent solution of the governing Navier-Stokes equations. The experimental data are compared with the corresponding numerical results, revealing the unsteady character of the flow field, especially at large incidence angles... Furthermore, additional information regarding the flow field near the opening edges, not easily extracted by experimental methods, provides in-depth insight into the main characteristics of the flow field both at the openings and inside the building. Finally, a new methodology for the approximation...

6. Numerical Simulation of One-Dimensional Fractional Nonsteady Heat Transfer Model Based on the Second Kind Chebyshev Wavelet

Directory of Open Access Journals (Sweden)

Fuqiang Zhao

2017-01-01

Full Text Available In the current study, a numerical technique for solving the one-dimensional fractional nonsteady heat transfer model is presented. We construct the second-kind Chebyshev wavelet and then derive the operational matrix of fractional-order integration. The operational matrix is utilized to reduce the original problem to a system of linear algebraic equations, and the numerical solutions obtained by our method are compared with those obtained by the CAS wavelet method. Lastly, illustrative examples are included to demonstrate the validity and applicability of the technique.
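The second-kind Chebyshev polynomials underlying the wavelet basis satisfy the recurrence U_{m+1}(x) = 2x U_m(x) − U_{m−1}(x) and are orthogonal under the weight √(1 − x²), with diagonal inner product π/2. A quick numerical check of both properties (the quadrature size is an arbitrary choice):

```python
import numpy as np

# Verify the three-term recurrence and the weighted orthogonality of the
# second-kind Chebyshev polynomials U_m by Gauss-Legendre quadrature.
def cheb_U(m, x):
    """Evaluate U_m(x) via U_{m+1} = 2x U_m - U_{m-1}."""
    Um2, Um1 = np.ones_like(x), 2 * x
    if m == 0:
        return Um2
    if m == 1:
        return Um1
    for _ in range(m - 1):
        Um2, Um1 = Um1, 2 * x * Um1 - Um2
    return Um1

x, w_gl = np.polynomial.legendre.leggauss(200)   # quadrature nodes/weights
weight = np.sqrt(1.0 - x ** 2)                   # Chebyshev 2nd-kind weight
G = np.array([[np.sum(w_gl * weight * cheb_U(i, x) * cheb_U(j, x))
               for j in range(5)] for i in range(5)])
# G should be close to (pi/2) * identity
```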

7. Numerical analysis of liquid metal MHD flows through circular pipes based on a fully developed modeling

International Nuclear Information System (INIS)

Zhang, Xiujie; Pan, Chuanjie; Xu, Zengyu

2013-01-01

Highlights: ► A 2D MHD code based on a fully developed flow model is developed and validated against Samad's analytical results. ► Results for the MHD effect on liquid metal flow through circular pipes at high Hartmann numbers are given. ► An M-type velocity profile is observed for MHD circular pipe flow at high wall conductance ratios. ► Non-uniform wall electrical conductivity leads to a high-velocity jet in the Roberts layers. -- Abstract: Magnetohydrodynamic (MHD) laminar flows through circular pipes are studied in this paper by numerical simulation for Hartmann numbers from 18 to 10000. The code is based on a fully developed flow model and is validated against Samad's analytical solution and Chang's asymptotic results. After code validation, the numerical simulation is extended to high Hartmann numbers for MHD circular pipe flows with conducting walls, and numerical results such as velocity distributions and MHD pressure gradients are obtained. A typical M-type velocity profile is observed, but without as large a velocity jet as in MHD rectangular duct flows, even at high Hartmann numbers and large wall conductance ratios. The overspeed region in the Roberts layers becomes smaller as the Hartmann number increases. When the Hartmann number is fixed and the wall conductance ratio is varied, the dimensionless velocity profiles pass through a common point, in agreement with Samad's results; the locus of the maximum of the velocity jet is unchanged, and the wall conductance ratio affects only its magnitude. When the Roberts walls are treated as insulating and the Hartmann walls as conducting for circular pipe MHD flows, a large velocity jet appears, as in MHD rectangular duct flows of Hunt's case 2
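For orientation (this is the classical planar-channel solution, not the circular-pipe case computed in the paper), the Hartmann velocity profile shows the flat core and thin Hartmann layers that such simulations must resolve at high Ha:

```python
import numpy as np

# Classical Hartmann profile for fully developed flow between two
# insulating walls at y = -1 and y = +1, normalized to unit centreline
# velocity.  The Hartmann layers thin as ~1/Ha, flattening the core.
def hartmann_profile(y, Ha):
    return (np.cosh(Ha) - np.cosh(Ha * y)) / (np.cosh(Ha) - 1.0)

y = np.linspace(-1.0, 1.0, 2001)
u_low = hartmann_profile(y, 1.0)      # nearly parabolic at low Ha
u_high = hartmann_profile(y, 100.0)   # flat core, thin boundary layers

core = np.abs(y) < 0.5                # centre half of the channel
flatness = u_high[core].max() - u_high[core].min()
```

At Ha = 100 the core is uniform to many decimal places, so all the shear is confined to layers of thickness about 1/Ha at the walls, which is exactly what makes high-Hartmann-number simulations demanding.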

8. Numerical study of base pressure characteristic curve for a four-engine clustered nozzle configuration

Science.gov (United States)

Wang, Ten-See

1993-07-01

Excessive base heating has been a problem for many launch vehicles. For certain designs, such as the direct dump of turbine exhaust in the nozzle section and at the nozzle lip of the Space Transportation Systems Engine (STME), the potential burning of the turbine exhaust in the base region has caused tremendous concern. Two conventional approaches have been considered for predicting the base environment: (1) the empirical approach, and (2) the experimental approach. The empirical approach uses a combination of data correlations and semi-theoretical calculations. It works best for linear problems with simple physics and geometry. However, it is highly suspect when complex geometry and flow physics are involved, especially when the subject falls outside the historical database. The experimental approach is often used to establish a database for engineering analysis. However, it is qualitative at best for base flow problems. Other criticisms include the inability to simulate the forebody boundary layer correctly, the interference effect of tunnel walls, and the inability to scale all pertinent parameters. Furthermore, there is a contention that information extrapolated from subscale tests with combustion is not conservative. One potential alternative to the conventional methods is computational fluid dynamics (CFD), which has none of the above restrictions and is becoming more feasible due to maturing algorithms and advancing computer technology. It provides more details of the flowfield and is limited only by computer resources. However, it has its share of criticisms as a predictive tool for the base environment. One major concern is that CFD has not been extensively tested for base flow problems. It is therefore imperative that CFD be assessed and benchmarked satisfactorily for base flows. In this study, the turbulent base flowfield of an experimental investigation of a four-engine clustered nozzle is numerically benchmarked using a pressure-based CFD method. Since the cold air was the

9. On HTML and XML based web design and implementation techniques

International Nuclear Information System (INIS)

Bezboruah, B.; Kalita, M.

2006-05-01

Web implementation is truly a multidisciplinary field with influences from programming, the choice of scripting languages, graphic design, user interface design, and database design. The challenge for a Web designer/implementer is the ability to create an attractive and informative Web site. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web site's objective. In this article we discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We also discuss the advantages and disadvantages of HTML relative to its successor XML in designing and implementing a Web site. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML, to carry out the present investigation. (author)

10. Efficient Identification Using a Prime-Feature-Based Technique

DEFF Research Database (Denmark)

Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

2011-01-01

Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable, and potentially unauthorized personnel may operate the system on behalf of the operator... A fingerprint identification system, implemented on PC/104-based real-time systems, can accurately identify the operator. Traditionally, the uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as local ridge anomalies, e.g., a ridge bifurcation or a ridge ending, which are called minutiae points. Designing a reliable automatic fingerprint matching algorithm for a minimal platform is quite challenging. In real-time systems, efficiency of the matching algorithm is of utmost importance. To achieve this goal, a prime-feature-based indexing algorithm is proposed...

11. Designing on ICT reconstruction software based on DSP techniques

International Nuclear Information System (INIS)

Liu Jinhui; Xiang Xincheng

2006-01-01

The convolution back projection (CBP) algorithm is generally used to realize CT image reconstruction in ICT, and is usually executed on a PC or workstation. In order to add multi-platform capability to CT reconstruction software, a CT reconstruction method based on modern digital signal processor (DSP) techniques is proposed and realized in this paper. A hardware system based on TI's C6701 DSP processor was selected to support the CT software construction. The CT reconstruction software was written entirely in assembly language for the DSP hardware. The software runs on TI's C6701 EVM board with the CT data as input, and produces CT images that satisfy the practical requirements. (authors)
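A minimal NumPy sketch of convolution back projection, the algorithm the abstract ports to the DSP. The phantom, grid size, and angle count are arbitrary toy values, and nearest-neighbour binning stands in for proper interpolation for brevity:

```python
import numpy as np

# Convolution back projection (filtered back projection) on a toy
# parallel-beam geometry: project a disk phantom, ramp-filter each
# projection by FFT convolution, then back-project.
N = 128
xs = np.arange(N) - N / 2 + 0.5
X, Y = np.meshgrid(xs, xs)
phantom = ((X**2 + Y**2) < (N / 4) ** 2).astype(float)   # centred disk

n_pad = 2 * N                                # zero padding vs. wrap-around
ramp = np.abs(np.fft.fftfreq(n_pad))         # |frequency| (ramp) filter

angles = np.linspace(0, np.pi, 180, endpoint=False)
recon = np.zeros_like(phantom)
for th in angles:
    t = X * np.cos(th) + Y * np.sin(th)      # detector coordinate per pixel
    bins = np.clip(np.round(t + N / 2 - 0.5).astype(int), 0, N - 1)
    # forward projection: accumulate pixel values into detector bins
    proj = np.bincount(bins.ravel(), weights=phantom.ravel(), minlength=N)
    # convolution step: ramp filtering in the frequency domain
    filt = np.real(np.fft.ifft(np.fft.fft(proj, n_pad) * ramp))[:N]
    # back projection: smear filtered values back along the rays
    recon += filt[bins]
recon *= np.pi / len(angles)                 # angular integration weight

in_mean = recon[X**2 + Y**2 < (N / 8) ** 2].mean()   # should be near 1
out_mean = recon[X**2 + Y**2 > 45**2].mean()         # should be near 0
```

The inner loop is exactly the convolve-then-back-project structure that maps onto a DSP: one FFT-based convolution and one accumulation pass per view.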

12. New calibration technique for KCD-based megavoltage imaging

Science.gov (United States)

Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.

1999-05-01

In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing superior images to be reconstructed from a single linac pulse per image pixel. The KCD-based images used a dose two orders of magnitude lower than that for EPIDs and film. Compared with current commercial EPIDs and film, the prototype KCD system exhibited promising image quality, despite being handicapped by the use of a relatively simple image calibration technique and by the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. The counterpart of this for area detectors is the standard flat-fielding method. A comprehensive calibration protocol has been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and for timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2 - 3%) is shown to be insignificant for the images considered. The methodology is presented, and results are discussed for simulated images. It also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images without having to increase the linac pulse frequency or energy flux per pulse. The
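For area detectors, the standard flat-fielding method mentioned above corrects each pixel with a dark frame and an open-field (flood) frame. A sketch on synthetic frames (all numbers invented, noise-free forward model):

```python
import numpy as np

# Flat-field correction: (raw - dark) / (flood - dark) removes both the
# per-pixel offset and the per-pixel gain, leaving the object attenuation.
rng = np.random.default_rng(0)
gain = 1.0 + 0.2 * rng.standard_normal((64, 64))      # pixel-to-pixel gain
dark = 10.0 + rng.standard_normal((64, 64))           # offset (dark) frame

truth = np.ones((64, 64))
truth[16:48, 16:48] = 0.5                             # object attenuation pattern
flood = dark + gain * 1000.0                          # open-field exposure
raw = dark + gain * 1000.0 * truth                    # object exposure

corrected = (raw - dark) / (flood - dark)             # recovers `truth` exactly here
```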

13. Risk-based maintenance-Techniques and applications

International Nuclear Information System (INIS)

Arunraj, N.S.; Maiti, J.

2007-01-01

Plant and equipment, however well designed, will not remain safe or reliable if they are not maintained. The general objective of the maintenance process is to make use of the knowledge of failures and accidents to achieve the highest possible safety at the lowest possible cost. The concept of risk-based maintenance was developed to inspect high-risk components with greater frequency and thoroughness, and to maintain them more rigorously, in order to achieve tolerable risk criteria. Risk-based maintenance methodology provides a tool for maintenance planning and decision making to reduce both the probability of equipment failure and the consequences of failure. In this paper, risk analysis and risk-based maintenance methodologies are identified and classified into suitable classes. The factors affecting the quality of risk analysis are identified and analyzed. The applications, input data, and output data are studied to understand their functioning and efficiency. The review shows that there is no unique way to perform risk analysis and risk-based maintenance. The use of suitable techniques and methodologies, careful investigation during the risk analysis phase, and detailed and structured results are necessary to make proper risk-based maintenance decisions
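The core of risk-based prioritization is ranking items by risk = failure probability × consequence, with inspection effort assigned to the highest-risk items first. A minimal sketch with an invented equipment list:

```python
# Risk-based maintenance prioritization: score each item by
# risk = failure probability * consequence severity and rank.
# The equipment list and numbers are illustrative, not from any plant.
equipment = {
    "feed pump":       (0.020, 9.0),    # (annual failure probability, consequence)
    "relief valve":    (0.005, 10.0),
    "cooling fan":     (0.050, 2.0),
    "instrument line": (0.010, 1.0),
}
risk = {name: p * c for name, (p, c) in equipment.items()}
ranked = sorted(risk, key=risk.get, reverse=True)
# ranked -> feed pump (0.18), cooling fan (0.10), relief valve (0.05), instrument line (0.01)
```

Note how the ranking differs from sorting by probability alone: the relief valve's low failure rate is partly offset by its severe consequence, which is the point of the risk-based view.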

14. Generation of Quasi-Gaussian Pulses Based on Correlation Techniques

Directory of Open Access Journals (Sweden)

POHOATA, S.

2012-02-01

Full Text Available Gaussian pulses have been widely used in communications, where some applications can be emphasized: mobile telephony (GSM), where GMSK signals are used, and UWB communications, where short-period pulses based on the Gaussian waveform are generated. Since the Gaussian function is a theoretical concept that cannot be realized physically, it must be expressed using functions that admit physical implementations. New techniques for generating Gaussian pulse responses of good precision are approached, proposed, and researched in this paper. The second- and third-order derivatives of the Gaussian pulse response are accurately generated. The third-order derivative is composed of four individual rectangular pulses of fixed amplitudes, which are easy to generate by standard techniques. In order to generate pulses able to satisfy the spectral mask requirements, an adequate filter must be applied. This paper presents a comparative analysis based on the relative error and the energy spectra of the proposed pulses.
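The third derivative of a Gaussian has four alternating-sign lobes, which is what makes the four-rectangular-pulse approximation described above possible. The sketch below is not the authors' pulse placement: σ, the grid, and the equal-area amplitude rule are assumptions made for illustration:

```python
import numpy as np

# g(t) = exp(-t^2 / (2 sigma^2)); its third derivative
# g'''(t) = (3 t / sigma^4 - t^3 / sigma^6) * g(t) has four alternating
# lobes separated by zeros at t = 0 and t = +/- sqrt(3) sigma.  Replace
# each lobe with an equal-area rectangle (outer lobes truncated at 5 sigma).
sigma = 1.0
t = np.linspace(-5 * sigma, 5 * sigma, 10001)
dt = t[1] - t[0]
g3 = (3 * t / sigma**4 - t**3 / sigma**6) * np.exp(-t**2 / (2 * sigma**2))

edges = [-5 * sigma, -np.sqrt(3) * sigma, 0.0, np.sqrt(3) * sigma, 5 * sigma]
rect = np.zeros_like(g3)
for a, b in zip(edges[:-1], edges[1:]):
    m = (t >= a) & (t < b)
    rect[m] = g3[m].sum() * dt / (b - a)      # equal-area amplitude for this lobe

rel_err = np.linalg.norm(rect - g3) / np.linalg.norm(g3)
```

The raw rectangular approximation is coarse in the time domain, which is why the paper follows it with a shaping filter before checking the spectral mask.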

15. A Numerical Study of Aerodynamic Performance and Noise of a Bionic Airfoil Based on Owl Wing

Directory of Open Access Journals (Sweden)

Xiaomin Liu

2014-08-01

Full Text Available Noise reduction and efficiency enhancement are the two important directions in the development of the multiblade centrifugal fan. In this study, we attempt to develop a bionic airfoil based on the owl wing and investigate its aerodynamic performance and noise-reduction mechanism at relatively low Reynolds number. Firstly, according to the geometric characteristics of the owl wing, a bionic airfoil is constructed as the object of study at a Reynolds number of 12,300. Secondly, large eddy simulation (LES) with the Smagorinsky model is adopted to numerically simulate the unsteady flow fields around the bionic airfoil and the standard NACA 0006 airfoil. Then, the acoustic sources are extracted from the unsteady flow field data, and the Ffowcs Williams-Hawkings (FW-H) equation based on Lighthill's acoustic theory is solved to predict the propagation of these acoustic sources. The numerical results show that the lift-to-drag ratio of the bionic airfoil is higher than that of the traditional NACA 0006 airfoil because of its deeply concave lower surface geometry. Finally, the sound field of the bionic airfoil is analyzed in detail. The distribution of the A-weighted sound pressure levels, the scaled directivity of the sound, and the distribution of dP/dt on the airfoil surface are provided so that the characteristics of the acoustic sources can be revealed.

16. Numerical simulation of base flow of a long range flight vehicle

Science.gov (United States)

Saha, S.; Rathod, S.; Chandra Murty, M. S. R.; Sinha, P. K.; Chakraborty, Debasis

2012-05-01

Numerical exploration of the base flow of a long-range flight vehicle is presented for different flight conditions. The three-dimensional Navier-Stokes equations are solved along with the k-ɛ turbulence model using commercial CFD software. The simulation captured all essential flow features, including flow separation at the base shoulder, shear layer formation at the jet boundary, and recirculation in the base region. With increasing altitude, the plume of the rocket exhaust is seen to bulge more and more, causing more intense interaction between the free stream and the rocket plume and leading to higher gas temperatures in the base cavity. The flow field in the base cavity is investigated in more detail and is found to be fairly uniform at different instants of time. The presence of the heat shield is seen to reduce hot gas entry into the cavity region due to a different recirculation pattern in the base region. The computed temperature history obtained from conjugate heat transfer analysis compares very well with flight-measured data.

17. Refractive index sensor based on optical fiber end face using pulse reference-based compensation technique

Science.gov (United States)

Bian, Qiang; Song, Zhangqi; Zhang, Xueliang; Yu, Yang; Chen, Yuzhong

2018-03-01

We propose a refractive index sensor based on the optical fiber end face using a pulse reference-based compensation technique. With the good compensation effect of this technique, fluctuations in the light source power and changes in the transmission loss of optical components and the coupler splitting ratio can be compensated, which largely reduces the background noise. Refractive index resolutions of 3.8 × 10−6 RIU and 1.6 × 10−6 RIU are achieved in different refractive index regions.
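The principle behind pulse reference-based compensation can be sketched as a ratio measurement: dividing each signal pulse by a reference pulse that shares the source and coupler path cancels common-mode power drift. A toy model (all numbers invented, and the drift assumed identical on both paths, so cancellation is exact):

```python
import numpy as np

# Common-mode drift cancellation by pulse ratioing.  Both the sensor
# pulse and the reference pulse are scaled by the same source drift, so
# their ratio depends only on the (constant) path responses.
rng = np.random.default_rng(1)
n = 1000
source_drift = (1.0 + 0.05 * np.sin(np.linspace(0, 20, n))
                + 0.01 * rng.standard_normal(n))   # slow drift + noise
sensor_resp = 0.42     # end-face reflectance set by the refractive index (assumed)
ref_resp = 0.80        # reference-path response (assumed)

sensor_pulse = source_drift * sensor_resp
ref_pulse = source_drift * ref_resp
compensated = sensor_pulse / ref_pulse             # drift cancels in this model

drift_before = sensor_pulse.std() / sensor_pulse.mean()
drift_after = compensated.std() / compensated.mean()
```

In a real sensor the two paths are never perfectly matched, so the cancellation is only partial, but the ratio still suppresses the dominant common-mode fluctuations.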

18. A numerical method for PCM-based pin fin heat sinks optimization

International Nuclear Information System (INIS)

Pakrouh, R.; Hosseini, M.J.; Ranjbar, A.A.; Bahrampoury, R.

2015-01-01

Highlights: • Optimization of a PCM-based heat sink using the Taguchi method. • Derivation of the optimal PCM percentage to reach the maximum critical time. • Optimization is performed for four different critical temperatures. • The effective design factors are the fins’ height and number. • The optimum configuration depends on geometric properties and the critical temperature. - Abstract: This paper presents a numerical investigation of the geometric optimization of PCM-based pin fin heat sinks. Paraffin RT44HC is used as the PCM, while the fins and the heat sink base are made of aluminum. The fins act as thermal conductivity enhancers (TCEs). The main goal of the study is to obtain the configurations that maximize the heat sink operational time. An approach which couples the Taguchi method with numerical simulations is utilized for this purpose. The number of fins, fin height, fin thickness, and base thickness are the parameters studied for optimization. Natural convection and PCM volume variation during the melting process are considered in the simulations. Optimization is performed for critical temperatures of 50 °C, 60 °C, 70 °C and 80 °C. Results show that a complex relation exists between the PCM and TCE volume percentages. The optimal case strongly depends on the fins’ number, height, and thickness, as well as on the critical temperature. The optimum PCM percentage is found to be 60.61% (corresponding to a 100-pin-fin heat sink with 4 mm thick fins) for a critical temperature of 50 °C and 82.65% (corresponding to a 100-pin-fin heat sink with 2 mm thick fins) for the other critical temperatures
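A sketch of the Taguchi step (not the paper's CFD coupling): run an L9(3^4) orthogonal array over three 3-level factors and pick, per factor, the level with the best mean response. The response function below is an invented additive stand-in for the melting simulation:

```python
import numpy as np

# Taguchi-style screening: 9 runs of an L9 orthogonal array replace the
# full 3^3 = 27-run factorial.  Factors could stand for fin number, fin
# height and fin thickness; levels and the response are made up.
L9 = np.array([              # classic L9(3^4) array, first 3 columns used
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

def operating_time(levels):
    # stand-in for the CFD melting simulation: an additive response in
    # which level 2 of factor 0, level 1 of factor 1 and level 0 of
    # factor 2 are best
    bonus = np.array([[0, 1, 4], [0, 3, 1], [2, 0, 0]])
    return 30.0 + sum(bonus[f, l] for f, l in enumerate(levels))

y = np.array([operating_time(row) for row in L9])
# per factor, pick the level with the highest mean response over its 3 runs
best = [max(range(3), key=lambda lv: y[L9[:, f] == lv].mean())
        for f in range(3)]
# best -> [2, 1, 0] for this additive response
```

Because the array is orthogonal, the other factors are balanced within each level's three runs, so the per-level means isolate each factor's main effect from only nine simulations.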

19. Enhancing the effectiveness of IST through risk-based techniques

Energy Technology Data Exchange (ETDEWEB)

Floyd, S.D.

1996-12-01

Current IST requirements were developed mainly through deterministic-based methods. While this approach has resulted in an adequate level of safety and reliability for pumps and valves, insights from probabilistic safety assessments suggest a better safety focus can be achieved at lower cost. That is, some high-safety-impact pumps and valves are currently not tested under the IST program and should be added, while low-safety-impact valves could be tested at significantly longer intervals than allowed by the current IST program. The nuclear utility industry, through the Nuclear Energy Institute (NEI), has developed a draft guideline for applying risk-based techniques to focus testing on those pumps and valves with a high safety impact while reducing test frequencies for those with a low safety impact. The guideline is being validated through an industry pilot application program that is being reviewed by the U.S. Nuclear Regulatory Commission. NEI and the ASME maintain a dialogue on the two groups' activities related to risk-based IST. The presenter will provide an overview of the NEI guideline, discuss the methodological approach for applying risk-based technology to IST, and report the status of the industry pilot plant effort.

20. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

Science.gov (United States)

Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

2018-05-01

Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
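The PSD-based transmissibility idea can be illustrated with a minimal sketch. The segment-averaged periodogram estimator and the test signals below are simplifying assumptions for illustration, not the authors' actual formulation:

```python
import numpy as np

def psd_transmissibility(x, y, nseg=8):
    """Estimate |T(f)| = sqrt(S_yy / S_xx) from segment-averaged periodograms.
    A simplified stand-in for a PSD-based transmissibility function;
    windowing and overlap details are deliberately omitted."""
    n = len(x) // nseg
    Sxx = np.zeros(n // 2 + 1)
    Syy = np.zeros(n // 2 + 1)
    for k in range(nseg):
        xs = x[k * n:(k + 1) * n]
        ys = y[k * n:(k + 1) * n]
        Sxx += np.abs(np.fft.rfft(xs)) ** 2  # accumulate periodograms
        Syy += np.abs(np.fft.rfft(ys)) ** 2
    return np.sqrt(Syy / Sxx)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = 2.0 * x          # a perfectly scaled response: |T| should be ~2 at all f
T = psd_transmissibility(x, y)
```

For a real structure the ratio varies with frequency, and its poles/zeros carry the modal information exploited by the method.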

1. Numerical models: Detailing and simulation techniques aimed at comparison with experimental data, support to test result interpretation

International Nuclear Information System (INIS)

Lin Chiwen

2001-01-01

This part of the presentation discusses the modelling details required and the simulation techniques available for analyses, facilitating comparison with the experimental data and providing support for interpretation of the test results. It is organised to cover the following topics: analysis inputs; basic modelling requirements for the reactor coolant system; methods applicable to the reactor coolant system; consideration of damping values and integration time steps; typical analytic models used for analysis of the reactor pressure vessel and internals; hydrodynamic mass and fluid damping for the internals analysis; impact elements for fuel analysis; and the PEI theorem and its applications. The intention of these topics is to identify the key parameters associated with the analysis models and analytical methods. This should provide a proper basis for useful comparison with the test results

2. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

Directory of Open Access Journals (Sweden)

Ming Liu

2015-01-01

Full Text Available This paper is concerned with gravity matching aided inertial navigation technology using a Kalman filter. The dynamic state space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. First, the marginalization technique is employed to exploit the conditionally linear substructure and reduce the computational load; specifically, the number of required sigma points is reduced from 15 to 5 after this technique is used. Second, a robust technique based on a Chi-square test is employed to make the filter insensitive to uncertainties in the constructed observation model. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
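The Chi-square innovation test used to robustify a filter of this kind can be sketched in a few lines. The scalar gate below is a simplified stand-in; the threshold and example values are assumptions:

```python
CHI2_95_1DOF = 3.841  # 95% chi-square threshold, 1 degree of freedom

def innovation_passes(z, z_pred, S, threshold=CHI2_95_1DOF):
    """Chi-square gate on a scalar innovation: accept the measurement only
    if the normalized innovation squared (z - z_pred)^2 / S falls below the
    threshold. A simplified stand-in for the robust test described above."""
    nis = (z - z_pred) ** 2 / S  # normalized innovation squared
    return nis < threshold

# S is the predicted innovation covariance (measurement variance)
ok = innovation_passes(z=10.2, z_pred=10.0, S=1.0)    # small innovation
bad = innovation_passes(z=25.0, z_pred=10.0, S=1.0)   # gross outlier
```

In a full filter, rejected measurements are skipped (or down-weighted) so model uncertainties do not corrupt the state estimate.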

3. Damage identification in beams by a response surface based technique

Directory of Open Access Journals (Sweden)

Teidj S.

2014-01-01

Full Text Available In this work, identification of damage in uniform homogeneous metallic beams was considered through the propagation of non-dispersive elastic torsional waves. The proposed damage detection procedure consisted of the following sequence. Given a localized torque excitation, having the form of a short half-sine pulse, the first step was calculating the transient solution of the resulting torsional wave. This torque could be generated in practice by means of asymmetric laser irradiation of the beam surface. Then, a localized defect, assumed to be characterized by an abrupt reduction of the beam section area with a given height and extent, was placed at a known location of the beam. Next, the response in terms of the transverse section rotation rate was obtained for a point situated after the defect, where the sensor was positioned. The latter could in practice utilize the concept of laser vibrometry. A parametric study was then conducted using a full factorial design of experiments table and numerical simulations based on a finite difference characteristic scheme. This enabled the derivation of a response surface model that was shown to represent adequately the response of the system in terms of the following factors: defect extent and severity. The final step was solving the inverse problem in order to identify the defect characteristics from measurements.
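The response-surface step can be illustrated with a small least-squares sketch. The two-factor quadratic model and synthetic full-factorial data below are assumptions standing in for the defect extent/severity factors of the paper:

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of a two-factor quadratic response surface
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
    The two factors stand in for defect extent and severity (illustrative)."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Full factorial design over 3 coded levels per factor (9 runs)
levels = np.array([-1.0, 0.0, 1.0])
X = np.array([(a, b) for a in levels for b in levels])
x1, x2 = X[:, 0], X[:, 1]
# Synthetic "simulation" responses generated from known coefficients
y = 2.0 + 0.5 * x1 - 1.0 * x2 + 0.3 * x1**2 + 0.7 * x2**2 + 0.2 * x1 * x2
coef = fit_response_surface(X, y)  # should recover the known coefficients
```

The fitted surface can then be inverted (e.g. by optimization) to estimate defect extent and severity from a measured response.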

4. Numerical Simulation for Mechanical Behavior of Asphalt Pavement with Graded Aggregate Base

Directory of Open Access Journals (Sweden)

Dongliang He

2018-01-01

5. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

Science.gov (United States)

Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

2013-09-01

According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gaussian matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
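The orthogonal matching pursuit step named above can be sketched as follows; the random sensing matrix and sparse test vector are illustrative, not the paper's radiographic data:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the current residual, then re-fit by least squares on the chosen
    support. A bare-bones sketch of the sparse-recovery step."""
    m, n = A.shape
    support, r = [], b.copy()
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # most correlated atom
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = xs                        # refit on the support
        r = b - A @ x                          # update residual
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
x_true = np.zeros(80)
x_true[[5, 17, 60]] = [1.5, -2.0, 0.8]         # 3-sparse signal
b = A @ x_true
x_hat = omp(A, b, k=3)
```

With far fewer measurements than unknowns (40 vs. 80 here), the sparse vector is still recovered because only three entries are nonzero.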

6. Detecting Molecular Properties by Various Laser-Based Techniques

Energy Technology Data Exchange (ETDEWEB)

Hsin, Tse-Ming [Iowa State Univ., Ames, IA (United States)

2007-01-01

Four different laser-based techniques were applied to study physical and chemical characteristics of biomolecules and dye molecules. These techniques are hole burning spectroscopy, single molecule spectroscopy, time-resolved coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence microscopy. Results from hole burning and single molecule spectroscopy suggested that two antenna states (C708 & C714) of photosystem I from the cyanobacterium Synechocystis PCC 6803 are connected by effective energy transfer and that the corresponding energy transfer time is ~6 ps. In addition, results from hole burning spectroscopy indicated that the chlorophyll dimer of the C714 state has a large distribution of dimer geometry. Direct observation of vibrational peaks and their evolution for coumarin 153 in the electronic excited state was demonstrated by using fs/ps CARS, a variation of time-resolved coherent anti-Stokes Raman spectroscopy. In three different solvents, methanol, acetonitrile, and butanol, a vibrational peak related to the stretch of the carbonyl group exhibits different relaxation dynamics. Laser-induced fluorescence microscopy, along with biomimetic containers (liposomes), allows the measurement of the enzymatic activity of individual alkaline phosphatase from bovine intestinal mucosa without potential interference from glass surfaces. The result showed a wide distribution of enzyme reactivity. Protein structural variation is one of the major reasons responsible for this highly heterogeneous behavior.

7. Numerical investigation of complex flooding schemes for surfactant polymer based enhanced oil recovery

Science.gov (United States)

Dutta, Sourav; Daripa, Prabir

2015-11-01

Surfactant-polymer flooding is a widely used method of chemical enhanced oil recovery (EOR) in which an array of complex fluids containing suitable and varying amounts of surfactant or polymer or both mixed with water is injected into the reservoir. This is an example of multiphase, multicomponent and multiphysics porous media flow which is characterized by the spontaneous formation of complex viscous fingering patterns and is modeled by a system of strongly coupled nonlinear partial differential equations with appropriate initial and boundary conditions. Here we propose and discuss a modern, hybrid method based on a combination of a discontinuous, multiscale finite element formulation and the method of characteristics to accurately solve the system. Several types of flooding schemes and rheological properties of the injected fluids are used to numerically study the effectiveness of various injection policies in minimizing the viscous fingering and maximizing oil recovery. Numerical simulations are also performed to investigate the effect of various other physical and model parameters such as heterogeneity, relative permeability and residual saturation on the quantities of interest like cumulative oil recovery, sweep efficiency, fingering intensity to name a few. Supported by the grant NPRP 08-777-1-141 from the Qatar National Research Fund (a member of The Qatar Foundation).

8. L{sub 1/2} regularization based numerical method for effective reconstruction of bioluminescence tomography

Energy Technology Data Exchange (ETDEWEB)

Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

2014-05-14

Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the field. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse reconstruction cannot be solved directly. In this study, an l{sub 1/2} regularization based numerical method was developed for effective reconstruction of BLT. In this method, the inverse reconstruction of BLT was formulated as an l{sub 1/2} regularization problem, and the weighted interior-point algorithm (WIPA) was then applied to solve it by transforming it into the solution of a series of l{sub 1} regularization problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.

9. Numerical Simulation of Recycled Concrete Using Convex Aggregate Model and Base Force Element Method

Directory of Open Access Journals (Sweden)

Yijiang Peng

2016-01-01

Full Text Available By using the Base Force Element Method (BFEM) on the potential energy principle, a new numerical concrete model, the random convex aggregate model, is presented in this paper to simulate the uniaxial compression experiment on recycled aggregate concrete (RAC), which can also be referred to as recycled concrete. This model is considered a heterogeneous composite composed of five mediums: natural coarse aggregate, old mortar, new mortar, new interfacial transition zone (ITZ), and old ITZ. In order to simulate the damage processes of RAC, a curve damage model was adopted as the damage constitutive model and the strength theory of maximum tensile strain was used as the failure criterion in the mesomechanical BFEM. The numerical results obtained in this paper, which include the uniaxial compressive strengths, size effects on strength, and damage processes of RAC, are in agreement with experimental observations. This research shows that the random convex aggregate model and the BFEM with the curve damage model can be used to simulate the relationship between the microstructure and mechanical properties of RAC.

10. Risk assessment of storm surge disaster based on numerical models and remote sensing

Science.gov (United States)

Liu, Qingrong; Ruan, Chengqing; Zhong, Shan; Li, Jian; Yin, Zhonghui; Lian, Xihu

2018-06-01

Storm surge is one of the most serious ocean disasters in the world. Risk assessment of storm surge disaster for coastal areas has important implications for planning economic development and reducing disaster losses. Based on risk assessment theory, this paper uses coastal hydrological observations, a numerical storm surge model and multi-source remote sensing data to propose methods for assessing the hazard and vulnerability of storm surge and to build a storm surge risk assessment model. Storm surges for different recurrence periods are simulated with the numerical model, and the flooded areas and flooding depths are calculated and used to assess the hazard of storm surge; remote sensing data and GIS technology are used to extract key coastal objects and classify coastal land use, which supports the vulnerability assessment of storm surge disaster. The storm surge risk assessment model is applied to a typical coastal city, and the result shows the reliability and validity of the risk assessment model. The building and application of the storm surge risk assessment model provides a basic reference for city development planning and strengthens disaster prevention and mitigation.

11. CFD-DEM based numerical simulation of liquid-gas-particle mixture flow in dam break

Science.gov (United States)

Park, Kyung Min; Yoon, Hyun Sik; Kim, Min Il

2018-06-01

This study investigates the multiphase flow of a liquid-gas-particle mixture in dam break. The open source codes OpenFOAM and CFDEMproject were used to reproduce the multiphase flow. The results of the present study are compared with previous numerical and experimental results, which confirms the validity of the present numerical method for handling the multiphase flow. Particle densities ranging from 1100 to 2500 kg/m3 are considered to investigate the effect of the particle density on the behavior of the free surface and the particles. The particle density has no effect on the liquid front, but it makes the particle front move with a different velocity. The time at which the liquid front reaches the opposite wall is independent of particle density; however, the corresponding time for the particle front decreases as the particle density increases. Based on these results, we classified the characteristics of the movement by the front positions of the liquid and the particles. Eventually, the response of the free surface and particles to particle density is identified by three motion regimes: advancing, overlapping and delaying motions.

12. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method

Energy Technology Data Exchange (ETDEWEB)

Yang, Xiaofeng, E-mail: xfyang@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Zhao, Jia, E-mail: zhao62@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 (United States); Wang, Qi, E-mail: qwang@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Beijing Computational Science Research Center, Beijing (China); School of Materials Science and Engineering, Nankai University, Tianjin (China)

2017-03-15

The Molecular Beam Epitaxial (MBE) model is derived from the variation of a free energy that consists of either a fourth order Ginzburg–Landau double well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop proper temporal discretizations for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing first and second order time-stepping schemes based on the “Invariant Energy Quadratization” (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulting semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and can thus be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate the stability and accuracy of the proposed schemes.

13. Analyzing asteroid reflectance spectra with numerical tools based on scattering simulations

Science.gov (United States)

Penttilä, Antti; Väisänen, Timo; Markkanen, Johannes; Martikainen, Julia; Gritsevich, Maria; Muinonen, Karri

2017-04-01

We are developing a set of numerical tools that can be used in analyzing the reflectance spectra of granular materials such as the regolith surfaces of atmosphereless Solar system objects. Our goal is to be able to explain, with realistic numerical scattering models, the spectral features arising when materials are intimately mixed together. We include space-weathering-type effects in our simulations, i.e., mixing the host mineral locally with small inclusions of another material in small proportions. Our motivation for this study comes from the present lack of such tools. The current common practice is to apply a semi-physical approximate model such as some variation of the Hapke models [e.g., 1] or the Shkuratov model [2]. These models are expressed in closed form, so they are relatively fast to apply. They are based on simplifications of radiative transfer theory. The problem is that the validity of the model is not always guaranteed, and the derived physical properties related to particle scattering can be unrealistic [3]. We base our numerical tool on a chain of scattering simulations. Scattering properties of small inclusions inside an absorbing host matrix can be derived using exact methods solving the Maxwell equations of the system. The next step, scattering by a single regolith grain, is solved using a geometric optics method accounting for surface reflections, internal absorption, and possibly internal diffuse scattering. The third step involves radiative transfer simulations of these regolith grains in a macroscopic planar element. The chain can be continued with a shadowing simulation over the target surface elements, and finally by integrating the bidirectional reflectance distribution function over the object's shape. Most of the tools in the proposed chain already exist, and one practical task for us is to tie these together into an easy-to-use toolchain that can be publicly distributed. We plan to open the

14. Determination of rock fragmentation based on a photographic technique

International Nuclear Information System (INIS)

Dehgan Banadaki, M.M.; Majdi, A.; Raessi Gahrooei, D.

2002-01-01

The paper presents a physical blasting model at laboratory scale along with a photographic approach to describe the size distribution of blasted rock materials. For this purpose, based on the Weibull probability distribution function, eight samples, each weighing 100 kg, were obtained. Four pictures of four different sections of each sample were taken. The pictures were then converted into graphic files by characterizing the boundary of each rock piece in the samples. Errors caused by perspective were eliminated. The volume of each piece of the blasted rock material, and hence the sieve size each piece would pass through, were calculated. Finally, the original blasted rock size distribution was compared with that obtained from the photographic method. The paper concludes by presenting an approach to convert the results of the photographic technique into the size distribution obtained by sieve analysis, with sufficient verification

15. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

Directory of Open Access Journals (Sweden)

DongHo Kang

2014-01-01

Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
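A whitelist packet filter of this kind can be sketched in a few lines. The flow-tuple rule format and the example addresses below are assumptions, not the paper's actual rule set:

```python
# Minimal whitelist packet-filter sketch. The field names, protocols and
# addresses are invented for illustration.

WHITELIST = {
    # (source, destination, protocol, port) tuples of allowed flows
    ("10.0.0.5", "10.0.0.20", "modbus", 502),
    ("10.0.0.6", "10.0.0.20", "dnp3", 20000),
}

def filter_packets(packets):
    """Split packets into (allowed, flagged) against the flow whitelist."""
    allowed, flagged = [], []
    for p in packets:
        key = (p["src"], p["dst"], p["proto"], p["port"])
        (allowed if key in WHITELIST else flagged).append(p)
    return allowed, flagged

traffic = [
    {"src": "10.0.0.5", "dst": "10.0.0.20", "proto": "modbus", "port": 502},
    {"src": "192.0.2.9", "dst": "10.0.0.20", "proto": "modbus", "port": 502},
]
allowed, flagged = filter_packets(traffic)
```

A multi-stage system would apply further whitelists (e.g. on application-level commands) to the packets that pass this first flow-level filter.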

16. Demand Management Based on Model Predictive Control Techniques

Directory of Open Access Journals (Sweden)

Yasser A. Davizón

2014-01-01

Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
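A toy receding-horizon (one-step MPC) controller for a scalar linear system illustrates the idea; the dynamics, weights and closed-form minimizer below are illustrative assumptions, not the paper's demand/price model:

```python
def mpc_step(x, a=1.1, b=0.5, q=1.0, r=0.1):
    """One-step-horizon MPC for x' = a*x + b*u with stage cost
    q*x'^2 + r*u^2. Minimizing over u gives the closed form
    u = -q*a*b*x / (r + q*b*b). A toy stand-in for demand dynamics."""
    return -q * a * b * x / (r + q * b * b)

# Closed-loop simulation: the open-loop-unstable state (a = 1.1) is
# driven toward zero by re-solving the one-step problem at every step.
x, traj = 5.0, []
for _ in range(30):
    u = mpc_step(x)
    x = 1.1 * x + 0.5 * u
    traj.append(x)
```

Real MPC optimizes over a longer horizon subject to constraints, but the receding-horizon pattern (optimize, apply the first input, repeat) is the same.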

17. Clustering economies based on multiple criteria decision making techniques

Directory of Open Access Journals (Sweden)

Mansour Momeni

2011-10-01

Full Text Available One of the primary concerns of many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation rate, population growth, average annual income, etc. to cluster different countries. The proposed model uses the analytical hierarchy process (AHP) to prioritize the criteria and then uses a K-means technique to cluster 59 countries, based on the ranked criteria, into four groups. The first group includes countries with high standards such as Germany and Japan. In the second cluster, there are some developing countries with relatively good economic growth such as Saudi Arabia and Iran. The third cluster belongs to countries with faster rates of growth compared with the countries in the second group, such as China, India and Mexico. Finally, the fourth cluster includes countries with relatively very low rates of growth such as Jordan, Mali, Niger, etc.
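The K-means step can be sketched with a plain Lloyd's-algorithm implementation; the two-dimensional indicator data below are invented for illustration:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (Lloyd's algorithm): assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)           # nearest-centroid assignment
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated groups of "economies" (growth, income indicators)
X = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],
              [8.0, 9.1], [8.2, 8.8], [7.9, 9.0]])
labels, centroids = kmeans(X, k=2)
```

In the paper's setting, each country's coordinates would be its AHP-weighted criteria scores and k would be 4.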

18. Diagnosis of Dengue Infection Using Conventional and Biosensor Based Techniques

Science.gov (United States)

Parkash, Om; Hanim Shueb, Rafidah

2015-01-01

Dengue is an arthropod-borne viral disease caused by four antigenically different serotypes of dengue virus. This disease is considered a major public health concern around the world. Currently, there is no licensed vaccine or antiviral drug available for the prevention and treatment of dengue disease. Moreover, the clinical features of dengue are indistinguishable from those of other infectious diseases such as malaria, chikungunya, rickettsiosis and leptospirosis. Therefore, a prompt and accurate laboratory diagnostic test is urgently required for disease confirmation and patient triage. The traditional diagnostic techniques for the dengue virus are viral detection in cell culture, serological testing, and RNA amplification using reverse transcriptase PCR. This paper discusses the conventional laboratory methods used for the diagnosis of dengue during the acute and convalescent phases and highlights the advantages and limitations of these routine laboratory tests. Subsequently, the biosensor based assays developed using various transducers for the detection of dengue are also reviewed. PMID:26492265

19. An RSS based location estimation technique for cognitive relay networks

KAUST Repository

Qaraqe, Khalid A.

2010-11-01

In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
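RSS-based ranging of this general kind typically rests on the log-distance path-loss model; the sketch below (with assumed reference power and path-loss exponent) shows the inversion from an RSS reading to a range estimate, not the paper's CRLB analysis:

```python
import math

def distance_to_rss(d, p0_dbm=-40.0, n=2.7, d0=1.0):
    """Log-distance path-loss model: RSS(d) = P0 - 10*n*log10(d/d0).
    P0 (RSS at reference distance d0) and exponent n are assumed values."""
    return p0_dbm - 10.0 * n * math.log10(d / d0)

def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
    """Invert the model above to estimate range from an RSS reading."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

d_est = rss_to_distance(distance_to_rss(25.0))  # noiseless round trip
```

With range estimates from the direct and relayed paths, the source position can then be found by trilateration or least-squares intersection.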

20. Astronomical Image Compression Techniques Based on ACC and KLT Coder

Directory of Open Access Journals (Sweden)

J. Schindler

2011-01-01

Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the used prediction coefficients. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.

1. Numerical evolutions of fields on the 2-sphere using a spectral method based on spin-weighted spherical harmonics

International Nuclear Information System (INIS)

Beyer, Florian; Daszuta, Boris; Frauendiener, Jörg; Whale, Ben

2014-01-01

Many applications in science call for the numerical simulation of systems on manifolds with spherical topology. Through the use of integer spin-weighted spherical harmonics, we present a method which allows for the implementation of arbitrary tensorial evolution equations. Our method combines two numerical techniques that were originally developed with different applications in mind. The first is Huffenberger and Wandelt’s spectral decomposition algorithm to perform the mapping from physical to spectral space. The second is the application of Luscombe and Luban’s method, to convert numerically divergent linear recursions into stable nonlinear recursions, to the calculation of reduced Wigner d-functions. We give a detailed discussion of the theory and numerical implementation of our algorithm. The properties of our method are investigated by solving the scalar and vectorial advection equation on the sphere, as well as the 2 + 1 Maxwell equations on a deformed sphere. (paper)

2. An investigation of a video-based patient repositioning technique

International Nuclear Information System (INIS)

Yan Yulong; Song Yulin; Boyer, Arthur L.

2002-01-01

Purpose: We have investigated a video-based patient repositioning technique designed to use skin features for radiotherapy repositioning. We investigated the feasibility of the clinical application of this system by quantitative evaluation of the performance characteristics of the methodology. Methods and Materials: Multiple regions of interest (ROIs) were specified in the field of view of the video cameras. We used a normalized correlation pattern-matching algorithm to compute the translations of each ROI pattern in a target image. These translations were compared against trial translations using a quadratic cost function in an optimization process in which the patient rotation and translational parameters were calculated. Results: A hierarchical search technique achieved high speed (computing the correlation for a 128x128 ROI in a 512x512 target image within 0.005 s) and subpixel spatial accuracy (as high as 0.2 pixel). By treating the observed translations as movements of points on the surfaces of a hypothetical cube, we were able to accurately estimate the actual translations and rotations of the test phantoms used in our experiments to less than 1 mm and 0.2 deg., with standard deviations of 0.3 mm and 0.5 deg., respectively. For the human volunteer cases, we estimated the translations and rotations to have an accuracy of 2 mm and 1.2 deg. Conclusion: A personal computer-based video system is suitable for routine patient setup of fractionated conformal radiotherapy. It is expected to achieve high-precision repositioning of the skin surface with high efficiency
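The normalized correlation matching step can be sketched naively (the paper's hierarchical, subpixel search is omitted); the image sizes and ROI location below are illustrative:

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col) of the
    best match of `template` inside `image` and the NCC score there.
    A naive sketch of ROI pattern matching, without the hierarchical or
    subpixel refinements described in the paper."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(2)
image = rng.standard_normal((40, 40))
template = image[12:20, 25:33].copy()  # ROI cut from a known location
pos, score = ncc_match(image, template)
```

At the true location the NCC score is 1 by construction, which is what makes the measure robust to uniform brightness changes.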

3. Methods of numerical relativity

International Nuclear Information System (INIS)

Piran, T.

1983-01-01

Numerical relativity is an alternative to analytical methods for obtaining solutions of the Einstein equations. Numerical methods are particularly useful for studying the generation of gravitational radiation by potentially strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques, and some of the difficulties involved in numerical relativity. (Auth.)

4. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

Energy Technology Data Exchange (ETDEWEB)

Lorentzen, Rolf Johan

2002-04-01

The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetically generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetically generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
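The ensemble Kalman filter used for parameter tuning is not spelled out in the record. As a sketch of the perturbed-observation analysis step on a toy scalar-parameter problem (the linear model and all numbers are our own; the actual two-phase flow model is far more involved):

```python
import numpy as np

def enkf_update(ensemble, predicted, y_obs, obs_var, rng):
    """One EnKF analysis step for a scalar parameter.

    ensemble  : (N,) parameter samples
    predicted : (N,) model predictions h(theta_i)
    y_obs     : scalar measurement
    """
    N = ensemble.size
    # Ensemble estimates of the (cross-)covariances
    p_mean, y_mean = ensemble.mean(), predicted.mean()
    cov_py = ((ensemble - p_mean) * (predicted - y_mean)).sum() / (N - 1)
    cov_yy = ((predicted - y_mean) ** 2).sum() / (N - 1)
    gain = cov_py / (cov_yy + obs_var)          # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), N)
    return ensemble + gain * (y_pert - predicted)

rng = np.random.default_rng(1)
theta_true = 2.5                     # parameter to recover
theta = rng.normal(0.0, 2.0, 200)   # prior ensemble
for x in np.linspace(1.0, 3.0, 30):              # sequential measurements
    y = theta_true * x + rng.normal(0.0, 0.1)    # synthetic data, y = theta*x
    theta = enkf_update(theta, theta * x, y, 0.01, rng)
print(theta.mean())   # close to 2.5 after assimilation
```

The same update applies unchanged when the "parameter" is a state vector augmented with model parameters, which is how tuning of physical state variables and parameters is usually combined.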

5. Bending of Euler-Bernoulli nanobeams based on the strain-driven and stress-driven nonlocal integral models: a numerical approach

Science.gov (United States)

Oskouie, M. Faraji; Ansari, R.; Rouhi, H.

2018-04-01

Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach is proposed for the direct solution, to bypass the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both strain-driven and stress-driven nonlocal models by means of the minimum total potential energy. Also, in each case, the governing equation is obtained in both strong and weak forms. To solve the derived equations numerically, matrix differential and integral operators are constructed based upon the finite difference technique and the trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the aforementioned paradoxes. It is also able to solve the problem based on the strain-driven model without the inconsistencies in the application of this model that are reported in the literature.
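The matrix differential and integral operators are only named in the record. As an illustrative sketch on a toy uniform grid (our own example and function, not the paper's nanobeam equations), such operators can be built so that derivatives and integrals become matrix-vector products:

```python
import numpy as np

def diff_matrix(n, h):
    """Second-order first-derivative operator on a uniform grid:
    central differences inside, one-sided 3-point stencils at the ends."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, :3] = np.array([-1.5, 2.0, -0.5]) / h
    D[-1, -3:] = np.array([0.5, -2.0, 1.5]) / h
    return D

def trapz_weights(n, h):
    """Trapezoidal-rule weight vector w, so that w @ f approximates the integral."""
    w = np.full(n, h)
    w[0] = w[-1] = 0.5 * h
    return w

n, h = 101, 0.01
x = np.linspace(0.0, 1.0, n)
D = diff_matrix(n, h)
w = trapz_weights(n, h)
f = x ** 2
# D @ f approximates f'(x) = 2x; w @ f approximates the integral 1/3
```

In the paper's setting, analogous operators discretize the nonlocal integral kernel and the beam's differential operators, reducing the governing equation to a linear algebraic system.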

6. A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization

Directory of Open Access Journals (Sweden)

Qingyang Zhang

2015-02-01

Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), established by combining the advantages of Teaching-Learning-Based Optimization (TLBO) and the Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.

7. Wavelet Analysis on Turbulent Structure in Drag-Reducing Channel Flow Based on Direct Numerical Simulation

Directory of Open Access Journals (Sweden)

Xuan Wu

2013-01-01

Direct numerical simulation has been performed to study a polymer drag-reducing channel flow using a discrete-element model. Wavelet analyses are then employed to investigate the multiresolution characteristics of the velocity components based on the DNS data. Wavelet decomposition is applied to decompose the velocity fluctuation time series into ten different frequency components, including an approximate component and detailed components, which show more regular intermittency and burst events in drag-reducing flow. The energy contribution, intermittency factor, and intermittent energy are calculated to investigate the characteristics of the different frequency components. The results indicate that the energy contributions of the different frequency components are redistributed by the polymer additives. The energy contribution of the streamwise approximate component in drag-reducing flow is up to 82%, much more than the 25% in the Newtonian flow. The features of the turbulent multiscale structures are shown intuitively by the continuous wavelet transform, verifying that turbulent structures become much more regular in drag-reducing flow.
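The record does not state which wavelet was used. As a hedged illustration of multilevel decomposition and per-component energy contributions, here is an orthonormal Haar decomposition applied to a synthetic "velocity fluctuation" record (the signal and level count are our own choices, not the DNS data):

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multilevel orthonormal Haar decomposition.
    Returns [approximation, detail_coarsest, ..., detail_finest]."""
    a = np.asarray(signal, float)
    details = []
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # detail at this scale
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # running approximation
        details.append(d)
    return [a] + details[::-1]

def energy_fractions(coeffs):
    """Fraction of total energy carried by each component."""
    energies = np.array([float((c ** 2).sum()) for c in coeffs])
    return energies / energies.sum()

rng = np.random.default_rng(2)
t = np.arange(1024)
# slow large-scale motion plus fine-scale noise, mimicking a fluctuation record
u = np.sin(2 * np.pi * t / 1024) + 0.1 * rng.standard_normal(1024)
coeffs = haar_decompose(u, 5)
frac = energy_fractions(coeffs)
# the low-frequency approximation component carries most of the energy
```

Because the transform is orthonormal, the component energies sum exactly to the signal energy, which is what makes "energy contribution" comparisons between flows meaningful.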

8. Free Software Development. 3. Numerical Description of Soft Acid with Soft Base Titration

Directory of Open Access Journals (Sweden)

Lorentz JÄNTSCHI

2002-12-01

The analytical methods of qualitative and quantitative determination of ions in solutions lend themselves well to automation. The present work focuses on modeling the titration process and presents a numerical simulation of acid-base titration. A PHP program that computes all iterations in the titration process, solving a third-order equation to find the value of pH, was built and is available through the HTTP internet protocol at the address: http://vl.academicdirect.org/molecular_dynamics/ab_titrations/v1.1/ The method allows expressing the value of pH at any point of the titration process and makes it possible to observe the equivalence point of the titration.
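The third-order equation is not written out in the record. One standard cubic of this kind, for a weak acid titrated with a strong base, follows from the charge and mass balances; this reconstruction (our own, not necessarily the exact equation the PHP program solves) can be solved numerically:

```python
import numpy as np

def ph_weak_acid_titration(Ca, Va, Cb, Vb, Ka, Kw=1e-14):
    """pH when Vb (L) of strong base (conc. Cb) is added to Va (L)
    of weak acid (conc. Ca, dissociation constant Ka).

    Charge balance [H+] + [Na+] = [OH-] + [A-] gives a cubic in H = [H+]:
        H^3 + (Ka + cb)*H^2 + (Ka*(cb - ca) - Kw)*H - Kw*Ka = 0
    where ca, cb are the dilution-corrected concentrations.
    """
    V = Va + Vb
    ca, cb = Ca * Va / V, Cb * Vb / V
    roots = np.roots([1.0, Ka + cb, Ka * (cb - ca) - Kw, -Kw * Ka])
    h = max(r.real for r in roots if r.real > 0)   # unique physical root
    return -np.log10(h)

# 0.1 M weak acid (Ka = 1e-5) titrated with 0.1 M strong base
print(ph_weak_acid_titration(0.1, 0.05, 0.1, 0.0, 1e-5))    # start, pH near 3
print(ph_weak_acid_titration(0.1, 0.05, 0.1, 0.025, 1e-5))  # half-equivalence, pH near pKa = 5
```

Evaluating the function over a range of Vb traces the full titration curve, and the steep jump locates the equivalence point.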

9. Application of the results of experimental and numerical turbulent flow researches based on pressure pulsations analysis

Science.gov (United States)

Kovalnogov, Vladislav N.; Fedorov, Ruslan V.; Khakhalev, Yuri A.; Khakhaleva, Larisa V.; Chukalin, Andrei V.

2017-07-01

A numerical investigation of turbulent flow subject to impacts is carried out, based on a modified Prandtl mixing-length model and using analysis of pressure pulsations to calculate the structure and friction factor of the turbulent flow. The results of this study allowed us to propose a new design for a cooled blade of a gas turbine engine. The turbine blade comprises combined cooling, with cylindrical cavities on the blade surface and on the inner surfaces of the cooling channels. Damping cavities located on the guide vanes of the compressor of a gas turbine engine increase the gas-dynamic stability margin of the compressor, reduce the resistance of the guide blades, and increase the efficiency of the turbine engine.

10. Numerical simulation and experiments of precision bar cutting based on high speed and restrained state

International Nuclear Information System (INIS)

Song, J.L.; Li, Y.T.; Liu, Z.Q.; Fu, J.H.; Ting, K.L.

2009-01-01

Owing to the disadvantages of conventional bar cutting technology, such as low cutting speed, inferior section quality and high processing cost, a novel precision bar cutting technology has been proposed and its cutting mechanism analyzed. Finite element numerical simulation of the bar cutting process under different working conditions has been carried out with DEFORM. The stress and strain fields at different cutting speeds, the variation curves of the cutting force, and appropriate cutting parameters have been obtained. Scanning electron microscopy analysis of the cutting surface showed that the finite-element simulation result is correct and that better cutting quality can be obtained with the developed bar cutting technology and equipment based on high speed and restrained state

11. Numerical Investigation on Electron and Ion Transmission of GEM-based Detectors

Science.gov (United States)

Bhattacharya, Purba; Sahoo, Sumanya Sekhar; Biswas, Saikat; Mohanty, Bedangadas; Majumdar, Nayana; Mukhopadhyay, Supratik

2018-02-01

ALICE at the LHC is planning a major upgrade of its detector systems, including the TPC, to cope with an increase of the LHC luminosity after 2018. Different R&D activities are currently concentrated on the adoption of the Gas Electron Multiplier (GEM) as the gas amplification stage of the ALICE-TPC upgrade. The major challenge is to achieve low ion feedback in the drift volume as well as to ensure collection of a good percentage of the primary electrons in the signal generation process. In the present work, the Garfield simulation framework has been adopted to numerically estimate the electron transparency and ion backflow fraction of GEM-based detectors. In this process, extensive simulations have been carried out to enrich our understanding of the complex physical processes occurring within single, triple and quadruple GEM detectors. A detailed study has been performed to observe the effect of detector geometry, field configuration and magnetic field on the above-mentioned characteristics.

12. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

Science.gov (United States)

Shrivastava, Akash; Mohanty, A. R.

2018-03-01

This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
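The recursive least squares step with a forgetting factor can be sketched on a toy synchronous-response signal (the rotor speed, amplitude, phase and noise level here are invented; the paper's SEREP reduction and state-space Kalman filter are omitted). Since an unbalance force at angular speed omega has the form A*cos(omega*t + phi) = a*cos(omega*t) + b*sin(omega*t), the problem is linear in (a, b):

```python
import numpy as np

def rls_harmonic(y, omega, t, lam=0.98):
    """Recursive least squares fit of y(t) ~ a*cos(omega*t) + b*sin(omega*t)
    with forgetting factor lam; returns the estimates (a, b)."""
    theta = np.zeros(2)
    P = np.eye(2) * 1e6                       # large initial covariance
    for yi, ti in zip(y, t):
        phi = np.array([np.cos(omega * ti), np.sin(omega * ti)])
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + k * (yi - phi @ theta)
        P = (P - np.outer(k, phi) @ P) / lam  # covariance update
    return theta

rng = np.random.default_rng(3)
omega = 2 * np.pi * 25.0                  # assumed rotor speed, rad/s
t = np.arange(0, 1.0, 1e-3)
amp_true, phase_true = 4.0, 0.6           # invented unbalance amplitude/phase
y = amp_true * np.cos(omega * t + phase_true) + 0.05 * rng.standard_normal(t.size)
a, b = rls_harmonic(y, omega, t)
amp = np.hypot(a, b)                      # A*cos(wt+p) = A*cos(p)*cos(wt) - A*sin(p)*sin(wt)
phase = np.arctan2(-b, a)                 # so a = A*cos(p), b = -A*sin(p)
```

The forgetting factor lam discounts old samples exponentially, which is what makes the estimate track slowly varying unbalance; lam = 1 recovers ordinary least squares.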

13. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

Directory of Open Access Journals (Sweden)

Goutsias John

2010-05-01

Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
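The traditional Monte Carlo baseline that the four approximations are compared against can be illustrated with a standard pick-freeze estimator of a first-order variance-based (Sobol) sensitivity index on a toy linear model (our example, not the MAPK cascade):

```python
import numpy as np

def first_order_sobol(f, d, i, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index
    S_i = Var(E[Y|X_i]) / Var(Y), for f acting on d i.i.d. U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    AB = B.copy()
    AB[:, i] = A[:, i]                         # freeze coordinate i from A
    fA, fB, fAB = f(A), f(B), f(AB)
    variance = np.var(np.concatenate([fA, fB]))
    return float(np.mean(fA * (fAB - fB)) / variance)

# Toy linear model Y = X0 + 2*X1: analytically S0 = 1/5 and S1 = 4/5.
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
rng = np.random.default_rng(4)
s0 = first_order_sobol(model, 2, 0, 200_000, rng)
s1 = first_order_sobol(model, 2, 1, 200_000, rng)
print(s0, s1)   # close to 0.2 and 0.8
```

The large sample size needed for stable indices is exactly the cost that motivates the analytical approximations (DA, PA, GHI, OHA) discussed in the record.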

14. A physiologically-inspired model of numerical classification based on graded stimulus coding

Directory of Open Access Journals (Sweden)

John Pearson

2010-01-01

In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes, quantities like time, length, and brightness that are linearly ordered, constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber's Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP) of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP) signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task, numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber's Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.

15. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

Science.gov (United States)

Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

2017-05-01

Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as the laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, just as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, so as to solve the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering application, with lower time complexity, and is capable of numerical simulation of the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.

16. Huffman-based code compression techniques for embedded processors

KAUST Repository

Bonny, Mohamed Talal; Henkel, Jö rg

2010-01-01

% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures, namely ARM and MIPS. © 2010 ACM.

17. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

International Nuclear Information System (INIS)

Borges, Alexandra

2008-01-01

Advances in cross-sectional imaging, surgical technique and adjuvant treatment have largely contributed to improving the prognosis and lessening the morbidity and mortality of patients with skull base tumours, and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging has become indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology, and the radiologist is increasingly responsible for the fate of these patients. This review focuses on advances in imaging technique, their contribution to patient management, and the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division taking into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to the surgeons and radiation oncologists involved in patient management is discussed

18. Comparative assessment of PIV-based pressure evaluation techniques applied to a transonic base flow

NARCIS (Netherlands)

Blinde, P; Michaelis, D; van Oudheusden, B.W.; Weiss, P.E.; de Kat, R.; Laskari, A.; Jeon, Y.J.; David, L; Schanz, D; Huhn, F.; Gesemann, S; Novara, M.; McPhaden, C.; Neeteson, N.; Rival, D.; Schneiders, J.F.G.; Schrijer, F.F.J.

2016-01-01

A test case for PIV-based pressure evaluation techniques has been developed by constructing a simulated experiment from a ZDES simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as

19. UAV based hydromorphological mapping of a river reach to improve hydrodynamic numerical models

Science.gov (United States)

Lükő, Gabriella; Baranya, Sándor; Rüther, Nils

2017-04-01

Unmanned Aerial Vehicles (UAVs) are increasingly used in the field of engineering surveys. In river engineering, or in general, water resources engineering, UAV based measurements have a huge potential. For instance, indirect measurements of the flow discharge using e.g. large-scale particle image velocimetry (LSPIV), particle tracking velocimetry (PTV), space-time image velocimetry (STIV) or radars became a real alternative for direct flow measurements. Besides flow detection, topographic surveys are also essential for river flow studies as the channel and floodplain geometry is the primary steering feature of the flow. UAVs can play an important role in this field, too. The widely used laser based topographic survey method (LIDAR) can be deployed on UAVs; moreover, the application of the Structure from Motion (SfM) method, which is based on images taken by UAVs, might be an even more cost-efficient alternative to reveal the geometry of distinct objects in the river or on the floodplain. The goal of this study is to demonstrate the utilization of photogrammetry and videogrammetry from airborne footage to provide geometry and flow data for a hydrodynamic numerical simulation of a 2 km long river reach in Albania. First, the geometry of the river is revealed from photogrammetry using the SfM method. Second, a more detailed view of the channel bed at low water level is taken. Using the fine resolution images, a Matlab based code, BASEGrain, developed by the ETH in Zürich, will be applied to determine the grain size characteristics of the river bed. This information will be essential to define the hydraulic roughness in the numerical model. Third, flow mapping is performed using UAV measurements and the LSPIV method to quantitatively assess the flow field at the free surface and to estimate the discharge in the river. All data collection and analysis will be carried out using a simple, low-cost UAV; moreover, for all the data processing, open source, freely available

20. A novel technique for active vibration control, based on optimal

In the last few decades, researchers have proposed many control techniques to suppress unwanted vibrations in a structure. In this work, a novel and simple technique is proposed for active vibration control. In this technique, an optimal tracking control is employed to suppress vibrations in a structure by simultaneously ...

1. Acellular dermal matrix based nipple reconstruction: A modified technique

Directory of Open Access Journals (Sweden)

Raghavan Vidya

2017-09-01

Nipple areolar reconstruction (NAR) has evolved with the advancement of breast reconstruction and can improve self-esteem and, consequently, patient satisfaction. Although a variety of reconstruction techniques have been described in the literature, varying from nipple sharing and local flaps to alloplastic and allograft augmentation, loss of nipple projection over time remains a major problem. Acellular dermal matrices (ADM) have revolutionised breast reconstruction more recently. We discuss the use of ADM to act as a base plate and strut, giving support to the base and offering nipple bulk and projection, in a primary NAR procedure with a local clover-shaped dermal flap in 5 breasts (4 patients). We used 5-point Likert scales (1 = highly unsatisfied, 5 = highly satisfied) to assess patient satisfaction. Median age was 46 years (range: 38–55 years). Nipple projections of 8 mm, 7 mm, and 7 mm were achieved in the unilateral cases and 6 mm in the bilateral case over a median 18-month period. All patients reported at least a 4 on the Likert scale. We had no post-operative complications. It seems that NAR using ADM can achieve nipple projection that is considered aesthetically pleasing by patients.

2. Crack identification based on synthetic artificial intelligent technique

International Nuclear Information System (INIS)

Shim, Mun Bo; Suh, Myung Won

2001-01-01

It has been established that a crack has an important effect on the dynamic behavior of a structure, an effect that depends mainly on the location and depth of the crack. To identify the location and depth of a crack in a structure, a method is presented in this paper which uses a synthetic artificial-intelligence technique: an Adaptive-Network-based Fuzzy Inference System (ANFIS), trained via a hybrid learning algorithm (back-propagation gradient descent combined with the least-squares method), is used to learn the relation between the input (the location and depth of a crack) and the output (the structural eigenfrequencies) of the structural system. With this ANFIS and a Continuous Evolutionary Algorithm (CEA), it is possible to formulate the inverse problem. CEAs based on genetic algorithms work efficiently for continuous search-space optimization problems like a parameter identification problem. With the ANFIS, CEAs are used to identify the crack location and depth by minimizing the difference from the measured frequencies. We have tried this new idea on a simple beam structure and the results are promising

3. Structural design systems using knowledge-based techniques

International Nuclear Information System (INIS)

Orsborn, K.

1993-01-01

Engineering information management and the corresponding information systems are of strategic importance for industrial enterprises. This thesis treats the interdisciplinary field of designing computing systems for structural design and analysis using knowledge-based techniques. Specific conceptual models have been designed for representing the structure and the process of objects and activities in a structural design and analysis domain. In this thesis, it is shown how domain knowledge can be structured along several classification principles in order to reduce complexity and increase flexibility. By increasing the conceptual level of the problem description and representing the domain knowledge in a declarative form, it is possible to enhance the development, maintenance and use of software for mechanical engineering. This will result in a corresponding increase in the efficiency of the mechanical engineering design process. These ideas, together with rule-based control, point out the leverage of declarative knowledge representation within this domain. Used appropriately, a declarative knowledge representation preserves information better, and is more problem-oriented and change-tolerant than procedural representations. 74 refs

4. Positron emission tomography, physical bases and comparison with other techniques

International Nuclear Information System (INIS)

Guermazi, Fadhel; Hamza, F; Amouri, W.; Charfeddine, S.; Kallel, S.; Jardak, I.

2013-01-01

Positron emission tomography (PET) is a medical imaging technique that measures the three-dimensional distribution of molecules marked by a positron-emitting particle. PET has grown significantly in clinical fields, particularly in oncology for diagnosis and therapeutic follow-up purposes. The technical evolution of this technique is fast. Among the technical improvements is the coupling of the PET scan with computed tomography (CT). PET is obtained by intravenous injection of a radioactive tracer. The marker is usually fluorine ( 18 F) embedded in a glucose molecule, forming 18-fluorodeoxyglucose (FDG-18). This tracer, similar to glucose, binds to tissues that consume large quantities of sugar, such as cancerous tissue, cardiac muscle or brain. Detection using scintillation crystals (BGO, LSO, LYSO) suited to high energy (511 keV) recognizes the lines of response of the gamma photons originating from the annihilation of a positron with an electron. The detection electronics, or coincidence circuit, is based on two criteria: a time window, of about 6 to 15 ns, and an energy window. This system measures the true coincidences that correspond to the detection of two 511 keV photons from the same annihilation. Most PET devices are constituted by a series of elementary detectors distributed annularly around the patient. Each detector comprises a scintillation crystal matrix coupled to a finite number (4 or 6) of photomultipliers. The coincidence circuit determines the projection point of annihilation by means of two elementary detectors. The processing of such information must be extremely fast, considering the count rates encountered in practice. The information measured by the coincidence circuit is then positioned in a matrix, or sinogram, which contains a set of elements of a projection section of the object. Images are obtained by tomographic reconstruction on powerful computer stations equipped with software tools allowing the analysis and

5. Numerical analysis

CERN Document Server

Scott, L Ridgway

2011-01-01

Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

6. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

Directory of Open Access Journals (Sweden)

Jiang Lei

2015-01-01

Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) characteristic of the GPU matches the parallelism of LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 nodes is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures (CRVP, shear-layer vortices and horseshoe vortices) are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of Reynolds stress are also displayed. Coherent structures are revealed in a very fine resolution based on the second invariant of the velocity gradients.

7. Validation techniques of agent based modelling for geospatial simulations

Directory of Open Access Journals (Sweden)

M. Darvishi

2014-10-01

One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large-scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequent emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

9. Validation of a numerical 3-D fluid-structure interaction model for a prosthetic valve based on experimental PIV measurements.

Science.gov (United States)

Guivier-Curien, Carine; Deplano, Valérie; Bertrand, Eric

2009-10-01

A numerical 3-D fluid-structure interaction (FSI) model of a prosthetic aortic valve was developed, based on a commercial computational fluid dynamics (CFD) software program using an Arbitrary Lagrangian-Eulerian (ALE) formulation. To verify the validity of this numerical model, an equivalent experimental model accounting for both the geometrical features and the hydrodynamic conditions was also developed. The leaflet and flow behaviours around the bileaflet valve were investigated numerically and experimentally by performing particle image velocimetry (PIV) measurements. Quantitative and qualitative comparisons showed that the leaflet behaviour and the velocity fields were similar in both models. The present study thus validates a fully coupled 3-D FSI numerical model. This promising numerical tool could therefore be used to investigate clinical issues involving the aortic valve.

10. Ground-based intercomparison of two isoprene measurement techniques

Directory of Open Access Journals (Sweden)

E. Leibrock

2003-01-01

Full Text Available An informal intercomparison of two isoprene (C5H8) measurement techniques was carried out during the fall of 1998 at a field site located approximately 3 km west of Boulder, Colorado, USA. A new chemical ionization mass spectrometric (CIMS) technique was compared to a well-established gas chromatographic (GC) technique. The CIMS technique utilized benzene cation chemistry to ionize isoprene. The isoprene levels measured by CIMS were often larger than those obtained with the GC. The results indicate that the CIMS technique suffered from an anthropogenic interference associated with air masses from the Denver, CO metropolitan area, as well as an additional interference occurring under clean conditions. However, the CIMS technique is also demonstrated to be sensitive and fast; especially after the introduction of a tandem mass spectrometric technique, it is a candidate for isoprene measurements in remote environments near isoprene sources.

11. Feedback control of persistent-current oscillation based on the atomic-clock technique

Science.gov (United States)

Yu, Deshui; Dumke, Rainer

2018-05-01

We propose a scheme for stabilizing the persistent-current Rabi oscillation based on a flux qubit-resonator-atom hybrid structure. The low-Q LC resonator weakly interacts with the flux qubit and maps the persistent-current Rabi oscillation of the flux qubit onto the intraresonator electric field. This oscillating electric field is further coupled to a Rydberg-Rydberg transition of 87Rb atoms. The Rabi-frequency fluctuation of the flux qubit is deduced by measuring the atomic population via fluorescence detection and is stabilized by feedback control of the external flux bias. Our numerical simulation indicates that the feedback-control method can efficiently suppress the background fluctuations in the flux qubit, especially in the low-frequency limit. This technique may be applicable to different types of superconducting circuits, paving the way to long-term coherence in superconducting quantum information processing.

12. Numerical simulation of groundwater flow at Puget Sound Naval Shipyard, Naval Base Kitsap, Bremerton, Washington

Science.gov (United States)

Jones, Joseph L.; Johnson, Kenneth H.; Frans, Lonna M.

2016-08-18

Information about groundwater-flow paths and locations where groundwater discharges at and near Puget Sound Naval Shipyard is necessary for understanding the potential migration of subsurface contaminants by groundwater at the shipyard. The design of some remediation alternatives would be aided by knowledge of whether groundwater flowing at specific locations beneath the shipyard will eventually discharge directly to Sinclair Inlet of Puget Sound, or if it will discharge to the drainage system of one of the six dry docks located in the shipyard. A 1997 numerical (finite difference) groundwater-flow model of the shipyard and surrounding area was constructed to help evaluate the potential for groundwater discharge to Puget Sound. That steady-state, multilayer numerical model with homogeneous hydraulic characteristics indicated that groundwater flowing beneath nearly all of the shipyard discharges to the dry-dock drainage systems, and only shallow groundwater flowing beneath the western end of the shipyard discharges directly to Sinclair Inlet. Updated information from a 2016 regional groundwater-flow model constructed for the greater Kitsap Peninsula was used to update the 1997 groundwater model of the Puget Sound Naval Shipyard. That information included a new interpretation of the hydrogeologic units underlying the area, as well as improved recharge estimates. Other updates to the 1997 model included finer discretization of the finite-difference model grid into more layers, rows, and columns, all with reduced dimensions. This updated Puget Sound Naval Shipyard model was calibrated to 2001–2005 measured water levels, and hydraulic characteristics of the model layers representing different hydrogeologic units were estimated with the aid of state-of-the-art parameter optimization techniques. The flow directions and discharge locations predicted by this updated model generally match the 1997 model despite refinements and other changes. In the updated model, most

13. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

Science.gov (United States)

Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

2014-12-01

A chronology of mathematical models for the heat and mass transfer equations is proposed for predicting moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has potential as a widely used dehydration method for high-value food, maintaining nutrition and the best possible quality for food storage. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the heat and mass transfer equations of parabolic and elliptic-parabolic type, using numerical methods based on the finite difference method (FDM), are illustrated. Intel® Core™ 2 Duo processors with a Linux operating system and the C programming language were the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are shown as a comparison.
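The 2D Fourier (heat) equation mentioned in this record can be advanced with an explicit finite-difference scheme. The sketch below is a minimal illustration of that FDM approach, not the paper's C implementation; the grid, diffusivity and boundary values are purely illustrative.

```python
import numpy as np

def heat_step(T, alpha, dx, dy, dt):
    """One forward-Euler FDM step for dT/dt = alpha*(Txx + Tyy).
    Boundary rows/columns stay fixed (Dirichlet conditions)."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * dt * (
        (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2
        + (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2
    )
    return Tn

# Hot walls, cold interior: heat diffuses inward over time.
nx = ny = 21
dx = dy = 1.0 / (nx - 1)
alpha = 1.0e-3
dt = 0.2 * min(dx, dy) ** 2 / alpha   # within the explicit stability limit

T = np.zeros((nx, ny))
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 100.0   # heated boundary
for _ in range(500):
    T = heat_step(T, alpha, dx, dy, dt)

print(round(float(T[nx // 2, ny // 2]), 2))   # centre has warmed above 0
```

The time step obeys the explicit stability constraint dt ≤ dx²/(4α) for the 2D scheme; an implicit scheme would remove that restriction at the cost of a linear solve per step.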

14. Inversion of calcite twin data for paleostress orientations and magnitudes: A new technique tested and calibrated on numerically-generated and natural data

Science.gov (United States)

Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc

2018-01-01

The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.

15. A linac-based stereotactic irradiation technique of uveal melanoma

International Nuclear Information System (INIS)

Dieckmann, Karin; Bogner, Joachim; Georg, Dietmar; Zehetmayer, Martin; Kren, Gerhard; Poetter, Richard

2001-01-01

Purpose: To describe a stereotactic irradiation technique for uveal melanomas performed at a linac, based on a non-invasive eye fixation and eye monitoring system. Methods: For eye immobilization, a light-source system is integrated into a standard stereotactic mask system in front of the healthy eye: during treatment preparation (computed tomography/magnetic resonance imaging) as well as treatment delivery, patients are instructed to gaze at the fixation light source. A mini video camera monitors the pupil center position of the diseased eye. For treatment planning and beam delivery, standard stereotactic radiotherapy equipment is used. If the pupil center deviation from a predefined 'zero position' exceeds 1 mm (for more than 2 s), treatment delivery is interrupted. Between 1996 and 1999, 60 patients with uveal melanomas were treated where (i) tumor height exceeded 7 mm, or (ii) tumor height was more than 3 mm and the central tumor distance to the optic disc and/or the macula was less than 3 mm. A total dose of 60 or 70 Gy was given in 5 fractions within 10 days. Results: The repositioning accuracy in the mask system is 0.47±0.36 mm in the rostral-occipital direction, 0.75±0.52 mm laterally, and 1.12±0.96 mm in the vertical direction. An eye movement analysis performed for 23 patients shows a pupil center deviation from the 'zero' position of <1 mm in 91% of all cases investigated. In a theoretical analysis, pupil center deviations are correlated with GTV 'movements'. For a pupil center deviation of 1 mm (rotation of the globe of 5°), the GTV is still encompassed by the 80% isodose in 94% of cases. Conclusion: For treatment of uveal melanomas, linac-based stereotactic radiotherapy combined with a non-invasive eye immobilization and monitoring system represents a feasible, accurate and reproducible method. Besides considerable technical requirements, the complexity of the treatment technique demands an interdisciplinary team continuously dedicated to this

16. An efficient soil water balance model based on hybrid numerical and statistical methods

Science.gov (United States)

Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei

2018-04-01

Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, especially in agricultural areas. In addition, such models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods and only requires four physical parameters. The model uses three governing equations to consider three terms that affect soil water movement: the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately using the hybrid numerical and statistical methods (e.g., the linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated using two published studies, three hypothetical examples, and a real-world application, comparing the simulation results of the new model with the corresponding results in the published studies, with HYDRUS-1D, and with observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new
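The record's central idea, solving the advective, sink and diffusive terms separately within each time step, is an operator-splitting pattern that can be sketched as follows. This is a minimal illustration under assumed parameter values and a simple cubic conductivity law, not the paper's calibrated hybrid numerical-statistical implementation.

```python
import numpy as np

ts, tr = 0.45, 0.05        # saturated and residual water content (illustrative)
Ks, D = 0.1, 0.01          # sat. conductivity (m/d) and diffusivity (m^2/d)

def K(theta):
    """Unit-gradient gravity flux; cubic law in effective saturation (assumed)."""
    Se = np.clip((theta - tr) / (ts - tr), 0.0, 1.0)
    return Ks * Se ** 3

def split_step(theta, et, dz, dt):
    # 1) advective term: downward gravity drainage, upwind differencing
    q = K(theta)
    theta = theta.copy()
    theta[0] -= dt / dz * q[0]                 # top cell: outflow only
    theta[1:] += dt / dz * (q[:-1] - q[1:])    # inflow from above minus outflow
    # 2) sink term: evapotranspiration withdrawn from the top cell
    theta[0] = max(theta[0] - dt * et / dz, tr)
    # 3) diffusive term: matric-potential-driven redistribution (explicit)
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dz ** 2
    return theta + dt * D * lap

dz, dt, et = 0.05, 0.01, 0.002     # cell size (m), step (d), ET rate (m/d)
theta = np.full(20, 0.30)          # initially uniform moisture profile
total0 = theta.sum() * dz          # stored water (m)
for _ in range(100):
    theta = split_step(theta, et, dz, dt)
print(theta.sum() * dz < total0)   # water is lost to ET and bottom drainage
```

The explicit diffusive sub-step requires D·dt/dz² < 0.5; here it is 0.04, well inside the stable range.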

17. Numerical Analysis of Hydrodynamics for Bionic Oscillating Hydrofoil Based on Panel Method

Directory of Open Access Journals (Sweden)

Gang Xue

2016-01-01

Full Text Available A kinematics model based on slender-body theory is proposed from the bionic movement of real fish. The panel method is innovatively applied to the hydrodynamic performance analysis, with the Gauss-Seidel method additionally used to solve the Navier-Stokes equations, to accurately evaluate the flexible deformation of fish in swimming while satisfying the boundary conditions. A physical prototype mimicking the shape of a tuna is developed with rapid prototyping technology. The hydrodynamic performance of a rigid oscillating hydrofoil is analyzed with the proposed method and shows good agreement with cases analyzed by the commercial software Fluent and with experimental data from a robofish. Furthermore, the hydrodynamic performance of a coupled hydrofoil, consisting of a flexible fish body and a rigid caudal fin, is analyzed with the proposed method. It shows that the caudal fin has great influence on trailing vortex shedding and that the phase angle is the key factor in hydrodynamic performance. It is verified that the shape of the trailing vortex is similar to the image of the motion curve at the trailing edge, as assumed for a linear vortex plane under the condition of small downwash velocity. The numerical analysis of hydrodynamics for bionic movement based on the panel method has value in revealing the fish swimming mechanism.

18. The Numerical Simulation of the Crack Elastoplastic Extension Based on the Extended Finite Element Method

Directory of Open Access Journals (Sweden)

Xia Xiaozhou

2013-01-01

Full Text Available In the framework of the extended finite element method, a discontinuous step function is introduced to represent the discontinuity across the crack, and a crack-tip enrichment function composed of a trigonometric (triangular) basis and a linear polar-radius function is adopted to describe the displacement field around the elastoplastic crack tip. The linear polar-radius form is chosen to reduce the singularity induced by the plastic yield zone at the crack tip, and the trigonometric basis describes how the displacement varies with the polar angle around the tip. Based on the displacement model containing these enrichment functions, the incremental iterative form of the elastoplastic extended finite element method is derived from the virtual work principle. For non-uniformly hardening materials such as concrete, a plastic flow rule containing a cross term, based on the principle of least energy dissipation, is adopted to avoid the asymmetric stiffness matrix induced by non-associated plastic flow. Finally, numerical examples show the validity of the elastoplastic X-FEM constructed in this paper.
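For reference, X-FEM formulations of this kind build on the standard enriched displacement approximation below; the linear radial factor r and trigonometric angular functions g_α(θ) written here reflect the elastoplastic choice this record describes, whereas linear elastic formulations use √r branch functions instead.

```latex
\mathbf{u}^h(\mathbf{x}) =
  \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
+ \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
+ \sum_{k \in K} N_k(\mathbf{x}) \sum_{\alpha} F_\alpha(r,\theta)\,\mathbf{b}_{k\alpha},
\qquad F_\alpha(r,\theta) = r\, g_\alpha(\theta)
```

Here N_i are the standard shape functions, H is the jump (Heaviside-type) function on elements cut by the crack, J is the set of cut-element nodes, and K is the set of tip-enriched nodes carrying the branch functions F_α.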

19. LSSVM-Based Rock Failure Criterion and Its Application in Numerical Simulation

Directory of Open Access Journals (Sweden)

Changxing Zhu

2015-01-01

Full Text Available A rock failure criterion is very important for predicting the failure of rocks or rock masses in rock mechanics and engineering. Least squares support vector machines (LSSVM) are a powerful tool for addressing complex nonlinear problems. This paper describes an LSSVM-based rock failure criterion for analyzing the deformation of a circular tunnel under different in situ stresses without assuming a function form. First, LSSVM was used to represent the nonlinear relationship between the mechanical properties of rock and its failure behavior in order to construct a rock failure criterion from experimental data. This criterion was then used in a hypothetical numerical analysis of a circular tunnel to analyze the mechanical behavior of the surrounding rock mass. The Mohr-Coulomb and Hoek-Brown failure criteria were also used to analyze the same case, and the results were compared; these clearly indicate that LSSVM can be used to establish a rock failure criterion and to predict the failure of a rock mass during excavation of a circular tunnel.
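The LSSVM machinery this record relies on reduces, in Suykens' formulation, to a single linear solve rather than a quadratic program. A minimal regression sketch on toy data follows; the kernel width, regularization constant and data are illustrative, not the paper's rock-mechanics inputs.

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM dual system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]            # bias b, support values alpha

def lssvm_predict(Xnew, Xtr, alpha, b, sigma):
    return rbf(Xnew, Xtr, sigma) @ alpha + b

# Fit a smooth nonlinear relation and check the training error is small.
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y, gamma=1e3, sigma=0.15)
yhat = lssvm_predict(X, X, alpha, b, sigma=0.15)
print(float(np.abs(yhat - y).max()))   # small training residual
```

Unlike a classical SVM, every training point becomes a "support vector" here; the trade-off is that training is just one dense linear solve.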

20. Spontaneous Synchronization in Two Mutually Coupled Memristor-Based Chua’s Circuits: Numerical Investigations

Directory of Open Access Journals (Sweden)

Eleonora Bilotta

2014-01-01

Full Text Available Chaotic dynamics of numerous memristor-based circuits is widely reported in the literature. Recently, some works have appeared which study the problem of synchronization control of these systems in a master-slave configuration. In the present paper, the spontaneous dynamic behavior of two chaotic memristor-based Chua’s circuits, mutually interacting through a coupling resistance, was investigated via computer simulations in order to study possible self-organized synchronization phenomena. The memristor used is a flux-controlled memristor with a cubic nonlinearity, and it can be regarded as a time-varying memductance. The memristor, in effect, retains memory of its past dynamics, and any difference in the initial conditions of the two circuits results in different values of the corresponding memductances. In this sense, due to the memory effect of the memristor, even if the coupled circuits have the same parameters they do not constitute two completely identical chaotic oscillators. As is known, for nonidentical chaotic systems, in addition to complete synchronization (CS), other weaker forms of synchronization, which provide correlations between the signals of the two systems, can also occur. Depending on initial conditions and coupling strength, both chaotic and nonchaotic synchronization are observed for the system considered in this work.

1. Prediction of 222 Rn exhalation rates from phosphogypsum based stacks. Part II: preliminary numerical results

International Nuclear Information System (INIS)

Rabi, Jose A.; Mohamad, Abdulmajeed A.

2004-01-01

The first part of this paper proposes a steady-state 2-D model for 222Rn transport in phosphogypsum stacks. In this second part, the dimensionless model equations are solved numerically with the help of an existing finite-volume simulator that has been successfully used to solve heat and mass transfer problems in porous media. As a test case, a rectangular stack is considered in order to verify the ability of the proposed parametric approach to account for concurrent effects on 222Rn exhalation into the local atmosphere. Air flow is assumed to be strictly buoyancy driven, and the ground is assumed to be impermeable to 222Rn and at a higher temperature under the stack base. Dimensionless controlling parameters are set to representative values, and results are presented for Grashof numbers in the range 10^6 ≤ Gr ≤ 10^8, corresponding to very small to small temperature differences between the incoming air and the ground underneath the stack base. For this set of parameters, as Gr increases, streamlines showed essentially the same pattern while internal isotherms and iso-concentration lines remained almost unchanged. The total average Sherwood number proved rather insensitive to Gr, while the total average Nusselt number increased slightly with Gr. (author)

2. Displacement-Based Seismic Design Procedure for Framed Buildings with Dissipative Braces Part II: Numerical Results

International Nuclear Information System (INIS)

Mazza, Fabio; Vulcano, Alfonso

2008-01-01

For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed for a medium-risk region, is supposed to be retrofitted as in a high-risk region by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and damped braced test structures, using the step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.

3. Numerical study of shear thickening fluid with discrete particles embedded in a base fluid

Directory of Open Access Journals (Sweden)

W Zhu

2016-09-01

Full Text Available The shear thickening fluid (STF) is a dilatant material, which displays non-Newtonian characteristics in its unique ability to transition from a low-viscosity fluid to a high-viscosity fluid. The research investigates STF behavior by modeling and simulating the interaction between the base flow and embedded rigid particles when subjected to shear stress. The model uses a Lagrangian description of the rigid particles and an Eulerian description of the fluid flow. The numerical analysis investigated key parameters such as applied flow acceleration, particle distribution and arrangement, volume concentration of particles, particle size and shape, and their behavior in Newtonian and non-Newtonian base fluids. The fluid-particle interaction model showed that the arrangement, size, shape and volume concentration of the particles had a significant effect on the behavior of the STF. Although not conclusive, the addition of particles in non-Newtonian fluids showed a promising trend of improved shear thickening effects at high shear strain rates.

4. Numerical simulation of Trichel pulses of negative DC corona discharge based on a plasma chemical model

Science.gov (United States)

Chen, Xiaoyue; Lan, Lei; Lu, Hailiang; Wang, Yu; Wen, Xishan; Du, Xinyu; He, Wangling

2017-10-01

A numerical simulation method for negative direct current (DC) corona discharge based on a plasma chemical model is presented, using a coaxial cylindrical gap. The plasma chemical model considers 15 particle species, 61 kinds of collision reactions involving electrons, and 22 kinds of reactions between ions. Based on this method, continuous Trichel pulses are calculated on a timescale of about 100 μs, and the microscopic physicochemical processes of negative DC corona discharge in three different periods are discussed. The obtained results show that the amplitude of the Trichel pulses is between 1-2 mA and that the pulse interval is on the order of 10^-5 s. The positive ions produced by avalanche ionization enhance the electric field near the cathode at the beginning of the pulse and then disappear at the cathode surface; the electric field decreases and the pulse ceases to develop. The negative ions produced by attachment slowly move away from the cathode, and the electric field increases gradually until the next pulse begins to develop. The positive and negative ions with the highest densities during the corona discharge process are O4+ and O3-, respectively.

5. Numerical estimation of ultrasonic production of hydrogen: Effect of ideal and real gas based models.

Science.gov (United States)

Kerboua, Kaouther; Hamdaoui, Oualid

2018-01-01

Based on two different assumptions regarding the equation of state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen through two numerical models treating the evolution of a chemical mechanism within a single oxygen-saturated bubble during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second is founded on the Van der Waals equation; the main objective was to analyze the effect of the chosen state equation on the simulated ultrasonic hydrogen production under various operating conditions. The obtained results show that even though the second approach gives higher values of temperature, pressure and total free-radical production, the hydrogen yield does not follow the same trend. Comparing the hydrogen production given by both models, the ratio of the molar amounts of hydrogen is frequency and acoustic-amplitude dependent. The Van der Waals equation leads to higher quantities of hydrogen at low acoustic amplitudes and high frequencies, while the ideal-gas-law-based model gives higher hydrogen production at low frequencies and high acoustic amplitudes. Copyright © 2017 Elsevier B.V. All rights reserved.
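The two state equations this record compares can be evaluated side by side for a given bubble content. The sketch below uses the standard tabulated Van der Waals constants for oxygen; the molar amount, bubble volume and temperature are illustrative near-collapse values, not the paper's simulated conditions.

```python
# Ideal gas law:      P = n*R*T / V
# Van der Waals:      (P + a*n^2/V^2) * (V - n*b) = n*R*T

R = 8.314           # J/(mol K)
a_O2 = 0.1382       # Pa m^6 / mol^2, Van der Waals 'a' for O2
b_O2 = 3.186e-5     # m^3 / mol,      Van der Waals 'b' for O2

def p_ideal(n, V, T):
    return n * R * T / V

def p_vdw(n, V, T):
    return n * R * T / (V - n * b_O2) - a_O2 * n ** 2 / V ** 2

# Small volume, high temperature: the regime where the finite molecular
# covolume (b) pushes the two models apart (values illustrative).
n, V, T = 1e-12, 1e-16, 3000.0      # mol, m^3, K
print(p_vdw(n, V, T) / p_ideal(n, V, T))   # ratio > 1: VdW pressure higher
```

At this compression the covolume correction dominates the attraction term, so the Van der Waals pressure exceeds the ideal one by roughly 40%, consistent with the record's statement that the second approach yields higher pressures.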

6. Micro-mechanics based damage mechanics for 3D Orthogonal Woven Composites: Experiment and Numerical Modelling

KAUST Repository

Saleh, Mohamed Nasr

2016-01-08

Damage initiation and evolution of a three-dimensional (3D) orthogonal woven carbon fibre composite (3DOWC) is investigated experimentally and numerically. Meso-scale homogenisation of the representative volume element (RVE) is utilised to predict the elastic properties and to simulate damage initiation and evolution under tensile loading. The effect of intra-yarn transverse cracking and shear diffused damage on the in-plane transverse modulus and shear modulus is investigated, while one failure criterion is introduced to simulate the matrix damage. The proposed model is based on two major assumptions. First, the effect of the binder yarns on the in-plane properties is neglected, so the 3DOWC unit cell can be approximated as a (0°/90°) cross-ply laminate. Second, a micro-mechanics based damage approach is used at the meso-scale, so damage indicators can be correlated, explicitly, to the density of cracks within the material. Results from the simulated RVE are validated against experimental results along the warp (0° direction) and weft (90° direction). This approach paves the way for more predictive models, as damage evolution laws are obtained from micro-mechanical considerations and rely on a few well-defined material parameters. This largely differs from classical damage mechanics approaches, in which the evolution law is obtained by retrofitting experimental observations.

8. Evaluation of physics-based numerical modelling for diverse design architecture of perovskite solar cells

Science.gov (United States)

Mishra, A. K.; Catalan, Jorge; Camacho, Diana; Martinez, Miguel; Hodges, D.

2017-08-01

Solution-processed organic-inorganic metal halide perovskite solar cells are emerging as a new cost-effective photovoltaic technology. In the context of increasing the power conversion efficiency (PCE) and sustainability of perovskite solar cell (PSC) devices, we comprehensively analyzed physics-based numerical modelling for doped and undoped PSC devices. Our analysis emphasized the role of the different charge-carrier layers from the viewpoint of interfacial adhesion and its influence on the charge extraction rate and the charge recombination mechanism. Morphological and charge-transport properties of the perovskite thin film as a function of device architecture are also considered to investigate the photovoltaic properties of the PSC. We observed that the photocurrent is dominantly influenced by the interfacial recombination process, and that the photovoltage has a functional relationship with the defect density of the perovskite absorption layer. A novel contour-mapping method for understanding the characteristics of the current density-voltage (J-V) curves of each device as a function of perovskite layer thickness provides important insight into the distribution spectrum of photovoltaic properties. Functional relationships of device efficiency and fill factor with absorption layer thickness are also discussed.

9. Numerical simulation of transitional flow on a wind turbine airfoil with RANS-based transition model

Science.gov (United States)

Zhang, Ye; Sun, Zhengzhong; van Zuijlen, Alexander; van Bussel, Gerard

2017-09-01

This paper presents a numerical investigation of transitional flow on the wind turbine airfoil DU91-W2-250 at a chord-based Reynolds number Rec = 1.0 × 106. A Reynolds-averaged Navier-Stokes based transition model using the laminar kinetic energy concept, namely the k - kL - ω model, is employed to resolve the boundary layer transition. Some ambiguities in this model are discussed, and it is further implemented into OpenFOAM-2.1.1. The k - kL - ω model is first validated on the chosen wind turbine airfoil at an angle of attack (AoA) of 6.24° against wind tunnel measurement, where lift and drag coefficients, surface pressure distribution and transition location are compared. In order to reveal the transitional flow on the airfoil, the mean boundary layer profiles in three zones, namely the laminar, transitional and fully turbulent regimes, are investigated. Observation of the flow at the transition location identifies a laminar separation bubble. The effect of AoA on boundary layer transition over the wind turbine airfoil is also studied. As the AoA increases from -3° to 10°, the laminar separation bubble moves upstream and reduces in size, in close agreement with wind tunnel measurement.

10. A Method of Numerical Control Equipment Appearance Design Based on Product Identity

Science.gov (United States)

Zhu, Zhijuan; Zhou, Qi; Li, Bin; Visser, Steve

Research on numerical control (NC) equipment has become more and more abundant; however, there are few existing studies on appearance design for NC equipment. This paper provides a method to generate new appearance designs for NC equipment based on product identity (PI). To provide guidelines for generating new NC equipment design concepts, this paper took the DMG Company (a German NC equipment company) as a case and examined the company's products from two aspects: Product Image and Product Family. Task 1 was an evaluation of Product Image using the semantic differential (SD) method; Task 2 was a study of the Product Family to identify and classify product features. In Task 2, several features were identified and summarized, and these features were classified into three levels according to their frequency and importance. Finally, two appearance design samples were generated based on the above analysis to demonstrate the application of the research.

11. Numerical Simulations of Slow Stick Slip Events with PFC, a DEM Based Code

Science.gov (United States)

Ye, S. H.; Young, R. P.

2017-12-01

Nonvolcanic tremor around subduction zones has become a fascinating subject in seismology in recent years. Previous studies have shown that the nonvolcanic tremor beneath western Shikoku is composed of low-frequency seismic waves overlapping each other. This finding provides a direct link between tremor and slow earthquakes. Slow stick slip events are considered to be laboratory-scale slow earthquakes. They are traditionally studied with direct shear or double direct shear experimental setups, in which the sliding velocity can be controlled to model a range of fast and slow stick slips. In this study, a PFC* model based on double direct shear is presented, with a central block clamped by two side blocks. The gouge layers between the central and side blocks are modelled as discrete fracture networks with smooth joint bonds between pairs of discrete elements. In addition, a second model is presented in this study. This model consists of a cylindrical sample subjected to triaxial stress. Similar to the previous model, a weak gouge layer inclined at 45 degrees is added to the sample, on which shear slipping is allowed. Several simulations are conducted on this sample: while the confining stress is maintained at the same level, the axial loading rate (displacement rate) varies. By varying the displacement rate, a range of slipping behaviour, from stick slip to slow stick slip, is observed in the stress-strain relationship. Currently, the stick slip and slow stick slip events are identified solely from the stress-strain relationship. In the future, we hope to monitor the displacement and velocity of the balls surrounding the gouge layer as a function of time, so as to generate a synthetic seismogram. This will allow us to extract seismic waveforms and potentially simulate the tremor-like waves found around subduction zones. *Particle flow code, a discrete element method based numerical simulation code developed by
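The displacement-rate dependence of slipping behaviour described above can be illustrated with a one-dimensional spring-slider analogue, the standard toy model for laboratory stick slip. This is a sketch, not the PFC model: the block is dragged through a spring at a constant driver velocity, with static/dynamic Coulomb friction, and all parameter values are invented for illustration.

```python
# Minimal spring-slider analogue of stick slip: flat segments of the position
# history are stick phases, jumps are slip events. Parameters are illustrative.

def spring_slider(k=50.0, m=1.0, v_drive=0.01, mu_s=0.6, mu_d=0.4,
                  normal=10.0, dt=1e-3, steps=20000):
    """Return the position history of the block pulled through a spring."""
    x, v = 0.0, 0.0
    driver = 0.0
    sticking = True
    xs = []
    for _ in range(steps):
        driver += v_drive * dt
        f_spring = k * (driver - x)
        if sticking:
            if abs(f_spring) > mu_s * normal:   # static friction exceeded: slip starts
                sticking = False
            else:
                xs.append(x)
                continue
        # dynamic phase: spring force opposed by dynamic friction
        a = (f_spring - mu_d * normal * (1.0 if v >= 0.0 else -1.0)) / m
        v += a * dt
        x += v * dt
        if v <= 0.0:                            # block re-sticks when it stops
            v = 0.0
            sticking = True
        xs.append(x)
    return xs

history = spring_slider()
slips = sum(1 for a, b in zip(history, history[1:]) if b > a)  # moving steps
```

Slower driver velocities lengthen the stick phases; lowering the static-dynamic friction gap smooths the response toward stable sliding, which is the qualitative trend the triaxial simulations above probe by varying the displacement rate.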

12. MATHEMATICAL MODELING AND NUMERICAL SOLUTION OF IRON CORROSION PROBLEM BASED ON CONDENSATION CHEMICAL PROPERTIES

Directory of Open Access Journals (Sweden)

Basuki Widodo

2012-02-01

Full Text Available Corrosion is a natural process that occurs in various metals, and the electrochemistry of corrosion can be explained using a galvanic cell. The iron corrosion process depends on the acidity (pH) of the condensation, the iron concentration, and the condensation temperature of the electrolyte, as applied in an electrochemical cell. The iron corrosion process in this electrochemical cell is also able to generate an electrical potential and an electric current while the process takes place. This paper considers how to build a mathematical model of iron corrosion, electrical potential and electric current. The mathematical model is then solved using the finite element method. The iron corrosion model is built from the iron concentration, the condensation temperature, and the iteration time applied. The electric current density model is based on the currents at the cathode and anode and on the iteration time applied, whereas the electric potential model is based on the initial electric potential and the iteration time applied. The numerical results show that the part of the iron embrittled by corrosion is the part that acts as the anode, and that the time difference, iron concentration and condensation temperature influence the iron corrosion process and the total mass lost during corrosion. Moreover, the time difference and the initial electric potential affect the electric potential that emerges during the corrosion process in the electrochemical cell, while the electric current is also influenced by the time difference and the condensation temperature applied. Keywords: Iron Corrosion, Concentration of Iron, Electrochemical Cell, Finite Element Method

13. Orientation of student entrepreneurial practices based on administrative techniques

Directory of Open Access Journals (Sweden)

Héctor Horacio Murcia Cabra

2005-07-01

Full Text Available As part of the second phase of the research project «Application of a creativity model to update the teaching of administration in Colombian agricultural entrepreneurial systems», it was decided to reinforce the planning and execution skills of the students of the Agricultural Business Administration Faculty of La Salle University. Those finishing their studies were given special attention. The plan of action was initiated in the second semester of 2003. It was initially defined as a model of entrepreneurial strengthening based on a coherent methodology that included the most recent administration and management techniques. Later, the applicability of this model was tested in some organizations of the agricultural sector that had asked for support in their planning processes. Through an action-research process the methodology was redefined in order to arrive at a final model that could be used by faculty students and graduates. The results obtained were applied to the teaching of the ninth-semester Entrepreneurial Laboratory with the aim of improving administrative support to agricultural enterprises. More than 100 students and 200 agricultural producers applied this procedure between June 2003 and July 2005. The methodology used and the results obtained are presented in this article.

14. Microgrids Real-Time Pricing Based on Clustering Techniques

Directory of Open Access Journals (Sweden)

Hao Liu

2018-05-01

Full Text Available Microgrids are spreading widely in electricity markets worldwide. Besides the security and reliability concerns for these microgrids, their operators need to address consumer pricing. Considering the growth of smart grids and smart meter facilities, it is expected that microgrids will have some flexibility to determine real-time prices for at least some consumers. As such, the key challenge is finding an optimal pricing model for consumers. This paper, accordingly, proposes a new pricing scheme in which microgrids deploy clustering techniques to understand their consumers' load profiles and then assign real-time prices based on the load profile patterns. An improved weighted fuzzy average k-means is proposed to cluster the consumers' load curves into an optimal number of clusters, through which the load profile of each cluster is determined. Having obtained the load profile of each cluster, a real-time price is assigned to each cluster, namely the best price for all consumers in that cluster.
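The clustering step can be sketched with plain Lloyd's k-means on synthetic daily load curves. This stands in for the paper's improved weighted fuzzy average k-means, which it does not reproduce; the two profile shapes, noise level and cluster count below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 24-hour load profiles: one group peaks in the evening, the other
# around midday (illustrative shapes, not real consumer data).
hours = np.arange(24)
evening = 1.0 + np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
midday = 1.0 + np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)
profiles = np.vstack([
    evening + 0.05 * rng.standard_normal((50, 24)),
    midday + 0.05 * rng.standard_normal((50, 24)),
])

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means, deterministically seeded with k rows of X."""
    centroids = X[np.linspace(0, len(X) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(profiles, k=2)
# Each centroid is the representative load profile to which a cluster-level
# real-time price would be attached.
```

In the scheme above, each cluster centroid plays the role of the "load profile of the cluster" to which a single real-time price is assigned.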

15. Using Neutron-based techniques to investigate battery behaviour

International Nuclear Information System (INIS)

Pramudita, James C.; Goonetilleke, Damien; Sharma, Neeraj; Peterson, Vanessa K.

2016-01-01

The extensive use of portable electronic devices has given rise to increasing demand for reliable high energy density storage in the form of batteries. Today, lithium-ion batteries (LIBs) are the leading technology as they offer high energy density and relatively long lifetimes. Despite their widespread adoption, Li-ion batteries still suffer from significant degradation in their performance over time. The most obvious degradation in lithium-ion battery performance is capacity fade – where the capacity of the battery reduces after extended cycling. This talk will focus on how in situ time-resolved neutron powder diffraction (NPD) can be used to gain a better understanding of the structural changes which contribute to the observed capacity fade. The commercial batteries studied each feature different electrochemical and storage histories that are precisely known, allowing us to elucidate the tell-tale signs of battery degradation using NPD and relate these to battery history. Moreover, this talk will also showcase the diverse use of other neutron-based techniques such as neutron imaging to study electrolyte concentrations in lead-acid batteries, and the use of quasi-elastic neutron scattering to study Na-ion dynamics in sodium-ion batteries.

16. Light based techniques for improving health care: studies at RRCAT

International Nuclear Information System (INIS)

Gupta, P.K.; Patel, H.S.; Ahlawat, S.

2015-01-01

The invention of lasers in 1960, the phenomenal advances in photonics, and the information processing capability of computers have given a major boost to R and D activity on the use of light for high-resolution biomedical imaging, sensitive non-invasive diagnosis, and precision therapy. The effort has resulted in remarkable progress, and it is widely believed that light-based techniques hold great potential to offer simple, portable systems that can help provide diagnostics and therapy in low-resource settings. At the Raja Ramanna Centre for Advanced Technology (RRCAT), extensive studies have been carried out on fluorescence spectroscopy of native tissue. This work led to two important outcomes: first, a better understanding of tissue fluorescence and insights into the possible use of fluorescence spectroscopy for cancer screening; and second, the development of diagnostic systems that can serve as standalone tools for non-invasive screening of cancer of the oral cavity. Optical coherence tomography setups and their functional extensions (polarization-sensitive, Doppler) have also been developed and used for high-resolution (∼10 µm) biomedical imaging applications, in particular for non-invasive monitoring of the healing of wounds. Chlorophyll-based photosensitisers and their derivatives have been synthesized in-house and used for photodynamic therapy of tumors in animal models and for antimicrobial applications. Various variants of optical tweezers (holographic, Raman etc.) have also been developed and utilised for different applications, notably Raman spectroscopy of optically trapped red blood cells. An overview of these activities carried out at RRCAT is presented in this article. (author)

17. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

Science.gov (United States)

Clift, Simon S.; Tang, Wei-Pai

1994-01-01

We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs and compared with other matrix ordering techniques. A variation of reverse Cuthill-McKee (RCM) ordering is shown to generally improve the quality of incomplete factorization preconditioners.
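The baseline RCM idea referred to above can be sketched in a few lines: breadth-first search from a low-degree node, visiting neighbours in increasing-degree order, then reversing the ordering. The toy example below (a 1D chain with scrambled node labels, an assumed stand-in for a discretized PDE stencil) shows the matrix bandwidth dropping under the reordering.

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering for a symmetric sparsity pattern.
    adj: dict mapping node -> set of neighbour nodes."""
    visited, order = set(), []
    # Handle each connected component, starting from a minimum-degree node.
    for start in sorted(adj, key=lambda n: len(adj[n])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for nb in sorted(adj[node] - visited, key=lambda n: len(adj[n])):
                visited.add(nb)
                queue.append(nb)
    return order[::-1]  # reversing the BFS order tends to reduce fill-in

def bandwidth(adj, order):
    pos = {n: i for i, n in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for u in adj for v in adj[u]), default=0)

# A 1D chain numbered badly: the path is 0-4-2-6-1-5-3-7, so the natural
# labelling 0..7 produces a wide band that RCM collapses to a tridiagonal.
chain = [(0, 4), (4, 2), (2, 6), (6, 1), (1, 5), (5, 3), (3, 7)]
adj = {i: set() for i in range(8)}
for u, v in chain:
    adj[u].add(v)
    adj[v].add(u)

before = bandwidth(adj, list(range(8)))   # 5
after = bandwidth(adj, rcm_order(adj))    # 1
```

Production codes use library routines (e.g. SciPy's `reverse_cuthill_mckee`) rather than a hand-rolled BFS, but the mechanism is the same.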

18. Project Analysis of Aerodynamics Configuration of Re-entry Capsule-shaped Body Based on Numerical Methods for Newtonian Flow Theory

Directory of Open Access Journals (Sweden)

V. E. Minenko

2015-01-01

Full Text Available The article's objective is to review the basic design parameters of a space capsule (SC) in order to select a rational shape at the early stages of design. The choice is based on design parameters such as the volume filling factor (volumetric efficiency of shape), aerodynamic coefficients, margin of stability, and centering characteristics. The aerodynamic coefficients are calculated by a numerical method based on approximate Newtonian theory. A proposed engineering technique uses this theory to calculate the aerodynamic characteristics of capsule shapes. The gist of the technique lies in using a developed programme to generate capsule shapes and numerically calculate their aerodynamic characteristics. The accuracy of the calculation performed with the proposed technique approaches that of the results obtained from analytical integral dependencies in the Newtonian technique. When considering the stability of the capsule shapes, the paper gives a diagram of the aerodynamic forces acting on the SC in the descent phase and, using the aerodynamically shaped SC "Soyuz" as an example, analyses a dangerous flow regime at adverse angles of attack. After determining a design center-of-mass position that meets the stability requirements, it is necessary at the early stage, before starting the SC layout work, to evaluate the complexity of bringing the center of mass to the specified point. In this regard, such design parameters of the shape as the volume-centering and surface-centering coefficients have been considered. Next, the above engineering technique is used to calculate the aerodynamic characteristics of capsule shapes similar to the well-known SC "Soyuz", "Zarya 2" and the command module "Apollo". All calculated design parameters are summarized in the table. Currently, among the works cited in foreign publications concerning the contours of winged configurations of the "Space Shuttle" type, some papers are close to the proposed technique. Application of the proposed
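The approximate Newtonian theory invoked above takes the local pressure coefficient as Cp = Cp,max·sin²θ, with θ the surface inclination to the free stream. A quick numerical check of this idea on the simplest blunt shape, a sphere with the classical Cp,max = 2 (modified Newtonian theory would instead use the stagnation-point value), recovers the textbook Newtonian drag coefficient CD = 1:

```python
import math

def sphere_drag_newtonian(cp_max=2.0, n=100000):
    """Integrate the Newtonian pressure law over the windward hemisphere
    of a sphere; the shadowed rear hemisphere contributes no pressure."""
    cd = 0.0
    dphi = (math.pi / 2.0) / n
    for i in range(n):
        phi = (i + 0.5) * dphi             # angle from the stagnation point
        cp = cp_max * math.cos(phi) ** 2   # surface inclination theta = 90 deg - phi
        # ring area / (pi R^2) = 2 sin(phi) dphi ; axial projection = cos(phi)
        cd += cp * math.cos(phi) * 2.0 * math.sin(phi) * dphi
    return cd

cd_sphere = sphere_drag_newtonian()   # analytically 4 * (1/4) = 1
```

The engineering technique in the article applies the same surface-inclination integral to generated capsule contours instead of a sphere.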

19. Numerical analysis of continuous charge of lithium niobate in a double-crucible Czochralski system using the accelerated crucible rotation technique

Science.gov (United States)

Kitashima, Tomonori; Liu, Lijun; Kitamura, Kenji; Kakimoto, Koichi

2004-05-01

The transport mechanism of supplied raw material in a double-crucible Czochralski system using the accelerated crucible rotation technique (ACRT) was investigated by three-dimensional and time-dependent numerical simulation. The calculation clarified that use of the ACRT resulted in enhancement of the mixing effect of the supplied raw material. It is, therefore, possible to maintain the composition of the melt in an inner crucible during crystal growth by using the ACRT. The effect of the continuous charge of the raw material on melt temperature was also investigated. Our results showed that the effect of feeding lithium niobate granules on melt temperature was small, since the feeding rate of the granules is small. Therefore, solidification of the melt surface due to the heat of fusion in this system is not likely.

20. A novel image inpainting technique based on median diffusion

numerical methods such as anisotropic diffusion and multiresolution schemes. … Roth & Black (2005) have developed a framework for learning generic and expressive image priors. … This paper presents a new approach for image inpainting by propagating median information. … J. Graphics Tools 9(1).

1. Emotional Design Tutoring System Based on Multimodal Affective Computing Techniques

Science.gov (United States)

Wang, Cheng-Hung; Lin, Hao-Chiang Koong

2018-01-01

In a traditional class, the role of the teacher is to teach and that of the students is to learn. However, constant and rapid technological advancements have transformed education in numerous ways. For instance, in addition to traditional face-to-face teaching, E-learning is now possible. Nevertheless, face-to-face teaching is unavailable in…

2. A GIS-based numerical simulation of the March 2014 Oso landslide fluidized motion

Science.gov (United States)

Fukuoka, H.; Ogbonnaya, I.; Wang, C.

2014-12-01

Sliding and flowing are the major types of movement after slope failures. Landslides occur when slope-forming material moves downhill after failing along a sliding surface. Most debris flows originate as rainfall-induced landslides before they move into a valley channel. Landslides that mobilize into debris flows are usually characterized by high-speed movement and long run-out distance and may present the greatest risk to human life. The 22 March 2014 Oso landslide is a typical case of a landslide transforming into a debris flow. The landslide was triggered on the edge of a plateau about 200 m high composed of glacial sediments, after excessive prolonged rainfall of 348 in March 2014. After its initiation, portions of the landslide material transitioned into a rapidly moving debris flow that traveled a long distance across the downslope floodplain. The U.S. Geological Survey estimated the volume of the slide to be about 7 million m3, and it traveled about 1 km from the toe of the slope. The apparent friction angle, measured by the energy line drawn from the crown of the head scarp to the toe of the most distant deposits, was only 5-6 degrees. We performed two numerical simulations to predict the runout distance and to gain insight into the behaviour of the landslide movement. One is a GIS-based revised Hovland 3D limit-equilibrium model, used to simulate the movement and stoppage of a landslide. In this research, sliding is defined by a slip surface that cuts through the slope, causing the mass of earth above it to move. The factor of safety is calculated step by step during the sliding process simulation. Stoppage is defined by a factor of safety much greater than one and zero velocity. The other is a GIS-based depth-averaged 2D numerical model using a coupled viscous and Coulomb-type law to simulate a debris flow from initiation to deposition. We compared our simulation results with the results of preliminary computer

3. Numerical modeling of time-dependent deformation and induced stresses in concrete pipes constructed in Queenston shale using micro-tunneling technique

Directory of Open Access Journals (Sweden)

Hayder Mohammed Salim Al-Maamori

2018-04-01

Full Text Available Effects of time-dependent deformation (TDD) on a tunnel constructed using the micro-tunneling technique in Queenston shale (QS) are investigated employing the finite element method. The TDD and strength parameters of the QS were measured from tests conducted on QS specimens soaked in water and in lubricant fluids (LFs) used in micro-tunneling, such as bentonite and polymer solutions. The numerical model was verified using the results of TDD tests performed on QS samples, field measurements from some documented projects, and closed-form solutions for circular tunnels in swelling rock. The verified model was then employed to conduct a parametric study considering important micro-tunneling design parameters, such as the depth and diameter of the tunnel, the in situ stress ratio (Ko), and the time lapse prior to replacing the LFs with permanent cement grout around the tunnel. It was revealed that the time lapse plays a vital role in controlling deformations and the associated stresses developed in the tunnel lining. The critical case of a pipe or tunnel, in which the maximum tensile stress develops at its springline, occurs when it is constructed at shallow depths in the QS layer. The results of the parametric study were used to suggest recommendations for the construction of tunnels in QS employing micro-tunneling. Keywords: Numerical model, Micro-tunneling, Queenston shale (QS), Lubricant fluids (LFs)

4. Experimental and numerical studies on liquid wicking into filter papers for paper-based diagnostics

International Nuclear Information System (INIS)

Liu, Zhi; Hu, Jie; Zhao, Yimeng; Qu, Zhiguo; Xu, Feng

2015-01-01

Paper-based diagnostics have shown promising potential for applications in human disease surveillance and food safety analysis at the point of care (POC). The liquid wicking behavior in diagnostic fibrous paper plays an important role in the development of paper-based diagnostics. In the current study, we performed experimental and numerical research on the liquid wicking height and mass in filter paper strips of three different widths. The effective porosity could be conveniently measured from the linear correlation between wicking height and mass established with the experimental system. A modified model considering the evaporation effect was proposed to predict wicking height and mass. The wicking height and mass predicted by the evaporation model were much closer to the experimental data than those of the model without evaporation. The wicking speed initially decreased significantly and then settled at a low, nearly constant value; the evaporation effect tends to reduce the wicking flow speed. More wicking mass could be obtained with a larger strip width, but the corresponding reagent loss became significant. The proposed model with evaporation paves the way to understanding the fundamentals of fluid flow in diagnostic paper and provides a meaningful and useful reference for the research and development of paper-based diagnostic devices. - Highlights: • A model considering evaporation was proposed to predict wicking height and mass. • Flow characteristics of filter paper were experimentally and theoretically studied. • Effective porosity could be conveniently measured with the experimental platform. • The evaporation effect tended to reduce the wicking flow speed
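The qualitative behaviour reported above (wicking slows from its classical square-root-of-time rise, and evaporation lowers the attainable height) can be sketched with a Lucas-Washburn-type model plus a first-order evaporation sink. This is not the paper's model; the ODE form dh/dt = D_w/(2h) - k_e·h, in which the loss term grows with the wetted length, and all coefficient values are assumptions for illustration.

```python
import math

def wicking_height(D_w=1.0e-5, evap=0.0, t_end=600.0, dt=0.01, h0=1e-4):
    """Explicit-Euler integration of dh/dt = D_w/(2h) - evap*h.
    With evap = 0 this reduces to classical Lucas-Washburn wicking,
    whose exact solution is h(t) = sqrt(D_w*t + h0**2)."""
    h, t = h0, 0.0
    while t < t_end:
        h += (D_w / (2.0 * h) - evap * h) * dt
        t += dt
    return h

h_dry = wicking_height(evap=0.0)       # pure Lucas-Washburn rise
h_evap = wicking_height(evap=2e-3)     # evaporation caps the height
washburn_exact = math.sqrt(1.0e-5 * 600.0)
```

With evaporation the height approaches the equilibrium sqrt(D_w/(2·evap)), mirroring the observed transition from fast initial wicking to a low, nearly constant speed.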

5. SPAM CLASSIFICATION BASED ON SUPERVISED LEARNING USING MACHINE LEARNING TECHNIQUES

Directory of Open Access Journals (Sweden)

T. Hamsapriya

2011-12-01

Full Text Available E-mail is one of the most popular and frequently used ways of communication due to its worldwide accessibility, relatively fast message transfer, and low sending cost. The flaws in the e-mail protocols and the increasing volume of electronic business and financial transactions directly contribute to the increase in e-mail-based threats. Email spam is one of the major problems of today's Internet, bringing financial damage to companies and annoying individual users. Spam emails invade users' mailboxes without their consent, consuming network capacity as well as the time spent checking and deleting them. The vast majority of Internet users are outspoken in their disdain for spam, although enough of them respond to commercial offers that spam remains a viable source of income to spammers. While most users want to do the right thing to avoid and get rid of spam, they need clear and simple guidelines on how to behave. In spite of all the measures taken, spam has not yet been eradicated, and when countermeasures are oversensitive, even legitimate emails are eliminated. Among the approaches developed to stop spam, filtering is one of the most important techniques. Much research in spam filtering has centered on the more sophisticated classifier-related issues, and machine learning for spam classification has recently become an important research issue. The proposed work explores and identifies the use of different machine learning algorithms for classifying spam messages from e-mail. A comparative analysis among the algorithms is also presented.
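A minimal multinomial naive Bayes filter illustrates the kind of learning algorithm such comparisons cover. The toy corpus and messages below are invented; real filters train on large labelled datasets with richer features.

```python
import math
from collections import Counter

# Toy labelled corpus (illustrative only).
spam = ["win money now", "free offer click now", "win free prize"]
ham = ["meeting at noon", "project report attached", "lunch at noon?"]

def train(docs):
    counts = Counter(w for d in docs for w in d.lower().split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total, prior):
    lp = math.log(prior)
    for w in msg.lower().split():
        # Laplace smoothing so an unseen word does not zero the probability
        lp += math.log((counts[w] + 1) / (total + len(vocab)))
    return lp

def classify(msg):
    p_spam = log_prob(msg, spam_counts, spam_total, 0.5)
    p_ham = log_prob(msg, ham_counts, ham_total, 0.5)
    return "spam" if p_spam > p_ham else "ham"

label1 = classify("free money offer")
label2 = classify("report for the meeting")
```

Oversensitivity, the false-positive problem the abstract warns about, corresponds here to the decision threshold implicit in the class priors: raising the prior for "ham" trades spam recall for fewer legitimate emails lost.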

6. Improved mesh based photon sampling techniques for neutron activation analysis

International Nuclear Information System (INIS)

Relson, E.; Wilson, P. P. H.; Biondo, E. D.

2013-01-01

The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
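The alias method itself is generic and compact. The sketch below follows Vose's construction (O(n) table build, O(1) per draw); it is not the MCNP subroutine, and the four-voxel weight vector is an invented example.

```python
import random

def build_alias(weights):
    """Vose's alias method: precompute (prob, alias) tables for a discrete
    distribution so each sample needs one uniform index and one coin flip."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate the slack to column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are numerically 1.0
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    i = rng.randrange(len(prob))              # pick a column uniformly: O(1)
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(42)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])   # e.g. relative voxel source strengths
draws = [sample(prob, alias, rng) for _ in range(100000)]
freq = [draws.count(k) / len(draws) for k in range(4)]
```

Source biasing fits naturally on top: sample from biased weights and carry the ratio of true to biased probability as a particle weight.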

7. Numerical simulation of diffuse double layer around microporous electrodes based on the Poisson–Boltzmann equation

International Nuclear Information System (INIS)

Kitazumi, Yuki; Shirai, Osamu; Yamamoto, Masahiro; Kano, Kenji

2013-01-01

Graphical abstract: - Highlights: • Diffuse double layers overlap with each other in the micropore. • The overlapping of the diffuse double layers affects the double layer capacitance. • The electric field becomes weak in the micropore. • Electroneutrality is not satisfied in the micropore. - Abstract: The structure of the diffuse double layer around nm-sized micropores in porous electrodes has been studied by numerical simulation using the Poisson–Boltzmann equation. The double layer capacitance of the microporous electrode strongly depends on the electrode potential, the electrolyte concentration, and the size of the micropore. The potential and electrolyte concentration dependence of the capacitance differs from that of a planar electrode described by Gouy's theory. The overlapping of the diffuse double layers becomes conspicuous in the micropore, and the overlapped diffuse double layer produces a mild electric field. An intensified electric field exists at the rim of the orifice of the micropore because of the expansion of the diffuse double layers. The characteristic features of microporous electrodes are caused by the heterogeneity of the electric field around the micropores
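The planar baseline the micropore results are compared against is the Gouy-Chapman diffuse-layer capacitance, C_d = ε·κ·cosh(z·e·ψ0 / (2·k_B·T)), with κ the inverse Debye length. A quick evaluation for an assumed 10 mM 1:1 aqueous electrolyte at room temperature:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
E0 = 1.602e-19     # elementary charge, C
NA = 6.022e23      # Avogadro constant, 1/mol

def gouy_chapman_capacitance(psi0, conc_molar, eps_r=78.5, T=298.15, z=1):
    """Diffuse double-layer capacitance (F/m^2) of a planar electrode
    at surface potential psi0 (V), per Gouy-Chapman theory."""
    eps = eps_r * EPS0
    n0 = conc_molar * 1000.0 * NA                     # ion number density, 1/m^3
    kappa = math.sqrt(2.0 * n0 * (z * E0) ** 2 / (eps * KB * T))
    return eps * kappa * math.cosh(z * E0 * psi0 / (2.0 * KB * T))

c_pzc = gouy_chapman_capacitance(0.0, 0.01)    # at the potential of zero charge
c_100mV = gouy_chapman_capacitance(0.1, 0.01)  # 100 mV from the pzc
debye_length_nm = 1e9 * math.sqrt(
    78.5 * EPS0 * KB * 298.15 / (2.0 * 0.01 * 1000.0 * NA * E0 ** 2))
```

For 10 mM the Debye length is about 3 nm, comparable to the nm-sized pores studied above, which is precisely the regime where neighbouring diffuse layers overlap and this planar formula breaks down.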

8. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

Science.gov (United States)

Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation for Monte Carlo high-energy particle transport simulations under different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard the ISS, especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

9. Numerical model and analysis of an energy-based system using microwaves for vision correction

Science.gov (United States)

2009-02-01

A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.
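The thermal side of the mechanism described above (depth-decaying power deposition combined with surface cooling that spares the epithelium and pushes the peak temperature into the stroma) can be sketched with a 1D explicit finite-difference heat equation. This is a drastic simplification of the paper's axisymmetric finite element model, and every parameter value below is invented for illustration.

```python
import math

def corneal_temperature(depth_m=1.0e-3, nodes=51, alpha=1.4e-7, k_cond=0.58,
                        q0=5.0e8, pen_m=2.0e-4, t_on=0.2, dt=1e-4,
                        t_body=35.0, t_cool=10.0):
    """1D transient conduction with an exponentially decaying volumetric
    source (microwave deposition) and a chilled surface node.
    Stability: dt*alpha/dx^2 = 0.035 << 0.5 for these values."""
    dx = depth_m / (nodes - 1)
    rho_c = k_cond / alpha                    # volumetric heat capacity, J/(m^3 K)
    q = [q0 * math.exp(-i * dx / pen_m) for i in range(nodes)]
    T = [t_body] * nodes
    for _ in range(round(t_on / dt)):
        Tn = T[:]
        Tn[0] = t_cool                        # cooled applicator surface
        for i in range(1, nodes - 1):
            lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx ** 2
            Tn[i] = T[i] + dt * (alpha * lap + q[i] / rho_c)
        Tn[-1] = t_body                       # deep tissue held at body temperature
        T = Tn
    return T

temps = corneal_temperature()
peak_index = temps.index(max(temps))          # hottest node sits below the surface
```

The subsurface peak is the qualitative feature that matters: the cooled boundary keeps the shallowest tissue near the coolant temperature while the deposited power heats the region just beneath it.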

10. Numerical flow models and their calibration using tracer based ages: Chapter 10

Science.gov (United States)

Sanford, W.

2013-01-01

Any estimate of ‘age’ of a groundwater sample based on environmental tracers requires some form of geochemical model to interpret the tracer chemistry (chapter 3) and is, therefore, referred to in this chapter as a tracer model age. The tracer model age of a groundwater sample can be useful for obtaining information on the residence time and replenishment rate of an aquifer system, but that type of data is most useful when it can be incorporated with all other information that is known about the groundwater system under study. Groundwater flow models are constructed of aquifer systems because they are usually the best way of incorporating all of the known information about the system in the context of a mathematical framework that constrains the model to follow the known laws of physics and chemistry as they apply to groundwater flow and transport. It is important that the purpose or objective of the study be identified before choosing the type and complexity of the model to be constructed, and to make sure such a model is necessary. The purpose of a modelling study is most often to characterize the system within a numerical framework, such that the hydrological responses of the system can be tested under potential stresses that might be imposed given future development scenarios. As this manual discusses dating as it applies to old groundwater, most readers are likely to be interested in studying regional groundwater flow systems and their water resource potential.

11. Multi-dimensional scavenging analysis of a free-piston linear alternator based on numerical simulation

Energy Technology Data Exchange (ETDEWEB)

Mao, Jinlong; Zuo, Zhengxing; Li, Wen; Feng, Huihua [School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081 (China)

2011-04-15

A free-piston linear alternator (FPLA) is being developed by the Beijing Institute of Technology to improve the thermal efficiency relative to conventional crank-driven engines. A two-stroke scavenging process recharges the engine and is crucial to realizing the continuous operation of a free-piston engine. In order to study the FPLA scavenging process, the scavenging system was configured using computational fluid dynamics. As the piston dynamics of the FPLA differ from those of conventional crank-driven two-stroke engines, a time-based numerical simulation program was built using Matlab to define the piston's motion profiles. A wide range of design and operating options was investigated, including effective stroke length, valve overlapping distance, operating frequency and charging pressure, to determine their effects on the scavenging performance. The results indicate that a combination of a high effective stroke length to bore ratio and a long valve overlapping distance with a low supercharging pressure has the potential to achieve high scavenging and trapping efficiencies with low short-circuiting losses. (author)
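The time-based piston-motion program mentioned above (built in Matlab by the authors) can be sketched in a few lines: a free piston has no crank constraint, so its trajectory comes from integrating Newton's law between two gas springs. The sketch below uses adiabatic compression and a linear damper standing in for the alternator load; every parameter value is a hypothetical placeholder, and no combustion or scavenging model is included.

```python
import numpy as np

def piston_motion(m=3.0, area=3.8e-3, x_max=0.06, p0=1.2e5, gamma=1.35,
                  c_load=120.0, dt=1e-5, t_end=0.2):
    """Integrate m*x'' = (p_left - p_right)*A - c_load*v by explicit Euler."""
    x, v = 0.01, 0.0                       # initial offset from mid-stroke
    ts, xs = [], []
    for i in range(int(t_end / dt)):
        # in-cylinder pressures from adiabatic volume change on each side
        p_left = p0 * (x_max / (x_max + x)) ** gamma
        p_right = p0 * (x_max / (x_max - x)) ** gamma
        force = (p_left - p_right) * area - c_load * v   # alternator as damper
        v += force / m * dt
        x += v * dt
        ts.append(i * dt)
        xs.append(x)
    return np.array(ts), np.array(xs)
```

Because the gas springs provide a restoring force and the load extracts energy, the motion is a decaying oscillation about mid-stroke, which is exactly why the piston motion profile (and hence the scavenging timing) must be computed rather than assumed sinusoidal.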

12. The human otitis media with effusion: a numerical-based study.

Science.gov (United States)

Areias, B; Parente, M P L; Santos, C; Gentil, F; Natal Jorge, R M

2017-07-01

Otitis media is a group of inflammatory diseases of the middle ear. Acute otitis media and otitis media with effusion (OME) are its two main types of manifestation. Otitis media is common in children and can result in structural alterations in the middle ear that lead to hearing loss. This work studies the effects of an OME on the sound transmission from the external auditory meatus to the inner ear. The finite element method was applied in the present biomechanical study. The numerical model used in this work was built based on the geometrical information obtained from the Visible Ear Project. The present work explains the mechanisms by which the presence of fluid in the middle ear affects hearing, by calculating the magnitude, phase and reduction of the normalized umbo velocity and also the magnitude and phase of the normalized stapes velocity. A sound pressure level of 90 dB SPL was applied at the tympanic membrane. The harmonic analysis was performed with the auditory frequency varying from 100 Hz to 10 kHz. A decrease in the normalized umbo and stapes velocity responses was obtained as the tympanic cavity was filled with fluid. The decrease was more accentuated at the umbo.

13. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

Science.gov (United States)

Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

2017-01-01

In this study, the finite-difference time-domain (FDTD) algorithm has been used to solve the cell light scattering problem. Before running the simulation comparison, it is necessary to identify the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation involves building a simple cell model consisting of organelles, a nucleus and cytoplasm, and setting a suitable mesh precision. Setting up a total-field/scattered-field source as the excitation source, together with a far-field projection analysis group, is also important. Each step needs to be justified mathematically, in terms of numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicated that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. The study may help identify regularities from the simulation results, which can be meaningful for the early diagnosis of cancers.
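The core of any FDTD solver such as the one described above is a leapfrog update of interleaved electric and magnetic fields, subject to the Courant stability limit. A minimal one-dimensional free-space sketch (hypothetical grid size and a simple soft Gaussian source, rather than the paper's 3-D total-field/scattered-field setup) looks like this:

```python
import numpy as np

def fdtd_1d(n=400, steps=200, courant=0.5):
    """1-D FDTD leapfrog loop; `courant` must be <= 1 for stability."""
    ez = np.zeros(n)        # electric field at integer grid points
    hy = np.zeros(n - 1)    # magnetic field at half-integer grid points
    src = n // 4
    for t in range(steps):
        hy += courant * (ez[1:] - ez[:-1])          # update H from curl E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])    # update E from curl H
        ez[src] += np.exp(-((t - 40) / 12.0) ** 2)  # soft Gaussian source
    return ez
```

The injected pulse splits into left- and right-travelling waves at half the grid speed; the 3-D simulations in the abstract add exactly the machinery this sketch omits: the perfectly matched layer, the scatterer's material map, and the near-to-far-field projection.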

14. An Energy-Efficient Cluster-Based Vehicle Detection on Road Network Using Intention Numeration Method

Directory of Open Access Journals (Sweden)

Deepa Devasenapathy

2015-01-01

Full Text Available The traffic in the road network is increasing at an ever greater rate. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road network using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers Loop Sensor data set from the UCI repository, and the method outperforms existing work on energy consumption, clustering efficiency, and node drain rate.

15. An energy-efficient cluster-based vehicle detection on road network using intention numeration method.

Science.gov (United States)

Devasenapathy, Deepa; Kannan, Kathiravan

2015-01-01

The traffic in the road network is increasing at an ever greater rate. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road network using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers Loop Sensor data set from the UCI repository, and the method outperforms existing work on energy consumption, clustering efficiency, and node drain rate.
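The two numerical steps named in the abstract (a polynomial regression on sampled drain rates, then an integral to estimate total energy) can be sketched directly. The sample values below are hypothetical, not from the Dodgers data set:

```python
import numpy as np

def drain_rate_model(t, rate, degree=2):
    """Fit a polynomial to drain-rate samples and integrate it over [t0, tN]."""
    coeffs = np.polyfit(t, rate, degree)    # least-squares polynomial fit
    poly = np.poly1d(coeffs)
    antideriv = np.polyint(poly)            # antiderivative of the fit
    total = antideriv(t[-1]) - antideriv(t[0])   # definite integral = energy
    return poly, total

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # hours, hypothetical
rate = np.array([1.0, 1.2, 1.9, 3.1, 4.8])       # mW, hypothetical samples
poly, total = drain_rate_model(t, rate)
```

Integrating the fitted curve rather than summing raw samples smooths sensor noise, which is presumably why the authors model the drain rate before estimating total node energy.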

16. Numerical study of natural convection in a horizontal cylinder filled with water-based alumina nanofluid.

Science.gov (United States)

Meng, Xiangyin; Li, Yan

2015-01-01

Natural heat convection of water-based alumina (Al2O3/water) nanofluids (with volume fractions of 1% and 4%) in a horizontal cylinder is numerically investigated. The whole three-dimensional computational fluid dynamics (CFD) procedure is performed in a completely open-source way: Blender, enGrid, OpenFOAM and ParaView are employed for geometry creation, mesh generation, case simulation and post-processing, respectively. The original solver 'buoyantBoussinesqSimpleFoam' is selected for the present study, and a temperature-dependent solver 'buoyantBoussinesqSimpleTDFoam' is developed to make the simulation more realistic. The two solvers are used for the same cases and compared against the corresponding experimental results. The flow regime in these cases is laminar (the Reynolds number is 150) and the Rayleigh number ranges from 0.7 × 10^7 to 5 × 10^7. By comparison, the average natural-convection Nusselt numbers of water and Al2O3/water nanofluids are found to increase with the Rayleigh number. At the same Rayleigh number, the Nusselt number is found to decrease with nanofluid volume fraction. The temperature-dependent solver is found to be better for the water and 1% Al2O3/water nanofluid cases, while the original solver is better for the 4% Al2O3/water nanofluid cases. Furthermore, due to strong three-dimensional flow features in the horizontal cylinder, three-dimensional CFD simulation is recommended instead of two-dimensional simplifications.
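The reported trend (average Nusselt number increasing with Rayleigh number) can be cross-checked against a standard empirical estimate. The sketch below uses the Churchill-Chu correlation for natural convection from a horizontal cylinder as an illustrative stand-in, not the paper's CFD model, with an assumed Prandtl number typical of water:

```python
def nusselt_churchill_chu(ra, pr=6.0):
    """Average Nu for a horizontal cylinder, valid for 1e-5 < Ra < 1e12."""
    f = (1.0 + (0.559 / pr) ** (9.0 / 16.0)) ** (16.0 / 9.0)
    return (0.60 + 0.387 * (ra / f) ** (1.0 / 6.0)) ** 2
```

Evaluating it at the two ends of the paper's Rayleigh range (0.7e7 and 5e7) gives average Nusselt numbers that grow with Ra, matching the CFD trend; the nanofluid volume-fraction effect, by contrast, has no counterpart in this single-fluid correlation.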

17. Numerical Investigation on Electron and Ion Transmission of GEM-based Detectors

Directory of Open Access Journals (Sweden)

Bhattacharya Purba

2018-01-01

Full Text Available ALICE at the LHC is planning a major upgrade of its detector systems, including the TPC, to cope with an increase of the LHC luminosity after 2018. Different R&D activities are currently concentrated on the adoption of the Gas Electron Multiplier (GEM) as the gas amplification stage of the ALICE-TPC upgrade version. The major challenge is to achieve low ion feedback in the drift volume as well as to ensure collection of a good percentage of the primary electrons in the signal generation process. In the present work, the Garfield simulation framework has been adopted to numerically estimate the electron transparency and ion backflow fraction of GEM-based detectors. In this process, extensive simulations have been carried out to enrich our understanding of the complex physical processes occurring within single, triple and quadruple GEM detectors. A detailed study has been performed to observe the effect of detector geometry, field configuration and magnetic field on the above-mentioned characteristics.

18. A comparative study of surface- and volume-based techniques for the automatic registration between CT and SPECT brain images

International Nuclear Information System (INIS)

Kagadis, George C.; Delibasis, Konstantinos K.; Matsopoulos, George K.; Mouravliansky, Nikolaos A.; Asvestas, Pantelis A.; Nikiforidis, George C.

2002-01-01

Image registration of multimodality images is an essential task in numerous applications in three-dimensional medical image processing. Medical diagnosis can benefit from the complementary information in different modality images. Surface-based registration techniques, while still widely used, have been succeeded by volume-based registration algorithms that appear to be theoretically advantageous in terms of reliability and accuracy. Several applications of such algorithms for the registration of CT-MRI, CT-PET, MRI-PET, and SPECT-MRI images have emerged in the literature, using local optimization techniques for the matching of images. Our purpose in this work is the development of automatic techniques for the registration of real CT and SPECT images, based on either surface- or volume-based algorithms. Optimization is achieved using genetic algorithms, which are known for their robustness. The two techniques are compared against a well-established method, the Iterative Closest Point (ICP) algorithm. The correlation coefficient was employed as an independent measure of spatial match, to produce unbiased results. The repeated-measures ANOVA indicates the significant impact of the choice of registration method on the magnitude of the correlation (F=4.968, p=0.0396). The volume-based method achieves an average correlation coefficient value of 0.454 with a standard deviation of 0.0395, as opposed to an average of 0.380 with a standard deviation of 0.0603 achieved by the surface-based method and an average of 0.396 with a standard deviation equal to 0.0353 achieved by ICP. The volume-based technique performs significantly better than both ICP (p<0.05, Newman-Keuls test) and the surface-based technique (p<0.05, Newman-Keuls test). Surface-based registration and ICP do not differ significantly in performance.
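The independent match measure used above, the correlation coefficient between voxel intensities of the two registered volumes, is straightforward to compute. The sketch below operates on small synthetic arrays rather than real CT/SPECT data:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between the voxel intensities of two volumes."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))                   # synthetic reference volume
noisy = vol + 0.1 * rng.random((16, 16, 16))     # well-aligned, noisy copy
```

A well-registered pair scores near 1, while an unrelated pair scores near 0, which is what makes the measure usable as an unbiased check on registration quality.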

19. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

Science.gov (United States)

Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

2011-12-01

The field of marine and hydrokinetic (MHK) energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain largely unexplored today. In this work, a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial-flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied

20. Intelligent Search Method Based ACO Techniques for a Multistage Decision Problem EDP/LFP

Directory of Open Access Journals (Sweden)

Mostefa RAHLI

2006-07-01

Full Text Available The implementation of a numerical optimization library for electrical supply networks is at the centre of current research orientations; our project is thus centred on the development of the NMSS platform, a software environment that will save considerable effort in load-flow calculation, curve smoothing, loss calculation and economic dispatch of the generated powers [23]. Operational research [17] on the one hand and industrial practice on the other show that simulation means and processes have reached a very appreciable level of reliability and mathematical confidence [4, 5, 14]; it is from this expert observation that many processes place confidence in simulation results. The handicap of this methodology is that it bases its judgments and manipulations on simplified assumptions and constraints whose influence is deliberately neglected, at an added cost [14]. By juxtaposing simulation methods with artificial intelligence techniques, a gathered set of numerical methods acquires an optimal reliability whose assurance leaves little room for doubt. The NMSS software environment [23] rallies simulation and electric network calculation techniques via a graphic interface, and integrates an AI capability via an expert-system module. Our problem is a multistage case in which the stages are completely dependent and cannot be performed separately. For a multistage problem [21, 22], the results obtained from a credible (large-size problem) calculation raise the following question: could a choice of the set of numerical methods make the total error of a complete calculation using more than two treatment levels the weakest possible? It is well known in algorithmics that each treatment can be characterized by a function called its mathematical complexity. This complexity is in fact a cost (a weight overloading

1. Structural level characterization of base oils using advanced analytical techniques

KAUST Repository

Hourani, Nadim; Muller, Hendrik; Adam, Frederick M.; Panda, Saroj K.; Witt, Matthias; Al-Hajji, Adnan A.; Sarathy, Mani

2015-01-01

cyclotron resonance mass spectrometry (FT-ICR MS) equipped with atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) sources. First, the capabilities and limitations of each analytical technique were evaluated

2. Pseudodynamic Bearing Capacity Analysis of Shallow Strip Footing Using the Advanced Optimization Technique “Hybrid Symbiosis Organisms Search Algorithm” with Numerical Validation

Directory of Open Access Journals (Sweden)

Arijit Saha

2018-01-01

Full Text Available The analysis of shallow foundations subjected to seismic loading has been an important area of research for civil engineers. This paper presents an upper-bound solution for the bearing capacity of a shallow strip footing, considering composite failure mechanisms by the pseudodynamic approach. A recently developed hybrid symbiosis organisms search (HSOS) algorithm has been used to solve this problem. In the HSOS method, the exploration capability of SQI and the exploitation potential of SOS have been combined to increase the robustness of the algorithm. This combination can improve the searching capability of the algorithm for attaining the global optimum. Numerical analysis is also done using the dynamic modules of PLAXIS-8.6v for the validation of this analytical solution. The results obtained from the present analysis using HSOS are thoroughly compared with the existing available literature and also with other optimization techniques. The significance of the present methodology for analyzing the bearing capacity is discussed, and the acceptability of the HSOS technique for solving such engineering problems is justified.
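For readers unfamiliar with the SOS half of the hybrid, the three phases of the standard Symbiotic Organisms Search (mutualism, commensalism, parasitism) can be sketched on a simple test function. This is a minimal sketch of plain SOS only; the paper's HSOS additionally hybridizes it with SQI, which is omitted here, and all parameter values are illustrative:

```python
import numpy as np

def sos_minimize(f, dim=2, pop=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal Symbiotic Organisms Search minimizing f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (pop, dim))
    fx = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        best = x[fx.argmin()]
        for i in range(pop):
            j = rng.integers(pop - 1)
            j += j >= i                                  # pick j != i
            # mutualism: i and j both move toward the current best organism
            mutual = (x[i] + x[j]) / 2.0
            bf1, bf2 = rng.integers(1, 3, 2)             # benefit factors 1 or 2
            for k, bf in ((i, bf1), (j, bf2)):
                cand = np.clip(x[k] + rng.random(dim) * (best - mutual * bf),
                               lo, hi)
                fc = f(cand)
                if fc < fx[k]:
                    x[k], fx[k] = cand, fc
            # commensalism: i benefits from j, j is unaffected
            cand = np.clip(x[i] + rng.uniform(-1, 1, dim) * (best - x[j]), lo, hi)
            if f(cand) < fx[i]:
                x[i], fx[i] = cand, f(cand)
            # parasitism: a mutated copy of i tries to displace j
            parasite = x[i].copy()
            parasite[rng.integers(dim)] = rng.uniform(lo, hi)
            if f(parasite) < fx[j]:
                x[j], fx[j] = parasite, f(parasite)
    return x[fx.argmin()], float(fx.min())
```

Greedy acceptance in every phase makes the best objective value monotonically non-increasing, which is the property the bearing-capacity search relies on.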

3. A simplified early-warning system for imminent landslide prediction based on failure index fragility curves developed through numerical analysis

Directory of Open Access Journals (Sweden)

Ugur Ozturk

2016-07-01

Full Text Available Early-warning systems (EWSs) are crucial to reduce the risk of landslides, especially where structural measures are not fully capable of preventing the devastating impact of such an event. Furthermore, designing and successfully implementing a complete landslide EWS is a highly complex task. The main technical challenges are linked to the definition of heterogeneous material properties (geotechnical and geomechanical parameters) as well as a variety of triggering factors. In addition, real-time data processing creates significant complexity, since data collection and numerical models for risk assessment are time-consuming tasks. Therefore, uncertainties in the physical properties of a landslide, together with the data management, represent the two crucial deficiencies in an efficient landslide EWS. Within this study, the application of the concept of fragility curves to landslides is explored; fragility curves are widely used to simulate system responses to natural hazards, e.g. floods or earthquakes. The application of fragility curves to landslide risk assessment is believed to simplify emergency risk assessment, even though it cannot substitute for detailed analysis in peacetime. A simplified risk assessment technique can remove some of the unclear features and decrease data processing time. The method is based on synthetic samples which are used to define approximate failure thresholds for landslides, taking into account the materials and the piezometric levels. The results are presented in charts. The method presented in this paper, which is called the failure index fragility curve (FIFC), allows assessment of the actual real-time risk in a case study based on the most appropriate FIFC. The application of an FIFC to a real case is presented as an example. This method of assessing landslide risk is another step towards a more integrated dynamic approach to a potential landslide prevention system. Even if it does not define

4. Numerical analysis of thermo-hydro-mechanical (THM) processes in the clay based material

Energy Technology Data Exchange (ETDEWEB)

Wang, Xuerui

2016-10-06

Clay formations are investigated worldwide as potential host rock for the deep geological disposal of high-level radioactive waste (HLW). Usually bentonite is preferred as the buffer and backfill material in the disposal system. In the disposal of HLW, heat emission is one of the most important issues, as it can generate a series of complex thermo-hydro-mechanical (THM) processes in the surrounding materials and thus change the material properties. In the context of safety assessment, it is important to understand the thermally induced THM interactions and the associated change in material properties. In this work, the thermally induced coupled THM behaviours in the clay host rock and in the bentonite buffer, as well as the corresponding coupling effects among the relevant material properties, are numerically analysed. A coupled non-isothermal Richards flow mechanical model and a non-isothermal multiphase flow model were developed based on the scientific computing code OpenGeoSys (OGS). Heat transfer in the porous media is governed by thermal conduction and advective flow of the pore fluids. Within the hydraulic processes, evaporation, vapour diffusion, and the unsaturated flow field are considered. Darcy's law is used to describe the advective flux of the gas and liquid phases. The relative permeability of each phase is considered. The elastic deformation process is modelled by the generalized Hooke's law complemented with additional strain caused by swelling/shrinkage behaviour and by temperature change. In this study, special attention has been paid to the analysis of the thermally induced changes in material properties. The strong mechanical and hydraulic anisotropic properties of clay rock are described by a transversely isotropic mechanical model and by a transversely isotropic permeability tensor, respectively. The thermal anisotropy is described by adopting a bedding-orientation-dependent thermal conductivity. The dependency of the thermal

5. Numerical analysis of thermo-hydro-mechanical (THM) processes in the clay based material

International Nuclear Information System (INIS)

Wang, Xuerui

2016-01-01

Clay formations are investigated worldwide as potential host rock for the deep geological disposal of high-level radioactive waste (HLW). Usually bentonite is preferred as the buffer and backfill material in the disposal system. In the disposal of HLW, heat emission is one of the most important issues, as it can generate a series of complex thermo-hydro-mechanical (THM) processes in the surrounding materials and thus change the material properties. In the context of safety assessment, it is important to understand the thermally induced THM interactions and the associated change in material properties. In this work, the thermally induced coupled THM behaviours in the clay host rock and in the bentonite buffer, as well as the corresponding coupling effects among the relevant material properties, are numerically analysed. A coupled non-isothermal Richards flow mechanical model and a non-isothermal multiphase flow model were developed based on the scientific computing code OpenGeoSys (OGS). Heat transfer in the porous media is governed by thermal conduction and advective flow of the pore fluids. Within the hydraulic processes, evaporation, vapour diffusion, and the unsaturated flow field are considered. Darcy's law is used to describe the advective flux of the gas and liquid phases. The relative permeability of each phase is considered. The elastic deformation process is modelled by the generalized Hooke's law complemented with additional strain caused by swelling/shrinkage behaviour and by temperature change. In this study, special attention has been paid to the analysis of the thermally induced changes in material properties. The strong mechanical and hydraulic anisotropic properties of clay rock are described by a transversely isotropic mechanical model and by a transversely isotropic permeability tensor, respectively. The thermal anisotropy is described by adopting a bedding-orientation-dependent thermal conductivity. The dependency of the thermal

6. Contests versus Norms: Implications of Contest-Based and Norm-Based Intervention Techniques.

Science.gov (United States)

Bergquist, Magnus; Nilsson, Andreas; Hansla, André

2017-01-01

Interventions using either contests or norms can promote environmental behavioral change. Yet research on the implications of contest-based and norm-based interventions is lacking. Based on Goal-framing theory, we suggest that a contest-based intervention frames a gain goal promoting intensive but instrumental behavioral engagement. In contrast, the norm-based intervention was expected to frame a normative goal activating normative obligations for targeted and non-targeted behavior and motivation to engage in pro-environmental behaviors in the future. In two studies participants (n = 347) were randomly assigned to either a contest- or a norm-based intervention technique. Participants in the contest showed more intensive engagement in both studies. Participants in the norm-based intervention tended to report higher intentions for future energy conservation (Study 1) and higher personal norms for non-targeted pro-environmental behaviors (Study 2). These findings suggest that contest-based intervention technique frames a gain goal, while norm-based intervention frames a normative goal.

7. Contests versus Norms: Implications of Contest-Based and Norm-Based Intervention Techniques

Directory of Open Access Journals (Sweden)

Magnus Bergquist

2017-11-01

Full Text Available Interventions using either contests or norms can promote environmental behavioral change. Yet research on the implications of contest-based and norm-based interventions is lacking. Based on Goal-framing theory, we suggest that a contest-based intervention frames a gain goal promoting intensive but instrumental behavioral engagement. In contrast, the norm-based intervention was expected to frame a normative goal activating normative obligations for targeted and non-targeted behavior and motivation to engage in pro-environmental behaviors in the future. In two studies participants (n = 347) were randomly assigned to either a contest- or a norm-based intervention technique. Participants in the contest showed more intensive engagement in both studies. Participants in the norm-based intervention tended to report higher intentions for future energy conservation (Study 1) and higher personal norms for non-targeted pro-environmental behaviors (Study 2). These findings suggest that contest-based intervention technique frames a gain goal, while norm-based intervention frames a normative goal.

8. FPGA based mixed-signal circuit novel testing techniques

International Nuclear Information System (INIS)

Pouros, Sotirios; Vassios, Vassilios; Papakostas, Dimitrios; Hristov, Valentin

2013-01-01

Electronic circuit fault-detection techniques, especially for modern mixed-signal circuits, have evolved and been customized around the world to meet industry needs. The paper presents techniques used for fault detection in mixed-signal circuits. Moreover, the paper covers standardized methods, along with current innovations for external testing such as Design for Testability (DfT) and Built-In Self-Test (BIST) systems. Finally, the research team introduces a circuit implementation scheme using an FPGA.

9. Biogeosystem technique as a base of Sustainable Irrigated Agriculture

Science.gov (United States)

Batukaev, Abdulmalik

2016-04-01

The world water strategy must change, because the current imitational, gravitational, frontal, isotropic-continual paradigm of irrigation is not sustainable. This paradigm causes excessive consumption of fresh water (a global deficit of up to 4-15 times) and adverse effects on soils and landscapes. Current methods of irrigation do not control the spread of water through the soil continuum. Preferential downward fluxes of irrigation water form, and up to 70% and more of the water supply is lost into the vadose zone. The moisture of irrigated soil is high, the soil loses structure through flotation-driven decomposition of its granulometric fractions, the stomatal apparatus of the plant leaf is fully open, and the transpiration rate is maximal. We propose the Biogeosystem Technique: transcendental, uncommon and non-imitating methods for sustainable natural resources management. The new paradigm of irrigation is based on an intra-soil pulse discrete method of water supply into the soil continuum by injection in small discrete portions. An individual volume of water is supplied into a vertical cylinder of preliminary soil watering. The cylinder is positioned in the soil at a depth of 10 to 30 cm, and its diameter is 1-2 cm. Within 5-10 min after injection, the water spreads from the cylinder of preliminary watering into the surrounding soil by capillary, film and vapor transfer. A small amount of water is transferred gravitationally to a depth of 35-40 cm. The resulting soil watering cylinder is positioned at a depth of 5-50 cm, with a diameter of 2-4 cm. The lateral distance between adjacent cylinders along the plant row is 10-15 cm. The carcass of non-watered soil surrounding the cylinder remains relatively dry and mechanically stable. After water injection, the soil structure in the cylinder restores quickly, because there is no compression from the stable adjoining volume of soil and because of soil structure memory. The mean soil thermodynamic water potential of the watered zone is -0.2 MPa. At this potential

10. Numerical Simulation of an Oscillatory-Type Tidal Current Powered Generator Based on Robotic Fish Technology

Directory of Open Access Journals (Sweden)

Ikuo Yamamoto

2017-10-01

Full Text Available The generation of clean renewable energy is becoming increasingly critical, as pollution and global warming threaten the environment in which we live. While there are many different kinds of natural energy that can be harnessed, marine tidal energy offers reliability and predictability. However, harnessing energy from tidal flows is inherently difficult due to the harsh environment. Current mechanisms used to harness tidal flows center around propeller-based solutions, but these are particularly prone to failure due to marine fouling, such as encrustations and seaweed entanglement, and the corrosion that naturally occurs in sea water. To harness tidal flow energy in a cost-efficient manner, a mechanism that is inherently resistant to these harsh conditions is required. One such mechanism is a simple oscillatory-type mechanism based on robotic fish tail fin technology. This uses the physical phenomenon of vortex-induced oscillation, in which water currents flowing around an object induce transverse motion. We consider two specific types of oscillator: first, a wing-type oscillator, for which the optimal elastic modulus is sought; and second, a reciprocating oscillating head-type oscillator, for which the optimal shape is selected from six basic shapes. A numerical analysis tool for fluid-structure-coupled problems (ANSYS) was used to select the optimum softness of material for the first type of oscillator and the best shape for the second, based on the exhibition of high lift coefficients. For the wing-type oscillator, an optimum elastic modulus for an airfoil was found. For the self-induced vibration-type mechanism, based on analysis of the vorticity and velocity distribution, a square-shaped head exhibited a lift coefficient more than two times that of a cylindrically shaped head. Analysis of the flow field clearly showed that the discontinuous flow caused by a square-headed oscillator results in

11. Evaluation of the base/subgrade soil under repeated loading : phase I--laboratory testing and numerical modeling of geogrid reinforced bases in flexible pavement.

Science.gov (United States)

2009-10-01

This report documents the results of a study conducted to characterize the behavior of geogrid-reinforced base course materials. The research was conducted through experimental testing and numerical modeling programs. The experimental...

12. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

Science.gov (United States)

Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

2017-12-01

Biomass-fired cook stoves have for many years been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of the gas during operation. The numerical model is validated against experimental results. Computation of the numerical model is executed using the non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations, along with the κ - ɛ model, govern the turbulent flow within the computed domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

13. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

Science.gov (United States)

Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

2015-10-01

In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
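The bucket-detector correlation step at the heart of computational GI can be sketched numerically. The toy example below (our own illustration, not the authors' code) generates random intensity patterns as the shared key, collects bucket values for a small binary object standing in for the QR-coded image, and reconstructs it by correlating the centered bucket signal with the patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                      # image is n x n (a stand-in for the QR-coded image)
m = 4000                    # number of random patterns / measurements

# Hypothetical test object: a small binary pattern standing in for a QR code
x = np.zeros((n, n))
x[4:12, 4:12] = rng.integers(0, 2, (8, 8))

# Key: m random intensity patterns shared between Alice and Bob
patterns = rng.random((m, n, n))

# Bucket detector values: total intensity transmitted through the object
y = np.einsum('mij,ij->m', patterns, x)

# Correlation-based GI reconstruction: <(y - <y>) * (pattern - <pattern>)>
recon = np.tensordot(y - y.mean(), patterns - patterns.mean(axis=0), axes=1) / m

# The reconstruction correlates strongly with the hidden object
corr = np.corrcoef(recon.ravel(), x.ravel())[0, 1]
print(round(corr, 2))
```

A CS solver would replace the plain correlation with a sparse reconstruction, which is what lets the scheme use fewer measurements.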

14. An Adjoint-based Numerical Method for a class of nonlinear Fokker-Planck Equations

KAUST Repository

2017-01-01

Here, we introduce a numerical approach for a class of Fokker-Planck (FP) equations. These equations are the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. Using this structure, we show how to transfer the properties of schemes for HJ equations to the FP equations. Hence, we get numerical schemes with desirable features such as positivity and mass-preservation. We illustrate this approach in examples that include mean-field games and a crowd motion model.
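The positivity and mass-preservation properties mentioned above can be illustrated with an upwind finite-volume scheme for a one-dimensional Fokker-Planck equation. This is our own minimal sketch (illustrative drift, diffusion and grid parameters), not the scheme from the paper:

```python
import numpy as np

# Illustrative 1D Fokker-Planck: rho_t = (b(x)*rho)_x + nu*rho_xx,
# discretized with an upwind flux for the drift and centered diffusion,
# with zero-flux boundaries so total mass is conserved exactly.
nx, nu, dt, steps = 100, 0.05, 1e-4, 2000
x = np.linspace(-1, 1, nx)
dx = x[1] - x[0]
b = x                          # drift pushing mass toward the origin
rho = np.ones(nx)
rho /= rho.sum() * dx          # normalize so the total mass is 1

for _ in range(steps):
    bf = 0.5 * (b[:-1] + b[1:])                     # drift at cell interfaces
    # Upwind choice: advection velocity is -b, so bf>0 takes the right state
    G = np.where(bf > 0, bf * rho[1:], bf * rho[:-1])
    G += nu * (rho[1:] - rho[:-1]) / dx             # centered diffusion flux
    drho = np.zeros(nx)
    drho[:-1] += G / dx                             # each cell's right interface
    drho[1:]  -= G / dx                             # each cell's left interface
    rho = rho + dt * drho

mass = rho.sum() * dx
print(round(mass, 6), round(float(rho.min()), 6))
```

Under the usual CFL restriction the update is a convex combination of neighboring cell values, which is why positivity survives the time stepping.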

15. Numerical stability for velocity-based 2-phase formulation for geotechnical dynamic analysis

OpenAIRE

Mieremet, M.M.J.

2015-01-01

As a master's student in Applied Mathematics at the Delft University of Technology I am highly educated in Numerical Analysis. My interest in this field even made me choose elective courses such as Advanced Numerical Methods, Applied Finite Elements and Computational Fluid Dynamics. In my search for a challenging graduation project I chose a research proposal on the material point method, an extension of the finite element method that is well-suited for problems involving large deformations. The p...

16. An Adjoint-based Numerical Method for a class of nonlinear Fokker-Planck Equations

KAUST Repository

2017-03-22

Here, we introduce a numerical approach for a class of Fokker-Planck (FP) equations. These equations are the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. Using this structure, we show how to transfer the properties of schemes for HJ equations to the FP equations. Hence, we get numerical schemes with desirable features such as positivity and mass-preservation. We illustrate this approach in examples that include mean-field games and a crowd motion model.

17. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

Science.gov (United States)

Guchhait, Shyamal; Banerjee, Biswanath

2018-04-01

In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty-based approach is followed to incorporate the measured data. The penalization parameter not only helps to incorporate corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic materials. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

18. Numerical investigation of aerodynamic performance of darrieus wind turbine based on the magnus effect

Directory of Open Access Journals (Sweden)

2016-10-01

Full Text Available The use of several developmental approaches is the researchers' major preoccupation with the DARRIEUS wind turbine. This paper presents the first approach and results of a wide computational investigation of the aerodynamics of a vertical-axis DARRIEUS wind turbine based on the MAGNUS effect. Wind tunnel tests were carried out to ascertain the overall performance of the turbine, and two-dimensional unsteady computational fluid dynamics (CFD) models were generated to help understand the aerodynamics of this new design. A moving mesh technique was used in which the turbine blades were cylinders. The turbine model was created in the Gambit modeling software and then read into the Fluent software for fluid flow analysis. Flow field characteristics are investigated for several values of the tip speed ratio (TSR). In this case we introduced a new rotational speed ratio between the turbine and the cylinder (δ = ωC/ωT). This new concept based on the MAGNUS approach provides the best configuration for better power coefficient values. The positive values of Cp obtained in this study are used to generate energy; on the other hand, the negative values of Cp could be used to supply the engines with energy.

19. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

Science.gov (United States)

Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

2015-01-01

Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even using a smaller number of projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments conducted under (1) a limited number of projections, (2) the limited-view problem and (3) noisy projection conditions.
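The core DART idea, alternating algebraic reconstruction updates with a projection onto the known gray levels, can be sketched as follows. This toy loop is our own illustration with a random system matrix and snaps every pixel after each Kaczmarz sweep; the real DART re-solves only for boundary pixels and uses actual projection geometries:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([0.0, 1.0])                    # assumed known densities
x_true = rng.integers(0, 2, 25).astype(float)    # 5x5 binary phantom, flattened
A = rng.random((15, 25))                         # 15 "rays", fewer than unknowns
b = A @ x_true                                   # noiseless projection data

x = np.zeros(25)
for sweep in range(200):
    # Kaczmarz (ART) sweep over all rays
    for i in range(A.shape[0]):
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
    # Discretization step: snap each pixel to the nearest allowed gray level
    x = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]

print(int((x == x_true).sum()), "of 25 pixels agree with the phantom")
```

With random rays there is no recovery guarantee, but the loop shows how the gray-level prior constrains an otherwise underdetermined algebraic reconstruction.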

20. Non-Destructive Techniques Based on Eddy Current Testing

Science.gov (United States)

García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

2011-01-01

Non-destructive techniques are widely used in the metal industry to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds, and it does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future. PMID:22163754

1. Plasticity models of material variability based on uncertainty quantification techniques

Energy Technology Data Exchange (ETDEWEB)

Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

2017-11-01

The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and we show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

2. Numerical simulation on ferrofluid flow in fractured porous media based on discrete-fracture model

Science.gov (United States)

Huang, Tao; Yao, Jun; Huang, Zhaoqin; Yin, Xiaolong; Xie, Haojun; Zhang, Jianguang

2017-06-01

Water flooding is an efficient approach to maintaining reservoir pressure and has been widely used to enhance oil recovery. However, preferential water pathways such as fractures can significantly decrease the sweep efficiency, so the utilization ratio of injected water is seriously affected. How to develop new flooding technology to further improve oil recovery in this situation is a pressing problem. In the past few years, controllable ferrofluid has attracted extensive attention in the oil industry as a new functional material. In the presence of a gradient in the magnetic field strength, a magnetic body force is exerted on the ferrofluid, so that the attractive magnetic forces allow the ferrofluid to be manipulated to flow in any desired direction through control of the external magnetic field. In view of these properties, the potential application of ferrofluid as a new kind of displacing fluid for flooding in fractured porous media is studied in this paper for the first time. To model the physical process of the mobilization of ferrofluid through porous media under strong external magnetic fields, the magnetic body force was introduced into the Darcy equation, and fractures were treated with the discrete-fracture model. A fully implicit finite volume method is used to solve the mathematical model, and the validity and accuracy of the numerical simulation are demonstrated through an experiment with ferrofluid flowing in a single fractured oil-saturated sand in a 2-D horizontal cell. Finally, water flooding and ferrofluid flooding in a complex fractured porous medium are studied. The results show that the ferrofluid can be manipulated to flow in the desired direction through control of the external magnetic field, so that flooding with ferrofluid can enlarge the swept region of the displacement. As a consequence, oil recovery is greatly improved in comparison to water flooding. Thus, the ferrofluid
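The modification described above, adding a magnetic body force to the Darcy equation, can be illustrated in one dimension. All numbers below are illustrative, not taken from the paper; the sign convention assumes the magnetic force acts against the imposed pressure gradient:

```python
# One-dimensional Darcy flux with a magnetic body force term,
# u = -(k/mu) * (dp/dx - F_m), where F_m is the magnetic body
# force density acting on the ferrofluid. Illustrative values only.
k = 1e-12       # permeability, m^2
mu = 5e-3       # ferrofluid viscosity, Pa*s
grad_p = 1e4    # pressure gradient, Pa/m
F_m = 2e4       # magnetic body force density, N/m^3 (toward the magnet)

u_no_field = -(k / mu) * grad_p          # ordinary Darcy velocity
u_field = -(k / mu) * (grad_p - F_m)     # with the magnetic drive added

print(u_no_field, u_field)
```

Because the body force enters the flux linearly, a sufficiently strong field gradient reverses the flow direction, which is the manipulation effect the abstract describes.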

3. Numerical simulation of the gasification based biomass cofiring on a 600 MW pulverized coal boiler

Energy Technology Data Exchange (ETDEWEB)

Yang, R.; Dong, C.Q.; Yang, Y.P.; Zhang, J.J. [Key Laboratory of Condition Monitoring and Control for Power Plant Equipment, Ministry of Education, Beijing (China); North China Electric Power Univ., Beijing (China). Key Laboratory of Security and Clean Energy Technology

2008-07-01

Biomass cofiring is the practice of supplementing a base fuel with biomass fuels such as wood waste, short rotation woody crops, short rotation herbaceous crops, alfalfa stems, various types of manure, landfill gas and wastewater treatment gas. The practice began in the 1980s and is becoming commonplace in Europe and the United States. The benefits include reduced carbon dioxide emissions and other airborne emissions such as nitrous oxides (NOx), sulphur dioxide and trace metals; potential for reduced fuel cost; and supporting economic development among wood products and agricultural industries in a given service area. However, technical challenges remain when biomass is directly cofired with coal. These include limited percentage of biomass for cofiring; fuel preparation, storage, and delivery; ash deposition and corrosion associated with the high alkali metal and chlorine content in biomass; fly ash utilization; and impacts on the selective catalytic reduction (SCR) system. This study involved a numerical simulation of cofiring coal and biomass gas in a 600 MWe tangential PC boiler using Fluent software. Combustion behaviour and pollutant formation in the conventional combustion and cofiring cases were compared. The study revealed that reduced NOx emissions can be achieved when producer gas is injected from the lowest layer burner. The nitrogen monoxide (NO) removal rate was between 56.64 and 70.37 per cent. In addition, slagging can be reduced because of the lower temperature. It was concluded that the convection heat transfer area should be increased or the proportion of biomass gas should be decreased to achieve higher boiler efficiency. 8 refs., 4 tabs., 8 figs.

4. Experimental and numerical investigations of Si-based photonic crystals with ordered Ge quantum dots emitters

International Nuclear Information System (INIS)

Jannesari, R.

2014-01-01

In recent years quasi-two-dimensional (2D) photonic crystals, also known as photonic crystal slabs, have been the subject of extensive research. The present work is based on photonic crystals where a hexagonal 2D lattice of air holes is etched through a silicon-on-insulator (SOI) slab. Light is guided in the horizontal plane using photonic band-gap properties, and index guiding provides the optical confinement in the third dimension. This work discusses photonic crystal slabs with Ge quantum dots (QDs) as internal sources. Ge quantum dots have luminescence around 1500nm, which is well suited for optical fiber communication in a way that is fully compatible with standard silicon technology. QD emission can be controlled by epitaxial growth on a pre-patterned SOI substrate. In this way the position of the QDs is controlled, as well as their homogeneity and spectral emission range. During this thesis, photonic crystal fabrication techniques together with techniques for the alignment of the photonic crystal holes with the QDs positions were developed. The employed techniques involve electron beam lithography (EBL) and inductively-coupled-plasma reactive ion etching (ICP-RIE). Perfect ordering of the QDs position was achieved by employing these techniques for pit patterning and the subsequent growth of Ge dots using molecular beam epitaxy (MBE). A second EBL step was then used for photonic crystal writing, which needed to be aligned with respect to the pit pattern with a precision of about ± 30nm. Micro-photoluminescence spectroscopy was used for the optical characterization of the photonic crystal. The emission from ordered quantum dots in different symmetry positions within a unit cell of photonic crystal was theoretically and experimentally investigated and compared with randomly distributed ones. Besides, different geometrical parameters of photonic crystals were studied. The theoretical investigations were mainly based on the rigorous coupled wave analysis (RCWA

5. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

Energy Technology Data Exchange (ETDEWEB)

Borges, Alexandra [Instituto Portugues de Oncologia Francisco Gentil, Centro de Lisboa, Servico de Radiologia, Rua Professor Lima Basto, 1093 Lisboa Codex (Portugal)], E-mail: borgesalexandra@clix.pt

2008-06-15

Advances in cross-sectional imaging, surgical technique and adjuvant treatment have largely contributed to improving the prognosis and lessening the morbidity and mortality of patients with skull base tumours, and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging has become indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology, and the radiologist is increasingly responsible for the fate of these patients. This review will focus on advances in imaging technique, their contribution to patient management, and the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division taking into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to surgeons and radiation oncologists involved in patient management will be discussed.

6. A novel technique for extracting clouds base height using ground based imaging

Directory of Open Access Journals (Sweden)

E. Hirsch

2011-01-01

Full Text Available The height of a cloud in the atmospheric column is a key parameter in its characterization. Several remote sensing techniques (passive and active, either ground-based or on space-borne platforms) and in-situ measurements are routinely used to estimate the top and base heights of clouds. In this article we present a novel method that combines ground-based thermal imaging with a sounding-derived wind profile in order to derive the cloud base height. This method is independent of cloud type, making it efficient for both low boundary layer and high clouds. In addition, using thermal imaging ensures extraction of cloud features during daytime as well as at nighttime. The proposed technique was validated by comparison to active sounding by ceilometers (which is a standard ground-based method), to lifted condensation level (LCL) calculations, and to MODIS products obtained from space. As with all passive remote sensing techniques, the proposed method extracts only the height of the lowest cloud layer, so upper cloud layers are not detected. Nevertheless, the information derived from this method can be complementary to space-borne cloud top measurements when deep-convective clouds are present. Unlike techniques such as LCL, this method is not limited to boundary layer clouds, and can extract the cloud base height at any level, as long as sufficient thermal contrast exists between the radiative temperatures of the cloud and its surrounding air parcel. Another advantage of the proposed method is its simplicity and modest power needs, making it particularly suitable for field measurements and deployment at remote locations. Our method can be further simplified for use with a visible CCD or CMOS camera (although nighttime clouds will not be observed).
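One of the comparison baselines mentioned above, the lifted condensation level, is commonly approximated from surface temperature and dewpoint alone. A minimal sketch using the standard rule of thumb of roughly 125 m per degree of dewpoint depression (an approximation, not the authors' exact calculation):

```python
def lcl_height_m(t_surface_c: float, dewpoint_c: float) -> float:
    """Approximate lifted condensation level above ground (Espy-style rule).

    Uses the common ~125 m per degree Celsius of dewpoint depression
    approximation; illustrative only, not the paper's exact method.
    """
    return 125.0 * (t_surface_c - dewpoint_c)

# 10 degrees of dewpoint depression gives roughly a 1250 m cloud base
print(lcl_height_m(25.0, 15.0))
```

This is why LCL-based estimates only work for boundary layer clouds: the rule assumes a surface parcel lifted dry-adiabatically, which says nothing about mid- or high-level cloud decks.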

7. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

Science.gov (United States)

Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

2017-07-01

This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model, applying the trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as in an increased peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
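The majorization-minimization idea can be illustrated on an ordinary (integer-order) 1D total variation problem: the absolute value in the TV term is majorized by a quadratic at the current iterate, so each MM step reduces to a linear solve. This sketch is our own simplification for additive noise; the paper's model is fractional-order and targets multiplicative noise:

```python
import numpy as np

# Minimal 1D TV-denoising sketch via majorization-minimization (MM):
# minimize 0.5*||u - f||^2 + lam * sum |u[i+1] - u[i]|.
# |t| is majorized at t_k by t^2/(2|t_k|) + |t_k|/2, giving an
# iteratively reweighted quadratic subproblem. Illustrative only.
rng = np.random.default_rng(2)
n, lam, eps = 60, 1.0, 1e-6
clean = np.repeat([0.0, 2.0, 1.0], n // 3)      # piecewise-constant signal
f = clean + 0.1 * rng.standard_normal(n)        # noisy observation

D = np.diff(np.eye(n), axis=0)                  # forward-difference matrix
u = f.copy()
for _ in range(50):
    w = 1.0 / (np.abs(D @ u) + eps)             # MM reweighting of each jump
    # Quadratic subproblem: (I + lam * D^T W D) u = f
    u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), f)

err = float(np.abs(u - clean).mean())
print(round(err, 3))
```

Each subproblem is a symmetric positive-definite linear system, which is what makes the MM route competitive with the dual iterative scheme the abstract compares against.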

8. MySQL based selection of appropriate indexing technique in ...

African Journals Online (AJOL)

This paper deals with the selection of an appropriate indexing technique applied to a MySQL database for a health care system, and the related performance issues, using a multiclass support vector machine (SVM). The patient database is generally huge and contains a lot of variation. For the quick search or fast retrieval of the desired ...

9. Techniques for Scaling Up Analyses Based on Pre-interpretations

DEFF Research Database (Denmark)

Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

2005-01-01

a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method...

10. an architecture-based technique to mobile contact recommendation

African Journals Online (AJOL)

Aside being able to store the name of contacts and their phone numbers, there are ... the artificial neural network technique [21], along with ... Recommendation is part of everyday life. This concept ... However, to use RSs some level of intelligence must be ...... [3] Min J.-K. & Cho S.-B.Mobile Human Network Management.

11. MRA Based Efficient Database Storing and Fast Querying Technique

Directory of Open Access Journals (Sweden)

Mitko Kostov

2017-02-01

Full Text Available In this paper we consider a specific way of organizing 1D signal or 2D image databases, such that more efficient storage and faster querying are achieved. A multiresolution technique of data processing is used in order to save the most significant processed data.

12. An Algorithm for the Numerical Solution of the Pseudo Compressible Navier-stokes Equations Based on the Experimenting Fields Approach

KAUST Repository

Salama, Amgad; Sun, Shuyu; Amin, Mohamed F. El

2015-01-01

In this work, the experimenting fields approach is applied to the numerical solution of the Navier-Stokes equations for incompressible viscous flow. The solution is sought for both the pressure and velocity fields at the same time. The correct velocity and pressure fields satisfy the governing equations and the boundary conditions. In this technique, a set of predefined fields is introduced into the governing equations and the residues are calculated. The flow according to these fields will not satisfy the governing equations and the boundary conditions; however, the residues are used to construct the matrix of coefficients. Although constructing the global matrix of coefficients seems trivial in this setup, in other setups it can be quite involved. This technique separates the solver routine from the physics routines and therefore simplifies coding and debugging. We present a few examples that demonstrate the capability of this technique.
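The experimenting-fields idea, probing the discretized governing equations with predefined fields and assembling the residues into the matrix of coefficients, can be shown on a much simpler linear problem. The sketch below applies it to a 1D Poisson equation rather than the Navier-Stokes equations treated in the paper:

```python
import numpy as np

# "Experimenting fields" sketch for -u'' = f on (0,1), u(0)=u(1)=0:
# probe the finite-difference equations with unit trial fields, record
# the residues, and assemble them into the global coefficient matrix.
n = 9
h = 1.0 / (n + 1)

def residual(u, f):
    """Residue of the interior finite-difference equations for a trial field."""
    up = np.concatenate(([0.0], u, [0.0]))          # homogeneous Dirichlet BCs
    r = np.empty(n)
    for i in range(n):
        r[i] = (-up[i] + 2 * up[i + 1] - up[i + 2]) / h**2 - f[i]
    return r

f = np.ones(n)
r0 = residual(np.zeros(n), f)                       # residue of the zero field
A = np.empty((n, n))
for j in range(n):                                  # one unit experimenting field per unknown
    e = np.zeros(n); e[j] = 1.0
    A[:, j] = residual(e, f) - r0                   # column = linear response
u = np.linalg.solve(A, -r0)                         # solve A u + r0 = 0

# Compare with the exact solution u(x) = x(1-x)/2 at the grid nodes
x = np.linspace(h, 1 - h, n)
err = float(np.abs(u - x * (1 - x) / 2).max())
print(err)
```

Note the separation the abstract emphasizes: `residual` is the only physics-aware routine, while the assembly and solve know nothing about the underlying equation.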

13. An Algorithm for the Numerical Solution of the Pseudo Compressible Navier-stokes Equations Based on the Experimenting Fields Approach

KAUST Repository

2015-06-01

In this work, the experimenting fields approach is applied to the numerical solution of the Navier-Stokes equations for incompressible viscous flow. The solution is sought for both the pressure and velocity fields at the same time. The correct velocity and pressure fields satisfy the governing equations and the boundary conditions. In this technique, a set of predefined fields is introduced into the governing equations and the residues are calculated. The flow according to these fields will not satisfy the governing equations and the boundary conditions; however, the residues are used to construct the matrix of coefficients. Although constructing the global matrix of coefficients seems trivial in this setup, in other setups it can be quite involved. This technique separates the solver routine from the physics routines and therefore simplifies coding and debugging. We present a few examples that demonstrate the capability of this technique.

14. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

Science.gov (United States)

Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

2013-01-01

This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

15. Small Private Online Research: A Proposal for A Numerical Methods Course Based on Technology Use and Blended Learning

Science.gov (United States)

2017-01-01

This work presents a proposed model in blended learning for a numerical methods course evolved from traditional teaching into a research lab in scientific visualization. The blended learning approach sets a differentiated and flexible scheme based on a mobile setup and face to face sessions centered on a net of research challenges. Model is…

16. A Spreadsheet-Based Visualized Mindtool for Improving Students' Learning Performance in Identifying Relationships between Numerical Variables

Science.gov (United States)

Lai, Chiu-Lin; Hwang, Gwo-Jen

2015-01-01

In this study, a spreadsheet-based visualized Mindtool was developed for improving students' learning performance when finding relationships between numerical variables by engaging them in reasoning and decision-making activities. To evaluate the effectiveness of the proposed approach, an experiment was conducted on the "phenomena of climate…

17. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

NARCIS (Netherlands)

Maher, G.D.; Hulshoff, S.J.

2014-01-01

The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

18. A numerical study into the effects of elongated capsules on the healing efficiency of liquid-based systems

NARCIS (Netherlands)

Mookhoek, S.D.; Fischer, H.R.; Zwaag, S. van der

2009-01-01

In this numerical study, the release of healing agent from liquid-based self-healing systems with elongated microcapsules is studied and compared with that of the usual spherical capsules. It is shown that a high aspect ratio and a proper spatial orientation of the elongated capsules have a positive

19. Teaching and Learning Numerical Analysis and Optimization: A Didactic Framework and Applications of Inquiry-Based Learning

Science.gov (United States)

Lappas, Pantelis Z.; Kritikos, Manolis N.

2018-01-01

The main objective of this paper is to propose a didactic framework for teaching Applied Mathematics in higher education. After describing the structure of the framework, several applications of inquiry-based learning in teaching numerical analysis and optimization are provided to illustrate the potential of the proposed framework. The framework…

20. A numerical model for the thermal history of rocks based on confined horizontal fission tracks

DEFF Research Database (Denmark)

Jensen, Peter Klint; Hansen, Kirsten; Kunzendorf, Helmar

1992-01-01

A numerical model for determination of the thermal history of rocks is presented. It is shown that the thermal history may be uniquely determined as a piecewise linear function on the basis of etched confined, horizontal fission track length distributions, their surface densities, and the ur...

1. Numerical simulation of terahertz-wave propagation in photonic crystal waveguide based on sapphire shaped crystal

International Nuclear Information System (INIS)

Zaytsev, Kirill I; Katyba, Gleb M; Mukhina, Elena E; Kudrin, Konstantin G; Karasik, Valeriy E; Yurchenko, Stanislav O; Kurlov, Vladimir N; Shikunova, Irina A; Reshetov, Igor V

2016-01-01

Terahertz (THz) waveguiding in a sapphire shaped single crystal has been studied using numerical simulations. A numerical finite-difference analysis was implemented to characterize the dispersion and loss in the photonic crystal waveguide containing hollow cylindrical channels that form a hexagonal lattice. The results demonstrate the ability to guide THz waves in a multi-mode regime over a wide frequency range, with a minimal power extinction coefficient of 0.02 dB/cm at 1.45 THz. This shows the prospects of shaped crystals for highly efficient THz waveguiding. (paper)

2. A New Three Dimensional Based Key Generation Technique in AVK

Science.gov (United States)

Banerjee, Subhasish; Dutta, Manash Pratim; Bhunia, Chandan Tilak

2017-08-01

In the modern era, ensuring a high order of security has become a primary objective of computer networks. Over the last few decades, many researchers have contributed to achieving secrecy over the communication channel. Shannon did the pioneering work on the perfect secrecy theorem, showing that the secrecy of shared information can be maintained if the key is variable in nature rather than static. In this regard, a key generation technique has been proposed in which the key changes every time a new block of data needs to be exchanged. In our scheme, the keys vary not only in bit sequence but also in size. An experimental study is included in this article to demonstrate the correctness and effectiveness of the proposed technique.
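The automatic-variable-key idea described above, a key that changes with every exchanged block and varies in size as well as in bit sequence, can be sketched as follows. The hash-based derivation and the size rule here are illustrative assumptions, not the construction proposed in the article.

```python
import hashlib

def next_key(prev_key: bytes, prev_block: bytes) -> bytes:
    """Derive the next key from the previous key and the last exchanged
    data block, so a fresh key is used for every block (the AVK idea).
    Both the digest-based mixing and the variable-size rule are toy
    choices for illustration only."""
    digest = hashlib.sha256(prev_key + prev_block).digest()
    # Toy rule making the key size vary too: 16..32 bytes, driven by data.
    size = (16 + prev_block[0] % 17) if prev_block else len(digest)
    return digest[:size]

k0 = b"initial-shared-secret"
k1 = next_key(k0, b"first data block")
k2 = next_key(k1, b"second data block")
```

Because each key depends on the previous key and the previous data block, two successive keys differ both in content and, in general, in length.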

3. A Review On Segmentation Based Image Compression Techniques

Directory of Open Access Journals (Sweden)

S.Thayammal

2013-11-01

The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the requirements for storage and transmission bandwidth. Compression techniques must not only perform well but also converge quickly in order to be applicable to real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and their uses are highlighted to support the development of novel techniques for the challenging task of image storage and transmission in multimedia applications.

4. Brain tumor segmentation based on a hybrid clustering technique

Directory of Open Access Journals (Sweden)

Eman Abdel-Maksoud

2015-03-01

This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm, followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed approach was evaluated by comparing it with several state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of our approach in handling a large number of segmentation problems by improving segmentation quality and accuracy within minimal execution time.
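The first stage of the hybrid pipeline, plain K-means clustering on intensity values, can be sketched in a few lines. The deterministic initialization and the toy pixel data below are assumptions for illustration; the Fuzzy C-means refinement, thresholding, and level-set stages are omitted.

```python
def kmeans_1d(pixels, k=3, iters=20):
    """Plain K-means on grayscale intensities: the fast, low-cost
    clustering stage of the hybrid approach (later refinement stages
    are not shown in this sketch)."""
    vals = sorted(pixels)
    # Deterministic init: spread the initial centers over the value range.
    centers = [float(vals[(len(vals) - 1) * j // (k - 1)]) for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Three well-separated intensity groups yield three cluster centers.
centers = kmeans_1d([10, 12, 11, 100, 105, 200, 198])
```

On real images the same update runs over all pixel intensities, and the resulting labels seed the fuzzy refinement stage.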

5. LFC based adaptive PID controller using ANN and ANFIS techniques

Directory of Open Access Journals (Sweden)

2014-12-01

This paper presents an adaptive PID Load Frequency Controller (LFC) for power systems using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Networks (ANN) guided by a Genetic Algorithm (GA). The PID controller parameters are tuned off-line using the GA to minimize the integral of squared error over a wide range of load variations. The parameter values obtained from the GA are used to train both the ANFIS and the ANN, so that the two proposed techniques can tune the PID controller parameters online for an optimal response at any other load point within the operating range. Testing of the developed techniques shows that the adaptive PID-LFC preserves optimal performance over the whole loading range. The results demonstrate the superiority of ANFIS over ANN in terms of performance measures.
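The controller being tuned is a standard discrete PID. The sketch below shows its update law driving a toy first-order plant back to zero deviation; the gains and the plant model are illustrative stand-ins for the GA-tuned values and the power-system dynamics in the paper.

```python
class PID:
    """Discrete PID controller. The gains here are illustrative, not the
    GA-tuned (or ANN/ANFIS-scheduled) values from the study."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order plant's deviation back to zero.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
x = 1.0  # initial deviation
for _ in range(400):
    u = pid.step(0.0, x)
    x += (-x + u) * pid.dt  # Euler step of the plant x' = -x + u
```

In the adaptive scheme, kp, ki, and kd would be re-supplied online by the trained ANN or ANFIS as the load point changes.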

6. New technique for producing the alloys based on transition metals

International Nuclear Information System (INIS)

Dolukhanyan, S.K.; Aleksanyan, A.G.; Shekhtman, V.Sh.; Mantashyan, A.A.; Mayilyan, D.G.; Ter-Galstyan, O.P.

2007-01-01

A fundamentally new technique was developed for obtaining alloys of refractory metals by compacting their hydrides and subsequently dehydrogenating them. The elaborated technique is described. The conditions of alloy formation from different hydrides of the appropriate metals were investigated in detail. The influence on alloy formation of process parameters such as chemical peculiarities, the composition of the source hydrides, and phase transformations during dehydrogenation was established. Binary and ternary alloys of the α and ω phases were obtained: Ti0.8Zr0.8; Ti0.66Zr0.33; Ti0.3Zr0.8; Ti0.2Zr0.8; Ti0.8Hf0.2; Ti0.6Hf0.4; Ti0.66Zr0.23Hf0.11; etc. Using the elaborated special hydride cycle, a previously unknown effective process for the formation of transition metal alloys was realized. The dependence of the final alloy structure on the composition of the initial mixture and the hydrogen content of the source hydrides was established

7. EVE: Explainable Vector Based Embedding Technique Using Wikipedia

OpenAIRE

Qureshi, M. Atif; Greene, Derek

2017-01-01

We present an unsupervised explainable word embedding technique, called EVE, which is built upon the structure of Wikipedia. The proposed model defines the dimensions of a semantic vector representing a word using human-readable labels, thereby making it readily interpretable. Specifically, each vector is constructed using the Wikipedia category graph structure together with the Wikipedia article link structure. To test the effectiveness of the proposed word embedding model, we consider its usefulne...

8. Voltage Stabilizer Based on SPWM technique Using Microcontroller

OpenAIRE

K. N. Tarchanidis; J. N. Lygouras; P. Botsaris

2013-01-01

This paper presents an application of the well-known SPWM technique to a voltage stabilizer using a microcontroller. The stabilizer is of the AC/DC/AC type: the system rectifies the input AC voltage to a suitable DC level, and the intelligent control of an embedded microcontroller regulates the pulse width of the output voltage in order to produce, through a filter, a perfect sinusoidal AC voltage. The control program on the microcontroller has the ability to change the FET transistor ...

9. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements.

Directory of Open Access Journals (Sweden)

Daungruthai Jarukanont

Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles' arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We

10. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements.

Science.gov (United States)

Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E

2015-01-01

Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles' arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We also show that
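The guided-motion hypothesis can be illustrated with a minimal overdamped Langevin simulation. The attractive harmonic drift toward the membrane follows the paper's assumption, while the function name and all numerical parameter values below are arbitrary choices for illustration, not fitted to chromaffin-cell data.

```python
import random

def arrival_time(rng, x0=1.0, k=1.0, D=0.05, dt=1e-3):
    """First-passage time of one vesicle moving toward the membrane at
    x = 0 under overdamped Langevin dynamics with an attractive harmonic
    drift -k*x (the 'guided' case). Euler-Maruyama time stepping."""
    x, t = x0, 0.0
    while x > 0.0:
        x += -k * x * dt + (2.0 * D * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(1)
# Arrival times of 50 independent vesicles released from the same start.
times = sorted(arrival_time(rng) for _ in range(50))
```

Collecting many such first-passage times and histogramming them is the simulation-side counterpart of the interspike-time statistics extracted from the amperometric spike series.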

11. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

Science.gov (United States)

Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

2015-01-01

The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building

12. Numerical solution to a multi-dimensional linear inverse heat conduction problem by a splitting-based conjugate gradient method

International Nuclear Information System (INIS)

Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem

2008-01-01

In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and accuracy of the numerical solution, we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties posed by the large dimension of the discretized problems are overcome by a splitting method which requires only the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good, and the technique seems very promising.
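The core iteration is the classical conjugate gradient method. A minimal dense-matrix version is sketched below on a tiny symmetric positive definite system; this is the textbook algorithm, not the authors' splitting-based variant, and it omits the discrepancy-type stopping rule needed for ill-posed problems.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Minimal conjugate gradient for A x = b, with A a symmetric
    positive definite matrix given as nested lists. Starts from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A*x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

For an n-by-n SPD system, CG converges in at most n iterations in exact arithmetic; for ill-posed problems the iteration count itself acts as a regularization parameter, which is why the stopping rule matters in the paper.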

13. Numerical analysis of splashing fluid using hybrid method of mesh-based and particle-based modelings

International Nuclear Information System (INIS)

Tanaka, Nobuatsu; Ogawara, Takuya; Kaneda, Takeshi; Maseguchi, Ryo

2009-01-01

In order to simulate splashing and scattering fluid behaviors, we developed a hybrid method combining a mesh-based model for the large-scale continuum fluid with a particle-based model for small-scale discrete fluid particles. As the solver for the continuum fluid, we adopt the CIVA RefIned Multiphase SimulatiON (CRIMSON) code to evaluate two-phase flow behaviors based on recent computational fluid dynamics (CFD) techniques. A phase field model has been introduced into CRIMSON in order to solve the problem of losing phase-interface sharpness in long-term calculations. As the solver for the discrete fluid droplets, we applied the idea of the Smoothed Particle Hydrodynamics (SPH) method. The continuum fluid and the discrete fluid interact with each other through a drag interaction force. We verified our method by applying it to the popular benchmark problem of the collapse of a water column, focusing especially on the splashing and scattering fluid behaviors after the column collided with the wall. We confirmed that the gross splashing and scattering behaviors were well reproduced by the introduction of the particle model, while the detailed behaviors of the particles differed slightly from the experimental results. (author)

14. Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM

Science.gov (United States)

Cajka, R.; Vaskova, J.; Vasek, J.

2018-04-01

For decades, attention has been paid to the interaction of foundation structures with subsoil and to the development of interaction models. Given that analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, analytical solutions are increasingly being replaced by numerical ones (e.g. FEM, the Finite Element Method). Numerical analysis offers greater possibilities for taking into account the real factors involved in subsoil-structure interaction and was also used in this article. This makes it possible to design foundation structures more efficiently while keeping them reliable and secure. Several software packages can currently deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In the MKPINTER software, stress-strain analysis of the elastic half-space is carried out by means of Gauss numerical integration and the Jacobian of transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed at the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
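The stress-strain integration mentioned above rests on Gauss numerical integration over elements. A minimal 2x2 Gauss-Legendre rule over a rectangle is sketched below with an illustrative polynomial integrand; the actual half-space influence functions integrated in MKPINTER are of course more involved.

```python
# 2-point Gauss-Legendre nodes and weights on [-1, 1].
GP = [(-3 ** -0.5, 1.0), (3 ** -0.5, 1.0)]

def gauss2d(f, ax, bx, ay, by):
    """2x2 Gauss-Legendre quadrature of f over [ax,bx] x [ay,by]: the
    kind of numerical integration used to evaluate influence functions
    over a rectangular foundation element. Exact for polynomials up to
    degree 3 in each variable."""
    hx, hy = (bx - ax) / 2.0, (by - ay) / 2.0   # half-lengths (Jacobian)
    cx, cy = (ax + bx) / 2.0, (ay + by) / 2.0   # element center
    total = 0.0
    for xi, wi in GP:
        for eta, wj in GP:
            total += wi * wj * f(cx + hx * xi, cy + hy * eta)
    return hx * hy * total

# Integral of x^2 * y over [0,2] x [0,1] is (8/3) * (1/2) = 4/3.
val = gauss2d(lambda x, y: x ** 2 * y, 0.0, 2.0, 0.0, 1.0)
```

The factor `hx * hy` is the Jacobian of the mapping from the reference square to the physical element, which is the "Jacobian of transformation" the abstract refers to.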

15. Lateral control required for satisfactory flying qualities based on flight tests of numerous airplanes

Science.gov (United States)

Gilruth, R R; Turner, W N

1941-01-01

Report presents the results of an analysis made of the aileron control characteristics of numerous airplanes tested in flight by the National Advisory Committee for Aeronautics. By the use of previously developed theory, the observed values of pb/2v for the various wing-aileron arrangements were examined to determine the effective section characteristics of the various aileron types.

16. Numerical Study of Wind Turbine Wake Modeling Based on an Actuator Surface Model

DEFF Research Database (Denmark)

Zhou, Huai-yang; Xu, Chang; Han, Xing Xing

2017-01-01

In the Actuator Surface Model (ASM), the turbine blades are represented by porous surfaces carrying velocity and pressure discontinuities, which model the action of lifting surfaces on the flow. The numerical simulation is implemented on the FLUENT platform combined with the N-S equations. This model is improved o...

17. Towards numerical simulation of turbulent hydrogen combustion based on flamelet generated manifolds in OpenFOAM

NARCIS (Netherlands)

Fancello, A.; Bastiaans, R.J.M.; Goey, de L.P.H.

2013-01-01

This work proposes an application of the Flamelet-Generated Manifolds (FGM) technique in the OpenFOAM environment. FGM is a reduced chemistry method for combustion modeling. This technique treats the combustion process as the solution of a small number of controlling variables. Regarding laminar
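The FGM idea of replacing detailed chemistry with table lookup in a few controlling variables can be sketched with a one-variable manifold. The grid, the temperature values, and the function name below are invented for illustration and are not real flame data.

```python
import bisect

# Toy flamelet table: temperature as a function of a single progress
# variable c. Values are illustrative, not taken from a real flamelet.
C_GRID = [0.0, 0.25, 0.5, 0.75, 1.0]
T_GRID = [300.0, 900.0, 1500.0, 1900.0, 2100.0]

def lookup_T(c):
    """FGM-style retrieval: instead of solving detailed chemistry, the
    solver transports c and looks the thermochemical state up in a
    precomputed manifold (here a 1-D linear interpolation with
    clamping at the table ends)."""
    c = min(max(c, C_GRID[0]), C_GRID[-1])
    i = max(bisect.bisect_right(C_GRID, c) - 1, 0)
    i = min(i, len(C_GRID) - 2)
    w = (c - C_GRID[i]) / (C_GRID[i + 1] - C_GRID[i])
    return T_GRID[i] * (1.0 - w) + T_GRID[i + 1] * w
```

A real FGM table is multi-dimensional (e.g. progress variable and enthalpy) and stores many species and source terms, but each lookup has exactly this structure.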

18. Waveform model for an eccentric binary black hole based on the effective-one-body-numerical-relativity formalism

Science.gov (United States)

Cao, Zhoujian; Han, Wen-Biao

2017-08-01

Binary black hole systems are among the most important sources for gravitational wave detection, and they are also good objects for theoretical research in general relativity. A gravitational waveform template is important for data analysis, and an effective-one-body-numerical-relativity (EOBNR) model has played an essential role in LIGO data analysis. For future space-based gravitational wave detection, many binary systems will have some orbital eccentricity; at the same time, the eccentric binary is an interesting topic for theoretical study in general relativity. In this paper, we construct the first eccentric binary waveform model based on an effective-one-body-numerical-relativity framework. Our basic assumption in the model construction is that the eccentricity involved is small. We have compared our eccentric EOBNR model to the circular one used in LIGO data analysis. We have also tested our eccentric EOBNR model against another recently proposed eccentric binary waveform model, against numerical relativity simulation results, and against perturbation approximation results for extreme-mass-ratio binary systems. Compared to numerical relativity simulations with an eccentricity as large as about 0.2, the overlap factor for our eccentric EOBNR model is better than 0.98 for all tested cases, including spinless and spinning binaries, and equal-mass and unequal-mass binaries. Hopefully, our eccentric model can be the starting point for developing a faithful template for future space-based gravitational wave detectors.

19. Day-ahead electricity prices forecasting by a modified CGSA technique and hybrid WT in LSSVM based scheme

International Nuclear Information System (INIS)

Shayeghi, H.; Ghasemi, A.

2013-01-01

Highlights: • A hybrid CGSA-LSSVM scheme for price forecasting is presented. • Uncertainties are considered in input-data filtering and feature selection to improve efficiency. • A DWT-featured LSSVM approach is used to classify next-week prices. • Three real markets illustrate the performance of the proposed price forecasting model. - Abstract: At present, the day-ahead electricity market is closely associated with other commodity markets such as the fuel and emission markets. In such an environment, day-ahead electricity price forecasting has become necessary for power producers and consumers in the current deregulated electricity markets. Seeking more accurate price forecasting techniques, this paper proposes a new combination of a Feature Selection (FS) technique based on Mutual Information (MI) and the Wavelet Transform (WT). Moreover, a new modified version of the Gravitational Search Algorithm (GSA) based on chaos theory, namely the Chaotic Gravitational Search Algorithm (CGSA), is developed to find the optimal parameters of a Least Squares Support Vector Machine (LSSVM) for predicting electricity prices. The performance and forecast accuracy of the proposed technique are assessed using real data from the Iranian, Ontario and Spanish price markets. The simulation results, presented in numerical tables and figures for different cases, show that the proposed technique improves electricity price forecasting accuracy compared with other classical and heuristic methods in the literature
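The MI-based feature selection stage ranks candidate inputs by their mutual information with the target price series. For discrete (or discretized) data the quantity can be computed directly from empirical frequencies; the two toy series below are illustrative, not market data.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits for two equally long discrete
    series: the score an MI-based feature selector uses to rank
    candidate input features against the forecasting target."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A feature identical to a balanced binary target carries 1 bit ...
mi_same = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])
# ... while a constant feature carries no information at all.
mi_const = mutual_information([0, 0, 0, 0], [0, 1, 0, 1])
```

In a forecasting pipeline, continuous price and load series are first binned, then features whose MI with the target exceeds a threshold (and whose redundancy with already-selected features is low) are kept.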

20. Research and development of LANDSAT-based crop inventory techniques

Science.gov (United States)

Horvath, R.; Cicone, R. C.; Malila, W. A. (Principal Investigator)

1982-01-01

A wide spectrum of technology pertaining to the inventory of crops using LANDSAT without in situ training data is addressed. Methods considered include Bayesian-based through-the-season methods, estimation technology based on analytical profile-fitting methods, and expert-based computer-aided methods. Although the research was conducted using U.S. data, the adaptation of the technology to the Southern Hemisphere, especially Argentina, was considered.