Nahavandi, N.; Minuchehr, A.; Zolfaghari, A.; Abbasi, M.
2015-01-01
Highlights: • A powerful hp-SEM refinement approach for the P_N neutron transport equation is presented. • The method provides great geometrical flexibility at lower computational cost. • Arbitrary high-order and non-uniform meshes can be used. • Both a posteriori and a priori local error estimation approaches have been employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present an adaptive hp-SEM approach obtained by combining the Spectral Element Method (SEM) with adaptive hp refinement. SEM nodal discretization together with hp-adaptive grid refinement of the even-parity Boltzmann neutron transport equation yields a powerful refinement approach with highly accurate solutions. To this end, a computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux is expanded via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The combination of SEM nodal discretization and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction of program runtime, in comparison with other common refinement methods and uniform meshing approaches, are demonstrated on several well-known transport benchmarks.
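The Lobatto (Gauss-Lobatto-Legendre, GLL) collocation points underlying this kind of SEM discretization can be computed directly from the Legendre polynomials; a minimal numpy sketch, illustrative only and not the authors' code:

```python
import numpy as np

def gll_nodes(N):
    """N+1 Gauss-Lobatto-Legendre nodes on [-1, 1] for polynomial degree N:
    the endpoints plus the roots of P_N'(x), the derivative of the
    degree-N Legendre polynomial."""
    PN = np.polynomial.legendre.Legendre.basis(N)
    interior = PN.deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gll_nodes(2))  # [-1.  0.  1.]
```

These nodes double as quadrature points, which is what makes the SEM mass matrix diagonal when Lagrange basis functions on the same nodes are used.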
The spectral element approach for the solution of neutron transport problems
Barbarino, A.; Dulla, S.; Ravetto, P.; Mund, E.H.
2011-01-01
In this paper a possible application of the Spectral Element Method to neutron transport problems is presented. The basic features of the numerical scheme on the one-dimensional diffusion equation are illustrated. Then, the AN model for neutron transport is introduced, and the basic steps for the construction of a bi-dimensional solver are described. The AN equations are chosen for their structure, involving a system of coupled elliptic-type equations. Some calculations are carried out on typical benchmark problems and results are compared with the Finite Element Method, in order to evaluate their performances. (author)
Spectral/hp element methods for CFD
Karniadakis, George Em
1999-01-01
Traditionally spectral methods in fluid dynamics were used in direct and large eddy simulations of turbulent flow in simply connected computational domains. The methods are now being applied to more complex geometries, and the spectral/hp element method, which incorporates both multi-domain spectral methods and high-order finite element methods, has been particularly successful. This book provides a comprehensive introduction to these methods. Written by leaders in the field, the book begins with a full explanation of fundamental concepts and implementation issues. It then illustrates how these methods can be applied to advection-diffusion and to incompressible and compressible Navier-Stokes equations. Drawing on both published and unpublished material, the book is an important resource for experienced researchers and for those new to the field.
Introduction to finite and spectral element methods using Matlab
Pozrikidis, Constantine
2014-01-01
The Finite Element Method in One Dimension. Further Applications in One Dimension. High-Order and Spectral Elements in One Dimension. The Finite Element Method in Two Dimensions. Quadratic and Spectral Elements in Two Dimensions. Applications in Mechanics. Viscous Flow. Finite and Spectral Element Methods in Three Dimensions. Appendices. References. Index.
Bessel smoothing filter for spectral-element mesh
Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.
2017-06-01
Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the
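The core idea above, applying a smoothing filter by solving the elliptic PDE of its inverse rather than convolving, can be sketched in 1-D with finite differences standing in for the paper's spectral-element weak form (an illustration under those assumptions, not the authors' implementation):

```python
import numpy as np

def bessel_smooth(m, L, dx=1.0):
    """Smooth m by solving (I - L^2 d^2/dx^2) m_s = m, the inverse-filter
    equation whose Green's function is the smoothing kernel; L is the
    coherent (smoothing) length."""
    n = len(m)
    c = (L / dx) ** 2
    # Second-difference matrix with Neumann (zero-flux) boundaries
    D2 = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D2[0, 0] = D2[-1, -1] = -1.0
    A = np.eye(n) - c * D2
    return np.linalg.solve(A, m)

m = np.zeros(101)
m[50] = 1.0                    # spike to be smoothed
ms = bessel_smooth(m, L=5.0)   # spread over a width set by L
```

Because the Neumann-ended operator has zero column sums, the smoothed model keeps the total "mass" of the input exactly while the spike spreads out and its peak drops.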
Stability estimates for hp spectral element methods for general ...
We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are non-conforming if the boundary conditions are Dirichlet. For problems with mixed boundary conditions they are ...
Spectral element method for wave propagation on irregular domains
Yan Hui Geng
2018-03-14
A spectral element approximation of acoustic propagation problems combined with a new mapping method on irregular domains is proposed. Following this method, the Gauss–Lobatto–Chebyshev nodes in the standard space are applied to the spectral element method (SEM). The nodes in the physical space are ...
Spectral element method for vector radiative transfer equation
Zhao, J.M.; Liu, L.H.; Hsu, P.-F.; Tan, J.Y.
2010-01-01
A spectral element method (SEM) is developed to solve polarized radiative transfer in multidimensional participating medium. The angular discretization is based on the discrete-ordinates approach, and the spatial discretization is conducted by the spectral element approach. Chebyshev polynomials are used to build the basis functions on each element. Four test problems are taken as examples to verify the performance of the SEM. The effectiveness of the SEM is demonstrated. The h- and p-convergence characteristics of the SEM are studied. The convergence rate of p-refinement follows an exponential decay trend and is superior to that of h-refinement. The accuracy and efficiency of the higher order approximation in the SEM are well demonstrated for the solution of the VRTE. The angular distributions of brightness temperature and Stokes vector predicted by the SEM agree very well with the benchmark solutions in the references. Numerical results show that the SEM is accurate, flexible and effective for solving multidimensional polarized radiative transfer problems.
hp Spectral element methods for three dimensional elliptic problems
This is the first of a series of papers devoted to the study of h-p spectral ... element functions defined on mesh elements in the new system of variables with a uni- ... the spectral element functions on these elements and give construction of the stability ... By H^m(Ω), we denote the usual Sobolev space of integer order m ≥ 0 ...
Convergence analysis of spectral element method for electromechanical devices
Curti, M.; Jansen, J.W.; Lomonova, E.A.
2017-01-01
This paper concerns the comparison of the performance of the Spectral Element Method (SEM) and the Finite Element Method (FEM) for a magnetostatic problem. The convergence of the vector magnetic potential, the magnetic flux density, and the total stored energy in the system is compared with the
Convergence analysis of spectral element method for magnetic devices
Curti, M.; Jansen, J.W.; Lomonova, E.A.
2018-01-01
This paper concerns the comparison of the performance of the Spectral Element Method (SEM) and the Finite Element Method (FEM) for modeling a magnetostatic problem. The convergence of the vector magnetic potential, the magnetic flux density, and the total stored energy in the system is compared with
Spectral/hp element methods: Recent developments, applications, and perspectives
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
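The exponential error decay under p-refinement claimed here is easy to observe numerically; in this hedged sketch a least-squares Legendre fit stands in for a one-element spectral/hp expansion of a smooth (analytic) function:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(x)                          # analytic target on one element

def max_error(p):
    """Max error of a degree-p least-squares Legendre fit to f."""
    c = np.polynomial.legendre.legfit(x, f, p)
    return float(np.max(np.abs(np.polynomial.legendre.legval(x, c) - f)))

for p in (2, 4, 8, 16):
    print(p, max_error(p))             # error falls by orders of magnitude
```

For an analytic function the error drops by orders of magnitude each time p doubles, until rounding dominates, which is the exponential convergence the abstract refers to.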
Numerical and spectral investigations of novel infinite elements
Barai, P.; Harari, I.; Barbonet, P.E.
1998-01-01
Exterior problems of time-harmonic acoustics are addressed by a novel infinite element formulation, defined on a bounded computational domain. For two-dimensional configurations with circular interfaces, the infinite element results match well both analytical values and those obtained from other methods like DtN. Along with the numerical performance of this formulation, of considerable interest are its complex-valued eigenvalues. Hence, a spectral analysis of the present scheme is also performed here, using various infinite elements.
Ostachowicz, W; Kudela, P
2010-01-01
A Spectral Element Method is used for wave propagation modelling. A 3D solid spectral element is derived with shape functions based on Lagrange interpolation and Gauss-Lobatto-Legendre points. This approach is applied for displacement approximation suited to fundamental modes of Lamb waves as well as the potential distribution in piezoelectric transducers. The novelty is the extension of the model geometry from flat to curved elements for application in shell-like structures. Exemplary visualisations of waves excited by the piezoelectric transducers in a curved shell structure made of aluminium alloy are presented. A simple signal analysis of wave interaction with a crack is performed. The crack is modelled by separation of appropriate nodes between elements. The influence of the crack length on wave propagation signals is investigated. Additionally, some aspects of the spectral element method implementation are discussed.
Nonconforming h-p spectral element methods for elliptic problems
In [6,7,13,14] h-p spectral element methods for solving elliptic boundary value problems on polygonal ... Let M denote the number of corner layers and W denote the number of degrees of ... β is given by Theorem 2.2 of [3] which can be stated.
hp Spectral element methods for three dimensional elliptic problems
elliptic boundary value problems on non-smooth domains in R^3. For Dirichlet problems, ... of variable degree bounded by W. Let N denote the number of layers in the geometric mesh ... We prove a stability theorem for mixed problems when the spectral element functions vanish ...
Unstructured Spectral Element Model for Dispersive and Nonlinear Wave Propagation
Engsig-Karup, Allan Peter; Eskilsson, Claes; Bigoni, Daniele
2016-01-01
We introduce a new stabilized high-order and unstructured numerical model for modeling fully nonlinear and dispersive water waves. The model is based on a nodal spectral element method of arbitrary order in space and a σ-transformed formulation due to Cai, Langtangen, Nielsen and Tveito (1998). In...
A stabilised nodal spectral element method for fully nonlinear water waves
Engsig-Karup, Allan Peter; Eskilsson, C.; Bigoni, Daniele
2016-01-01
We present an arbitrary-order spectral element method for general-purpose simulation of non-overturning water waves, described by fully nonlinear potential theory. The method can be viewed as a high-order extension of the classical finite element method proposed by Cai et al. (1998) [5], although the numerical implementation differs greatly. Features of the proposed spectral element method include: nodal Lagrange basis functions, a general quadrature-free approach and gradient recovery using global L2 projections. The quartic nonlinear terms present in the Zakharov form of the free surface conditions can cause severe aliasing problems and consequently numerical instability for marginally resolved or very steep waves. We show how the scheme can be stabilised through a combination of over-integration of the Galerkin projections and a mild spectral filtering on a per-element basis.
Element-by-element parallel spectral-element methods for 3-D teleseismic wave modeling
Liu, Shaolin
2017-09-28
The development of an efficient algorithm for teleseismic wave field modeling is valuable for calculating the gradients of the misfit function (termed misfit gradients) or Fréchet derivatives when the teleseismic waveform is used for adjoint tomography. Here, we introduce an element-by-element parallel spectral-element method (EBE-SEM) for the efficient modeling of teleseismic wave field propagation in a reduced geology model. Under the plane-wave assumption, the frequency-wavenumber (FK) technique is implemented to compute the boundary wave field used to construct the boundary condition of the teleseismic wave incidence. To reduce the memory required for the storage of the boundary wave field for the incidence boundary condition, a strategy is introduced to efficiently store the boundary wave field on the model boundary. The perfectly matched layers absorbing boundary condition (PML ABC) is formulated using the EBE-SEM to absorb the scattered wave field from the model interior. The misfit gradient can easily be constructed in each time step during the calculation of the adjoint wave field. Three synthetic examples demonstrate the validity of the EBE-SEM for use in teleseismic wave field modeling and the misfit gradient calculation.
Symplectic discretization for spectral element solution of Maxwell's equations
Zhao Yanmin; Dai Guidong; Tang Yifa; Liu Qinghuo
2009-01-01
Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.
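The benefit of symplectic time integration noted in this abstract, bounded energy error over long runs, can be seen on the simplest possible system, a harmonic oscillator; symplectic Euler here is the 1-stage member of the partitioned Runge-Kutta family (a generic illustration, not the paper's Maxwell discretization):

```python
def energy(q, p):
    return 0.5 * (q * q + p * p)       # harmonic oscillator H = (q^2 + p^2)/2

def run(symplectic, dt=0.01, steps=10000):
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:                 # symplectic Euler: new p, then q with new p
            p -= dt * q
            q += dt * p
        else:                          # explicit Euler: both from old values
            q_old = q
            q += dt * p
            p -= dt * q_old
    return energy(q, p)

print(run(True), run(False))  # symplectic stays near 0.5; explicit Euler grows
```

After 10^4 steps the symplectic result still sits near the exact energy 0.5, while explicit Euler's energy has grown by the factor (1 + dt^2)^steps ≈ e.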
PIXE-quantified AXSIA: Elemental mapping by multivariate spectral analysis
Doyle, B.L.; Provencio, P.P.; Kotula, P.G.; Antolak, A.J.; Ryan, C.G.; Campbell, J.L.; Barrett, K.
2006-01-01
Automated, nonbiased, multivariate statistical analysis techniques are useful for converting very large amounts of data into a smaller, more manageable number of chemical components (spectra and images) that are needed to describe the measurement. We report the first use of the multivariate spectral analysis program AXSIA (Automated eXpert Spectral Image Analysis) developed at Sandia National Laboratories to quantitatively analyze micro-PIXE data maps. AXSIA implements a multivariate curve resolution technique that reduces the spectral image data sets into a limited number of physically realizable and easily interpretable components (including both spectra and images). We show that the principal component spectra can be further analyzed using conventional PIXE programs to convert the weighting images into quantitative concentration maps. A common elemental data set has been analyzed using three different PIXE analysis codes and the results compared to the cases when each of these codes is used to separately analyze the associated AXSIA principal component spectral data. We find that these comparisons are in good quantitative agreement with each other
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite elements (FE) code and an explicit spectral elements (SE) code has been explored for solving the elastic wave propagation in the case of soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard coupling mortar approach, whereas the time integration is dealt with an hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
Stabilization of numerical interchange in spectral-element magnetohydrodynamics
Sovinec, C. R.
2016-08-01
Auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to the magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C0 finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. The projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code of Sovinec et al. [17], provided that the projections introduce numerical dissipation.
Spectral Element Method for the Simulation of Unsteady Compressible Flows
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work were performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
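A quick sanity check of the classical 4th-order Runge-Kutta time integrator mentioned above (a generic sketch, unrelated to the DGSEM code itself): halving the step size on u' = -u should cut the error by roughly 2^4 = 16.

```python
import math

def rk4(f, u, t_end, n):
    """Integrate u' = f(u) from 0 to t_end with n classical RK4 steps."""
    dt = t_end / n
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

f = lambda u: -u
err = lambda n: abs(rk4(f, 1.0, 1.0, n) - math.exp(-1.0))
print(err(10) / err(20))  # close to 16 for a 4th-order method
```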
Spectral response of multi-element silicon detectors
Ludewigt, B.A.; Rossington, C.S.; Chapman, K. [Univ. of California, Berkeley, CA (United States)
1997-04-01
Multi-element silicon strip detectors, in conjunction with integrated circuit pulse-processing electronics, offer an attractive alternative to conventional lithium-drifted silicon Si(Li) and high purity germanium detectors (HPGe) for high count rate, low noise synchrotron x-ray fluorescence applications. One of the major differences between the segmented Si detectors and the commercially available single-element Si(Li) or HPGe detectors is that hundreds of elements can be fabricated on a single Si substrate using standard silicon processing technologies. The segmentation of the detector substrate into many small elements results in very low noise performance at or near room temperature, and the count rate of the detector is increased many-fold due to the multiplication in the total number of detectors. Traditionally, a single channel of detector with electronics can handle approximately 100 kHz count rates while maintaining good energy resolution; the segmented detectors can operate at greater than MHz count rates merely due to the multiplication in the number of channels. One of the most critical aspects in the development of the segmented detectors is characterizing the charge sharing and charge loss that occur between the individual detector strips, and determining how these affect the spectral response of the detectors.
Spectral decomposition in advection-diffusion analysis by finite element methods
Nickell, R.E.; Gartling, D.K.; Strang, G.
1978-01-01
In a recent study of the convergence properties of finite element methods in nonlinear fluid mechanics, an indirect approach was taken. A two-dimensional example with a known exact solution was chosen as the vehicle for the study, and various mesh refinements were tested in an attempt to extract information on the effect of the local Reynolds number. However, more direct approaches are usually preferred. In this study one such direct approach is followed, based upon the spectral decomposition of the solution operator. Spectral decomposition is widely employed as a solution technique for linear structural dynamics problems and can be applied readily to linear, transient heat transfer analysis; in this case, the extension to nonlinear problems is of interest. It was shown previously that spectral techniques were applicable to stiff systems of rate equations, while recent studies of geometrically and materially nonlinear structural dynamics have demonstrated the increased information content of the numerical results. The use of spectral decomposition in nonlinear problems of heat and mass transfer would be expected to yield equally increased flow of information to the analyst, and this information could include a quantitative comparison of various solution strategies, meshes, and element hierarchies
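Spectral decomposition as a solution technique for a linear transient problem du/dt = -A u amounts to diagonalizing the operator and evolving each mode analytically; a small numpy sketch under the assumption of a symmetric A (illustrative, not the study's finite element setup):

```python
import numpy as np

def evolve(A, u0, t):
    """Solve du/dt = -A u exactly via the eigendecomposition of symmetric A."""
    lam, V = np.linalg.eigh(A)                  # A = V diag(lam) V^T
    return V @ (np.exp(-lam * t) * (V.T @ u0))  # u(t) = V exp(-lam t) V^T u0

# Stiffness-like matrix of a 1-D diffusion problem with Dirichlet ends
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u0 = np.random.default_rng(0).random(n)
u = evolve(A, u0, 0.5)                          # every mode decays; stiff ones fastest
```

One payoff the authors point to is information content: the eigenpairs (lam, V) tell the analyst directly which spatial modes dominate the transient and how stiff the system is.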
Spectral element method for elastic and acoustic waves in frequency domain
Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min; Zhuang, Mingwei [Institute of Electromagnetics and Acoustics, and Department of Electronic Science, Xiamen, 361005 (China); Liu, Na, E-mail: liuna@xmu.edu.cn [Institute of Electromagnetics and Acoustics, and Department of Electronic Science, Xiamen, 361005 (China); Liu, Qing Huo, E-mail: qhliu@duke.edu [Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708 (United States)
2016-12-15
Numerical techniques in time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, the medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, the domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in frequency domain.
The spectral element method for static neutron transport in AN approximation. Part I
Barbarino, A.; Dulla, S.; Mund, E.H.; Ravetto, P.
2013-01-01
Highlights: ► Spectral element methods (SEMs) are extended to the neutronics of nuclear reactor cores. ► The second-order A_N formulation of neutron transport is adopted. ► Results for classical benchmark cases in 2D are presented and compared to finite elements. ► The advantages of SEM in terms of precision and convergence rate are illustrated. ► SEM constitutes a promising approach for the solution of neutron transport problems. - Abstract: Spectral element methods provide very accurate solutions of elliptic problems. In this paper we apply the method to the A_N (i.e. SP_(2N-1)) approximation of neutron transport. Numerical results for classical benchmark cases highlight its performance in comparison with finite element computations, in terms of accuracy per degree of freedom and convergence rate. All calculations presented in this paper refer to two-dimensional problems. The method can easily be extended to three-dimensional cases. The results illustrate promising features of the method for more complex transport problems.
Discrete conservation properties for shallow water flows using mixed mimetic spectral elements
Lee, D.; Palha, A.; Gerritsma, M.
2018-01-01
A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in
Efficiency of High Order Spectral Element Methods on Petascale Architectures
Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.
2016-01-01
High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.
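The "flop-intense" kernels profiled in Nek5000/NekBox are small dense tensor contractions applying 1-D operators along each direction of an element; a hypothetical numpy analogue of one such kernel (the names and the finite-difference D are illustrative assumptions, not Nek code):

```python
import numpy as np

def apply_x(D, u):
    """Apply a 1-D operator D along the first axis of an (N, N, N)
    tensor-product element array: (Du)_ijk = sum_l D_il u_ljk."""
    return np.einsum('il,ljk->ijk', D, u)

N = 4
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
# Finite-difference stand-in for a spectral differentiation matrix,
# with one-sided second-order stencils at the element boundaries
D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * h)
D[0, :3] = np.array([-3.0, 4.0, -1.0]) / (2 * h)
D[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2 * h)

X = np.broadcast_to(x[:, None, None], (N, N, N)).copy()
dX = apply_x(D, X)   # derivative of the linear field x is 1 everywhere
```

In Nek these contractions are cast as small matrix-matrix multiplies, one per direction, which is why arithmetic intensity rises with polynomial order while data movement grows slowly.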
Dynamic analysis of smart composite beams by using the frequency domain spectral element method
Park, Il Wook; Lee, Usik [Inha Univ., Incheon (Korea, Republic of)
2012-08-15
To excite or measure the dynamic responses of a laminated composite structure for active control of vibrations or noise, wafer-type piezoelectric transducers are often bonded on the surface of the composite structure to form a multi-layer smart composite structure. Thus, for such smart composite structures, it is very important to develop and use a reliable mathematical and/or computational model for predicting accurate dynamic characteristics. In this paper, the axial-bending coupled equations of motion and boundary conditions are derived for two-layer smart composite beams by using Hamilton's principle with Lagrange multipliers. The spectral element model is then formulated in the frequency domain by using the variational approach. Through some numerical examples, the extremely high accuracy of the present spectral element model is verified by comparison with solutions from the conventional finite element model provided in this paper. The effects of the lay-up of composite laminates and the surface-bonded wafer-type piezoelectric (PZT) layer on the dynamics and wave characteristics of smart composite beams are investigated. The effective constraint forces at the interface between the base beam and the PZT layer are also investigated via the Lagrange multipliers.
New elements - approaching Z=114
Hofmann, S.
1998-03-01
The search for new elements is part of the broader field of investigations of nuclei at the limits of stability. In two series of experiments at SHIP, six new elements (Z=107-112) were synthesized via fusion reactions using 1n-deexcitation channels and lead or bismuth targets. The isotopes were unambiguously identified by means of α-α correlations. Not fission, but alpha decay is the dominant decay mode. The collected decay data establish a means of comparison with theoretical data. This aids in the selection of appropriate models that describe the properties of known nuclei. Predictions based on these models are useful in the preparation of the next generation of experiments. Cross-sections decrease by two orders of magnitude from bohrium (Z=107) to element 112, for which a cross-section of 1 pb was measured. The development of intense beam currents and sensitive detection methods is essential for the production and identification of still heavier elements and new isotopes of already known elements, as well as the measurement of small α-, β- and fission-branching ratios. An equally sensitive set-up is needed for the measurement of excitation functions at low cross-sections. Based on our results, it is likely that the production of isotopes of element 114 close to the island of spherical superheavy elements (SHE) could be achieved by fusion reactions using 208Pb targets. Systematic studies of the reaction cross-sections indicate that the transfer of nucleons is an important process for the initiation of fusion. The data allow for the fixing of a narrow energy window for the production of SHE using 1n-emission channels. (orig.)
A mass and energy conserving spectral element atmospheric dynamical core on the cubed-sphere grid
Taylor, M A; Edwards, J; Thomas, S; Nair, R
2007-01-01
We present results from a conservative formulation of the spectral element method applied to global atmospheric circulation modeling. Exact local conservation of both mass and energy is obtained via a new compatible formulation of the spectral element method. Compatibility ensures that the key integral property of the divergence and gradient operators required to show conservation also holds in discrete form. The spectral element method is used on a cubed-sphere grid to discretize the horizontal directions on the sphere. It can be coupled to any conservative vertical/radial discretization. The accuracy and conservation properties of the method are illustrated using a baroclinic instability test case.
Order and correlations in genomic DNA sequences. The spectral approach
Lobzin, Vasilii V; Chechetkin, Vladimir R
2000-01-01
The structural analysis of genomic DNA sequences is discussed in the framework of the spectral approach, which is sufficiently universal due to the reciprocal correspondence and mutual complementarity of Fourier transform length scales. The spectral characteristics of random sequences of the same nucleotide composition possess the property of self-averaging for relatively short sequences of length M≥100-300. Comparison with the characteristics of random sequences determines the statistical significance of the structural features observed. Apart from traditional applications to the search for hidden periodicities, spectral methods are also efficient in studying mutual correlations in DNA sequences. By combining spectra for structure factors and correlation functions, not only integral correlations can be estimated but also their origin identified. Using the structural spectral entropy approach, the regularity of a sequence can be quantitatively assessed. A brief introduction to the problem is also presented and other major methods of DNA sequence analysis described. (reviews of topical problems)
Tomar, S.K.
2002-01-01
It is well known that elliptic problems, when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.
A Bayesian approach to spectral quantitative photoacoustic tomography
Pulkkinen, A; Kaipio, J P; Tarvainen, T; Cox, B T; Arridge, S R
2014-01-01
A Bayesian approach to the optical reconstruction problem associated with spectral quantitative photoacoustic tomography is presented. The approach is derived for commonly used spectral tissue models of optical absorption and scattering: the absorption is described as a weighted sum of absorption spectra of known chromophores (spatially dependent chromophore concentrations), while the scattering is described using Mie scattering theory, with the proportionality constant and spectral power law parameter both spatially-dependent. It is validated using two-dimensional test problems composed of three biologically relevant chromophores: fat, oxygenated blood and deoxygenated blood. Using this approach it is possible to estimate the Grüneisen parameter, the absolute chromophore concentrations, and the Mie scattering parameters associated with spectral photoacoustic tomography problems. In addition, the direct estimation of the spectral parameters is compared to estimates obtained by fitting the spectral parameters to estimates of absorption, scattering and Grüneisen parameter at the investigated wavelengths. It is shown with numerical examples that the direct estimation results in better accuracy of the estimated parameters. (papers)
Spectral/hp element methods: Recent developments, applications, and perspectives
Xu, Hui; Cantwell, Chris; Monteserin, Carlos
2018-01-01
This paper briefly describes the formulation of the spectral/hp element method, which is based upon orthogonal polynomials, such as Legendre or Chebychev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows.
Spectrally balanced chromatic landing approach lighting system
Chase, W. D. (Inventor)
1981-01-01
Red warning lights delineate the runway approach with additional blue lights juxtaposed with the red lights such that the red lights are chromatically balanced. The red/blue point light sources result in the phenomenon that the red lights appear in front of the blue lights with about one and one-half times the diameter of the blue. To a pilot observing these lights along a glide path, those red lights directly below appear to be nearer than the blue lights. For those lights farther away seen in perspective at oblique angles, the red lights appear to be in a position closer to the pilot and hence appear to be above the corresponding blue lights. This produces a very pronounced three dimensional effect referred to as chromostereopsis which provides valuable visual cues to enable the pilot to perceive his actual position above the ground and the actual distance to the runway.
Stability estimates for hp spectral element methods for elliptic ...
... parallel preconditioners and error estimates for the solution of the minimization problem which are nearly optimal as the condition number of the preconditioned system is polylogarithmic in , the number of processors and the number of degrees of freedom in each variable on each element. Moreover if the data is analytic ...
Nguyen, Vu-Hieu; Naili, Salah
2013-01-01
This work deals with the ultrasonic wave propagation in the cortical layer of long bones which is known as being a functionally graded anisotropic material coupled with fluids. The viscous effects are taken into account. The geometrical configuration mimics the one of axial transmission technique used for evaluating the bone quality. We present a numerical procedure adapted for this purpose which is based on the spectral finite element method (FEM). By using a combined Laplace-Fourier transform, the vibroacoustic problem may be transformed into the frequency-wavenumber domain in which, as radiation conditions may be exactly introduced in the infinite fluid halfspaces, only the heterogeneous solid layer needs to be analysed using FEM. Several numerical tests are presented showing very good performance of the proposed approach. We present some results to study the influence of the frequency on the first arriving signal velocity in (visco)elastic bone plate.
Spectral Analysis of Large Finite Element Problems by Optimization Methods
Luca Bergamaschi
1994-01-01
Recently an efficient method for the solution of the partial symmetric eigenproblem (DACG, deflated-accelerated conjugate gradient) was developed, based on the conjugate gradient (CG) minimization of successive Rayleigh quotients over deflated subspaces of decreasing size. In this article four different choices of the coefficient β_k required at each DACG iteration for the computation of the new search direction P_k are discussed. The “optimal” choice is the one that yields the same asymptotic convergence rate as the CG scheme applied to the solution of linear systems. Numerical results point out that the optimal β_k leads to a very cost-effective algorithm in terms of CPU time in all the sample problems presented. Various preconditioners are also analyzed. It is found that DACG using the optimal β_k and (LLᵀ)⁻¹ as a preconditioner, L being the incomplete Cholesky factor of A, proves a very promising method for the partial eigensolution. It appears to be superior to the Lanczos method in the evaluation of the 40 leftmost eigenpairs of five finite element problems, and particularly for the largest problem, with size equal to 4560, for which the speed gain turns out to fall between 2.5 and 6.0, depending on the eigenpair level.
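The core DACG idea above, minimizing successive Rayleigh quotients over deflated subspaces of decreasing size, can be sketched with plain gradient steps standing in for the preconditioned CG recurrence (the step size and iteration count are illustrative choices, not the paper's algorithm):

```python
import numpy as np

def smallest_eigenpairs(A, k, iters=2000, step=0.1, seed=0):
    """DACG-style sketch: for each of the k leftmost eigenpairs of a
    symmetric matrix A, minimize the Rayleigh quotient q(x) = x'Ax/x'x
    by gradient descent, deflating previously found eigenvectors so
    the search stays in their orthogonal complement."""
    rng = np.random.default_rng(seed)
    vals, vecs = [], []
    for _ in range(k):
        x = rng.standard_normal(A.shape[0])
        for _ in range(iters):
            for v in vecs:                 # deflation step
                x -= (v @ x) * v
            x /= np.linalg.norm(x)
            q = x @ A @ x                  # Rayleigh quotient (||x|| = 1)
            g = 2.0 * (A @ x - q * x)      # its gradient on the unit sphere
            x = x - step * g
        x /= np.linalg.norm(x)
        vals.append(x @ A @ x)
        vecs.append(x)
    return np.array(vals), np.array(vecs)
```
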
An Objective Approach to Identify Spectral Distinctiveness for Hearing Impairment
Yeou-Jiunn Chen
2013-01-01
To facilitate the process of developing speech perception, speech-language pathologists have to teach a subject with hearing loss the differences between two syllables by manually enhancing acoustic cues of speech. However, this process is time consuming and difficult. Thus, this study proposes an objective approach to automatically identify the regions of spectral distinctiveness between two syllables, which is used for speech-perception training. To accurately represent the characteristics of speech, mel-frequency cepstrum coefficients are selected as analytical parameters. The mismatch between two syllables in the time domain is handled by dynamic time warping. Further, a filter bank is adopted to estimate the components in different frequency bands, which are also represented as mel-frequency cepstrum coefficients. The spectral distinctiveness in different frequency bands is then easily estimated by using Euclidean metrics. Finally, a morphological gradient operator is applied to automatically identify the regions of spectral distinctiveness. To evaluate the proposed approach, the identified regions are manipulated and then the manipulated syllables are measured by a closed-set speech-perception test. The experimental results demonstrate that the identified regions of spectral distinctiveness are very useful in speech perception, which indeed can help speech-language pathologists in speech-perception training.
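The time-alignment step mentioned above is standard dynamic time warping. A minimal sketch (the frame format is a stand-in for the MFCC vectors used in the study):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two frame sequences, with a
    Euclidean cost per aligned frame pair (frames may be scalars or
    MFCC-style vectors). Returns the optimal accumulated cost of a
    monotone alignment of the two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)    # accumulated-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.subtract(a[i - 1], b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Identical sequences score zero, and a repeated frame is absorbed by the warping path at no cost, which is exactly how the temporal mismatch between two utterances of different lengths is neutralized before the band-wise comparison.
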
A spectral approach for discrete dislocation dynamics simulations of nanoindentation
Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei
2018-07-01
We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenter. An example of a dislocation dynamics nanoindentation simulation with complex initial microstructure is presented.
High-precision solution to the moving load problem using an improved spectral element method
Wen, Shu-Rui; Wu, Zhi-Jing; Lu, Nian-Li
2018-02-01
In this paper, the spectral element method (SEM) is improved to solve the moving load problem. In this method, a structure with uniform geometry and material properties is considered as a spectral element, which means that the element number and the degree of freedom can be reduced significantly. Based on the variational method and the Laplace transform theory, the spectral stiffness matrix and the equivalent nodal force of the beam-column element are established. The static Green function is employed to deduce the improved function. The proposed method is applied to two typical engineering practices—the one-span bridge and the horizontal jib of the tower crane. The results have revealed the following. First, the new method can yield extremely high-precision results of the dynamic deflection, the bending moment and the shear force in the moving load problem. In most cases, the relative errors are smaller than 1%. Second, by comparing with the finite element method, one can obtain the highly accurate results using the improved SEM with smaller element numbers. Moreover, the method can be widely used for statically determinate as well as statically indeterminate structures. Third, the dynamic deflection of the twin-lift jib decreases with the increase in the moving load speed, whereas the curvature of the deflection increases. Finally, the dynamic deflection, the bending moment and the shear force of the jib will all increase as the magnitude of the moving load increases.
Constellation modulation - an approach to increase spectral efficiency.
Dash, Soumya Sunder; Pythoud, Frederic; Hillerkuss, David; Baeuerle, Benedikt; Josten, Arne; Leuchtmann, Pascal; Leuthold, Juerg
2017-07-10
Constellation modulation (CM) is introduced as a new degree of freedom to increase the spectral efficiency and to further approach the Shannon limit. Constellation modulation is the art of encoding information not only in the symbols within a constellation but also by selecting a constellation from a set of constellations that are switched from time to time. The set of constellations is not limited to sets of partitions from a given constellation but can e.g., be obtained from an existing constellation by applying geometrical transformations such as rotations, translations, scaling, or even more abstract transformations. The architecture of the transmitter and the receiver allows for constellation modulation to be used on top of existing modulations with little penalty on the bit-error ratio (BER) or on the required signal-to-noise ratio (SNR). The spectral bandwidth used by this modulation scheme is identical to the original modulation. Simulations demonstrate a particular advantage of the scheme for low-SNR situations. For instance, it is demonstrated by simulation that spectral-efficiency increases of up to 33% and 20% can be obtained at BERs of 10⁻³ and 2×10⁻², respectively, for a regular BPSK modulation format. Applying constellation modulation, we derive a most power-efficient 4D-CM-BPSK modulation format that provides a spectral efficiency of 0.7 bit/s/Hz for an SNR of 0.2 dB at a BER of 2×10⁻².
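The core idea, encoding an extra bit in which constellation a block uses, can be illustrated with two BPSK constellations related by a 90-degree rotation (a toy noiseless sketch of the principle, not the paper's 4D-CM-BPSK format or receiver architecture):

```python
import numpy as np

# Two constellations related by a 90-degree rotation: plain BPSK and
# rotated BPSK. Which constellation a block uses carries one extra bit.
CONSTELLATIONS = [np.array([1.0 + 0.0j, -1.0 + 0.0j]),   # BPSK, real axis
                  np.array([0.0 + 1.0j, 0.0 - 1.0j])]    # BPSK rotated 90 deg

def cm_encode(data_bits, sel_bit):
    """Map data bits onto the constellation selected by sel_bit."""
    return CONSTELLATIONS[sel_bit][np.asarray(data_bits)]

def cm_decode(symbols):
    """Recover the selection bit (whichever constellation fits the
    received block best) and the data bits demapped against it."""
    totals = [sum(min(abs(s - point) for point in c) for s in symbols)
              for c in CONSTELLATIONS]
    sel = int(np.argmin(totals))
    c = CONSTELLATIONS[sel]
    bits = [int(np.argmin(np.abs(s - c))) for s in symbols]
    return bits, sel
```

The transmitted bandwidth is unchanged because both constellations occupy the same symbol rate; the extra information rides entirely on the block-wise constellation choice.
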
Spectral element model for 2-D electrostatic fields in a linear synchronous motor
van Beek, T.A.; Curti, M.; Jansen, J.W.; Gysen, B.L.J.; Paulides, J.J.H.; Lomonova, E.A.
2017-01-01
This paper presents a fast and accurate 2-D spectral element model for analyzing electric field distributions in linear synchronous motors. The electric field distribution is derived using the electric scalar potential for static cases. The spatial potential and electric field distributions obtained
The next step in coastal numerical models: spectral/hp element methods?
Eskilsson, Claes; Engsig-Karup, Allan Peter; Sherwin, Spencer J.
2005-01-01
In this paper we outline the application of spectral/hp element methods for modelling nonlinear and dispersive waves. We present one- and two-dimensional test cases for the shallow water equations and Boussinesq-type equations – including highly dispersive Boussinesq-type equations....
Application of Least-Squares Spectral Element Methods to Polynomial Chaos
Vos, P.E.J.; Gerritsma, M.I.
2006-01-01
This paper describes the application of the Least-Squares Spectral Element Method to polynomial chaos to solve stochastic partial differential equations. The method will be described in detail and a comparison will be presented between the least-squares projection and the conventional Galerkin projection.
Seismoelectric Effects based on Spectral-Element Method for Subsurface Fluid Characterization
Morency, C.
2017-12-01
Present approaches for subsurface imaging rely predominantly on seismic techniques, which alone do not capture fluid properties and related mechanisms. On the other hand, electromagnetic (EM) measurements add constraints on the fluid phase through electrical conductivity and permeability, but EM signals alone do not offer information on the solid structural properties. In recent years, there have been many efforts to combine both seismic and EM data for exploration geophysics. The most popular approach is based on joint inversion of seismic and EM data as decoupled phenomena, missing the coupled nature of seismic and EM phenomena such as seismoelectric effects. Seismoelectric effects are related to pore fluid movements with respect to the solid grains. By analyzing coupled poroelastic seismic and EM signals, one can capture pore-scale behavior and access both structural and fluid properties. Here, we model the seismoelectric response by solving the governing equations derived by Pride and Garambois (1994), which correspond to Biot's poroelastic wave equations and Maxwell's electromagnetic wave equations coupled electrokinetically. We will show that these coupled wave equations can be numerically implemented by taking advantage of viscoelastic-electromagnetic mathematical equivalences. These equations will be solved using a spectral-element method (SEM). The SEM, in contrast to finite-element methods (FEM), uses high-degree Lagrange polynomials. Not only does this allow the technique to handle complex geometries similarly to FEM, but it also retains exponential convergence and accuracy due to the use of high-degree polynomials. Finally, we will discuss how this is a first step toward fully coupled seismic-EM inversion to improve subsurface fluid characterization. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Carella, Alfredo Raul
2012-09-15
Quantifying species transport rates is a main concern in chemical and petrochemical industries. In particular, the design and operation of many large-scale industrial chemical processes is as much dependent on diffusion as it is on reaction rates. However, the existing diffusion models sometimes fail to predict experimentally observed behaviors and their accuracy is usually insufficient for process optimization purposes. Fractional diffusion models offer multiple possibilities for generalizing Fick's law in a consistent manner in order to account for history dependence and nonlocal effects. These models have not been extensively applied to the study of real systems, mainly due to their computational cost and mathematical complexity. A least-squares spectral formulation was developed for solving fractional differential equations. The proposed method was proven particularly well-suited for dealing with the numerical difficulties inherent to fractional differential operators. The practical implementation was explained in detail in order to enhance reproducibility, and directions were specified for extending it to multiple dimensions and arbitrarily shaped domains. A numerical framework based on the least-squares spectral element method was developed for studying and comparing anomalous diffusion models in pellets. This simulation tool is capable of solving arbitrary integro-differential equations and can be effortlessly adapted to various problems in any number of dimensions. Simulations of the flow around a cylindrical particle were achieved by extending the functionality of the developed framework. A test case was analyzed by coupling the boundary condition yielded by the fluid model with two families of anomalous diffusion models: hyperbolic diffusion and fractional diffusion. Qualitative guidelines for determining the suitability of diffusion models can be formulated by complementing experimental data with the results obtained from this approach. (Author)
A multimodal spectral approach to characterize rhythm in natural speech.
Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta
2016-01-01
Human utterances demonstrate temporal patterning, also referred to as rhythm. While simple oromotor behaviors (e.g., chewing) feature a salient periodical structure, conversational speech displays a time-varying quasi-rhythmic pattern. Quantification of periodicity in speech is challenging. Unimodal spectral approaches have highlighted rhythmic aspects of speech. However, speech is a complex multimodal phenomenon that arises from the interplay of articulatory, respiratory, and vocal systems. The present study addressed the question of whether a multimodal spectral approach, in the form of coherence analysis between electromyographic (EMG) and acoustic signals, would allow one to characterize rhythm in natural speech more efficiently than a unimodal analysis. The main experimental task consisted of speech production at three speaking rates; a simple oromotor task served as control. The EMG-acoustic coherence emerged as a sensitive means of tracking speech rhythm, whereas spectral analysis of either EMG or acoustic amplitude envelope alone was less informative. Coherence metrics seem to distinguish and highlight rhythmic structure in natural speech.
Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam
Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa
2017-08-01
In this article, a wavelet-based spectral finite element (WSFE) model is formulated for time domain and wave domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to the conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of the SFE model due to the periodicity assumption, especially during inverse Fourier transformation back to the time domain. The high accuracy of the WSFE model is then evaluated by comparing its results with conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on vibration and wave characteristics, and on static and dynamic stabilities of the moving beam, are investigated.
McGill, Matthew J. (Inventor); Scott, Vibart S. (Inventor); Marzouk, Marzouk (Inventor)
2001-01-01
A holographic optical element transforms a spectral distribution of light to image points. The element comprises areas, each of which acts as a separate lens to image the light incident in its area to an image point. Each area contains the recorded hologram of a point source object. The image points can be made to lie in a line in the same focal plane so as to align with a linear array detector. A version of the element has been developed that has concentric equal areas to match the circular fringe pattern of a Fabry-Perot interferometer. The element has high transmission efficiency, and when coupled with high quantum efficiency solid state detectors, provides an efficient photon-collecting detection system. The element may be used as part of the detection system in a direct detection Doppler lidar system or multiple field of view lidar system.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, it is tested on the Pavia University hyperspectral data. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
Bauer, M.; Weitkamp, C.
1977-01-01
The strongest spectral lines of the 85 stable chemical elements have been compiled and plotted along with lines from other elements that may interfere in applications like spectroscopic multielement analysis. For each line a wavelength range of ±0.25 Å around the line of interest has been considered. The tables contain the wavelength, intensity and assignment to an ionization state of the emitting atom; the plots visualize the lines with a Doppler broadening corresponding to 8,000 K. (orig.) [de]
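The ±0.25 Å windowing described above amounts to a simple catalogue scan for nearby lines. A minimal sketch (record layout and the wavelengths in the example are hypothetical):

```python
def interfering_lines(catalogue, wavelength, window=0.25):
    """Return all catalogue entries within +-window (in angstroms) of a
    line of interest, i.e. potential spectral interferences for
    multielement analysis. Each entry is a hypothetical
    (wavelength_angstrom, element, intensity) tuple."""
    return [entry for entry in catalogue
            if abs(entry[0] - wavelength) <= window]

# Illustrative catalogue with made-up values.
catalogue = [(500.00, "A", 1.0), (500.20, "B", 2.0), (501.00, "C", 3.0)]
```
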
Nonlinear Legendre Spectral Finite Elements for Wind Turbine Blade Dynamics: Preprint
Wang, Q.; Sprague, M. A.; Jonkman, J.; Johnson, N.
2014-01-01
This paper presents a numerical implementation and examination of a new wind turbine blade finite element model based on Geometrically Exact Beam Theory (GEBT) and a high-order spectral finite element method. The displacement-based GEBT is presented, which includes the coupling effects that exist in composite structures and geometric nonlinearity. Legendre spectral finite elements (LSFEs) are high-order finite elements with nodes located at the Gauss-Legendre-Lobatto points. LSFEs can be an order of magnitude more efficient than low-order finite elements for a given accuracy level. Interpolation of the three-dimensional rotation, a major technical barrier in large-deformation simulation, is discussed in the context of LSFEs. It is shown, by numerical example, that the high-order LSFEs, where weak forms are evaluated with nodal quadrature, do not suffer from a drawback that exists in low-order finite elements where the tangent-stiffness matrix is calculated at the Gauss points. Finally, the new LSFE code is implemented in the new FAST Modularization Framework for dynamic simulation of highly flexible composite-material wind turbine blades. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples showing validation and LSFE performance will be provided in the final paper.
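The Gauss-Legendre-Lobatto points named above are the endpoints ±1 together with the roots of P_n', with quadrature weights w_i = 2/(n(n+1) P_n(x_i)²). A short sketch using numpy's Legendre utilities:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll_points_weights(n):
    """Gauss-Legendre-Lobatto quadrature on [-1, 1] for polynomial
    order n: the n+1 nodes are the endpoints plus the roots of the
    derivative of the Legendre polynomial P_n, and the weights are
    w_i = 2 / (n (n+1) P_n(x_i)^2)."""
    P = Legendre.basis(n)
    interior = np.real(P.deriv().roots())   # roots of P_n' in (-1, 1)
    x = np.sort(np.concatenate(([-1.0], interior, [1.0])))
    w = 2.0 / (n * (n + 1) * P(x) ** 2)
    return x, w
```

Because the nodes include the element endpoints, adjacent elements share boundary nodes, and nodal (Lobatto) quadrature makes the mass matrix diagonal, which is the property behind the efficiency claim in the abstract.
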
A numerical spectral approach to solve the dislocation density transport equation
Djaka, K S; Taupin, V; Berbenni, S; Fressengeas, C
2015-01-01
A numerical spectral approach is developed to solve in a fast, stable and accurate fashion, the quasi-linear hyperbolic transport equation governing the spatio-temporal evolution of the dislocation density tensor in the mechanics of dislocation fields. The approach relies on using the Fast Fourier Transform algorithm. Low-pass spectral filters are employed to control both the high frequency Gibbs oscillations inherent to the Fourier method and the fast-growing numerical instabilities resulting from the hyperbolic nature of the transport equation. The numerical scheme is validated by comparison with an exact solution in the 1D case corresponding to dislocation dipole annihilation. The expansion and annihilation of dislocation loops in 2D and 3D settings are also produced and compared with finite element approximations. The spectral solutions are shown to be stable, more accurate for low Courant numbers and much less computation time-consuming than the finite element technique based on an explicit Galerkin-least squares scheme. (paper)
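The two ingredients named above, Fourier-transform differentiation and a low-pass filter that controls Gibbs oscillations in a hyperbolic transport equation, can be sketched on 1D constant-velocity advection (the filter parameters and the explicit RK4 time stepper are illustrative choices, not the paper's scheme):

```python
import numpy as np

def advect_spectral(rho0, v, dt, steps, L=2*np.pi, p=16, alpha=36.0):
    """Advance d(rho)/dt + v d(rho)/dx = 0 on a periodic domain with
    Fourier differentiation, RK4 time stepping, and an exponential
    low-pass filter applied each step to damp high-frequency modes."""
    n = rho0.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral derivative factors
    eta = 2.0 * np.abs(np.fft.fftfreq(n))         # normalized frequency in [0, 1]
    filt = np.exp(-alpha * eta ** p)              # exponential low-pass filter
    rho = np.asarray(rho0, dtype=float)

    def rhs(r):
        # spectral evaluation of -v * d(rho)/dx
        return -v * np.fft.ifft(k * np.fft.fft(r)).real

    for _ in range(steps):
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2)
        k4 = rhs(rho + dt * k3)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        rho = np.fft.ifft(filt * np.fft.fft(rho)).real   # damp high modes
    return rho
```

For a smooth profile the filter is essentially inert (low modes pass with factor ≈ 1) and the solution simply translates, while for sharp fronts it suppresses the high-frequency growth that makes unfiltered hyperbolic spectral schemes unstable.
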
A three-dimensional spectral element model for the solution of the hydrostatic primitive equations
Iskandarani, M; Levin, J C
2003-01-01
We present a spectral element model to solve the hydrostatic primitive equations governing large-scale geophysical flows. The highlights of this new model include unstructured grids, dual h-p paths to convergence, and good scalability characteristics on present day parallel computers including Beowulf-class systems. The behavior of the model is assessed on three process-oriented test problems involving wave propagation, gravitational adjustment, and nonlinear flow rectification, respectively. The first of these test problems is a study of the convergence properties of the model when simulating the linear propagation of baroclinic Kelvin waves. The second is an intercomparison of spectral element and finite-difference model solutions to the adjustment of a density front in a straight channel. Finally, the third problem considers the comparison of model results to measurements obtained from a laboratory simulation of flow around a submarine canyon. The aforementioned tests demonstrate the good performance of th...
Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing
2017-09-01
The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids. This helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electrical field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for single anisotropic abnormal bodies, anisotropic surrounding rock, and single anisotropic abnormal body embedded in an anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic
Multiscale finite element methods for high-contrast problems using local spectral basis functions
Efendiev, Yalchin
2011-02-01
In this paper we study multiscale finite element methods (MsFEMs) using spectral multiscale basis functions designed for high-contrast problems. Multiscale basis functions are constructed using eigenvectors of a carefully selected local spectral problem, which depends strongly on the choice of the initial partition of unity functions. The resulting space enriches the initial multiscale space with eigenvectors of the local spectral problem. The eigenvectors corresponding to small, asymptotically vanishing eigenvalues detect important features of the solutions that are not captured by the initial multiscale basis functions, and the multiscale basis functions are constructed so that they span these eigenfunctions. We present a convergence study showing that the convergence rate (in the energy norm) is proportional to (H/Λ*)^{1/2}, where Λ* is proportional to the smallest eigenvalue whose corresponding eigenvector is not included in the coarse space. Thus, we would like to reach a larger eigenvalue with a smaller coarse space. This is accomplished with a careful choice of the initial multiscale basis functions and the setup of the eigenvalue problems. Numerical results are presented to back up our theoretical results and to show the higher accuracy of MsFEMs with spectral multiscale basis functions. We also present a hierarchical construction of the eigenvectors that provides CPU savings.
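The enrichment step described above can be sketched generically: solve a local generalized eigenproblem and keep the eigenvectors whose eigenvalues fall below a cutoff. The matrices and threshold below are toy stand-ins, not the paper's actual local spectral problem:

```python
import numpy as np
from scipy.linalg import eigh

def spectral_basis(A, S, threshold):
    """Select eigenvectors of the generalized problem A v = lam S v whose
    eigenvalues fall below `threshold`; these enrich the coarse space.
    A and S are symmetric (S positive definite), e.g. local stiffness- and
    mass-like matrices on one coarse block (names illustrative)."""
    lam, V = eigh(A, S)          # eigenvalues returned in ascending order
    keep = lam < threshold       # small, "asymptotically vanishing" eigenvalues
    return lam[keep], V[:, keep]

# Toy local problem: the smallest eigenvalues flag the features to keep
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B @ B.T + 1e-3 * np.eye(8)   # SPD stand-in for a local stiffness matrix
S = np.eye(8)
lam, V = spectral_basis(A, S, threshold=1.0)
```

In the paper's notation, raising the cutoff (keeping more eigenvectors) increases Λ* and hence the (H/Λ*)^{1/2} rate, at the cost of a larger coarse space.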
Spectral interference of zirconium on 24 analyte elements using CCD based ICP-AES technique
Adya, V.C.; Sengupta, Arijit; Godbole, S.V.
2014-01-01
In the present study, the spectral interference of zirconium on different analytical lines of 24 critical analytes, using a CCD-based ICP-AES technique, is described. Suitable analytical lines for zirconium were identified along with their detection limits. The sensitivity and detection limits of the analytical channels for different elements in the presence of a Zr matrix were calculated. Subsequently, analytical lines with the least interference from Zr and better detection limits were selected for the determinations. (author)
The Spectral/hp-Finite Element Method for Partial Differential Equations
Engsig-Karup, Allan Peter
2009-01-01
This set of lecture notes provides an elementary introduction to both the classical Finite Element Method (FEM) and the extended Spectral/$hp$-Finite Element Method for solving Partial Differential Equations (PDEs). Many problems in science and engineering can be formulated mathematically ... dimensions. In the course the chosen programming environment is Matlab; however, this is by no means a necessary requirement. The mathematical level needed to grasp the details of this set of notes requires an elementary background in mathematical analysis and linear algebra. Each chapter is supplemented ...
Fischer, P.F. [Brown Univ., Providence, RI (United States)
1996-12-31
Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.
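A baseline for the iterative pressure solves discussed above is the preconditioned conjugate gradient method. A minimal Jacobi-preconditioned variant is sketched below; this is illustrative only, as production spectral element solvers use far stronger preconditioners than the diagonal one shown here:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=500):
    """Jacobi-preconditioned conjugate gradients for an SPD matrix A.
    M_inv_diag holds the inverse of the diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # apply the (diagonal) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: 1D Laplacian-like tridiagonal matrix
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

The slow convergence noted in the abstract corresponds to the iteration count of exactly this kind of loop growing with grid refinement and element aspect ratio.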
Spectral/hp least-squares finite element formulation for the Navier-Stokes equations
Pontaza, J.P.; Reddy, J.N.
2003-01-01
We consider the application of least-squares finite element models combined with spectral/hp methods for the numerical solution of viscous flow problems. The paper presents the formulation, validation, and application of a spectral/hp algorithm to the numerical solution of the Navier-Stokes equations governing two- and three-dimensional stationary incompressible and low-speed compressible flows. The Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity or velocity gradients as additional independent variables and the least-squares method is used to develop the finite element model. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method. Spectral convergence of the L² least-squares functional and L² error norms is verified using smooth solutions to the two-dimensional stationary Poisson and incompressible Navier-Stokes equations. Numerical results for flow over a backward-facing step, steady flow past a circular cylinder, three-dimensional lid-driven cavity flow, and compressible buoyant flow inside a square enclosure are presented to demonstrate the predictive capability and robustness of the proposed formulation.
Geotourism products industry element: A community approach
Basi Arjana, I. W.; Ernawati, N. M.; Astawa, I. K.
2018-01-01
The ability of a tourism area to provide products that satisfy the needs and desires of tourists is the key to success in developing tourism. Geotourists are a niche market with specific needs. This study aims to identify the needs of geotourists by evaluating their perceptions of six elements that constitute the industrial aspects of community-based tourism products, using a qualitative approach with in-depth interviews as the data collection method. Five major categories of commercial geotourism elements were examined: travel services, accommodation, transportation, food and beverage, and souvenirs and packaging. The results show that the industry elements desired by tourists in Batur yield a variety of products that represent the needs of different market segments and accommodate the sustainability of nature; these needs arise from and are inspired by local culture. Offering an assortment of product packages is indicated to provide plentiful options for tourists, to lengthen tourists' stays, and to introduce the various product components available in Batur. The research output could serve as a reference for developing geotourism products.
Spectral element method for band-structure calculations of 3D phononic crystals
Shi, Linlin; Liu, Na; Zhou, Jianyang; Zhou, Yuanguo; Wang, Jiamin; Liu, Qing Huo
2016-01-01
The spectral element method (SEM) is a special kind of high-order finite element method (FEM) which combines the flexibility of a finite element method with the accuracy of a spectral method. In contrast to the traditional FEM, the SEM exhibits advantages in high-order accuracy, as the error decreases exponentially with increasing interpolation degree by employing the Gauss–Lobatto–Legendre (GLL) polynomials as basis functions. In this study, the spectral element method is developed for the first time for the determination of band structures of 3D isotropic/anisotropic phononic crystals (PCs). Based on the Bloch theorem, we present a novel, intuitive discretization formulation for the Navier equation in the SEM scheme for periodic media. By virtue of using the orthogonal Legendre polynomials, the generalized eigenvalue problem is converted to a regular one in our SEM implementation to improve the efficiency. Besides, according to the specific geometric structure, 8-node and 27-node hexahedral elements as well as an analytic mesh have been used to accurately capture curved PC models in our SEM scheme. To verify its accuracy and efficiency, this study analyses phononic-crystal plates with square and triangular lattice arrangements, and 3D cubic phononic crystals consisting of simple cubic (SC), body-centered cubic (BCC) and face-centered cubic (FCC) lattices with isotropic or anisotropic scatterers. All the numerical results considered demonstrate that SEM is superior to the conventional FEM and can be an efficient alternative method for the accurate determination of band structures of 3D phononic crystals. (paper)
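The conversion of the generalized eigenvalue problem to a regular one can be illustrated in miniature. One common route (a sketch under the assumption of a diagonal, lumped mass matrix, not necessarily the paper's exact construction) is the symmetric scaling K u = ω² M u → (M^{-1/2} K M^{-1/2}) y = ω² y:

```python
import numpy as np

# With a diagonal (lumped) mass matrix M, the generalized problem
# K u = w^2 M u reduces to the standard symmetric problem
# (M^{-1/2} K M^{-1/2}) y = w^2 y,  with  y = M^{1/2} u.
# A toy 1D stiffness/mass pair stands in for the SEM matrices.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD stiffness-like
M = np.diag(np.linspace(1.0, 2.0, n))                    # diagonal mass matrix

d = 1.0 / np.sqrt(np.diag(M))
A = (K * d).T * d             # M^{-1/2} K M^{-1/2}, still symmetric
w2 = np.linalg.eigvalsh(A)    # squared frequencies of the generalized problem
```

Working with the standard symmetric form lets ordinary symmetric eigensolvers be used, which is the efficiency gain the abstract refers to.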
Element-specific spectral imaging of multiple contrast agents: a phantom study
Panta, R. K.; Bell, S. T.; Healy, J. L.; Aamir, R.; Bateman, C. J.; Moghiseh, M.; Butler, A. P. H.; Anderson, N. G.
2018-02-01
This work demonstrates the feasibility of simultaneous discrimination of multiple contrast agents based on their element-specific and energy-dependent X-ray attenuation properties using a pre-clinical photon-counting spectral CT. We used a photon-counting based pre-clinical spectral CT scanner with four energy thresholds to measure the X-ray attenuation properties of various concentrations of iodine (9, 18 and 36 mg/ml), gadolinium (2, 4 and 8 mg/ml) and gold (2, 4 and 8 mg/ml) based contrast agents, calcium chloride (140 and 280 mg/ml) and water. We evaluated the spectral imaging performance of different energy threshold schemes between 25 and 82 keV at 118 kVp, based on K-factor and signal-to-noise ratio, and ranked them. K-factor was defined as the X-ray attenuation in the K-edge-containing energy range divided by the X-ray attenuation in the preceding energy range, expressed as a percentage. We evaluated the effectiveness of the optimised energy selection to discriminate all three contrast agents in a phantom of 33 mm diameter. A photon-counting spectral CT using four energy thresholds of 27, 33, 49 and 81 keV at 118 kVp simultaneously discriminated three contrast agents based on iodine, gadolinium and gold at various concentrations using their K-edge and energy-dependent X-ray attenuation features in a single scan. A ranking method to evaluate spectral imaging performance enabled energy thresholds to be optimised to discriminate iodine, gadolinium and gold contrast agents in a single spectral CT scan. Simultaneous discrimination of multiple contrast agents in a single scan is likely to open up new possibilities for improving the accuracy of disease diagnosis by simultaneously imaging multiple bio-markers, each labelled with a nano-contrast agent.
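The K-factor definition above translates directly into code; the numbers in this sketch are illustrative, not measured values from the study:

```python
def k_factor(atten_k_edge_bin, atten_preceding_bin):
    """K-factor as defined above: attenuation in the K-edge-containing
    energy bin divided by attenuation in the preceding bin, as a percent."""
    return 100.0 * atten_k_edge_bin / atten_preceding_bin

# Illustrative numbers only: an agent whose attenuation jumps across its
# K-edge gives a K-factor above 100%, while materials without a K-edge in
# the scanned range stay at or below 100%.
print(k_factor(6.0, 4.0))  # 150.0
```

Ranking threshold schemes by this quantity (together with signal-to-noise ratio) is what drives the energy-threshold optimisation described in the abstract.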
A spectral element-FCT method for the compressible Euler equations
Giannakouros, J.; Karniadakis, G.E.
1994-01-01
A new algorithm based on spectral element discretizations and flux-corrected transport concepts is developed for the solution of the Euler equations of inviscid compressible fluid flow. A conservative formulation is proposed based on one- and two-dimensional cell-averaging and reconstruction procedures, which employ a staggered mesh of Gauss-Chebyshev and Gauss-Lobatto-Chebyshev collocation points. Particular emphasis is placed on the construction of robust boundary and interfacial conditions in one and two dimensions. It is demonstrated through shock-tube problems and two-dimensional simulations that the proposed algorithm leads to stable, non-oscillatory solutions of high accuracy. Of particular importance is the fact that dispersion errors are minimal, as shown through numerical experiments. From the operational point of view, casting the method in a spectral element formulation provides flexibility in the discretization, since a variable number of macro-elements or collocation points per element can be employed to accommodate both accuracy and geometric requirements.
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics
Carpenter, Mark H.
2016-01-04
Nonlinearly stable finite element methods of arbitrary type and order are currently unavailable for discretizations of the compressible Navier-Stokes equations. Summation-by-parts (SBP) entropy stability analysis provides a means of constructing nonlinearly stable discrete operators of arbitrary order, but is currently limited to simple element types. Herein, recent progress is reported on developing entropy-stable (SS) discontinuous spectral collocation formulations for hexahedral elements. Two complementary efforts are discussed. The first effort generalizes previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort extends previous work on entropy stability to include p-refinement at nonconforming interfaces. A generalization of existing entropy stability theory is required to accommodate the nuances of fully multidimensional SBP operators. The entropy stability of the compressible Euler equations on nonconforming interfaces is demonstrated using the newly developed LG operators and multidimensional interface interpolation operators. Preliminary studies suggest design-order accuracy at nonconforming interfaces.
Zhang, Lei; Cao, Ling; Zhao, Laishi; Algeo, Thomas J.; Chen, Zhong-Qiang; Li, Zhihong; Lv, Zhengyi; Wang, Xiangdong
2017-08-01
Conodont apatite has long been used in paleoenvironmental studies, often with minimal evaluation of the influence of diagenesis on measured elemental and isotopic signals. In this study, we evaluate diagenetic influences on conodonts using an integrated set of analytical techniques. A total of 92 points in 19 coniform conodonts from Ordovician marine units of South China were analyzed by micro-laser Raman spectroscopy (M-LRS), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), high-resolution X-ray microdiffraction (HXRD), and secondary ion mass spectrometry (SIMS). Each conodont element was analyzed along its full length, including the albid crown, hyaline crown, and basal body, in either a whole specimen (i.e., reflecting the composition of its outer layer) or a split specimen (i.e., reflecting the composition of its interior). In the conodonts of this study, the outer surfaces consist of hydroxyfluorapatite and the interiors of strontian hydroxyfluorapatite. Ionic substitutions resulted in characteristic Raman spectral shifts in the position (SS1) and width (SS2) of the ν₁(PO₄³⁻) stretching band. Although multiple elements were enriched (Sr²⁺, Mg²⁺) and depleted (Fe³⁺, Mn²⁺, Ca²⁺) during diagenesis, geochemical modeling constraints and known Raman spectral patterns suggest that Sr uptake was the dominant influence on diagenetic redshifts of SS1. All study specimens show lower SS2 values than modern bioapatite and synthetic apatite, suggesting that band width decreases with time in ancient bioapatite, possibly through an annealing process that produces larger, more uniform crystal domains. Most specimens consist mainly of amorphous or poorly crystalline apatite, which is inferred to represent the original microstructure of conodonts. In a subset of specimens, some tissues (especially albid crown) exhibit an increased degree of crystallinity developed through aggrading neomorphism. However, no systematic relationship was observed between
Chen, Xiao-Li; Guo, Wen-Zhong; Xue, Xu-Zhang; Wang, Li-Chun; Li, Liang; Chen, Fei
2013-08-01
Mineral element absorption and content of Lactuca sativa under different spectral component conditions were studied by ICP-AES. The results showed that: (1) for Lactuca sativa, the average proportion of Ca : Mg : K : Na : P was 5.5 : 2.5 : 2.3 : 1.5 : 1.0, and the average proportion of Fe : Mn : Zn : Cu : B was 25.9 : 5.9 : 2.8 : 1.1 : 1.0; (2) the absorption of K, P, Ca, Mg and B was largest under the LED treatment R/B = 1 : 2.75, and red light from fluorescent lamps and LEDs both promoted the absorption of Fe and Cu; (3) the LED treatments exhibiting relatively higher mineral element content were R/B = 1 : 2.75 and R/W = 1 : 1, while those with higher dry matter accumulation were R/B = 1 : 2.75 and B/W = 1 : 1.
Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox
Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest for clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi
2018-04-01
Wave propagation modeling, a vital tool in seismology, can be done via several different numerical methods, among which are the finite-difference, finite-element, and spectral-element methods (FDM, FEM and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Regarding flexibility in complex geological models and handling of the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that the frequency-domain FEM and SEM have good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for the frequency-domain FEM and SEM. Unlike the PML method, CE-ABC2 does not add any additional computational cost to the modeling except assembling boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and high-level absorbing performance is unnecessary.
Discrete conservation properties for shallow water flows using mixed mimetic spectral elements
Lee, D.; Palha, A.; Gerritsma, M.
2018-03-01
A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds pointwise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.
Karaoǧlu, Haydar; Romanowicz, Barbara
2018-06-01
We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the
Spectral-element Method for 3D Marine Controlled-source EM Modeling
Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.
2017-12-01
As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method and the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where the curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As a kind of high-order complete orthogonal polynomials, the GLC have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results, and the modeling accuracy improves greatly with increasing SEM order. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by Key Program of National Natural Science Foundation of China (41530320), China Natural Science Foundation for Young Scientists (41404093), and Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
Schulte, R T; Fritzen, C-P; Moll, J
2010-01-01
During the last decades, guided waves have shown great potential for Structural Health Monitoring (SHM) applications. These waves can be excited and sensed by piezoelectric elements that can be permanently attached onto a structure, offering online monitoring capability. However, the setup of wave-based SHM systems for complex structures may be very difficult and time consuming. For that reason there is a growing demand for efficient simulation tools providing the opportunity to design wave-based SHM systems in a virtual environment. As high-frequency waves are usually used, the associated short wavelengths lead to the necessity of a very dense mesh, which makes conventional finite elements not well suited for this purpose. Therefore, in this contribution a flat shell spectral element approach is presented. By including electromechanical coupling, an SHM system can be simulated entirely from actuator voltage to sensor voltage. Besides a comparison to measured data for anisotropic materials including delamination, a numerical example of a more complex, stiffened shell structure with debonding is presented.
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
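The noise-removal step described above can be sketched with a generic magnitude-domain spectral subtraction (not the paper's PF-based noise estimator): subtract an estimated noise power spectrum from the signal's spectrum, floor the result at zero, and reconstruct with the original phase. The tone-in-noise example is synthetic:

```python
import numpy as np

def spectral_subtraction(signal, noise_psd, n_fft=None):
    """Basic magnitude-domain spectral subtraction: remove an estimated
    noise power spectrum from the signal spectrum, clamp at zero, and
    reconstruct using the original phase."""
    n = len(signal) if n_fft is None else n_fft
    spec = np.fft.rfft(signal, n)
    clean_power = np.maximum(np.abs(spec) ** 2 - noise_psd, 0.0)
    # Keep the original phase, replace only the magnitude
    clean_spec = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(clean_spec, n)

# Tone buried in white noise; noise PSD estimated from a signal-free segment
rng = np.random.default_rng(1)
t = np.arange(1024) / 1024
noise = 0.5 * rng.standard_normal(2048)
x = np.concatenate([noise[:1024], np.sin(2 * np.pi * 64 * t) + noise[1024:]])
noise_psd = np.abs(np.fft.rfft(x[:1024])) ** 2
y = spectral_subtraction(x[1024:], noise_psd)
```

In the bearing application, the analogous subtraction is applied after wavelet filtering so that the weak fault-related impulses dominate the residual spectrum.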
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
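In the tensor-product case mentioned above, the Gauss-Lobatto-Legendre (GLL) points that make the SEM mass matrix diagonal are easy to generate; a minimal sketch (generic, not the paper's triangular cubature optimization):

```python
import numpy as np
from numpy.polynomial import legendre as L

def gll_points_weights(n):
    """Gauss-Lobatto-Legendre (GLL) nodes and weights on [-1, 1] for
    polynomial degree n (n+1 points, endpoints included).
    Weights: w_i = 2 / (n(n+1) P_n(x_i)^2)."""
    c = np.zeros(n + 1); c[n] = 1.0                 # coefficients of P_n
    interior = np.sort(L.legroots(L.legder(c)))     # roots of P_n'
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (n * (n + 1) * L.legval(x, c) ** 2)
    return x, w

# With GLL quadrature taken at the interpolation nodes themselves, the SEM
# mass matrix M_ij = sum_k w_k l_i(x_k) l_j(x_k) = w_i * delta_ij is diagonal
# "for free", because the Lagrange basis satisfies l_i(x_k) = delta_ik.
x, w = gll_points_weights(4)
M = np.diag(w)          # the lumped (collocated GLL) mass matrix
```

On triangles no such tensor-product structure exists, which is exactly why the cubature points tabulated in the paper are needed.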
Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics
Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.
2016-01-01
Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of the existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J
2017-05-01
The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
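The classic straight-fiber activating function that the paper generalizes to arbitrary neuron geometries can be sketched as a second difference of the extracellular potential along the axon; the electrode distance, current, and conductivity values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def activating_function(v_ext, dx):
    """Rattay's activating function for a straight uniform fiber: proportional
    to the second difference of the extracellular potential along the axon.
    Positive values mark likely sites of membrane depolarisation."""
    return np.diff(v_ext, 2) / dx ** 2

# extracellular potential of a point-source electrode above a straight axon
x = np.linspace(-5e-3, 5e-3, 201)      # axial positions along the fiber (m)
z = 1e-3                               # electrode-to-fiber distance (m)
I_stim = -100e-6                       # cathodic stimulus current (A)
sigma = 0.3                            # medium conductivity (S/m)
v_ext = I_stim / (4.0 * np.pi * sigma * np.sqrt(x ** 2 + z ** 2))
f = activating_function(v_ext, x[1] - x[0])
```

For this cathodic source the activating function peaks (positive) directly under the electrode, the expected site of depolarisation; the paper's extension replaces the straight-line geometry with arbitrary branching paths and solves the resulting cable equations by differential quadrature.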
Using spectral element method to solve variational inequalities with applications in finance
Moradipour, M.; Yousefi, S.A.
2015-01-01
Under the Black–Scholes model, the value of an American option solves a time dependent variational inequality problem (VIP). In this paper, first we discretize the variational inequality of American option in temporal direction by applying the Rannacher time stepping and achieve a sequence of elliptic variational inequalities. Second we discretize the spatial domain of variational inequalities by using spectral element methods with high order Lagrangian polynomials introduced on Gauss–Legendre–Lobatto points. Also by computing integrals by the Gauss–Legendre–Lobatto quadrature rule we derive a sequence of the linear complementarity problems (LCPs) having a positive definite sparse coefficient matrix. To find the unique solutions of the LCPs, we use the projected successive over-relaxation (PSOR) algorithm. Furthermore we present some existence and uniqueness theorems for the variational inequalities and LCPs. Finally, theoretical results are verified on the relevant numerical examples.
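The PSOR iteration used to solve each LCP can be sketched for the generic form z >= 0, Mz + q >= 0, z.(Mz + q) = 0; this is a small dense illustration with a made-up test matrix, not the paper's sparse option-pricing systems:

```python
import numpy as np

def psor(M, q, omega=1.5, tol=1e-10, max_iter=10000):
    """Projected SOR for the LCP: find z >= 0 with M z + q >= 0 and
    z . (M z + q) = 0. Converges for symmetric positive definite M
    and relaxation parameter 0 < omega < 2."""
    z = np.zeros(len(q))
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ z               # residual using freshest values
            z[i] = max(0.0, z[i] - omega * r / M[i, i])
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z

# small test problem with the known solution z = (1/2, 0)
M = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = psor(M, q)
```

In the American-option setting the constraint z >= 0 corresponds to the early-exercise obstacle after a change of variables, and M is the positive definite matrix produced by the spectral element discretization.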
Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method
Zhang Xu; Tan Duowang
2009-01-01
A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)
Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas
2008-09-01
This study discusses the capability of constitutive laws based on the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. High Weissenberg number problems (HWNP) usually produce a lack of convergence in the numerical algorithms. Although it remains an open question whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model, it has been recognized that the selection of an appropriate constitutive equation is a crucial step, while implementing a suitable numerical technique remains important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg numbers. The numerical results are compared with the numerical solution of the standard constitutive equation.
Viner, K.; Reinecke, P. A.; Gabersek, S.; Flagg, D. D.; Doyle, J. D.; Martini, M.; Ryglicki, D.; Michalakes, J.; Giraldo, F.
2016-12-01
NEPTUNE: the Navy Environmental Prediction sysTem Using the NUMA*corE, is a 3D spectral element atmospheric model composed of a full suite of physics parameterizations and pre- and post-processing infrastructure with plans for data assimilation and coupling components to a variety of Earth-system models. This talk will focus on the initial struggles and solutions in adapting NUMA for stable and accurate integration on the sphere using both the deep atmosphere equations and a newly developed shallow-atmosphere approximation, as demonstrated through idealized test cases. In addition, details of the physics-dynamics coupling methodology will be discussed. NEPTUNE results for test cases from the 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) will be shown and discussed. *NUMA: Nonhydrostatic Unified Model of the Atmosphere; Kelly and Giraldo 2012, JCP
Gopalakrishnan, Srinivasan; Roy Mahapatra, Debiprosad
2008-01-01
The use of composites and Functionally Graded Materials (FGMs) in structural applications has increased. FGMs allow the user to design materials for a specified functionality and have many uses in structural engineering. However, the behaviour of these structures under high-impact loading is not well understood. This book is the first to apply the Spectral Finite Element Method (SFEM) to inhomogeneous and anisotropic structures in a unified and systematic manner. It focuses on some of the problems with this media which were previously thought unmanageable. Types of SFEM for regular and damaged 1-D and 2-D waveguides, solution techniques, methods of detecting the presence of damages and their locations, and methods for controlling the wave propagation responses are discussed. Tables, figures and graphs support the theory and case studies are included. This book is of value to senior undergraduates and postgraduates studying in this field, and researchers and practicing engineers in structural integrity.
3D airborne EM modeling based on the spectral-element time-domain (SETD) method
Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.
2017-12-01
In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends too much on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can contribute to an accurate result by increasing the order of the polynomials and suppressing spurious solutions. The BE method is a stable time-discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zeros in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integration technique. A direct solver, whose speed is independent of the condition number, is adopted to quickly solve the large-scale sparse linear system of equations. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time lapse 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and the magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order brings little improvement in modeling accuracy, but the time interval plays
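The unconditional stability of backward Euler that the abstract relies on is easy to see on the scalar test equation dy/dt = -lam*y; a minimal sketch:

```python
def backward_euler_decay(lam, dt, steps, y0=1.0):
    """Backward Euler for dy/dt = -lam*y gives y_{n+1} = y_n / (1 + lam*dt).
    The amplification factor 1/(1 + lam*dt) has modulus < 1 for any dt > 0,
    so there is no stability limit on the time step; explicit Euler would
    require dt < 2/lam here and blows up otherwise."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y

# a step size four orders of magnitude beyond the explicit stability limit:
# the solution still decays monotonically instead of blowing up
y_be = backward_euler_decay(lam=1.0e4, dt=1.0, steps=10)
```

This is why BE imposes "no limitation on time steps" in the SETD scheme; the trade-off is that each step requires a (sparse) linear solve, handled above by the direct solver.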
Variational approach to probabilistic finite elements
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-08-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
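The second-moment idea can be illustrated by first-order propagation of mean and covariance through a response function; this is a generic first-order second-moment (FOSM) sketch, not the PFEM discretization of random fields itself:

```python
import numpy as np

def fosm_moments(g, mu, cov, h=1e-6):
    """First-order second-moment (FOSM) estimates for Y = g(X):
    mean ~ g(mu) and variance ~ grad(g)^T cov grad(g), using a
    central-difference gradient at the mean point."""
    mu = np.asarray(mu, dtype=float)
    grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2.0 * h)
                     for e in np.eye(mu.size)])
    return g(mu), grad @ cov @ grad

# linear response: FOSM is exact here (mean 8, variance 2^2 + 3^2 = 13)
mean, var = fosm_moments(lambda u: 2.0 * u[0] + 3.0 * u[1],
                         [1.0, 2.0], np.eye(2))
```

As the abstract notes, such expansions are accurate when the scale of randomness is moderate; for a nonlinear g the estimates carry a linearization error that Monte Carlo simulation does not.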
Effective approach to spectroscopy and spectral analysis techniques using Matlab
Li, Xiang; Lv, Yong
2017-08-01
With the development of electronic information, computers and networks, modern education technology has entered a new era, which has had a great impact on the teaching process. Spectroscopy and spectral analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and the basic technical means of spectral analysis and testing, and then to let the students use the principles and technology of spectroscopy to study the structure and state of materials and the development of the technology. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations and the plotting of functions and data. Based on teaching practice, this paper summarizes the new situation of applying Matlab to the teaching of spectroscopy. This would be suitable for most current school multimedia-assisted teaching.
A time-spectral approach to numerical weather prediction
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
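The time-spectral idea, expanding the solution over a whole time interval in Chebyshev polynomials and solving one global system instead of marching in small steps, can be sketched on a scalar ODE. The GWRM itself uses weighted residuals; this Chebyshev collocation sketch is only an illustration of the principle:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen's cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))               # diagonal: negative row sums
    return D, x

# time-spectral solve of dy/dt = -y, y(0) = 1, over the whole interval [0, T]
N, T = 16, 5.0
D, xc = cheb(N)
t = T * (1.0 - xc) / 2.0                      # Chebyshev nodes mapped to [0, T]
A = (-2.0 / T) * D + np.eye(N + 1)            # operator d/dt + 1 on the grid
b = np.zeros(N + 1)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 1.0      # impose y(t=0) = 1 (node 0 is t = 0)
y = np.linalg.solve(A, b)
```

With only 17 collocation points the solution matches exp(-t) to near machine precision over the whole interval, which is the sense in which time-spectral intervals can be orders of magnitude larger than finite-difference steps.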
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprises continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear, parametric dependency measure, to take care of both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
Blood velocity estimation using ultrasound and spectral iterative adaptive approaches
Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2011-01-01
This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30…
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory, and describes the basic theory of gPC methods
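A minimal gPC illustration: projecting g(Z) = Z^2 of a standard Gaussian onto probabilists' Hermite polynomials recovers its mean and variance exactly from the expansion coefficients. This is a one-dimensional sketch of the idea, not the book's full machinery for high-dimensional random inputs:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def gpc_coeffs(g, order, nquad=40):
    """gPC projection of g(Z), Z ~ N(0,1), onto probabilists' Hermite
    polynomials He_k: c_k = E[g(Z) He_k(Z)] / k!  (since E[He_k^2] = k!)."""
    z, w = He.hermegauss(nquad)          # Gauss rule for weight exp(-z^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalise to the standard Gaussian pdf
    gz = g(z)
    return np.array([np.sum(w * gz * He.hermeval(z, np.eye(order + 1)[k]))
                     / factorial(k) for k in range(order + 1)])

c = gpc_coeffs(lambda z: z ** 2, order=4)
mean = c[0]                                              # E[g(Z)]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, c.size))
```

Here Z^2 = He_0(Z) + He_2(Z), so the expansion truncates exactly at order 2, and mean and variance (1 and 2) fall out of the coefficients by orthogonality, the mechanism gPC exploits for general random inputs.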
A study of flow patterns for staggered cylinders at low Reynolds number by spectral element method
Hsu, Li-Chieh; Chen, Chien-Lin; Ye, Jian-Zhi [National Yunlin University of Science and Technology, Taiwan (China)
2017-06-15
This study investigates the pattern of flow past two staggered cylinders using the spectral element method, varying the distance between the cylinders and the angle of incidence (α) at low Reynolds numbers (Re = 100-800). Six flow patterns are identified: shear layer reattachment (SLR), induced separation (IS), vortex impingement (VI), synchronized vortex shedding (SVS), vortex pairing and enveloping (VPE), and vortex pairing, splitting and enveloping (VPSE). These flow patterns can be transformed from one to another by changing the distance between the cylinders, the angle of incidence, or Re. The SLR, IS and VI flow patterns appear in regimes with small angles of incidence (i.e., α ≤ 30°) and exhibit only a single von Kármán vortex street in the wake, with one shedding frequency. The SVS, VPE and VPSE flow patterns appear in regimes with large angles of incidence (i.e., 30° ≤ α ≤ 50°) and present two synchronized von Kármán vortices. Quantitative analyses and physical interpretation are also conducted to determine the generation mechanisms of the said flow patterns.
Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code
Paolucci, Roberto; Stupazzini, Marco
2008-01-01
Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion
Local and Global Gestalt Laws: A Neurally Based Spectral Approach.
Favali, Marta; Citti, Giovanna; Sarti, Alessandro
2017-02-01
This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels. This analysis performs grouping of local features and individuates perceptual units with the highest salience. Numerical simulations are performed, and results are obtained by applying the technique to a number of stimuli.
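The global grouping step, spectral analysis of a connectivity matrix, can be sketched with a plain Gaussian affinity kernel; the kernel, point data, and salience threshold below are illustrative stand-ins for the paper's Lie-group-derived connectivity kernels:

```python
import numpy as np

# two clusters of "local features"; the affinity matrix plays the role of
# the connectivity matrix built from the kernels in the paper
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                 rng.normal(3.0, 0.1, (20, 2))])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
A = np.exp(-d2 / (2.0 * 0.5 ** 2))        # Gaussian affinity (illustrative)

vals, vecs = np.linalg.eigh(A)            # A is symmetric: use eigh
v = vecs[:, -1]                           # leading eigenvector = salience
labels = np.abs(v) > np.abs(v).max() / 10.0   # most salient perceptual unit
```

The leading eigenvector concentrates on one tightly connected group of features, so thresholding its entries separates the perceptual unit with the highest salience from the rest, the same grouping mechanism the letter applies to V1-inspired kernels.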
Wan-You Li
2014-01-01
A novel hybrid method, which simultaneously possesses the efficiency of the Fourier spectral method (FSM) and the applicability of the finite element method (FEM), is presented for the vibration analysis of structures with elastic boundary conditions. The FSM, as one type of analytical approach with excellent convergence and accuracy, is mainly limited to problems with relatively regular geometry. The purpose of the current study is to extend the FSM to problems with irregular geometry via the FEM and attempt to take full advantage of the FSM and the conventional FEM for structural vibration problems. The computational domain of general shape is first divided into several subdomains, some of which are represented by the FSM while the rest by the FEM. Then, fictitious springs are introduced to connect these subdomains. Sufficient details are given to describe the development of the hybrid method. Numerical examples of a one-dimensional Euler-Bernoulli beam and a two-dimensional rectangular plate show that the present method has good accuracy and efficiency. Furthermore, an irregular-shaped plate consisting of one rectangular plate and one semi-circular plate demonstrates the capability of the present method applied to irregular structures.
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images, to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g. AMD graphics cards, ARM-based processors and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Spectrally-balanced chromatic approach-lighting system
Chase, W. D.
1977-01-01
Approach lighting system employing combinations of red and blue lights reduces problem of color-based optical illusions. System exploits inherent chromatic aberration of eye to create three-dimensional effect, giving pilot visual clues of position.
Guided-wave approaches to spectrally selective energy absorption
Stegeman, G. I.; Burke, J. J.
1987-01-01
Results of experiments designed to demonstrate spectrally selective absorption in dielectric waveguides on semiconductor substrates are reported. The experiments were conducted with three waveguides formed by sputtering films of PSK2 glass onto silicon-oxide layers grown on silicon substrates. The three waveguide samples were studied at 633 and 532 nm. The samples differed only in the thickness of the silicon-oxide layer: 256 nm, 506 nm, and 740 nm. Agreement between theoretical predictions and measurements of the propagation constants (mode angles) of the six or seven modes supported by these samples was excellent. However, the loss measurements were inconclusive because of high scattering losses in the structures fabricated (in excess of 10 dB/cm). Theoretical calculations indicated that the power distribution among all the modes supported by these structures reaches its steady-state value after a propagation length of only 1 mm. Accordingly, the measured loss rates were found to be almost independent of which mode was initially excited. The excellent agreement between theory and experiment suggests that low-loss waveguides would confirm the predicted loss rates.
Koch, Stephan
2009-01-01
A problem-tailored discretization approach is based on geometrical modeling of reduced spatial dimension inside the respective domains of symmetry. For the approximation of the electromagnetic fields, orthogonal polynomials along the direction of symmetry are combined with finite element shape functions on the remaining cross-section. This leads to an efficient method providing high accuracy. The domains of symmetry are embedded into the surrounding region by means of a strong coupling at the discrete level in terms of a domain decomposition approach. Using this strategy, for certain examples a level of accuracy corresponding to numerical models featuring several million degrees of freedom in classical finite element methods can be achieved with only one hundred thousand unknowns. This is demonstrated for different examples, e.g., a cylindrical power transformer and the already mentioned accelerator magnet. (orig.)
Kirsanov, N. Yu.; Latukhina, N. V., E-mail: natalat@yandex.ru; Lizunkova, D. A.; Rogozhina, G. A. [Samara National Research University (Russian Federation); Stepikhova, M. V. [Russian Academy of Sciences, Institute for Physics of Microstructures (Russian Federation)
2017-03-15
The spectral characteristics of the specular reflectance, photosensitivity, and photoluminescence (PL) of multilayer structures based on porous silicon with rare-earth-element (REE) ions are investigated. It is shown that the photosensitivity of these structures in the wavelength range of 0.4–1.0 μm is higher than in structures free of REEs. The structures with Er³⁺ ions exhibit a luminescence response at room temperature in the spectral range from 1.1 to 1.7 μm. The PL spectrum of the erbium impurity is characterized by a fine line structure, which is determined by the splitting of the ⁴I₁₅/₂ multiplet of the Er³⁺ ion. It is shown that the structures with a porous layer on the working surface have a much lower reflectance in the entire spectral range under study (0.2–1.0 μm).
Electromagnetic microinstabilities in tokamak plasmas using a global spectral approach
Falchetto, G. L
2002-03-01
Electromagnetic microinstabilities in tokamak plasmas are studied by means of a linear global eigenvalue numerical code. The code is the electromagnetic extension of an existing electrostatic global gyrokinetic spectral toroidal code, called GLOGYSTO. Ion dynamics is described by the gyrokinetic equation, so that ion finite-Larmor-radius effects are taken into account to all orders. Non-adiabatic electrons are included in the model, with passing particles described by the drift-kinetic equation and trapped particles through the bounce-averaged drift-kinetic equation. A low-frequency electromagnetic perturbation is applied to a low, but finite, β plasma (where the parameter β denotes the ratio of plasma pressure to magnetic pressure); thus, the parallel perturbations of the magnetic field are neglected. The system is closed by the quasi-neutrality equation and the parallel component of Ampere's law. The formulation is applied to a large-aspect-ratio toroidal configuration with circular shifted surfaces. Such a simple configuration enables one to derive the gyrocenter trajectories analytically. The system is solved in Fourier space, taking advantage of a decomposition adapted to the toroidal geometry. The major contributions of this thesis are as follows. The electromagnetic effects on toroidal Ion Temperature Gradient driven (ITG) modes are studied. The stabilization of these modes with increasing β, as predicted in previous work, is confirmed. The inclusion of trapped-electron dynamics enables the study of its coupling to the ITG modes and of Trapped Electron Modes (TEM). The effects of finite β are considered together with those of different magnetic shear profiles and of the Shafranov shift. The threshold for the destabilization of an electromagnetic mode is identified. Moreover, the global formulation yields for the first time the radial structure of this so-called Alfvenic Ion Temperature Gradient (AITG) mode. The stability of the
A singular-value decomposition approach to X-ray spectral estimation from attenuation data
Tominaga, Shoji
1986-01-01
A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of the filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Results of applying the approach to the spectral distributions emitted by a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
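The core of the approach described above, solving an ill-conditioned linear system by expanding the response matrix into its singular components and truncating at a noise-dependent cutoff, can be sketched as follows. The energy grid, attenuation model, and truncation criterion here are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

# Hypothetical response matrix A (n_filters x n_energies): attenuation of
# each filtration at each energy bin; s_true is a toy X-ray spectrum.
rng = np.random.default_rng(0)
energies = np.linspace(20.0, 120.0, 50)                # keV (assumed grid)
s_true = np.exp(-0.5 * ((energies - 70.0) / 15.0) ** 2)  # toy spectrum shape
thickness = np.linspace(0.0, 5.0, 30)                  # filter thickness (assumed)
mu = 0.2 + 10.0 / energies                             # toy attenuation coefficient
A = np.exp(-np.outer(thickness, mu))                   # attenuation response matrix
b = A @ s_true + rng.normal(0.0, 1e-3, A.shape[0])     # noisy measurements

# Expand A into its SVD components and keep only those whose singular values
# rise above a noise-dependent cutoff (an assumed stand-in for the paper's
# criterion function), then form the truncated-SVD estimate.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(sv > 1e-2 * sv[0]))                     # assumed truncation rule
s_est = Vt[:k].T @ ((U[:, :k].T @ b) / sv[:k])         # regularized estimate
```

Dropping the small singular values is what tames the ill-conditioning: each discarded component would otherwise amplify measurement noise by a factor of 1/σᵢ.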
Li, Mao; Qiu, Zihua; Liang, Chunlei; Sprague, Michael; Xu, Min
2017-01-13
In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of the spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for the 2D wave equation and the Euler equations. Our results demonstrate that the SDRT method is stable and high-order accurate for a number of test problems using triangular, quadrilateral, and mixed-element meshes.
Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter
2016-01-01
Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
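A minimal sketch of the PLV computation discussed above, using an FFT-based Hilbert transform to recover instantaneous phases. The SSD preprocessing step is omitted (signals are assumed already narrowband), and the oscillator parameters are illustrative:

```python
import numpy as np

def analytic_signal(x):
    # Hilbert transform via FFT (assumes an even-length real signal)
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(spec * h)

def plv(x, y):
    """Phase-locking value: mean resultant length of the phase differences."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Two noisy 40 Hz oscillators with a constant phase lag: PLV should be near 1,
# independent of the lag itself.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 2000, endpoint=False)
x = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.8) + 0.1 * rng.standard_normal(t.size)
```

Unlike spectral coherence, the PLV depends only on the instantaneous phase difference, which is why it remains informative when frequency and amplitude fluctuate during intermittent synchronization.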
Taneja, Ankur; Higdon, Jonathan
2018-01-01
A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used with a high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry standard relative permeability models. High order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low-permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high order spectral element method may be exploited to achieve accurate results utilizing a simple cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.
Simulating high-frequency seismograms in complicated media: A spectral approach
Orrey, J.L.; Archambeau, C.B.
1993-01-01
The main attraction of using a spectral method instead of a conventional finite difference or finite element technique for full-wavefield forward modeling in elastic media is the increased accuracy of a spectral approximation. While a finite difference method accurate to second order typically requires 8 to 10 computational grid points to resolve the smallest wavelengths on a 1-D grid, a spectral method that approximates the wavefield by trigonometric functions theoretically requires only 2 grid points per minimum wavelength and produces no numerical dispersion from the spatial discretization. The resultant savings in computer memory, which are very significant in 2 and 3 dimensions, allow for larger-scale and/or higher-frequency simulations.
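The grid-point argument above is easy to verify numerically: on a periodic grid, a Fourier (spectral) derivative is exact down to about 2 points per wavelength, while a second-order centered difference at under 3 points per wavelength is badly in error. A small self-contained check, with grid size and wavenumber chosen for illustration:

```python
import numpy as np

n = 16                                   # grid points
L = 2 * np.pi                            # periodic domain length
x = np.arange(n) * L / n
k_wave = 6                               # ~2.7 points per wavelength (Nyquist is 8)
f = np.sin(k_wave * x)
exact = k_wave * np.cos(k_wave * x)

# Spectral derivative: multiply by i*k in Fourier space
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi          # integer wavenumbers here
df_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Second-order centered finite difference (periodic wrap via roll)
h = L / n
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

err_spec = np.max(np.abs(df_spec - exact))   # roundoff-level
err_fd = np.max(np.abs(df_fd - exact))       # order-one error at this resolution
```

The finite difference effectively replaces the wavenumber k by sin(kh)/h, so at coarse sampling the computed derivative is strongly dispersive, exactly the effect the spectral approximation avoids.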
Oral, Elif; Gélis, Céline; Bonilla, Luis Fabián; Delavaud, Elise
2017-12-01
Numerical modelling of seismic wave propagation, considering soil nonlinearity, has become a major topic in seismic hazard studies when strong shaking is involved under particular soil conditions. Indeed, when strong ground motion propagates in saturated soils, pore pressure is another important parameter to take into account when successive phases of contractive and dilatant soil behaviour are expected. Here, we model 1-D seismic wave propagation in linear and nonlinear media using the spectral element numerical method. The study uses a three-component (3C) nonlinear rheology and includes pore-pressure excess. The 1-D-3C model is used to study the 1987 Superstition Hills earthquake (ML 6.6), which was recorded at the Wildlife Refuge Liquefaction Array, USA. The data of this event show strong soil nonlinearity involving pore-pressure effects. The ground motion is numerically modelled for different assumptions on soil rheology and input motion (1C versus 3C), using the recorded borehole signals as input motion. The computed acceleration-time histories show low-frequency amplification and strong high-frequency damping due to the development of pore pressure in one of the soil layers. Furthermore, the soil is found to be more nonlinear and more dilatant under triaxial loading compared to the classical 1C analysis, and significant differences in surface displacements are observed between the 1C and 3C approaches. This study contributes to identifying and understanding the dominant phenomena occurring in superficial layers, depending on local soil properties and input motions, conditions relevant for site-specific studies.
Premixed Turbulent Flames and Spectral Approach
Mathieu J.
2006-11-01
Scientific and technical approaches to the behaviour of flames developing in a turbulent medium are described in many recent papers; on the whole, the problem is very complex. The chemical reaction develops inside a turbulent flow, which requires a double scaling: characteristic times and characteristic lengths have to be defined for both the flame and the turbulent field. With a view to enlarging these comparisons, a spectral analysis of the turbulent field is proposed, widely supported by previous experimental data. The flame can be acted upon by an external turbulent field; this supposes the flame to be thicker than the smallest turbulent structures, which are associated with the Kolmogorov scale. With increasing Reynolds numbers, turbulent structures penetrate the flame front and can disturb the preheat zone or even the chemical zone. The passage from a flame-front regime to the case of a chemical reaction developing in a volume is thereby emphasized. As the reaction rate decreases, the domain affected by the reaction increases: chemical reactions generate a segregation process, whereas the chemical species are mixed by the turbulent motion. In premixed combustion engines a large range of operating points can be defined; the diagram usually used is that of Barrère and Borghi. Several modeling methods should probably be developed according to the positions of the operating points in the diagram. Modeling methods are not presented herein; however, the existence of typical structures connected with the architecture of the combustion chamber could be examined in a subsequent paper. The flame front can be subjected to distorting effects due to isolated vortices or to a sequence of vortices. Using a spectral approach, no discrimination has to be made as to the sizes of these vortices, which could lead to new modeling methods if restricted shapes of vortices are accepted.
Aglitskiy, Yefim; Weaver, J. L.; Karasik, M.; Serlin, V.; Obenschain, S. P.; Ralchenko, Yu.
2014-10-01
The spectra of multi-charged ions of Hf, Ta, W, Pt, Au and Bi have been studied on the Nike krypton-fluoride laser facility with the help of two kinds of X-ray spectrometers. The first is a survey instrument covering a spectral range from 0.5 to 19.5 angstroms, which allows simultaneous observation of both the M- and N-spectra of the above-mentioned elements with high spectral resolution. The second is an imaging spectrometer with interchangeable spherically bent quartz crystals that adds higher efficiency, higher spectral resolution, and high spatial resolution to the qualities of the former. Multiple spectral lines with X-ray energies as high as 4 keV that belong to the isoelectronic sequences of Fe, Co, Ni, Cu and Zn were identified with the help of the NOMAD package developed by Yu. Ralchenko and colleagues. In our continuing effort to support the DOE/NNSA inertial fusion program, this campaign covered a wide range of plasma conditions that result in the production of relatively energetic X-rays. Work supported by the US DOE/NNSA.
A finite element approach to self-consistent field theory calculations of multiblock polymers
Ackerman, David M. [Department of Mechanical Engineering, Iowa State University, Ames, IA 50011 (United States); Delaney, Kris; Fredrickson, Glenn H. [Materials Research Laboratory, University of California, Santa Barbara (United States); Ganapathysubramanian, Baskar, E-mail: baskarg@iastate.edu [Department of Mechanical Engineering, Iowa State University, Ames, IA 50011 (United States)
2017-02-15
Self-consistent field theory (SCFT) has proven to be a powerful tool for modeling equilibrium microstructures of soft materials, particularly for multiblock polymers. A very successful approach to numerically solving the SCFT set of equations is based on using a spectral approach. While widely successful, this approach has limitations especially in the context of current technologically relevant applications. These limitations include non-trivial approaches for modeling complex geometries, difficulties in extending to non-periodic domains, as well as non-trivial extensions for spatial adaptivity. As a viable alternative to spectral schemes, we develop a finite element formulation of the SCFT paradigm for calculating equilibrium polymer morphologies. We discuss the formulation and address implementation challenges that ensure accuracy and efficiency. We explore higher order chain contour steppers that are efficiently implemented with Richardson Extrapolation. This approach is highly scalable and suitable for systems with arbitrary shapes. We show spatial and temporal convergence and illustrate scaling on up to 2048 cores. Finally, we illustrate confinement effects for selected complex geometries. This has implications for materials design for nanoscale applications where dimensions are such that equilibrium morphologies dramatically differ from the bulk phases.
Stability Estimates for h-p Spectral Element Methods for Elliptic Problems
Dutt, Pravir; Tomar, S.K.; Kumar, B.V. Rathish
2002-01-01
In a series of papers, of which this is the first, we study how to solve elliptic problems on polygonal domains using spectral methods on parallel computers. To overcome the singularities that arise in a neighborhood of the corners, we use a geometrical mesh. With this mesh we seek a solution which
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Rudianto, Indra; Sudarmaji
2018-04-01
We present an implementation of the spectral-element method for simulation of two-dimensional elastic wave propagation in fully heterogeneous media. We have incorporated most realistic geological features into the model, including surface topography, curved layer interfaces, and 2-D wave-speed heterogeneity. To accommodate such complexity, we use an unstructured quadrilateral meshing technique. Simulation was performed on a GPU cluster, which consists of 24-core Intel Xeon CPUs and 4 NVIDIA Quadro graphics cards, using a CUDA and MPI implementation. We speed up the computation by a factor of about 5 compared to MPI alone, and by a factor of about 40 compared to a serial implementation.
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31-year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to the other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
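The GCM mean-and-variance correction described above amounts to rescaling the model series against reanalysis statistics before it drives the RCM. A minimal sketch, with all names and values illustrative rather than taken from the paper:

```python
import numpy as np

def bias_correct(gcm, reanalysis_mean, reanalysis_std):
    """Adjust a GCM series so its climatological mean and variance match
    reanalysis statistics (interface and names are illustrative)."""
    g_mean, g_std = gcm.mean(), gcm.std()
    return reanalysis_mean + (gcm - g_mean) * (reanalysis_std / g_std)

# Toy 31-year daily temperature-anomaly series with a warm, over-dispersed bias
rng = np.random.default_rng(2)
gcm_temp = 2.5 + 1.8 * rng.standard_normal(31 * 365)
corrected = bias_correct(gcm_temp, reanalysis_mean=1.0, reanalysis_std=1.2)
```

By construction the corrected series has exactly the reanalysis climatological mean and standard deviation, while preserving the GCM's day-to-day variability pattern; in the paper this correction is applied before the output becomes the RCM's initial and lateral boundary conditions.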
Heavy element stable isotope ratios. Analytical approaches and applications
Tanimizu, Masaharu; Sohrin, Yoshiki; Hirata, Takafumi
2013-01-01
Continuous developments in inorganic mass spectrometry techniques, including the combination of an inductively coupled plasma ion source with a magnetic sector-based mass spectrometer equipped with a multiple-collector array, have revolutionized the precision of isotope ratio measurements, and applications of inorganic mass spectrometry in biochemistry, geochemistry, and marine chemistry are beginning to appear on the horizon. A series of pioneering studies has revealed that natural stable isotope fractionations of many elements heavier than S (e.g., Fe, Cu, Zn, Sr, Ce, Nd, Mo, Cd, W, Tl, and U) are common on Earth, and it has been widely recognized that most physicochemical reactions or biochemical processes induce mass-dependent isotope fractionation. The variations in isotope ratios of the heavy elements can provide new insights into past and present biochemical and geochemical processes. To achieve this, the analytical community is actively solving problems such as spectral interference, mass discrimination drift, chemical separation and purification, and reduction of the contamination of analytes. This article describes data calibration and standardization protocols to allow interlaboratory comparisons or to maintain traceability of data, and basic principles of isotope fractionation in nature, together with high-selectivity and high-yield chemical separation and purification techniques for stable isotope studies.
Spectral Subtraction Approach for Interference Reduction of MIMO Channel Wireless Systems
Tomohiro Ono
2005-08-01
In this paper, a generalized spectral subtraction approach for reducing additive impulsive noise, narrowband signals, white Gaussian noise, and DS-CDMA interference in MIMO channel DS-CDMA wireless communication systems is investigated. Interference and noise reduction or suppression is an essential problem in wireless mobile communication systems for improving the quality of communication. The spectral subtraction scheme is applied to interference reduction problems in noisy MIMO channel systems. Interference in space- and time-domain signals can be effectively suppressed by selecting threshold values, and the computational load of the FFT is not large. Further, channel fading effects are compensated by spectral modification within the spectral subtraction process. In the simulations, the effectiveness of the proposed method for MIMO channel DS-CDMA is demonstrated in comparison with conventional MIMO channel DS-CDMA.
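A generic single-frame magnitude spectral subtraction, the basic operation underlying the approach above, can be sketched as follows. This omits the paper's MIMO DS-CDMA specifics (per-domain threshold selection, fading compensation); the noise estimate and signal parameters are illustrative:

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, floor=0.01):
    """Subtract an estimated noise magnitude spectrum and keep the noisy
    phase; the spectral floor prevents negative magnitudes."""
    spec = np.fft.fft(noisy)
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    return np.real(np.fft.ifft(mag * np.exp(1j * np.angle(spec))))

rng = np.random.default_rng(3)
n = 1024
clean = np.sin(2 * np.pi * 50 * np.arange(n) / n)    # desired signal
noisy = clean + 0.3 * rng.standard_normal(n)         # received signal
# Noise magnitude estimated from a separate noise-only reference segment
noise_mag = np.abs(np.fft.fft(0.3 * rng.standard_normal(n))).mean()
denoised = spectral_subtract(noisy, noise_mag)
```

Bins dominated by the signal lose only a small fraction of their magnitude, while noise-only bins are pushed down to the floor, which is what makes the scheme cheap (one FFT pair) yet effective when a reliable noise estimate is available.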
Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.
2017-10-01
Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each pixel gathers the spectral information of the reflectance of every spatial pixel. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
Majaron, B; Milanic, M
2010-01-01
Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.
An abstract approach to some spectral problems of direct sum differential operators
Maksim S. Sokolov
2003-07-01
In this paper, we study the common spectral properties of abstract self-adjoint direct sum operators, considered in a direct sum Hilbert space. Applications of such operators arise in the modelling of processes of multi-particle quantum mechanics, quantum field theory and, specifically, in multi-interval boundary problems of differential equations. We show that a direct sum operator does not depend in a straightforward manner on the separate operators involved. That is, given a set of self-adjoint operators forming a direct sum operator, we show how the spectral representation for this operator depends on the spectral representations for the individual operators (the coordinate operators) involved in forming this sum operator. In particular, it is shown that this problem is not immediately solved by taking a direct sum of the spectral properties of the coordinate operators. Primarily, these results are to be applied to operators generated by a multi-interval quasi-differential system studied in the earlier works of Ashurov, Everitt, Gesztesy, Kirsch, Markus and Zettl. The abstract approach in this paper indicates the need for further development of spectral theory for direct sum differential operators.
Investigations on Actuator Dynamics through Theoretical and Finite Element Approach
Somashekhar S. Hiremath
2010-01-01
This paper gives a new approach for modeling the fluid-structure interaction of a servovalve component and actuator. The analyzed valve is a precision flow control valve: a jet-pipe electrohydraulic servovalve. The positioning of an actuator depends upon the flow rate from the control ports, which in turn depends on the spool position. Theoretical investigations are made for the no-load and load conditions of an actuator; these are used in the finite element modeling of the actuator. The fluid-structure interaction (FSI) is established between the piston and the fluid cavities at the piston end. The fluid cavities were modeled with special-purpose hydrostatic fluid elements, while the piston is modeled with brick elements. The finite element method is used to simulate the variation of cavity pressure, cavity volume, mass flow rate, and actuator velocity. The finite element analysis is extended to study the system's linearized response to harmonic excitation using direct-solution steady-state dynamics. It was observed from the analysis that the natural frequency of the actuator depends upon the position of the piston in the cylinder, with close agreement between the theoretical and simulation results. The effect of bulk modulus is also presented in the paper.
Elements of a function analytic approach to probability.
Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Nguyen, Vu-Hieu; Naili, Salah
2012-08-01
This paper deals with the modeling of guided wave propagation in in vivo cortical long bone, which is known to be an anisotropic medium with functionally graded porosity. The bone is modeled as an anisotropic poroelastic material using Biot's theory formulated in the high-frequency domain. A hybrid spectral/finite element formulation has been developed to find the time-domain solution of ultrasonic waves propagating in a poroelastic plate immersed in two fluid halfspaces. The numerical technique is based on a combined Laplace-Fourier transform, which makes it possible to obtain a reduced-dimension problem in the frequency-wavenumber domain. In the spectral domain, as radiation conditions representing infinite fluid halfspaces may be introduced exactly, only the heterogeneous solid layer needs to be analyzed using the finite element method. Several numerical tests are presented showing very good performance of the proposed procedure. A preliminary study on the first-arrived-signal velocities computed using equivalent elastic and poroelastic models is presented.
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the open-source codes SPECFEM3D_Cartesian and SPECFEM3D_GLOBE, high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Magnetic modeling of a Linear Synchronous Machine with the spectral element method
Curti, M.; Paulides, J.J.H.; Lomonova, E.
2017-01-01
The field calculus for electrical machines is realized by solving subdomain problems. Most often, the latter are solved using either finite element analysis or the semi-analytical solution of a Laplace or Poisson equation obtained by separation of variables. The first option can capture complex
Co-clustering Analysis of Weblogs Using Bipartite Spectral Projection Approach
Xu, Guandong; Zong, Yu; Dolog, Peter
2010-01-01
Web clustering is an approach for aggregating Web objects into various groups according to underlying relationships among them. Finding co-clusters of Web objects is an interesting topic in the context of Web usage mining, which is able to capture the underlying user navigational interest and content preference simultaneously. In this paper we present an algorithm using bipartite spectral clustering to co-cluster Web users and pages. The usage data of users visiting Web sites is modeled as a bipartite graph and spectral clustering is then applied to the graph representation of the usage data. The proposed approach is evaluated by experiments performed on real datasets, and the impact of using various clustering algorithms is also investigated. Experimental results demonstrate that the employed method can effectively reveal the subset aggregates of Web users and pages which…
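A minimal sketch of the bipartite spectral projection idea, in the spirit of Dhillon-style spectral co-clustering which the abstract's method resembles (the normalization, number of singular vectors, and the tiny k-means used here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def cocluster(A, k, iters=50):
    """Co-cluster rows (users) and columns (pages) of a bipartite
    usage matrix A: normalize, embed with singular vectors, k-means."""
    d1 = np.sqrt(A.sum(axis=1))
    d2 = np.sqrt(A.sum(axis=0))
    An = A / np.outer(d1, d2)                      # D1^{-1/2} A D2^{-1/2}
    U, _, Vt = np.linalg.svd(An, full_matrices=False)
    l = max(1, int(np.ceil(np.log2(k))))           # singular vectors 2..l+1
    Z = np.vstack([U[:, 1:l + 1] / d1[:, None],    # joint user/page embedding
                   Vt[1:l + 1, :].T / d2[:, None]])
    centers = [0]                                  # farthest-point k-means init
    for _ in range(k - 1):
        dist = ((Z[:, None] - Z[centers]) ** 2).sum(-1).min(1)
        centers.append(int(dist.argmax()))
    C = Z[centers]
    for _ in range(iters):
        lab = ((Z[:, None] - C) ** 2).sum(-1).argmin(1)
        C = np.array([Z[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab[:A.shape[0]], lab[A.shape[0]:]      # user labels, page labels
```

On a usage matrix with two groups of users visiting two disjoint sets of pages, the joint embedding places each group's users and pages together, so one k-means pass co-clusters both at once.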
Pontaza, J.P.; Reddy, J.N.
2004-01-01
We consider least-squares finite element models for the numerical solution of the non-stationary Navier-Stokes equations governing viscous incompressible fluid flows. The paper presents a formulation where the effects of space and time are coupled, resulting in a true space-time least-squares minimization procedure, as opposed to a space-time decoupled formulation where a least-squares minimization procedure is performed in space at each time step. The formulation is first presented for the linear advection-diffusion equation and then extended to the Navier-Stokes equations. The formulation has no time-step stability restrictions and is spectrally accurate in both space and time. To allow the use of practical C⁰ element expansions in the resulting finite element model, the Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity as an additional independent variable, and the least-squares method is used to develop the finite element model of the governing equations. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method in matrix-free form. Spectral convergence of the L² least-squares functional and L² error norms in space-time is verified using a smooth solution to the two-dimensional non-stationary incompressible Navier-Stokes equations. Numerical results are presented for impulsively started lid-driven cavity flow, oscillatory lid-driven cavity flow, transient flow over a backward-facing step, and flow around a circular cylinder; the results demonstrate the predictive capability and robustness of the proposed formulation. Even though the space-time coupled formulation is emphasized, we also present the formulation and numerical results for least
Conjugation of fiber-coupled wide-band light sources and acousto-optical spectral elements
Machikhin, Alexander; Batshev, Vladislav; Polschikova, Olga; Khokhlov, Demid; Pozhar, Vitold; Gorevoy, Alexey
2017-12-01
Endoscopic instrumentation is widely used for diagnostics and surgery. The imaging systems, which provide the hyperspectral information of the tissues accessible by endoscopes, are particularly interesting and promising for in vivo photoluminescence diagnostics and therapy of tumour and inflammatory diseases. To add the spectral imaging feature to standard video endoscopes, we propose to implement acousto-optical (AO) filtration of wide-band illumination of incandescent-lamp-based light sources. To collect maximum light and direct it to the fiber-optic light guide inside the endoscopic probe, we have developed and tested the optical system for coupling the light source, the acousto-optical tunable filter (AOTF) and the light guide. The system is compact and compatible with the standard endoscopic components.
Spectral and thermal behaviours of rare earth element complexes with 3,5-dimethoxybenzoic acid
JANUSZ CHRUŚCIEL
2003-10-01
The conditions for the formation of rare earth element 3,5-dimethoxybenzoates were studied and their quantitative composition and solubilities in water at 293 K were determined. The complexes are anhydrous or hydrated salts and their solubilities are of the order of 10⁻⁵-10⁻⁴ mol dm⁻³. Their FTIR, FIR and X-ray spectra were recorded. The compounds were also characterized by thermogravimetric studies in air and nitrogen atmospheres and by magnetic measurements. All the complexes are crystalline compounds. The carboxylate group in these complexes is a bidentate, chelating ligand. On heating in air to 1173 K, the 3,5-dimethoxybenzoates of the rare earth elements decompose in various ways. The hydrated complexes first dehydrate to form anhydrous salts, which then decompose in air to the oxides of the respective metals, while in nitrogen to mixtures of carbon and oxides of the respective metals. The complexes are more stable in air than in nitrogen.
Xingjian Dong
2014-02-01
An efficient spectral element (SE) with electric potential degrees of freedom (DOF) is proposed to investigate the static electromechanical responses of a piezoelectric bimorph for its actuator and sensor functions. A sublayer model based on a piecewise linear approximation for the electric potential is used to describe the nonlinear distribution of electric potential through the thickness of the piezoelectric layers. An equivalent single layer (ESL) model based on first-order shear deformation theory (FSDT) is used to describe the displacement field. Legendre orthogonal polynomials of order 5 are used in the element interpolation functions. The validity and capability of the present SE model for investigation of the global and local responses of the piezoelectric bimorph are confirmed by comparing the present solutions with those obtained from coupled 3-D finite element (FE) analysis. It is shown that, without introducing any higher-order electric potential assumptions, the current method can accurately describe the distribution of the electric potential across the thickness even for a rather thick bimorph. It is revealed that the effect of electric potential is significant when the bimorph is used as a sensor, while the effect is insignificant when the bimorph is used as an actuator; the present study may therefore provide a better understanding of the nonlinear induced electric potential for bimorph sensors and actuators.
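As a small illustration of the order-5 Legendre machinery the abstract mentions, the Gauss-Lobatto-Legendre nodes that a Legendre-based spectral element typically interpolates on can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np
from numpy.polynomial import Legendre

def gll_nodes(n):
    """Gauss-Lobatto-Legendre nodes of order n on [-1, 1]:
    the endpoints plus the roots of P_n'(x)."""
    interior = Legendre.basis(n).deriv().roots()
    return np.concatenate(([-1.0], np.sort(np.real(interior)), [1.0]))

nodes = gll_nodes(5)   # 6 nodes for a 5th-order spectral element
```

The interpolation functions of the element are then the Lagrange polynomials through these nodes, which cluster near the endpoints and keep high-order interpolation well conditioned.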
A coordination chemistry approach for modeling trace element adsorption
Bourg, A.C.M.
1986-01-01
The traditional distribution coefficient, Kd, is highly dependent on the water chemistry and the surface properties of the geological system being studied and is therefore quite inappropriate for use in predictive models. Adsorption, one of the many processes included in Kd values, is described here using a coordination chemistry approach. The concept of adsorption of cationic trace elements by solid hydrous oxides can be applied to natural solids. The adsorption process is thus understood in terms of a classical complexation leading to the formation of surface (heterogeneous) ligands. Applications of this concept to some freshwater, estuarine and marine environments are discussed. (author)
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-03-01
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of the sensor as well as its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shifts in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of an analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shifts in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in
Spectral Analysis of Rare Earth Elements using Laser-Induced Breakdown Spectroscopy
Martin, Madhavi Z [ORNL; Fox, Dr. Richard V [Idaho National Laboratory (INL); Miziolek, Andrzej W [United States Army Research Laboratory; DeLucia, Frank C [United States Army Research Laboratory; Andre, Nicolas O [ORNL
2015-01-01
There is growing interest in rapid analysis of rare earth elements (REEs), both due to the need to find new natural sources to satisfy increased demand for their use in various electronic devices, and because they are used to estimate actinide masses for nuclear safeguards and nonproliferation. Laser-Induced Breakdown Spectroscopy (LIBS) appears to be a particularly well-suited spectroscopy-based technology to rapidly and accurately analyze the REEs in various matrices at low concentration levels (parts-per-million). Although LIBS spectra of REEs have been reported for a number of years, further work is still necessary in order to be able to quantify the concentrations of various REEs in real-world complex samples. LIBS offers advantages over conventional solution-based radiochemistry in terms of cost, analytical turnaround, waste generation, personnel dose, and contamination risk. Rare earth elements of commercial interest are found in the following three matrix groups: 1) raw ores and unrefined materials, 2) components in refined products such as magnets, lighting phosphors, consumer electronics (which are mostly magnets and phosphors), catalysts, batteries, etc., and 3) waste/recyclable materials (aka e-waste). LIBS spectra for REEs such as Gd, Nd, and Sm found in rare earth magnets are presented.
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow as a function of spatial resolution, compared to direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for the subgrid stresses and heat flux. The results show very good agreement between LES and DNS for the time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS one, provided predictions of the friction velocity within a 2.0% accuracy interval.
Wu, Zhijing; Li, Fengming; Zhang, Chuanzeng
2018-05-01
Inspired by the hierarchical structures of butterfly wing surfaces, a new kind of lattice structure with a two-order hierarchical periodicity is proposed and designed, and its band-gap properties are investigated by the spectral element method (SEM). The equations of motion of the whole structure are established considering the macro and micro periodicities of the system. The efficiency of the SEM is exploited in the modeling process and validated by comparing the results with those of the finite element method (FEM). Based on the highly accurate results in the frequency domain, the dynamic behaviors of the proposed two-order hierarchical structures are analyzed. An original and interesting finding is the existence of distinct macro and micro stop-bands in the given frequency domain. The mechanisms for these two types of band-gaps are also explored. Finally, the relations between the hierarchical periodicities and the different types of stop-bands are investigated by analyzing the parametric influences.
Cho, Moses A
2010-11-01
…sensing. The objectives of this paper were to (i) evaluate the classification performance of a multiple-endmember spectral angle mapper (SAM) classification approach (conventionally known as the nearest neighbour) in discriminating ten common African…
A New High-Resolution Spectral Approach to Noninvasively Evaluate Wall Deformations in Arteries
Ivonne Bazan
2014-01-01
By locally measuring changes in arterial wall thickness as a function of pressure, the related Young's modulus can be evaluated. This physical magnitude has been shown to be an important predictive factor for cardiovascular diseases. For evaluating those changes, image segmentation or time correlations of ultrasonic echoes coming from the wall interfaces are usually employed. In this paper, an alternative low-cost technique is proposed to locally evaluate variations in arterial walls, which are dynamically measured with an improved high-resolution calculation of power spectral densities in echo-traces of the wall interfaces, using parametric autoregressive processing. Certain wall deformations are finely detected by evaluating the overtone peaks of the echoes with power spectral estimations that implement the Burg and Yule-Walker algorithms. Results of this spectral approach are compared with a classical cross-correlation operator, in a tube phantom and in “in vitro” carotid tissue. A circulating loop, mimicking heart periods and blood pressure changes, is employed to dynamically inspect each sample with a broadband ultrasonic probe, acquiring multiple A-scans which are windowed to isolate echo-trace packets coming from distinct walls. The new technique and the cross-correlation operator are then applied to evaluate changing parietal deformations from the detection of displacements registered on the wall faces under a periodic regime.
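A compact sketch of the Burg autoregressive spectral estimator named in the abstract, in its standard textbook form (the test signal, model order, and FFT length below are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and residual power E,
    obtained by minimizing forward + backward prediction error."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    f = x.copy()                       # forward prediction errors
    b = x.copy()                       # backward prediction errors
    a = np.array([1.0])
    E = np.dot(x, x) / N
    for m in range(1, order + 1):
        ef = f[m:N].copy()
        eb = b[m - 1:N - 1].copy()
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        f[m:N] = ef + k * eb           # Levinson-style error updates
        b[m:N] = eb + k * ef
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        E *= 1.0 - k * k
    return a, E

def burg_psd(x, order, nfft=1024):
    """AR power spectral density on [0, 0.5] cycles/sample."""
    a, E = burg_ar(x, order)
    freqs = np.fft.rfftfreq(nfft)
    A = np.fft.rfft(a, nfft)
    return freqs, E / np.abs(A) ** 2
```

Because the spectrum is evaluated from a low-order all-pole model rather than a raw periodogram, narrow resonances (such as the echo overtone peaks the abstract exploits) appear as sharp, well-localized maxima even from short data records.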
Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia
2015-01-01
The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
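The orientation part of such a 2D-FFT framework can be sketched as follows; the index definitions here are illustrative doubled-angle statistics, not the authors' exact angle/anisotropy indexes:

```python
import numpy as np

def fiber_orientation(img):
    """Dominant angle (in [0, pi)) and anisotropy (in [0, 1]) of the
    2-D power spectrum of an image, via doubled-angle (axial) averaging."""
    F = np.abs(np.fft.fft2(img - img.mean())) ** 2
    F = np.fft.fftshift(F)
    h, w = F.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    theta = np.arctan2(fy, fx)
    r = np.hypot(fy, fx)
    Fm = np.where(r > 0.02, F, 0.0)        # suppress DC / very low frequencies
    C = (Fm * np.cos(2 * theta)).sum()     # 2*theta folds opposite bins together
    S = (Fm * np.sin(2 * theta)).sum()
    angle = (0.5 * np.arctan2(S, C)) % np.pi
    anisotropy = np.hypot(C, S) / Fm.sum()
    return angle, anisotropy
```

Spectral energy of a fiber pattern concentrates perpendicular to the fiber direction, so the image-space fiber angle is the returned spectral angle plus 90 degrees; an anisotropy near 1 indicates strongly aligned fibers, near 0 an isotropic texture.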
Insights into the Biology of IRES Elements through Riboproteomic Approaches
Almudena Pacheco
2010-01-01
Translation initiation is a highly regulated process that exerts a strong influence on the posttranscriptional control of gene expression. Two alternative mechanisms govern translation initiation in eukaryotic mRNAs: the cap-dependent initiation mechanism operating in most mRNAs, and the internal ribosome entry site (IRES)-dependent mechanism, first discovered in picornaviruses. IRES elements are highly structured RNA sequences that, in most instances, require specific proteins for recruitment of the translation machinery. Some of these proteins are eukaryotic initiation factors. In addition, RNA-binding proteins (RBPs) play a key role in internal initiation control. RBPs are pivotal regulators of gene expression in response to numerous stresses, including virus infection. This review discusses recent advances in riboproteomic approaches to identify IRES trans-acting factors (ITAFs) and the relationship between RNA-protein interaction and IRES activity, highlighting the most relevant features of picornavirus and hepatitis C virus IRESs.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally, we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
A spectral k-means approach to bright-field cell image segmentation.
Bradbury, Laura; Wan, Justin W L
2010-01-01
Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear as round shapes, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and using the k-means algorithm. We illustrate the effectiveness of the method by segmentation results of C2C12 (muscle) cells in bright-field images.
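The spectral-plus-k-means combination can be sketched on a tiny image as follows (a generic normalized-cuts-style pipeline under assumed details: 4-neighbour intensity affinities, dense eigendecomposition, and a minimal k-means; not the authors' exact graph construction):

```python
import numpy as np

def spectral_kmeans_segment(img, k=2, sigma=0.1, iters=30):
    """Segment a small grayscale image: build an intensity-weighted
    4-neighbour graph, take the k smallest normalized-Laplacian
    eigenvectors, then k-means the row-normalized embedding."""
    h, w = img.shape
    n = h * w
    v = img.ravel()
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        a, b = a.ravel(), b.ravel()
        wt = np.exp(-((v[a] - v[b]) ** 2) / (2 * sigma ** 2))
        W[a, b] = wt
        W[b, a] = wt
    d = W.sum(axis=1)
    L = np.eye(n) - W / np.sqrt(np.outer(d, d))    # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                    # ascending eigenvalues
    Z = vecs[:, :k]
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    centers = [0]                                  # farthest-point init
    for _ in range(k - 1):
        dist = ((Z[:, None] - Z[centers]) ** 2).sum(-1).min(1)
        centers.append(int(dist.argmax()))
    C = Z[centers]
    for _ in range(iters):
        lab = ((Z[:, None] - C) ** 2).sum(-1).argmin(1)
        C = np.array([Z[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab.reshape(h, w)
```

The dense n-by-n Laplacian limits this sketch to small images; a practical implementation would use sparse matrices and an iterative eigensolver.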
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
Recent progress towards developing a new computational capability for accurate and efficient high-fidelity direct numerical simulation (DNS) and large-eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy, and is implemented in a computationally efficient manner on a modern high-performance computer architecture. An inflow turbulence generation procedure based on a linear forcing approach has been incorporated in this framework, and DNS has been conducted to study the effect of inflow turbulence on the suction-side separation bubble in low-pressure turbine (LPT) cascades. The T106 series of airfoil cascades in both lightly loaded (T106A) and highly loaded (T106C) configurations, at exit isentropic Reynolds numbers of 60,000 and 80,000, respectively, are considered. The numerical simulations are performed using 8th-order accurate spatial and 4th-order accurate temporal discretization. The changes in separation bubble topology due to elevated inflow turbulence are captured by the present method and the physical mechanisms leading to the changes are explained. The present results are in good agreement with prior numerical simulations, but some expected discrepancies with the experimental data for the T106C case are noted and discussed.
Maria Mallén-Alberdi
2016-03-01
Impedance-based biosensors for bacterial detection offer a rapid and cost-effective alternative to conventional techniques that are time-consuming and require specialized equipment and trained users. In this work, a new bacteria detection scheme is presented based on impedance measurements with antibody-modified polysilicon interdigitated electrodes (IDEs; 3 μm pitch). The detection approach takes advantage of the E. coli structure which, in electrical terms, is constituted by two insulating cell membranes that separate a conductive cytoplasmic medium and a more conductive periplasm. Impedance detection of bacteria is usually analyzed using electrical equivalent circuit models, which show limitations for the interpretation of such a complex cell structure. Here, a differential impedance spectrum representation is used to study the unique fingerprint that arises when bacteria attach to the surface of the IDEs. That fingerprint shows the dual electrical behavior, insulating and conductive, at different frequency ranges. In parallel, finite-element simulations of this system using a three-shell bacteria model are performed to explain these phenomena. Overall, a new approach to detect bacteria is proposed that also makes it possible to differentiate viable bacteria from other components non-specifically attached to the IDE surface by detecting their spectral fingerprints alone. Keywords: Impedance spectroscopy, Bacterial detection, Interdigitated electrodes, Label-free detection, Immuno-detection, E. coli O157:H7
Sedov, A. V.; Kalinchuk, V. V.; Bocharova, O. V.
2018-01-01
The evaluation of static stresses and the strength of units and components is a crucial task for increasing reliability in the operation of vehicles and equipment and for preventing emergencies, especially in structures made of metal and composite materials. At the stage of creation and commissioning of structures, diagnostic control methods are widely used to control the manufacturing quality of individual elements and components; these include acoustic, ultrasonic, X-ray and radiation methods, among others. Using these methods to control the residual life and the degree of static stress of units and parts during operation is fraught with great difficulties, both methodological and instrumental. In this paper, the authors propose an effective approach for the operative control of the degree of static stress of units and parts of mechanical structures in working condition, based on recording the change in the surface-wave properties of a system consisting of a sensor and a controlled medium (unit, part). The proposed approach to low-frequency diagnostics of static stresses presupposes a new adaptive-spectral analysis of a surface wave created by an external action (impact). This approach makes it possible to estimate implicit stresses of structures experimentally.
Finite element meshing approached as a global minimization process
WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.; LEUNG,VITUS J.
2000-03-01
The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to automating the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical-analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. Once the particles are connected, a mesh can easily be resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is an issue that is critical for most engineers generating meshes; it was not for this project. The primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries. Unfortunately, only simple geometries were tested
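The charged-particle analogy described above can be illustrated with a minimal two-dimensional sketch (all parameters here are illustrative, not from the paper): particles with pairwise 1/r repulsion, standing in for element duals, are relaxed by gradient descent inside a unit-square domain; the attractive and alignment terms of the full functional are omitted for brevity.

```python
import numpy as np

def pairwise_energy(pts):
    """Total 1/r repulsive potential over all particle pairs."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return float(np.sum(1.0 / d[iu]))

def relax(pts, steps=300, lr=1e-4, max_step=0.05):
    """Gradient descent on the repulsive functional; the unit square
    stands in for the charged domain, so positions are clamped to it."""
    pts, best = pts.copy(), pts.copy()
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)
        force = np.sum(diff / d[..., None] ** 3, axis=1)  # sum of 1/r^2 pushes
        pts = np.clip(pts + np.clip(lr * force, -max_step, max_step), 0.0, 1.0)
        if pairwise_energy(pts) < pairwise_energy(best):
            best = pts.copy()
    return best

rng = np.random.default_rng(0)
p0 = 0.4 + 0.2 * rng.random((16, 2))   # start clustered mid-domain
p1 = relax(p0)                          # spreads out, lowering the potential
```

Connecting the relaxed particles (e.g. through Delaunay-type duals) would be the next step toward an actual mesh; that part is beyond this sketch.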
Ying, Yingzi; Bean, Christopher J.
2014-05-01
Ocean-generated microseisms are faint Earth tremors associated with the interaction between ocean water waves and the solid Earth. The microseism noise recorded as low-frequency ground vibrations by seismometers contains significant information about the Earth's interior and the sea states. In this work, we first aim to investigate the forward propagation of microseisms in a deep-ocean environment. We employ a 3D North-East Atlantic geological model and simulate wave propagation in a coupled fluid-solid domain, using a spectral-element method. The aim is to investigate the effects of the continental shelf on microseism wave propagation. A second goal of this work is to perform noise simulation to calculate synthetic ensemble-averaged cross-correlations of microseism noise signals with a time-reversal method. The algorithm reduces computational cost by avoiding time stacking and yields cross-correlations between the designated master station and all the remaining slave stations at one time. The origins of microseisms are non-uniform, so we also test the effect of the simulated noise source distribution on the determined cross-correlations.
Delange, Pascal; Backes, Steffen; van Roekeghem, Ambroise; Pourovskii, Leonid; Jiang, Hong; Biermann, Silke
2018-04-01
The most intriguing properties of emergent materials are typically consequences of highly correlated quantum states of their electronic degrees of freedom. Describing those materials from first principles remains a challenge for modern condensed matter theory. Here, we review, apply and discuss novel approaches to spectral properties of correlated electron materials, assessing current day predictive capabilities of electronic structure calculations. In particular, we focus on the recent Screened Exchange Dynamical Mean-Field Theory scheme and its relation to generalized Kohn-Sham Theory. These concepts are illustrated on the transition metal pnictide BaCo2As2 and elemental zinc and cadmium.
Spectral unmixing of urban land cover using a generic library approach
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
A New Spectral Shape-Based Record Selection Approach Using Np and Genetic Algorithms
Edén Bojórquez
2013-01-01
With the aim of improving code-based real-record selection criteria, an approach inspired by a proxy parameter of spectral shape, named Np, is analyzed. The procedure is based on several objectives aimed at minimizing the record-to-record variability of the ground motions selected for seismic structural assessment. In order to select the best set of ground motion records to be used as input for nonlinear dynamic analysis, an optimization approach is applied using genetic algorithms focused on finding the set of records most compatible with a target spectrum and target Np values. The results of the new Np-based approach suggest that the real accelerograms obtained with this procedure reduce the scatter of the response spectra compared with the traditional approach; furthermore, the mean spectrum of the set of records is very similar to the target seismic design spectrum in the period range of interest, and at the same time, similar Np values are obtained for the selected records and the target spectrum.
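As a rough illustration of the selection idea (a toy sketch, not the paper's algorithm; the Np criterion and real accelerograms are replaced by synthetic spectra), a small genetic algorithm can search for the subset of library records whose mean spectrum best matches a target:

```python
import numpy as np

rng = np.random.default_rng(1)
library = rng.random((50, 20))        # 50 candidate spectra, 20 periods (synthetic)
target = library[:7].mean(axis=0)     # target design spectrum (synthetic)

def misfit(idx):
    """Squared mismatch between the set's mean spectrum and the target."""
    return float(np.sum((library[list(idx)].mean(axis=0) - target) ** 2))

def evolve(n_select=7, pop=30, gens=60):
    """Mutation-only GA over index sets: keep the fittest half, mutate each."""
    population = [set(map(int, rng.choice(50, n_select, replace=False)))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=misfit)
        survivors = population[: pop // 2]
        children = []
        for s in survivors:
            child = set(s)
            child.discard(int(rng.choice(sorted(child))))  # drop one record
            while len(child) < n_select:                    # add replacements
                child.add(int(rng.integers(50)))
            children.append(child)
        population = survivors + children
    return min(population, key=misfit)

best = evolve()
```

A real implementation would add crossover and score candidates on both spectral-shape compatibility and Np proximity, as the abstract describes.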
Alphavirus replicon approach to promoterless analysis of IRES elements.
Kamrud, K I; Custer, M; Dudek, J M; Owens, G; Alterson, K D; Lee, J S; Groebner, J L; Smith, J F
2007-04-10
Here we describe a system for promoterless analysis of putative internal ribosome entry site (IRES) elements using an alphavirus (family Togaviridae) replicon vector. The system uses the alphavirus subgenomic promoter to produce transcripts that, when modified to contain a spacer region upstream of an IRES element, allow analysis of cap-independent translation of genes of interest (GOI). If the IRES element is removed, translation of the subgenomic transcript can be reduced >95% compared to the same transcript containing a functional IRES element. Alphavirus replicons, used in this manner, offer an alternative to standard dicistronic DNA vectors or in vitro translation systems currently used to analyze putative IRES elements. In addition, protein expression levels varied depending on the spacer element located upstream of each IRES. The ability to modulate the level of expression from alphavirus vectors should extend the utility of these vectors in vaccine development.
A brute-force spectral approach for wave estimation using measured vessel motions
Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.
2018-01-01
The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...
Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach
Amin, Osama
2015-04-23
In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
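The scalarized form of the EE-SE trade-off can be sketched as a simplified stand-in for the paper's formulation: for a weight alpha, maximize alpha*rate - (1-alpha)*power per subcarrier, whose stationarity condition gives a water-filling-like closed form. Perfect channel knowledge is assumed here, and the power budget is enforced by naive proportional scaling rather than exact constrained water-filling.

```python
import numpy as np

def allocate(gains, alpha, p_max):
    """Per-subcarrier power from maximizing
    alpha * sum(log2(1 + p_i * g_i)) - (1 - alpha) * sum(p_i).
    Setting the derivative to zero yields p_i = level - 1/g_i, floored at 0."""
    level = alpha / ((1.0 - alpha) * np.log(2.0))
    p = np.maximum(0.0, level - 1.0 / np.asarray(gains, dtype=float))
    total = p.sum()
    if total > p_max:          # crude budget handling (not exact water-filling)
        p *= p_max / total
    return p

gains = np.array([2.0, 1.0, 0.5, 0.1])         # illustrative channel-to-noise ratios
p_se = allocate(gains, alpha=0.7, p_max=10.0)  # SE-leaning: spends more power
p_ee = allocate(gains, alpha=0.3, p_max=10.0)  # EE-leaning: spends less power
```

Sweeping alpha from 0 to 1 traces out the EE-SE trade-off curve that the trade-off parameter in the abstract controls.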
Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach
Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia
2015-01-01
In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
Analysis of Tube Drawing Process – A Finite Element Approach ...
In this paper the effect of die semi angle on drawing load in cold tube drawing has been investigated numerically using the finite element method. The equation governing the stress distribution was derived and solved using Galerkin finite element method. An isoparametric formulation for the governing equation was utilized ...
Tang, Hong; Lin, Jian-Zhong
2013-01-01
First, an improved anomalous diffraction approximation (ADA) method is presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and the ADA theory, and this method yields a more general expression for calculating the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction with varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, and that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve the spheroid particle size distributions from the spectral extinction data. -- Highlights: ► Improved ADA is presented for calculating the extinction efficiency of spheroids. ► Selection principle about spectral extinction data is developed based on PCA. ► Improved Tikhonov iteration method is proposed to retrieve the spheroid PSD.
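The Tikhonov step of such an inversion can be sketched in its standard zeroth-order (non-iterative) form; the paper's improved iterative variant and the actual ADA extinction kernel are not reproduced here, and the kernel matrix below is a synthetic stand-in.

```python
import numpy as np

def tikhonov(A, b, lam=1e-6):
    """Regularized least squares: minimize ||A f - b||^2 + lam ||f||^2,
    solved via the normal equations (A^T A + lam I) f = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = rng.random((40, 10))     # "extinction kernel": wavelengths x size bins
f_true = rng.random(10)      # "true" size distribution over 10 bins
b = A @ f_true               # noiseless synthetic spectral extinction data
f_est = tikhonov(A, b)
```

With noisy data, lam trades data fit against smoothness of the retrieved distribution, which is why the paper iterates on the regularization.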
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We could then use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 sec accuracy for a realistic 3D Earth model, with a performance of 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that the waveform misfit between observed and theoretical seismograms decreases as the iterations proceed. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
Toxic Elements in Food: Occurrence, Binding, and Reduction Approaches
Hajeb, P.; Sloth, Jens Jørgen; Shakibazadeh, Sh
2014-01-01
Toxic elements such as mercury, arsenic, cadmium, and lead, sometimes called heavy metals, can diminish mental and central nervous system function; elicit damage to blood composition as well as the kidneys, lungs, and liver; and reduce energy levels. Food is considered one of the main routes of their entry into the human body. Numerous studies have been performed to examine the effects of common food processing procedures on the levels of toxic elements in food. While some studies have reported negative effects of processing, several have shown that processing practices may have a positive effect on the reduction of toxic elements in foodstuffs. A number of studies have also introduced protocols and suggested chemical agents that reduce the amount of toxic elements in the final food products. In this review, the reported methods employed for the reduction of toxic elements are discussed with particular...
Sudarmaji; Rudianto, Indra; Eka Nurcahya, Budi
2018-04-01
A strong tectonic earthquake with a magnitude of 5.9 on the Richter scale occurred in Yogyakarta and Central Java on May 26, 2006. The earthquake caused severe damage in Yogyakarta and the southern part of Central Java, Indonesia. Understanding the seismic response to an earthquake, relating ground shaking to the level of building damage, is important. We present numerical modeling of 3D seismic wave propagation around Yogyakarta and the southern part of Central Java using a spectral-element method on an MPI-GPU (Graphics Processing Unit) computer cluster to observe the seismic response due to the earthquake. A homogeneous 3D realistic model is generated with a detailed topography surface. The influences of free-surface topography and layer discontinuity of the 3D model on the seismic response are observed. The seismic wave field is discretized using the spectral-element method, which is solved on a mesh of hexahedral elements adapted to the free-surface topography and the internal discontinuity of the model. To increase data processing capabilities, the simulation is performed on a GPU cluster with an implementation of MPI (Message Passing Interface).
Garg, R; Fahmi, N.; Singh, R.V.
2007-01-01
The Schiff bases, 3-(indolin-2-one)hydrazinecarbothioamide, 3-(indolin-2-one)hydrazinecarboxamide, 5,6-dimethyl-3-(indolin-2-one)hydrazinecarbothioamide, and 5,6-dimethyl-3-(indolin-2-one)hydrazinecarboxamide, have been synthesized by the condensation of 1H-indol-2,3-dione and 5,6-dimethyl-1H-indol-2,3-dione with the corresponding hydrazinecarbothioamide and hydrazinecarboxamide, respectively. The complexes of oxovanadium and the ligands have been characterized by elemental analyses, melting points, conductance measurements, molecular weight determinations, and IR, 1H NMR and UV spectral studies. These studies showed that the ligands coordinate to the oxovanadium in a monobasic bidentate fashion through oxygen or sulfur and the nitrogen donor system. Thus, penta- and hexacoordinated environments around the vanadium atom have been proposed. All the complexes and their parent organic moieties have been screened for their biological activity against several pathogenic fungi and bacteria and were found to possess appreciable fungicidal and bactericidal properties.
Sources of trace elements in total diet. A statistical approach
Aras, N.K.; Chatt, A.
2004-01-01
Sixteen total diet samples were collected from two socioeconomic groups in Turkey by the duplicate portion technique. Samples were homogenized with a titanium-blade homogenizer, freeze dried and analyzed for their minor and trace elements, mostly by neutron activation analysis. Bread and flour samples were also collected from the same regions and analyzed similarly by instrumental neutron activation analysis. Concentrations of more than 25 elements in the total diets, bread and flour, and of fiber and phytate in the total diets have been determined. The daily dietary intakes of these population groups, the probable sources of elements (through correlation coefficients), and enrichment factors have been determined. (author)
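The enrichment-factor calculation mentioned above is conventionally a double normalization of an element against a reference element (often Al, Fe, or Sc) in the sample versus crustal abundances; the concentrations below are purely hypothetical.

```python
def enrichment_factor(c_elem, c_ref, crust_elem, crust_ref):
    """EF = (element/reference in sample) / (element/reference in crust).
    EF near 1 suggests a crustal source; EF >> 1 suggests enrichment."""
    return (c_elem / c_ref) / (crust_elem / crust_ref)

# hypothetical numbers: 2 mg/kg element vs 100 mg/kg reference in the diet,
# 0.5 mg/kg vs 500 mg/kg in average crust
ef = enrichment_factor(2.0, 100.0, 0.5, 500.0)   # -> 20.0, strongly enriched
```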
Multimedia Based on Scientific Approach for Periodic System of Element
Sari, S.; Aryana, D. M.; Subarkah, C. Z.; Ramdhani, M. A.
2018-01-01
This study aims to describe the application of interactive multimedia to the concept of the periodic system of elements. The study was conducted using the one-shot case study design. The subjects in this study were 35 high school students of class XI IPA. Results showed that the stages of observing, questioning, data collecting (experimenting), and communicating were all considered very good. This shows that multimedia can assist students in explaining the development of the periodic system of elements, ranging from Döbereiner's triads, Newlands' law of octaves, and Mendeleev's table to the modern periodic table, as well as the atomic radius, ionization energy, and electronegativity of an element in the periodic system.
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
Espath, L. F R; Braun, Alexandre Luis; Awruch, Armando Miguel; Dalcin, Lisandro
2015-01-01
A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.
Espath, L. F R
2015-02-03
A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.
The "Critical" Elements of Illness Management and Recovery: Comparing Methodological Approaches.
McGuire, Alan B; Luther, Lauren; White, Dominique; White, Laura M; McGrew, John; Salyers, Michelle P
2016-01-01
This study examined three methodological approaches to defining the critical elements of Illness Management and Recovery (IMR), a curriculum-based approach to recovery. Sixty-seven IMR experts rated the criticality of 16 IMR elements on three dimensions: defining, essential, and impactful. Three elements (Recovery Orientation, Goal Setting and Follow-up, and IMR Curriculum) met all criteria for essential and defining and all but the most stringent criteria for impactful. Practitioners should consider competence in these areas as preeminent. The remaining 13 elements met varying criteria for essential and impactful. Findings suggest that criticality is a multifaceted construct, necessitating judgments about model elements across different criticality dimensions.
MILP approaches to sustainable production and distribution of meal elements
Akkerman, Renzo; Wang, Yang; Grunow, Martin
2009-01-01
This paper studies the production and distribution system for professionally prepared meals, in which a new innovative concept is applied. The concept aims to improve the sustainability of the system by distributing meal elements super-chilled in the conventional cold chain. Here, sustainability...
Factor analytical approaches for evaluating groundwater trace element chemistry data
Farnham, I.M.; Johannesson, K.H.; Singh, A.K.; Hodge, V.F.; Stetzenbach, K.J.
2003-01-01
The multivariate statistical techniques principal component analysis (PCA), Q-mode factor analysis (QFA), and correspondence analysis (CA) were applied to a dataset containing trace element concentrations in groundwater samples collected from a number of wells located downgradient from the potential nuclear waste repository at Yucca Mountain, Nevada. PCA results reflect the similarities in the concentrations of trace elements in the water samples resulting from different geochemical processes. QFA results reflect similarities in the trace element compositions, whereas CA reflects similarities in the trace elements that are dominant in the waters relative to all other groundwater samples included in the dataset. These differences are mainly due to the ways in which data are preprocessed by each of the three methods. The highly concentrated, and thus possibly more mature (i.e. older), groundwaters are separated from the more dilute waters using principal component 1 (PC 1). PC 2, as well as dimension 1 of the CA results, describe differences in the trace element chemistry of the groundwaters resulting from the different aquifer materials through which they have flowed. Groundwaters thought to be representative of those flowing through an aquifer composed dominantly of volcanic rocks are characterized by elevated concentrations of Li, Be, Ge, Rb, Cs, and Ba, whereas those associated with an aquifer dominated by carbonate rocks exhibit greater concentrations of Ti, Ni, Sr, Rh, and Bi. PC 3, and to a lesser extent dimension 2 of the CA results, show a strong monotonic relationship with the percentage of As(III) in the groundwater suggesting that these multivariate statistical results reflect, in a qualitative sense, the oxidizing/reducing conditions within the groundwater. Groundwaters that are relatively more reducing exhibit greater concentrations of Mn, Cs, Co, Ba, Rb, and Be, and those that are more oxidizing are characterized by greater concentrations of V, Cr, Ga
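A minimal PCA of the kind applied to such sample-by-element concentration matrices can be written via the SVD of the centered data; the "groundwater" matrix below is a synthetic stand-in with one dominant chemical trend, so PC 1 captures most of the variance, loosely mirroring the dilute-versus-concentrated separation described above.

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD of the column-centered matrix: rows are water samples,
    columns are trace-element concentrations."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]            # sample coordinates on PC 1..k
    explained = s**2 / np.sum(s**2)      # variance fraction per component
    return scores, Vt[:k], explained[:k]

rng = np.random.default_rng(0)
# 30 samples x 8 elements: a shared "maturity" trend plus small noise
X = rng.random((30, 1)) @ rng.random((1, 8)) + 0.05 * rng.random((30, 8))
scores, loadings, explained = pca(X)
```

Q-mode factor analysis and correspondence analysis differ mainly in how X is normalized before this decomposition, which is exactly the preprocessing distinction the abstract highlights.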
Rogge, Derek; Bachmann, Martin; Rivard, Benoit
2014-01-01
Spectral decorrelation (transformation) methods have long been used in remote sensing. Transformation of the image data onto eigenvectors that comprise physically meaningful spectral properties (signal) can be used to reduce the dimensionality of hyperspectral images, as the number of spectrally distinct signal sources composing a given hyperspectral scene is generally much less than the number of spectral bands. Determining eigenvectors dominated by signal variance as opposed to noise is a difficult task. Problems also arise in using these transformations on large images, multiple flight... and spectral subsampling to the data, which is accomplished by deriving a limited set of eigenvectors for spatially contiguous subsets. These subset eigenvectors are compiled together to form a new noise-reduced data set, which is subsequently used to derive a set of global orthogonal eigenvectors. Data from...
A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring
Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia
2014-05-01
One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practices, these substances were provided to the soils through the use of chemical products (inorganic/organic fertilizers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as: water pollution and contamination, fertilizer dependency, soil acidification, trace mineral depletion, over-fertilization, high energy consumption, contribution to climate change, impacts on mycorrhizas, and lack of long-term sustainability. For this reason, the agricultural market is more and more interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilizers, there is an emerging interest in digestate, a sub-product resulting from anaerobic digestion (AD) processes. Several studies confirm the high quality of digestate when used as an organic fertilizer and soil improver/conditioner. Digestate, in fact, is somewhat similar to compost: AD converts a major part of organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterization and monitoring. This approach is aimed at utilizing this "waste product" as a valuable organic fertilizer and soil conditioner, from a reduced-impact and "ad hoc" soil fertilisation perspective. Furthermore, the possibility to simultaneously utilize the HSI approach to realize a real-time physical-chemical characterisation of agricultural soils (i.e. nitrogen, phosphorus, etc., detection) could...
Birge, Jonathan R.; Kaertner, Franz X.
2008-01-01
We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.
Spacetime Discontinuous Galerkin FEM: Spectral Response
Abedi, R; Omidi, O; Clarke, P L
2014-01-01
Materials in nature exhibit characteristic spectral shapes in their material properties. Since the first successful experimental demonstrations in 2000, metamaterials have provided a means to engineer materials with desired spectral shapes for their material properties. Computational tools are employed in two different aspects of metamaterial modeling: 1. microscale unit-cell analysis to derive, and possibly optimize, a material's spectral response; 2. macroscale analysis of their interaction with conventional materials. We compare the Time-Domain (TD) and Frequency-Domain (FD) approaches for metamaterial applications. Finally, we discuss the advantages of the TD Spacetime Discontinuous Galerkin finite element method (FEM) for spectral analysis of metamaterials.
Song, Yeo-Ul; Youn, Sung-Kie; Park, K. C.
2017-10-01
A method for three-dimensional non-matching interface treatment with a virtual gap element is developed. When partitioned structures contain curved interfaces and have different brick meshes, the discretized models have gaps along the interfaces. As these gaps introduce unexpected errors, special treatment is required to handle them. In the present work, a virtual gap element is introduced to link the frame and surface domain nodes in the framework of the mortar method. Since the surface of a hexahedron element is quadrilateral, the gap element is pyramidal. The pyramidal gap element consists of four domain nodes and one frame node. A zero-strain condition in the gap element is utilized for the interpolation of frame nodes in terms of the domain nodes. This approach is taken to satisfy momentum and energy conservation. The present method is applicable not only to curved interfaces with gaps, but also to flat interfaces in three dimensions. Several numerical examples demonstrate the effectiveness and accuracy of the proposed method.
An integrated approach to fingerprint indexing using spectral clustering based on minutiae points
Mngenge, NA
2015-07-01
Full Text Available this problem by constructing a rotational, scale and translation (RST) invariant fingerprint descriptor based on minutiae points. The proposed RST invariant descriptor dimensions are then reduced and passed to a spectral clustering algorithm which automatically...
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least-squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape-tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transition region between noise and signal, preserves more image-edge detail and suppresses pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least-squares method is used to remove spectral noise, along with any artificial noise introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
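The two-stage idea can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: a smooth, first-order-differentiable shrinkage function (a simplified stand-in for the paper's BayesShrink-tuned wavelet threshold, applied here to raw band images rather than wavelet detail coefficients) followed by a cubic Savitzky-Golay filter along the spectral axis. All parameter values are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_threshold(w, t):
    """Smooth, first-order-differentiable shrinkage function (a simplified
    stand-in for the paper's shape-tuned threshold): values well above t
    pass almost unchanged, small ones are attenuated, with no hard cut-off."""
    return w ** 3 / (w ** 2 + t ** 2)

def denoise_cube(cube, t=0.1, window=7, polyorder=3):
    """Two-stage spatial-spectral denoising sketch for a (bands, h, w) cube."""
    # Stage 1 (spatial, simplified): shrink small-amplitude fluctuations
    # band by band; the paper applies this to stationary-wavelet detail
    # coefficients with BayesShrink thresholds rather than raw pixels.
    spatial = np.stack([smooth_threshold(band, t) for band in cube])
    # Stage 2 (spectral): cubic Savitzky-Golay least-squares filter
    # along the band axis.
    return savgol_filter(spatial, window_length=window, polyorder=polyorder, axis=0)
```

The shrinkage function here is the simplest C1 choice with the qualitative behaviour described above; the paper's tuned threshold differs in detail.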
Pitton, Giuseppe; Heltai, Luca
2018-01-01
Non Uniform Rational B-spline (NURBS) patches are a standard way to describe complex geometries in Computer Aided Design tools, and have gained a lot of popularity in recent years also for the approximation of partial differential equations, via the Isogeometric Analysis (IGA) paradigm. However, spectral accuracy in IGA is limited to relatively small NURBS patch degrees (roughly p
Nallala, Jayakrupakar; Gobinet, Cyril; Diebold, Marie-Danièle; Untereiner, Valérie; Bouché, Olivier; Manfait, Michel; Sockalingum, Ganesh Dhruvananda; Piot, Olivier
2012-11-01
Innovative diagnostic methods that could complement conventional histopathology for cancer diagnosis are urgently needed. In this perspective, we propose a new concept based on spectral histopathology, using IR spectral micro-imaging applied directly to paraffinized colon tissue arrays stabilized in an agarose matrix without any chemical pre-treatment. In order to correct spectral interference from paraffin and agarose, a mathematical procedure is implemented. The corrected spectral images are then processed by a multivariate clustering method to automatically recover, on the basis of their intrinsic molecular composition, the main histological classes of normal and tumoral colon tissue. The spectral signatures from the different histological classes of the colonic tissues are analyzed using statistical methods (the Kruskal-Wallis test and principal component analysis) to identify the most discriminant IR features. These features allow characterization of some of the biomolecular alterations associated with malignancy. Thus, via a single analysis, in a label-free and nondestructive manner, the main changes in nucleotide, carbohydrate, and collagen features can be identified simultaneously between the compared normal and cancerous tissues. The present study demonstrates the potential of IR spectral imaging as a modern tool, complementary to conventional histopathology, for objective cancer diagnosis directly from paraffin-embedded tissue arrays.
Lang, Harold R.
1991-01-01
A new approach to stratigraphic analysis is described which uses photogeologic and spectral interpretation of multispectral remote sensing data combined with topographic information to determine the attitude, thickness, and lithology of strata exposed at the surface. The new stratigraphic procedure is illustrated by examples in the literature. The published results demonstrate the potential of spectral stratigraphy for mapping strata, determining dip and strike, measuring and correlating stratigraphic sequences, defining lithofacies, mapping biofacies, and interpreting geological structures.
Martin, Roland; Chevrot, Sébastien; Komatitsch, Dimitri; Seoane, Lucia; Spangenberg, Hannah; Wang, Yi; Dufréchou, Grégory; Bonvalot, Sylvain; Bruinsma, Sean
2017-04-01
We image the internal density structure of the Pyrenees by inverting gravity data using an a priori density model derived by scaling a Vp model obtained by full waveform inversion of teleseismic P-waves. Gravity anomalies are computed via a 3-D high-order finite-element integration in the same high-order spectral-element grid as the one used to solve the wave equation and thus to obtain the velocity model. The curvature of the Earth and surface topography are taken into account in order to obtain a density model as accurate as possible. The method is validated through comparisons with exact semi-analytical solutions. We show that the spectral-element method drastically accelerates the computations when compared to other more classical methods. Different scaling relations between compressional velocity and density are tested, and the Nafe-Drake relation is the one that leads to the best agreement between computed and observed gravity anomalies. Gravity data inversion is then performed and the results allow us to put more constraints on the density structure of the shallow crust and on the deep architecture of the mountain range.
Cumulative damage fraction design approach for LMFBR metallic fuel elements
Johnson, D.L.; Einziger, R.E.; Huchman, G.D.
1979-01-01
The cumulative damage fraction (CDF) analytical technique is currently being used to analyze the performance of metallic fuel elements for proliferation-resistant LMFBRs. In this technique, the fraction of the total time to rupture of the cladding is calculated as a function of the thermal, stress, and neutronic history. Cladding breach or rupture is implied by CDF = 1. Cladding wastage, caused by interactions with both the fuel and sodium coolant, is assumed to uniformly thin the cladding wall. The irradiation experience of the EBR-II Mark-II driver fuel with solution-annealed Type 316 stainless steel cladding provides an excellent database for testing the applicability of the CDF technique to metallic fuel. The advanced metal fuels being considered for use in LMFBRs are U-15Pu-10Zr, Th-20Pu and Th-20U (compositions are given in weight percent). The two cladding alloys being considered are Type 316 stainless steel and a titanium-stabilized Type 316 stainless steel. Both are in the cold-worked condition. The CDF technique was applied to these fuels and claddings under the assumed steady-state operating conditions
Multi element synthetic aperture transmission using a frequency division approach
Gran, Fredrik; Jensen, Jørgen Arendt
2003-01-01
transmitted into the tissue is low. This paper describes a novel method in which the available spectrum is divided into 2N overlapping subbands. This will assure a smooth broadband high resolution spectrum when combined. The signals are grouped into two subsets in which all signals are fully orthogonal...... can therefore be used for flow imaging, unlike with Hadamard and Golay coding. The frequency division approach increases the SNR by a factor of N² compared to conventional pulsed synthetic aperture imaging, provided that N transmission centers are used. Simulations and phantom measurements
Okonda, J.J.
2015-01-01
Energy dispersive X-ray fluorescence (EDXRF) spectroscopy is an analytical method for the identification and quantification of elements in materials by measurement of their spectral energy and intensity. The EDXRFS spectroscopic technique involves simultaneous non-invasive acquisition of both fluorescence and scatter spectra from samples for quantitative determination of trace elemental content in complex-matrix materials. The objective is to develop a chemometric-aided EDXRFS method for rapid diagnosis of cancer and its severity (staging), based on analysis of trace elements (Cu, Zn, Fe, Se and Mn), their speciation, and multivariate alterations of these elements in cancerous body tissue samples as cancer biomarkers. The quest for early diagnosis of cancer is motivated by the fact that early intervention translates into a higher survival rate and better quality of life. The chemometric-aided EDXRFS cancer diagnostic model has been evaluated as a direct and rapid alternative, superior to the traditional quantitative methods used in XRF such as the FP method. PCA results of cultured samples indicate that it is possible to characterize cancer at early and late stages of development based on trace elemental profiles
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, with direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to
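The global-exploration step can be sketched with SciPy's basin-hopping optimizer fitting a noise-free synthetic omega-square-type spectrum. The spectral model, parameter values, and misfit below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import basinhopping

def brune_spectrum(f, m0, fc, gamma=2.0):
    """Generalized Brune (1970) displacement amplitude spectrum."""
    return m0 / (1.0 + (f / fc) ** gamma)

# Synthetic "observed" spectrum with assumed true parameters (no noise).
f = np.logspace(-1, 2, 200)                       # frequency, Hz
obs = brune_spectrum(f, m0=1e17, fc=2.0, gamma=2.2)

def misfit(x):
    """L2 misfit in log space; x = (log10 M0, log10 fc, gamma)."""
    pred = brune_spectrum(f, 10.0 ** x[0], 10.0 ** x[1], x[2])
    return np.sum((np.log10(pred) - np.log10(obs)) ** 2)

# Global exploration: random jumps combined with deterministic local
# minimization, in the spirit of the basin-hopping strategy described above.
res = basinhopping(misfit, x0=[16.0, 0.0, 2.0], niter=50, seed=1)
m0_est, fc_est, gamma_est = 10.0 ** res.x[0], 10.0 ** res.x[1], res.x[2]
```

Working in log10 of the moment and corner frequency keeps the parameters on comparable scales, which helps the local minimizer inside each hop.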
A FINITE-ELEMENTS APPROACH TO THE STUDY OF FUNCTIONAL ARCHITECTURE IN SKELETAL-MUSCLE
OTTEN, E; HULLIGER, M
1994-01-01
A mathematical model that simulates the mechanical processes inside a skeletal muscle under various conditions of muscle recruitment was formulated. The model is based on the finite-elements approach and simulates both contractile and passive elastic elements. Apart from the classic strategy of
Restraining approach for the spurious kinematic modes in hybrid equilibrium element
Parrinello, F.
2013-10-01
The present paper proposes a rigorous approach to the elimination of spurious kinematic modes in hybrid equilibrium elements, for three well-known mesh patches. The approach is based on identifying the dependent equations in the set of inter-element and boundary equilibrium equations of the sides involved in the spurious kinematic mode. The kinematic variables related to the dependent equations are then reciprocally constrained and, by application of the master-slave elimination method, the set of inter-element equilibrium equations is reduced to full rank. The elastic solutions produced by the proposed approach satisfy the homogeneous, inter-element, and boundary equilibrium equations. The hybrid stress formulation is developed in a rigorous mathematical setting. The results of linear elastic analysis obtained by the proposed approach and by the classical displacement-based method are compared for several structural examples.
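The master-slave elimination step itself can be illustrated on a toy linear system: a transformation matrix maps the independent (master) unknowns to the full set, and the projected system is full-rank. The matrices and constraint below are illustrative only, not the hybrid equilibrium formulation.

```python
import numpy as np

# Master-slave elimination sketch: enforce the linear constraint u2 = u1
# (slave DOF u2 tied to master u1) on a toy 3-DOF system K u = f.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([1.0, 0.0, 2.0])

# Transformation u = T u_m maps the master DOFs u_m = (u1, u3)
# to the full vector (u1, u2, u3) with u2 = u1.
T = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

K_red = T.T @ K @ T        # reduced, full-rank system
f_red = T.T @ f
u_m = np.linalg.solve(K_red, f_red)
u = T @ u_m                # full solution; satisfies u[0] == u[1] exactly
```

The same projection pattern applies when the constraints come from dependent equilibrium equations rather than from a simple DOF tie.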
Spectral Approach to Derive the Representation Formulae for Solutions of the Wave Equation
Gusein Sh. Guseinov
2012-01-01
Full Text Available Using spectral properties of the Laplace operator and some structural formula for rapidly decreasing functions of the Laplace operator, we offer a novel method to derive explicit formulae for solutions to the Cauchy problem for classical wave equation in arbitrary dimensions. Among them are the well-known d'Alembert, Poisson, and Kirchhoff representation formulae in low space dimensions.
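For reference, the one-dimensional member of the family of representation formulae recalled above is d'Alembert's:

```latex
% d'Alembert's formula for u_{tt} = c^2 u_{xx},
% with u(x,0) = f(x) and u_t(x,0) = g(x):
u(x,t) = \frac{1}{2}\bigl[f(x+ct) + f(x-ct)\bigr]
       + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds
```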
A new approach to passivity preserving model reduction : the dominant spectral zero method
Ionutiu, R.; Rommes, J.; Antoulas, A.C.; Roos, J.; Costa, L.R.J.
2010-01-01
A new model reduction method for circuit simulation is presented, which preserves passivity by interpolating dominant spectral zeros. These are computed as poles of an associated Hamiltonian system, using an iterative solver: the subspace accelerated dominant pole algorithm (SADPA). Based on a
Unstructured grids and an element based conservative approach for a black-oil reservoir simulation
Nogueira, Regis Lopes; Fernandes, Bruno Ramon Batista [Federal University of Ceara, Fortaleza, CE (Brazil). Dept. of Chemical Engineering; Araujo, Andre Luiz de Souza [Federal Institution of Education, Science and Technology of Ceara - IFCE, Fortaleza (Brazil). Industry Department], e-mail: andre@ifce.edu.br; Marcondes, Francisco [Federal University of Ceara, Fortaleza, CE (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br
2010-07-01
Unstructured meshes represent an advance in modeling the most important features of a reservoir, such as discrete fractures, faults, and irregular boundaries. Among the several methodologies available, the Element-based Finite Volume Method (EbFVM), in conjunction with unstructured meshes, deserves particular attention. In this approach, the reservoir, for 2D domains, is discretized using a mixed two-dimensional mesh of quadrilateral and triangular elements. After the initial discretization step, each element is divided into sub-elements, and the mass balance for each component is developed for each sub-element. The equations for each control volume, using a cell-vertex construction, are formulated through the contributions of the different neighbouring elements. This paper presents an investigation of an element-based approach using the black-oil model formulated in terms of pressure and global mass fractions. In this approach, even when the gas phase is entirely dissolved in the oil phase, the global mass fraction of gas is different from zero; therefore, no additional numerical procedure is necessary to treat the appearance or disappearance of the gas phase. The approach is applied to multiphase flows involving oil, gas, and water. The mass balance equations in terms of the global mass fractions of oil, gas, and water are discretized through the EbFVM and linearized by Newton's method. The results are presented in terms of volumetric rates of oil, gas, and water, and phase saturations. (author)
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis: the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
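A minimal sketch of the penalised least-squares idea, under the assumption of a first-difference spatial regularizer on a 1-D chain of voxels (the paper's actual criterion and solver may differ). The tissue signatures, sizes, and regularization weight are illustrative.

```python
import numpy as np

# Each voxel's multi-contrast signal is modeled as y_v = A @ p_v, where the
# columns of A are (assumed) tissue signatures and p_v the proportions.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.2],     # contrast 1 response of tissues 1 and 2
              [0.3, 1.0],     # contrast 2
              [0.6, 0.5]])    # contrast 3
n_vox = 50
p_true = np.stack([np.linspace(0, 1, n_vox), np.linspace(1, 0, n_vox)])
Y = A @ p_true + 0.01 * rng.standard_normal((3, n_vox))

lam = 0.5
D = np.diff(np.eye(n_vox), axis=0)       # first-difference operator (spatial)
# Normal equations of min_P ||Y - A P||_F^2 + lam ||P D^T||_F^2:
#   (A^T A) P + lam P (D^T D) = A^T Y,
# a Sylvester equation; solve it mode-by-mode via the eigendecomposition
# of the penalty matrix D^T D.
w, V = np.linalg.eigh(D.T @ D)
G = A.T @ Y @ V
P = np.empty_like(G)
for k in range(n_vox):
    P[:, k] = np.linalg.solve(A.T @ A + lam * w[k] * np.eye(2), G[:, k])
P = P @ V.T                              # back to the voxel basis
```

The eigen-decomposition trick turns the coupled problem into independent 2x2 solves, one per spatial mode, which is what makes the regularized estimate cheap here.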
Boccaleri, Enrico; Arrais, Aldo; Frache, Alberto; Gianelli, Walter; Fino, Paolo; Camino, Giovanni
2006-01-01
A wide series of carbon nanostructures (ranging from fullerenes, through carbon nanotubes, up to carbon nanofibers) promises to change several fields of materials science, but real industrial implementation depends on their availability at reasonable prices with affordable and reproducible degrees of purity. In this study we propose simple instrumental approaches to efficiently characterize different commercial samples, particularly for the qualitative evaluation of impurities, the discrimination of their respective spectral features and, when possible, their quantitative determination. We critically discuss the information that researchers in the field of nanocomposite technology can obtain to this end from spectral techniques such as Raman and FT-IR spectroscopy, thermogravimetric analysis, mass-spectrometry-hyphenated thermogravimetry, X-ray diffraction, and energy dispersive spectroscopy. All of these can be helpful, in applied research on materials science, for fast and reliable monitoring of the actual purity of carbon products in both commercial and laboratory-produced samples as well as in composite materials
Dahlberg, Peter D; Boughter, Christopher T; Faruk, Nabil F; Hong, Lu; Koh, Young Hoon; Reyer, Matthew A; Shaiber, Alon; Sherani, Aiman; Zhang, Jiacheng; Jureller, Justin E; Hammond, Adam T
2016-11-01
A standard wide field inverted microscope was converted to a spatially selective spectrally resolved microscope through the addition of a polarizing beam splitter, a pair of polarizers, an amplitude-mode liquid crystal-spatial light modulator, and a USB spectrometer. The instrument is capable of simultaneously imaging and acquiring spectra over user defined regions of interest. The microscope can also be operated in a bright-field mode to acquire absorption spectra of micron scale objects. The utility of the instrument is demonstrated on three different samples. First, the instrument is used to resolve three differently labeled fluorescent beads in vitro. Second, the instrument is used to recover time dependent bleaching dynamics that have distinct spectral changes in the cyanobacteria, Synechococcus leopoliensis UTEX 625. Lastly, the technique is used to acquire the absorption spectra of CH3NH3PbBr3 perovskites and measure differences between nanocrystal films and micron scale crystals.
Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan
2011-01-01
The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.
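The kernel decomposition itself can be sketched with a toy spectral reflectance model and finite-difference kernels. Everything below (the functional forms, parameter names, and values) is an illustrative assumption, not a CERES algorithm.

```python
import numpy as np

wav = np.linspace(0.4, 2.4, 50)               # wavelength, micrometres

def reflectance(cloud_fraction, tau, snow):
    """Toy top-of-atmosphere spectral reflectance (illustrative only)."""
    cloud = cloud_fraction * (1.0 - np.exp(-tau * (0.5 + 0.5 / wav)))
    surface = (1.0 - cloud_fraction) * snow * np.exp(-(wav - 0.5) ** 2)
    return cloud + surface

base = dict(cloud_fraction=0.6, tau=5.0, snow=0.3)

def kernel(name, h=1e-4):
    """Spectral kernel dR/dx_i for parameter `name`, by central differences."""
    up, dn = dict(base), dict(base)
    up[name] += h
    dn[name] -= h
    return (reflectance(**up) - reflectance(**dn)) / (2.0 * h)

# Kernel decomposition of the response to a small joint perturbation:
dx = dict(cloud_fraction=0.01, tau=0.1, snow=-0.02)
linear = sum(kernel(k) * dx[k] for k in dx)
direct = reflectance(**{k: base[k] + dx[k] for k in base}) - reflectance(**base)
```

Comparing `linear` with `direct` is exactly the linearity test mentioned in the abstract: the two agree when the perturbations are small enough for the kernels to be valid.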
Measurement of high-temperature spectral emissivity using integral blackbody approach
Pan, Yijie; Dong, Wei; Lin, Hong; Yuan, Zundong; Bloembergen, Pieter
2016-11-01
Spectral emissivity is one of the most critical thermophysical properties of a material for thermal design and analysis, and normal spectral emissivity is especially important in traditional radiation thermometry. We developed a prototype instrument, based upon an integral blackbody method, to measure a material's spectral emissivity at elevated temperatures. The system was implemented with an optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer, and an in-house-designed synchronization circuit. A sample was placed in a crucible at the bottom of the blackbody furnace, so that the sample and the tube formed a simulated reference blackbody with an effective total emissivity greater than 0.985. During the measurement, a pneumatic cylinder pushed a graphite rod, and thereby the sample crucible, to the cold opening within hundreds of microseconds. The linear pyrometer monitored the brightness temperature of the sample surface, and the corresponding opto-converted voltage was recorded by a digital multimeter. To evaluate the temperature drop of the sample during the pushing process, a physical model was proposed: the tube was discretized into several isothermal cylindrical rings, the temperature of each ring was measured, and view factors between the sample and the rings were utilized. The actual surface temperature of the sample at the end opening was thereby obtained. From the measured voltage signal and the calculated actual temperature, the normal spectral emissivity at that temperature was obtained. A graphite sample at 1300°C was measured to demonstrate the validity of the method.
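The final reduction step, recovering emissivity from the pyrometer's brightness temperature and the inferred true surface temperature, can be sketched under Wien's approximation. The wavelength and temperatures below are illustrative values, not data from the instrument described above.

```python
import numpy as np

C2 = 1.4388e-2          # second radiation constant, m*K

def spectral_emissivity(lam, T, T_b):
    """Normal spectral emissivity under Wien's approximation:
    epsilon(lam) = L(lam, T_b) / L(lam, T), where T_b is the brightness
    temperature measured by the pyrometer and T the true surface temperature.
    """
    return np.exp((C2 / lam) * (1.0 / T - 1.0 / T_b))

lam = 0.65e-6                                   # assumed pyrometer wavelength
eps = spectral_emissivity(lam, T=1573.15, T_b=1475.0)
```

Since T_b < T whenever the surface is not a perfect blackbody, the exponent is negative and the result lies in (0, 1) as required.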
Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach
Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.
2018-03-01
Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship by analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, at any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (in both the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.
FEHL, DAVID LEE; BIGGS, F.; CHANDLER, GORDON A.; STYGAR, WILLIAM A.
2000-01-01
The generalized method of Backus and Gilbert (BG) is described and applied to the inverse problem of obtaining spectra from a 5-channel, filtered array of x-ray detectors (XRDs). This diagnostic is routinely fielded on the Z facility at Sandia National Laboratories to study soft x-ray photons (≤2300 eV) emitted by high-density Z-pinch plasmas. The BG method defines spectral resolution limits on the system of response functions that are in good agreement with the unfold method currently in use. The resolution so defined is independent of the source spectrum. For noise-free, simulated data the BG approximating function is also in reasonable agreement with the source spectrum (150 eV blackbody) and the unfold. This function may be used as an initial trial function for iterative methods or as a regularization model
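The BG construction can be sketched in a few lines. With assumed Gaussian channel responses standing in for the real filtered-XRD response functions, the channel weights minimize the spread about a target energy subject to a unit-area (unimodularity) constraint on the averaging kernel.

```python
import numpy as np

E = np.linspace(100.0, 2300.0, 400)                     # photon energy, eV
centers = np.array([300.0, 700.0, 1100.0, 1500.0, 1900.0])
# Illustrative Gaussian channel responses, shape (channels, energies).
K = np.exp(-((E[None, :] - centers[:, None]) / 250.0) ** 2)

def bg_coefficients(E0):
    """BG channel weights a minimizing the spread a^T S a about E0,
    subject to u^T a = 1 (unit-area averaging kernel)."""
    dE = E[1] - E[0]
    S = (K * (E - E0) ** 2) @ K.T * dE      # spread matrix S_jk
    u = K.sum(axis=1) * dE                  # channel response areas
    Sinv_u = np.linalg.solve(S, u)
    return Sinv_u / (u @ Sinv_u)

a = bg_coefficients(1000.0)
A_kern = a @ K                              # averaging kernel at E0 = 1000 eV
area = A_kern.sum() * (E[1] - E[0])         # ~1 by construction
```

The width of `A_kern` as E0 is scanned across the band is the source-independent resolution limit the abstract refers to.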
Lawson, K.; Peacock, N.; Gianella, R.
1998-12-01
The derivation of elemental components of radiated powers and impurity concentrations in bulk tokamak plasmas is complex, often requiring a full description of the impurity transport. A novel, empirical method, the Line Intensity Normalization Technique (LINT), has been developed on the JET (Joint European Torus) tokamak to provide routine information about the impurity content of the plasma and elemental components of radiated power (P_rad). The technique employs a few VUV and XUV resonance line intensities to represent the intrinsic impurity elements in the plasma. From a database comprising these spectral features, the total bolometric measurement of the radiated power and the Z_eff measured by visible spectroscopy, separate elemental components of P_rad and Z_eff are derived. The method, which converts local spectroscopic signals into global plasma parameters, has the advantage of simplicity, allowing large numbers of pulses to be processed, and, in many operational modes of JET, is found to be both reliable and accurate. It relies on normalizing the line intensities to the absolute calibration of the bolometers and visible spectrometers, using coefficients independent of density and temperature. Accuracies of the order of ±15% can be achieved for the elemental P_rad components of the most significant impurities, and the impurity concentrations can be determined to within ±30%. Trace elements can be monitored, although with reduced accuracy. The present paper deals with limiter discharges, which have been the main application to date. As a check on the technique and to demonstrate the value of the LINT results, they have been applied to the transport modelling of intrinsic impurities carried out with the SANCO transport code, which uses atomic data from ADAS. The simulations provide independent confirmation of the concentrations empirically derived using the LINT technique. For this analysis, the simple case of the L-mode regime is considered, the chosen
Fauqueux, S.
2003-02-01
We consider the propagation of elastic waves in unbounded domains. A new formulation of the linear elasticity system as an H(div)–L² system enables us to use the 'mixed spectral finite element method'. This new method is based on the definition of new approximation spaces and the use of mass-lumping. It leads to an explicit scheme with reduced storage and provides the same solution as the spectral finite element method. We then model unbounded domains by using Perfectly Matched Layers (PML). Instabilities in the PML in the case of particular 2D elastic media are pointed out and investigated. The numerical method is validated and tested in the case of realistic acoustic and elastic models. A plane-wave analysis gives results on numerical dispersion and shows that meshes adapted to the physical and geometrical properties of the media are more accurate than others. Then, an extension of the method to fluid-solid coupling is introduced for 2D seismic propagation. (author)
A. Ehrlich
2008-12-01
Arctic boundary-layer clouds were investigated with remote sensing and in situ instruments during the Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) campaign in March and April 2007. The clouds formed in a cold air outbreak over the open Greenland Sea. Besides the predominant mixed-phase clouds, pure liquid water and ice clouds were observed. Utilizing measurements of solar radiation reflected by the clouds, three methods to retrieve the thermodynamic phase of the cloud are introduced and compared. Two ice indices I_{S} and I_{P} were obtained by analyzing the spectral pattern of the cloud top reflectance in the near-infrared (1500–1800 nm) spectral range, which is characterized by ice and water absorption. While I_{S} analyzes the spectral slope of the reflectance in this wavelength range, I_{P} utilizes a principal component analysis (PCA) of the spectral reflectance. A third ice index I_{A} is based on the different side scattering of spherical liquid water particles and nonspherical ice crystals, which was recorded in simultaneous measurements of spectral cloud albedo and reflectance.
Radiative transfer simulations show that I_{S}, I_{P} and I_{A} range between 5 and 80, 0 and 8, and 1 and 1.25, respectively, with the lowest values indicating pure liquid water clouds and the highest values pure ice clouds. The spectral slope ice index I_{S} and the PCA ice index I_{P} are found to be strongly sensitive to the effective diameter of the ice crystals present in the cloud. Therefore, the identification of mixed-phase clouds requires a priori knowledge of the ice crystal dimension. The reflectance-albedo ice index I_{A} is mainly dominated by the uppermost cloud layer (τ<1.5). Therefore, typical boundary-layer mixed-phase clouds with a liquid cloud top layer will
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused peer-reviewed journal articles. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work
A spectral approach to compute the mean performance measures of the queue with low-order BMAP input
Ho Woo Lee
2003-01-01
This paper targets engineers and practitioners who want a simple procedure to compute the mean performance measures of the Batch Markovian Arrival Process (BMAP)/G/1 queueing system when the order of the parameter matrices is very low. We develop a set of system equations and derive the vector generating function of the queue length. Starting from the generating function, we propose a spectral approach that is accessible to those who have basic knowledge of M/G/1 queues and eigenvalue algebra.
Marcondes, Francisco [Federal University of Ceara, Fortaleza (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br; Varavei, Abdoljalil; Sepehrnoori, Kamy [The University of Texas at Austin (United States). Petroleum and Geosystems Engineering Dept.], e-mails: varavei@mail.utexas.edu, kamys@mail.utexas.edu
2010-07-01
An element-based finite-volume approach in conjunction with unstructured grids for naturally fractured compositional reservoir simulation is presented. In this approach, both the discrete fracture and the matrix mass balances are taken into account without any additional models to couple the matrix and discrete fractures. The mesh, for two dimensional domains, can be built of triangles, quadrilaterals, or a mix of these elements. However, due to the available mesh generator to handle both matrix and discrete fractures, only results using triangular elements will be presented. The discrete fractures are located along the edges of each element. To obtain the approximated matrix equation, each element is divided into three sub-elements and then the mass balance equations for each component are integrated along each interface of the sub-elements. The finite-volume conservation equations are assembled from the contribution of all the elements that share a vertex, creating a cell vertex approach. The discrete fracture equations are discretized only along the edges of each element and then summed up with the matrix equations in order to obtain a conservative equation for both matrix and discrete fractures. In order to mimic real field simulations, the capillary pressure is included in both matrix and discrete fracture media. In the implemented model, the saturation field in the matrix and discrete fractures can be different, but the potential of each phase in the matrix and discrete fracture interface needs to be the same. The results for several naturally fractured reservoirs are presented to demonstrate the applicability of the method. (author)
Heat transfer analysis in internally-cooled fuel elements by means of a conformal mapping approach
Sarmiento, G.S.; Laura, P.A.A.
1981-01-01
The present paper deals with an approximate solution of the steady-state heat conduction problem in internally cooled fuel elements of fast breeder reactors. Explicit expressions for the dimensionless temperature distribution in terms of the governing physical and geometrical parameters are determined by means of a coupled conformal mapping-variational approach. The results obtained are found to be in very good agreement with those calculated by means of a finite element code. (orig.)
A New Statistical Approach to the Optical Spectral Variability in Blazars
Jose A. Acosta-Pulido
2016-12-01
We present a spectral variability study of a sample of about 25 bright blazars, based on optical spectroscopy. Observations cover the period from the end of 2008 to mid 2015, with an approximately monthly cadence. Emission lines have been identified and measured in the spectra, which permits us to classify the sources into BL Lac-type objects or FSRQs, according to the commonly used EW limit. We have obtained synthetic photometry and produced colour-magnitude diagrams which show different trends associated with the object classes: generally, BL Lacs tend to become bluer when brighter and FSRQs become redder when brighter, although several objects exhibit both trends, depending on brightness. We have also applied a pattern recognition algorithm to obtain the minimum number of physical components which can explain the variability of the optical spectrum. We have used NMF (Non-Negative Matrix Factorization) instead of PCA (Principal Component Analysis) to avoid unrealistic negative components. For most targets we found that 2 or 3 meta-components are enough to explain the observed spectral variability.
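The NMF-versus-PCA choice above lends itself to a short illustration. The sketch below (all data synthetic and invented; this is not the paper's pipeline) factorizes a stack of non-negative "spectra" into two non-negative meta-components using the classic Lee-Seung multiplicative updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 50 observed spectra over 200 wavelength bins,
# built as non-negative mixtures of 2 underlying components.
true_components = np.abs(rng.normal(size=(2, 200)))
weights = np.abs(rng.normal(size=(50, 2)))
spectra = weights @ true_components            # shape (50, 200), non-negative

def nmf(V, k, iters=500, eps=1e-9):
    """Basic NMF via Lee-Seung multiplicative updates: V ≈ W @ H, W, H >= 0."""
    n, m = V.shape
    W = np.abs(rng.normal(size=(n, k))) + eps
    H = np.abs(rng.normal(size=(k, m))) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update meta-components
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update per-spectrum weights
    return W, H

W, H = nmf(spectra, k=2)
residual = np.linalg.norm(spectra - W @ H) / np.linalg.norm(spectra)
print(f"relative reconstruction error with 2 meta-components: {residual:.4f}")
```

Unlike PCA, both factors stay element-wise non-negative by construction, which is why the meta-components remain physically interpretable as emission contributions.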
Yoon, Gil Ho; Kim, Y.Y.; Langelaar, M.
2008-01-01
The internal element connectivity parameterization (I-ECP) method is an alternative approach to overcome numerical instabilities associated with low-stiffness element states in non-linear problems. In I-ECP, elements are connected by zero-length links while their link stiffness values are varied. ... Therefore, it is important to interpolate link stiffness properly to obtain stably converging results. The main objective of this work is two-fold: (1) the investigation of the relationship between the link stiffness and the stiffness of a domain-discretizing patch by using a discrete model and a homogenized ...
Terry, J.L.; Manning, H.L.; Marmar, E.S.
1986-07-01
Two methods which together allow sensitivity calibration from 20 Å to 430 Å are described in detail. The first method, useful up to 120 Å, uses a low power source to generate Kα x-rays which are alternately viewed by an absolute detector (a proportional counter) and the spectrometer. The second method extends that calibration to 430 Å. It relies on the 2:1 brightness ratio of bright doublet lines from impurity ions which have a single outer-shell electron and which are present in hot, magnetically confined plasmas. It requires that the absolute sensitivity of the spectrometer be known at one wavelength point, and in practice requires a multi-element spectral detector.
Coupled thermomechanical behavior of graphene using the spring-based finite element approach
Georgantzinos, S. K., E-mail: sgeor@mech.upatras.gr; Anifantis, N. K., E-mail: nanif@mech.upatras.gr [Machine Design Laboratory, Department of Mechanical Engineering and Aeronautics, University of Patras, Rio, 26500 Patras (Greece); Giannopoulos, G. I., E-mail: ggiannopoulos@teiwest.gr [Materials Science Laboratory, Department of Mechanical Engineering, Technological Educational Institute of Western Greece, 1 Megalou Alexandrou Street, 26334 Patras (Greece)
2016-07-07
The prediction of the thermomechanical behavior of graphene using a new coupled thermomechanical spring-based finite element approach is the aim of this work. Graphene sheets are modeled in nanoscale according to their atomistic structure. Based on molecular theory, the potential energy is defined as a function of temperature, describing the interatomic interactions in different temperature environments. The force field is approached by suitable straight spring finite elements. Springs simulate the interatomic interactions and interconnect nodes located at the atomic positions. Their stiffness matrix is expressed as a function of temperature. By using appropriate boundary conditions, various different graphene configurations are analyzed and their thermo-mechanical response is approached using conventional finite element procedures. A complete parametric study with respect to the geometric characteristics of graphene is performed, and the temperature dependency of the elastic material properties is finally predicted. Comparisons with available published works found in the literature demonstrate the accuracy of the proposed method.
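The spring-based finite element idea can be sketched in one dimension. The code below uses a hypothetical linear softening law and a toy spring chain (not the paper's graphene force field, whose stiffnesses come from interatomic potentials): it builds temperature-dependent spring stiffnesses, assembles them into a global stiffness matrix, and solves a small static problem.

```python
import numpy as np

def spring_stiffness(k0, alpha, T, T0=300.0):
    # Hypothetical softening law k(T) = k0 * (1 - alpha*(T - T0));
    # the paper derives stiffness from molecular potentials, not this form.
    return k0 * (1.0 - alpha * (T - T0))

def assemble_chain(n_nodes, k_elem):
    # Standard 2-node spring element assembly into a global stiffness matrix.
    K = np.zeros((n_nodes, n_nodes))
    for e, k in enumerate(k_elem):           # element e connects nodes e, e+1
        K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

T, n = 600.0, 5
k_elem = [spring_stiffness(100.0, 2e-4, T) for _ in range(n - 1)]
K = assemble_chain(n, k_elem)

# Fix node 0, pull node n-1 with a unit force, solve the reduced system.
Kr = K[1:, 1:]
f = np.zeros(n - 1)
f[-1] = 1.0
u = np.linalg.solve(Kr, f)
print("tip displacement at T = 600 K:", u[-1])
```

For springs in series the tip displacement is the sum of the element compliances, so the temperature dependence of the elastic response falls directly out of k(T).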
The node-weighted Steiner tree approach to identify elements of cancer-related signaling pathways.
Sun, Yahui; Ma, Chenkai; Halgamuge, Saman
2017-12-28
Cancer constitutes a momentous health burden in our society. Critical information on cancer may be hidden in its signaling pathways. However, even though a large amount of money has been spent on cancer research, some critical information on cancer-related signaling pathways still remains elusive. Hence, new work towards a complete understanding of cancer-related signaling pathways will greatly benefit the prevention, diagnosis, and treatment of cancer. We propose the node-weighted Steiner tree approach to identify important elements of cancer-related signaling pathways at the level of proteins. This new approach has advantages over previous approaches since it is fast in processing large protein-protein interaction networks. We apply this new approach to identify important elements of two well-known cancer-related signaling pathways: PI3K/Akt and MAPK. First, we generate a node-weighted protein-protein interaction network using protein and signaling pathway data. Second, we modify and use two preprocessing techniques and a state-of-the-art Steiner tree algorithm to identify a subnetwork in the generated network. Third, we propose two new metrics to select important elements from this subnetwork. On a commonly used personal computer, this new approach takes less than 2 s to identify the important elements of the PI3K/Akt and MAPK signaling pathways in a large node-weighted protein-protein interaction network with 16,843 vertices and 1,736,922 edges. We further analyze and demonstrate the significance of these identified elements to cancer signal transduction by exploring previously reported experimental evidence. Our node-weighted Steiner tree approach is shown to be both fast and effective at identifying important elements of cancer-related signaling pathways. Furthermore, it may provide new perspectives into the identification of signaling pathways for other human diseases.
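The core computational step, a Steiner tree over a node-weighted interaction network, can be sketched on a toy graph. The code below uses invented data and a generic 2-approximation (metric closure over the terminals, then path expansion), not the state-of-the-art algorithm the authors use; node weights are folded into edge costs by a common reduction that charges half the weight of each endpoint.

```python
import heapq

def dijkstra(adj, src):
    """Shortest paths from src over an adjacency dict {u: [(v, w), ...]}."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_tree_edges(adj, terminals):
    """2-approximation: grow terminals greedily via shortest paths."""
    info = {t: dijkstra(adj, t) for t in terminals}
    in_tree, edges = {terminals[0]}, set()
    while len(in_tree) < len(terminals):
        # pick the outside terminal closest to any inside terminal
        s, t = min(((a, b) for a in in_tree
                    for b in terminals if b not in in_tree),
                   key=lambda p: info[p[0]][0].get(p[1], float("inf")))
        node, prev = t, info[s][1]
        while node != s:                      # unroll shortest path s -> t
            edges.add(frozenset((prev[node], node)))
            node = prev[node]
        in_tree.add(t)
    return edges

# Toy "interaction network" (invented): node weights folded into edge costs.
node_w = {"A": 1, "B": 4, "C": 1, "D": 1, "E": 1}
raw_edges = [("A","B",1), ("B","C",1), ("A","D",1), ("D","E",1), ("E","C",1)]
adj = {v: [] for v in node_w}
for u, v, length in raw_edges:
    w = length + 0.5 * (node_w[u] + node_w[v])
    adj[u].append((v, w))
    adj[v].append((u, w))

tree = steiner_tree_edges(adj, ["A", "C"])
print(sorted(tuple(sorted(e)) for e in tree))
```

Because node B carries a high weight (a heavily penalized protein in this toy), the approximation routes the tree through the cheaper D-E path even though it uses more edges.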
Spectral information interpretation of X-ray analysis based on expert system approach
Drakunov, Yu.M.; Lezin, A.N.; Pukha, N.P.; Silachev, I.Yu.
2000-01-01
An expert subprogram for automated identification of the element composition of samples of different natures from the results of energy-dispersive X-ray fluorescence analysis has been developed. The flowchart of the subprogram is presented, and a brief description of the expert system structure and its algorithm is given. (author)
Comparing geological and statistical approaches for element selection in sediment tracing research
Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon
2015-04-01
Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs increasingly important. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geochemical approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminatory Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores were from mafic-derived sources and 64% (+/- 9%) were from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings with only
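The Kruskal-Wallis H-test used for statistical element selection can be computed directly from ranks. A minimal sketch with invented concentration data for three hypothetical source groups (the study used measured geochemistry; this version omits the tie correction that standard statistics libraries apply):

```python
from itertools import chain

def rankdata(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) over k samples."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    ranks = rankdata(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += sum(r) ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Invented element concentrations for three sediment source groups:
obi   = [4.1, 4.5, 4.3, 4.8]
lexys = [2.0, 2.2, 1.9, 2.4]
falls = [2.1, 2.3, 2.0, 2.5]
print(f"H = {kruskal_h(obi, lexys, falls):.3f}")
```

A large H (compared against a chi-squared threshold with k-1 degrees of freedom) flags an element whose distribution differs across source groups, making it a candidate fingerprint.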
An integrated finite-element approach to mechanics, transport and biosynthesis in tissue engineering
Sengers, B.G.; Oomens, C.W.J.; Baaijens, F.P.T.
2004-01-01
A finite-element approach was formulated, aimed at enabling an integrated study of mechanical and biochemical factors that control the functional development of tissue engineered constructs. A nonlinear biphasic displacement-velocity-pressure description was combined with advective and diffusive
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2012-01-01
In this study a stochastic approach is employed to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...
Foodsheds in Virtual Water Flow Networks: A Spectral Graph Theory Approach
Nina Kshetry
2017-06-01
A foodshed is a geographic area from which a population derives its food supply, but a method to determine boundaries of foodsheds has not been formalized. Drawing on the food–water–energy nexus, we propose a formal network science definition of foodsheds by using data from virtual water flows, i.e., water that is virtually embedded in food. In particular, we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, it suggests the food system is local and therefore reduces energy and externality costs of food transport. Using our proposed method we compute foodshed boundaries at the global scale, and at the national scale in the case of two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows and whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.
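Spectral partitioning of a flow network can be sketched in a few lines. The example below uses a toy flow matrix (not the virtual-water data), and where the paper applies a partitioning method tailored to directed graphs, this sketch simply symmetrizes the flows before taking the Fiedler vector of the graph Laplacian:

```python
import numpy as np

# Toy directed flow matrix among 6 hypothetical regions: two clusters
# {0,1,2} and {3,4,5} with heavy internal flow and light cross flow.
F = np.zeros((6, 6))
F[0, 1] = F[1, 2] = F[2, 0] = 10.0   # cluster 1 internal cycle
F[3, 4] = F[4, 5] = F[5, 3] = 10.0   # cluster 2 internal cycle
F[2, 3] = F[5, 0] = 1.0              # weak inter-cluster links

W = F + F.T                           # symmetrize (simplification)
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
partition = fiedler > 0               # sign pattern splits the graph
print("partition labels:", partition.astype(int))
```

The sign pattern of the Fiedler vector separates the two weakly connected clusters, i.e., it recovers the two "foodsheds" across the minimum-flow cut.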
Hooper, Sean D.; Anderson, Iain J; Pati, Amrita; Dalevi, Daniel; Mavromatis, Konstantinos; Kyrpides, Nikos C
2009-01-01
In order to simplify and meaningfully categorize large sets of protein sequence data, it is commonplace to cluster proteins based on the similarity of those sequences. However, it quickly becomes clear that the sequence flexibility allowed a given protein varies significantly among different protein families. The degree to which sequences are conserved not only differs for each protein family, but also is affected by the phylogenetic divergence of the source organisms. Clustering techniques that use similarity thresholds for protein families do not always allow for these variations and thus cannot be confidently used for applications such as automated annotation and phylogenetic profiling. In this work, we applied a spectral bipartitioning technique to all proteins from 53 archaeal genomes. Comparisons between different taxonomic levels allowed us to study the effects of phylogenetic distances on cluster structure. Likewise, by associating functional annotations and phenotypic metadata with each protein, we could compare our protein similarity clusters with both protein function and associated phenotype. Our clusters can be analyzed graphically and interactively online.
New Approach for Snow Cover Detection through Spectral Pattern Recognition with MODIS Data
Kyeong-Sang Lee
2017-01-01
Snow cover plays an important role in climate and hydrology, at both global and regional scales. Most previous studies have used static threshold techniques to detect snow cover, which can lead to errors such as misclassification of snow and clouds, because the reflectance of snow cover exhibits variability and is affected by several factors. Therefore, we present a simple new algorithm for mapping snow cover from Moderate Resolution Imaging Spectroradiometer (MODIS) data using dynamic wavelength warping (DWW), which is based on dynamic time warping (DTW). DTW is a pattern recognition technique that is widely used in various fields such as human action recognition, anomaly detection, and clustering. Before performing DWW, we constructed 49 snow reflectance spectral libraries as reference data for various solar zenith angle and digital elevation model conditions using approximately 1.6 million sampled data. To verify the algorithm, we compared our results with the MODIS swath snow cover product (MOD10_L2). Producer's accuracy, user's accuracy, and overall accuracy values were 92.92%, 78.41%, and 92.24%, respectively, indicating good overall classification accuracy. The proposed algorithm is more useful for discriminating between snow cover and clouds than threshold techniques in some areas, such as those with a high viewing zenith angle.
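Dynamic time warping, the basis of the DWW algorithm, can be stated compactly. A minimal implementation with invented reflectance-like sequences (not the MODIS spectral libraries) shows how a stretched copy of a pattern stays close under DTW while a flat cloud-like spectrum does not:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences,
    with absolute-difference local cost and full dynamic-programming table."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Invented reflectance shapes: a reference, a time-stretched copy of it,
# and a flat "cloud-like" spectrum.
ref    = [0.9, 0.8, 0.3, 0.2, 0.7, 0.9]
warped = [0.9, 0.9, 0.8, 0.3, 0.2, 0.2, 0.7, 0.9]
cloud  = [0.9, 0.9, 0.9, 0.8, 0.8, 0.8, 0.9, 0.9]

print(dtw_distance(ref, warped), dtw_distance(ref, cloud))
```

The warped copy aligns to the reference at zero cost, whereas the cloud-like spectrum cannot: this elasticity is what lets DWW match snow spectral shapes despite shifts with illumination and elevation.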
A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems
Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.
2018-01-01
We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n-1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular no expansivity or mixing properties are required.
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-01-01
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
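For Gaussian errors, the Fisher Matrix forecast described above reduces to a sum over bands of derivative outer products. A toy sketch with a made-up two-parameter power-law "SED" (not the paper's stellar population models; all numbers invented):

```python
import numpy as np

# Toy model: flux(lam; A, s) = A * lam**s, observed in four bands.
lam   = np.array([0.5, 1.0, 2.0, 4.0])      # band wavelengths (arbitrary units)
sigma = np.array([0.05, 0.04, 0.04, 0.06])  # per-band flux uncertainties
A, s  = 1.0, -0.7                           # fiducial parameter values

# Derivatives of the model with respect to (A, s) at the fiducial point.
dA = lam**s
ds = A * lam**s * np.log(lam)
J  = np.column_stack([dA, ds])              # Jacobian, shape (4, 2)

# F_ij = sum_k (dm_k/dθ_i)(dm_k/dθ_j) / σ_k²  for a Gaussian likelihood.
F = J.T @ (J / sigma[:, None]**2)
cov = np.linalg.inv(F)                      # forecast parameter covariance
print("σ(A) =", np.sqrt(cov[0, 0]), " σ(s) =", np.sqrt(cov[1, 1]))
```

Inverting the Fisher matrix gives the best-case (Cramér-Rao) parameter covariance in one linear-algebra step, which is what makes the FM so much faster than full PDF reconstruction for survey design.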
The delta-Sobolev approach for modeling solar spectral irradiance and radiance
Xiang, Xuwu.
1990-01-01
The development and evaluation of a solar radiation model is reported, which gives irradiance and radiance results at the bottom and top of an atmosphere of specified optical depth for each of 145 spectral intervals from 0.29 to 4.05 microns. Absorption by water vapor, aerosols, ozone, and uniformly mixed gases; scattering by molecules and aerosols; and non-Lambertian surface reflectance are included in the model. For solving the radiative transfer equation, an innovative delta-Sobolev method is developed. It applies a delta-function modification to the conventional Sobolev solutions in a way analogous to the delta-Eddington method. The irradiance solution by the delta-Sobolev method turns out to be mathematically identical to the delta-Eddington approximation. The radiance solution by the delta-Sobolev method provides a convenient way to obtain the directional distribution pattern of the radiation transfer field, a feature that most commonly used approximation methods cannot provide. Such radiance solutions are also especially useful in models for satellite remote sensing. The model is tested against the rigorous Dave model, which solves the radiation transfer problem by the spherical harmonic method, an accurate but very time-consuming process. Good agreement between the current model results and those of Dave's model is observed. The advantages of the delta-Sobolev model are simplicity, reasonable accuracy, and capability for implementation on a minicomputer or microcomputer.
A novel approach for characterizing broad-band radio spectral energy distributions
Harvey, V. M.; Franzen, T.; Morgan, J.; Seymour, N.
2018-05-01
We present a new broad-band radio frequency catalogue across 0.12 GHz ≤ ν ≤ 20 GHz created by combining data from the Murchison Widefield Array Commissioning Survey, the Australia Telescope 20 GHz survey, and the literature. Our catalogue consists of 1285 sources limited by S20 GHz > 40 mJy at 5σ, and contains flux density measurements (or estimates) and uncertainties at 0.074, 0.080, 0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, and 20 GHz. We fit a second-order polynomial in log-log space to the spectral energy distributions of all these sources in order to characterize their broad-band emission. For the 994 sources that are well described by a linear or quadratic model we present a new diagnostic plot arranging sources by the linear and curvature terms. We demonstrate the advantages of such a plot over the traditional radio colour-colour diagram. We also present astrophysical descriptions of the sources found in each segment of this new parameter space and discuss the utility of these plots in the upcoming era of large area, deep, broad-band radio surveys.
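The second-order polynomial fit in log-log space is a one-liner with numpy.polyfit. A sketch with invented flux densities at the catalogue frequencies (the catalogue's actual fitting also propagates the quoted measurement uncertainties):

```python
import numpy as np

# Hypothetical flux densities (Jy) for one source at several catalogue
# frequencies (all values invented for illustration).
freq = np.array([0.15, 0.408, 0.843, 1.4, 4.8, 8.6, 20.0])   # GHz
flux = np.array([2.10, 1.55, 1.20, 1.00, 0.62, 0.48, 0.30])  # Jy

x, y = np.log10(freq), np.log10(flux)
c2, c1, c0 = np.polyfit(x, y, 2)   # log S = c2 (log ν)² + c1 log ν + c0

# c1 plays the role of the spectral index and c2 the curvature;
# c2 < 0 marks a spectrum that steepens or peaks in log-log space.
print(f"linear term (index) = {c1:.2f}, curvature = {c2:.2f}")
```

Plotting sources in the (c1, c2) plane is exactly the diagnostic arrangement by linear and curvature terms that the paper proposes in place of the traditional radio colour-colour diagram.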
Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai
2016-01-01
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
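The storage/recomputation balance behind binomial memory checkpointing follows Griewank's classic result: with s checkpoint slots and at most r repeated forward sweeps, C(s+r, s) time steps can be adjoined. A quick illustration (the formula, not the paper's asynchronous two-level scheme):

```python
from math import comb

def max_steps(snapshots, recomputations):
    """Number of forward time steps reversible with `snapshots` checkpoint
    slots and at most `recomputations` repeated forward sweeps (Griewank's
    binomial result, as implemented by revolve-style schedules)."""
    return comb(snapshots + recomputations, snapshots)

# With only 10 in-memory checkpoints, allowing a few recomputation sweeps
# covers far more steps than the 10 that store-everything would allow.
for r in range(1, 6):
    print(f"{r} recomputation sweep(s): {max_steps(10, r)} steps")
```

The combinatorial growth is why memory checkpointing alone scales so well, and why combining it with bandwidth-limited disk checkpoints pays off only for very long time integrations.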
Eggert, F
2010-01-01
This work describes the first truly automated solution for qualitative evaluation of EDS spectra in X-ray microanalysis. It uses a combination of integrated standardless quantitative evaluation, computation of analytical errors to a final uncertainty, and parts of recently developed simulation approaches. Multiple spectrum reconstruction assessments and peak searches of the residual spectrum are powerful enough to solve the qualitative analytical question automatically for totally unknown specimens. The integrated quantitative assessment is useful to improve the confidence of the qualitative analysis. Therefore, qualitative element analysis becomes a part of integrated quantitative spectrum evaluation, where the quantitative results are used to iteratively refine element decisions, spectrum deconvolution, and simulation steps.
Use of adjoint methods in the probabilistic finite element approach to fracture mechanics
Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted
1988-01-01
The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.
de Jong, F.; Malfliet, R.
1991-01-01
Starting from a relativistic Lagrangian we derive a ''conserving'' approximation for the description of nuclear matter. We show this to be a nontrivial extension of the relativistic Dirac-Brueckner scheme. The saturation point of the calculated equation of state agrees very well with the empirical saturation point. The conserving character of the approach is tested by means of the Hugenholtz–van Hove theorem. We find the theorem fulfilled very well around saturation. A new value for the compression modulus is derived, K=310 MeV. We also calculate the occupation probabilities at normal nuclear matter densities by means of the spectral function. The average depletion κ of the Fermi sea is found to be κ∼0.11.
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-01-01
Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Within the Bayesian inference framework, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Saito, Masatoshi
2007-11-01
Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Saito, Masatoshi
2007-01-01
Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David
2017-03-03
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for the chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
Schwartz, N.; Huisman, J. A.; Furman, A.
2012-12-01
In recent years, there has been growing interest in using geophysical methods in general, and spectral induced polarization (SIP) in particular, as a tool to detect and monitor organic contaminants within the subsurface. The general idea of the SIP method is to inject alternating current through a soil volume and to measure the resultant potential in order to obtain the relevant soil electrical properties (e.g. complex impedance, complex conductivity/resistivity). Currently, a complete mechanistic understanding of the effect of organic contaminants on the SIP response of soil is still absent. In this work, we combine laboratory experiments with modeling to reveal the main processes affecting the SIP signature of soil contaminated with an organic pollutant. In a first set of experiments, we investigate the effect of non-aqueous phase liquids (NAPL) on the complex conductivity of unsaturated porous media. Our results show that addition of NAPL to the porous media increases the real component of the soil electrical conductivity and decreases the polarization of the soil (imaginary component of the complex conductivity). Furthermore, addition of NAPL to the soil resulted in an increase of the electrical conductivity of the soil solution. Based on these results, we suggest that adsorption of NAPL to the soil surface, and exchange processes between polar organic compounds in the NAPL and inorganic ions in the soil, are the main processes affecting the SIP signature of the contaminated soil. To further support our hypothesis, the temporal change of the SIP signature of a soil as a function of the concentration of a single organic cation was measured. In addition to the measurements of the soil electrical properties, we also measured the effect of the organic cation on the chemical composition of both the bulk and the surface of the soil. The results of those experiments again showed that the electrical conductivity of the soil increased with increasing contaminant concentration. In addition
Cross spectral, active and passive approach to face recognition for improved performance
Grudzien, A.; Kowalski, M.; Szustakowski, M.
2017-08-01
Biometrics is a technique for automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity, based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing of visual information, seems to be imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach combining both methods.
A new approach to the spectral analysis of liquid membrane oscillators by Gábor transformation
Płocharska-Jankowska, E.; Szpakowska, M.; Mátéfi-Tempfli, Stefan
2006-01-01
Liquid membrane oscillators very frequently have an irregular oscillatory behavior. Fourier transformation cannot be used for these nonstationary oscillations to establish their power spectra. This important point seems to be overlooked in the field of chemical oscillators. A new approach is presented here based on Gábor transformation, allowing one to obtain power spectra of any kind of oscillations that can be met experimentally. The proposed Gábor analysis is applied to a liquid membrane oscillator containing a cationic surfactant. It was found that the power spectra are strongly influenced...
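The Gábor transform is in essence a windowed (short-time) Fourier transform, which is what makes it applicable to the nonstationary oscillations described above. A minimal sketch using SciPy's STFT with a Gaussian window on a synthetic drifting oscillation; the sampling rate, record length, and window parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 100.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                # a 60 s record
# synthetic nonstationary oscillation whose frequency drifts upward
x = np.sin(2 * np.pi * (1.0 + 2.0 * t / t[-1]) * t)

# Gabor analysis: STFT with a Gaussian window; |Z|^2 is the time-resolved
# power spectrum that a single global Fourier transform cannot provide here
f, tau, Z = stft(x, fs=fs, window=('gaussian', 64), nperseg=512, noverlap=448)
power = np.abs(Z) ** 2                      # power vs (frequency, time)
```

The dominant frequency in `power` rises from the early to the late time slices, tracking the drift that a single global spectrum would smear out.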
Boente, C; Matanzas, N; García-González, N; Rodríguez-Valdés, E; Gallego, J R
2017-09-01
The urban and peri-urban soils used for agriculture can be contaminated by atmospheric deposition or industrial releases, thus raising concerns about the potential risk to public health. Here we propose a method to evaluate potential soil pollution based on multivariate statistics, geostatistics (kriging), a novel soil pollution index, and bioavailability assessments. This approach was tested in two districts of a highly populated and industrialized city (Gijón, Spain). The soils showed anomalous content of several trace elements, such as As and Pb (up to 80 and 585 mg kg-1, respectively). In addition, factor analyses associated these elements with anthropogenic activity, whereas other elements were attributed to natural sources. Subsequent clustering also facilitated the differentiation between the northern area studied (only limited Pb pollution found) and the southern area (pattern of coal combustion, including simultaneous anomalies of trace elements and benzo(a)pyrene). A normalized soil pollution index (SPI) was calculated by kriging, using only the elements falling above threshold levels; thereby point-source polluted zones in the northern area and diffuse contamination in the south were identified. In addition, in the six mapping units with the highest SPIs of the fifty studied, we observed low bioavailability for most of the elements that surpassed the threshold levels. However, some anomalies of Pb content and the pollution fingerprint in the central area of the southern grid call for further site-specific studies. On the whole, the combination of a multivariate (geo)statistical approach and a bioavailability assessment allowed us to efficiently identify sources of contamination and potential risks. Copyright © 2017 Elsevier Ltd. All rights reserved.
A unified approach for suppressing sidelobes arising in the spectral response of rugate filters
Abo-Zahhad, M.; Bataineh, M.
2000-01-01
This paper suggests a universal approach to reduce the sidelobes which usually appear at both sides of a stop band of a rugate filter. Both quintic matching layers and apodization functions are used to improve the filter's response. The proposed technique can be used to control the ripple level by properly choosing the refractive index profile after amending it to include matching layers and/or modulating its profile with a slowly varying apodization (or tapering) function. Two illustrative examples are given to demonstrate the robustness of the proposed technique. The given examples suggest that combining both effects on the index of refraction profile leads to the lowest possible ripple level. A multichannel filter response is obtained by wavelet construction of the refractive index profile, with potential applications in multimode lasers and wavelength division multiplexing networks. The obtained results demonstrate the applicability of the adopted approach to designing ripple-free rugate filters. The extension to stack filters and other waveguiding structures is also feasible. (authors). 14 refs., 8 figs
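The role of the apodization (tapering) envelope can be sketched numerically. The profile below is a generic sinusoidal rugate index modulated by a quintic taper at both ends; all parameter values (`n0`, `dn`, the design wavelength, the taper fraction) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quintic(u):
    """Quintic smoothstep: 0 -> 1 with zero 1st and 2nd derivatives at ends."""
    return 10 * u**3 - 15 * u**4 + 6 * u**5

def rugate_profile(z, n0=1.8, dn=0.1, lam0=550e-9, taper_frac=0.2):
    """Sinusoidal rugate index profile apodized by quintic tapers at both ends.

    The taper brings the index modulation smoothly to zero at the boundaries,
    which is what suppresses the stop-band sidelobes.
    """
    L = z[-1] - z[0]
    u = (z - z[0]) / L
    env = np.ones_like(u)
    head = u < taper_frac
    tail = u > 1 - taper_frac
    env[head] = quintic(u[head] / taper_frac)
    env[tail] = quintic((1 - u[tail]) / taper_frac)
    return n0 + 0.5 * dn * env * np.sin(4 * np.pi * n0 * z / lam0)

z = np.linspace(0, 5e-6, 4001)   # 5 micrometre film, fine depth grid
n = rugate_profile(z)
```

At the film boundaries the envelope vanishes, so the profile meets the ambient medium at the mean index `n0` with no abrupt modulation step.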
Tesoniero, A.; Leng, K.; Long, M. D.; Nissen-Meyer, T.
2017-12-01
Constraining the nature of the anisotropy in the core-mantle boundary region is a key factor for properly predicting the flow of the lowermost mantle. The lack of seismic waves sampling this region and their uneven azimuthal distribution hamper a correct representation of mantle dynamics. We present preliminary results for a series of SKS-SKKS splitting analysis based on numerical forward synthetic tests in a realistic 3-D Earth model using the software AXISEM3D, a newly developed efficient hybrid spectral element method solver for 3-D structures. The anisotropic property of the computational domain in the bottom 300km of the Earth's mantle is fully described with a fourth-order elastic tensor with 21 independent coefficients. We tested a single crystal mineralogy of postperovskite with different orientations that are consistent with realistic mantle flow models and accounted for a wide coverage of azimuthal seismic raypaths. We take advantage of the computational efficiency of the method to achieve resolutions for seismic periods as low as 8s. Our preliminary results, based on forward full waveform modeling, represent a step forward for validating hypotheses for the anisotropy in the D'' layer derived by direct splitting measurements and ray-theoretical mineral physics based modeling tests. Our study also highlights the capability of AXISEM3D to handle high degrees of model complexity in full anisotropy and its potentials for future endeavours.
Żak, A.; Krawczuk, M.; Palacz, M.; Doliński, Ł.; Waszkowiak, W.
2017-11-01
In this work results of numerical simulations and experimental measurements related to the high-frequency dynamics of an aluminium Timoshenko periodic beam are presented. It was assumed by the authors that the source of the beam's structural periodicity comes from periodic alterations to its geometry due to the presence of appropriately arranged drill-holes. As a consequence of these alterations the dynamic characteristics of the beam are changed, revealing a set of frequency band gaps. The presence of the frequency band gaps can help in the design of effective sound filters or sound barriers that can selectively attenuate propagating wave signals of certain frequency contents. In order to achieve this, a combination of three numerical techniques was employed by the authors. They comprise the application of the Time-domain Spectral Finite Element Method in the case of analysis of finite and semi-infinite computational domains, damage modelling in the case of analysis of drill-hole influence, as well as Bloch reduction in the case of analysis of periodic computational domains. As the experimental technique, Scanning Laser Doppler Vibrometry was chosen. A combined application of all these numerical and experimental techniques appears to be new for this purpose and has not been reported in the available literature.
He, Yue-Jing; Hung, Wei-Chih; Syu, Cheng-Jyun
2017-12-01
The finite-element method (FEM) and eigenmode expansion method (EEM) were adopted to analyze the guided modes and spectrum of phase-shift fiber Bragg grating at five phase-shift degrees (including zero, 1/4π, 1/2π, 3/4π, and π). In previous studies on optical fiber grating, conventional coupled-mode theory was crucial. This theory contains abstruse knowledge about physics and complex computational processes, and thus is challenging for users. Therefore, a numerical simulation method was coupled with a simple and rigorous design procedure to help beginners and users to overcome difficulty in entering the field; in addition, graphical simulation results were presented. To reduce the difference between the simulated context and the actual context, a perfectly matched layer and perfectly reflecting boundary were added to the FEM and the EEM. When the FEM was used for grid cutting, the object meshing method and the boundary meshing method proposed in this study were used to effectively enhance computational accuracy and substantially reduce the time required for simulation. In summary, users can use the simulation results in this study to easily and rapidly design an optical fiber communication system and optical sensors with spectral characteristics.
An x ray scatter approach for non-destructive chemical analysis of low atomic numbered elements
Ross, H. Richard
1993-01-01
A non-destructive x-ray scatter (XRS) approach has been developed, along with a rapid atomic scatter algorithm, for the detection and analysis of low atomic-numbered elements in solids, powders, and liquids. The present method of energy-dispersive x-ray fluorescence spectroscopy (EDXRF) makes the analysis of light elements (i.e., atomic number less than that of sodium, Z < 11) extremely difficult. Detection and measurement become progressively worse as atomic numbers become smaller due to a competing process called 'Auger emission', which reduces fluorescent intensity; coupled with the high mass absorption coefficients exhibited by low-energy x-rays, this limits the detection and determination of low atomic-numbered elements by x-ray spectrometry. However, an indirect approach based on the intensity ratio of Compton and Rayleigh scattered radiation has been used to define light-element components in alloys, plastics and other materials. This XRS technique provides qualitative and quantitative information about the overall constituents of a variety of samples.
Moshrefzadeh, Ali; Fasana, Alessandro
2018-05-01
Envelope analysis is one of the most advantageous methods for rolling element bearing diagnostics, but finding a suitable frequency band for demodulation has been a substantial challenge for a long time. Introduction of the Spectral Kurtosis (SK) and the Kurtogram mostly solved this problem, but in situations where the signal-to-noise ratio is very low, or in the presence of non-Gaussian noise, these methods fail. This major drawback may noticeably decrease their effectiveness, and the goal of this paper is to overcome this problem. Vibration signals from rolling element bearings exhibit high levels of second-order cyclostationarity, especially in the presence of localized faults. The autocovariance function of a 2nd-order cyclostationary signal is periodic, and the proposed method, named Autogram, takes advantage of this property to enhance the conventional Kurtogram. The method computes the kurtosis of the unbiased Autocorrelation (AC) of the squared envelope of the demodulated signal, rather than the kurtosis of the filtered time signal. Moreover, to take advantage of unique features of the lower and upper portions of the AC, two modified forms of kurtosis are introduced and the resulting colormaps are called Upper and Lower Autogram. In addition, a thresholding method is also proposed to enhance the quality of the frequency spectrum analysis. A new indicator, Combined Squared Envelope Spectrum, is employed to consider all the frequency bands with valuable diagnostic information and to improve the fault detectability of the Autogram. The proposed method is tested on experimental data and compared with literature results so as to assess its performance in rolling element bearing diagnostics.
Saeidifar, Maryam; Mirzaei, Hamidreza; Ahmadi Nasab, Navid; Mansouri-Torshizi, Hassan
2017-11-01
The binding ability between a new water-soluble palladium(II) complex [Pd(bpy)(bez-dtc)]Cl (where bpy is 2,2'-bipyridine and bez-dtc is benzyl dithiocarbamate), as an antitumor agent, and calf thymus DNA was evaluated using various physicochemical methods, such as UV-Vis absorption, competitive fluorescence studies, viscosity measurements, zeta potential and circular dichroism (CD) spectroscopy. The Pd(II) complex was synthesized and characterized using elemental analysis, molar conductivity measurements, FT-IR, 1H NMR, 13C NMR and electronic spectra studies. The anticancer activity against HeLa cell lines demonstrated lower cytotoxicity than cisplatin. The binding constants and the thermodynamic parameters were determined at different temperatures (300 K, 310 K and 320 K), showing that the complex can bind to DNA via electrostatic forces. Furthermore, this result was confirmed by the viscosity and zeta potential measurements. The CD spectral results demonstrated that the binding of the Pd(II) complex to DNA induced conformational changes in DNA. We hope that these results will provide a basis for further studies and practical clinical use of anticancer drugs.
Finite element methods for engineering sciences. Theoretical approach and problem solving techniques
Chaskalovic, J. [Ariel University Center of Samaria (Israel); Pierre and Marie Curie (Paris VI) Univ., 75 (France). Inst. Jean le Rond d' Alembert
2008-07-01
This self-tutorial offers a concise yet thorough grounding in the mathematics necessary for successfully applying FEMs to practical problems in science and engineering. The unique approach first summarizes and outlines the finite-element mathematics in general and then, in the second and major part, formulates problem examples that clearly demonstrate the techniques of functional analysis via numerous and diverse exercises. The solutions of the problems are given directly afterwards. Using this approach, the author motivates and encourages the reader to actively acquire knowledge of finite-element methods instead of passively absorbing the material, as in most standard textbooks. The enlarged English-language edition, based on the original French, also contains a chapter on the approximation steps derived from the description of nature with differential equations and then applied to the specific model to be used. Furthermore, an introduction to tensor calculus using distribution theory offers further insight for readers with different mathematical backgrounds. (orig.)
Infill architecture: Design approaches for in-between buildings and 'bond' as integrative element
Alfirević Đorđe
2015-01-01
The aim of the paper is to draw attention to the view that the two key elements in achieving good quality of architectural infill in the immediate, current surroundings are the selection of an optimal creative method of infill architecture and the adequate application of 'the bond' as an integrative element. The success and quality of architectural infill mainly depend on the assessment of various circumstances, but also on the professionalism, creativity, sensibility and, finally, innovativeness of the architect. In order for the infill procedure to be carried out adequately, it is necessary to assess the quality of the current surroundings that the object will be integrated into, and then to choose the creative approach that will allow the object to establish an optimal dialogue with its surroundings. On a wider scale, both theory and practice differentiate three main creative approaches to infill objects: a) the mimetic approach (mimesis), b) the associative approach and c) the contrasting approach. Which of the stated approaches will be chosen depends primarily on whether the existing physical structure into which the object is being infilled is 'distinct', 'specific' or 'indistinct', but it also depends on the inclination of the designer. 'The bond' is a term which in architecture denotes an element or zone of one object, but in some instances it can refer to the whole object which has been articulated in a specific way, with the aim of resolving a visual conflict, as is often the case when there is a clash between existing objects and a newly designed or reconstructed object. This paper provides an in-depth analysis of different types of bonds, such as 'direction as bond', 'cornice as bond', 'structure as bond', 'texture as bond' and 'material as bond', which indicate the complexity and multiple layers of the design process of object interpolation.
Spectral element simulation of ultrafiltration
Hansen, M.; Barker, Vincent A.; Hassager, Ole
1998-01-01
for the unknowns at the mesh nodes. This system is solved via a technique combining the penalty method, Newton-Raphson iterations, static condensation, and a solver for banded linear systems. In addition, a smoothing technique is used to handle a singularity in the boundary condition at the membrane...
Saad, Bilal Mohammed
2017-09-18
This work focuses on the simulation of CO2 storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO2 leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic model response. A non-intrusive approach is used to determine the PC coefficients. We establish the accuracy of the PC representations within a reasonable error threshold through systematic convergence studies. In addition to characterizing the distributions of model observables, we compute probabilities of excess CO2 leakage. Moreover, we consider the injection rate as a design parameter and compute an optimum injection rate that ensures that the risk of excess pressure buildup at the leaky well remains below acceptable levels. We also provide a comprehensive analysis of sensitivities of CO2 leakage, where we compute the contributions of the random parameters, and their interactions, to the variance by computing first, second, and total order Sobol’ indices.
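For a polynomial chaos expansion with an orthogonal basis, Sobol' indices follow directly from the PC coefficients: the variance contribution of each multi-index is the squared coefficient times the basis norms, and grouping contributions by which variables they involve gives first-order and interaction indices. A minimal sketch for a toy two-variable Legendre expansion; the coefficients are invented for illustration, not taken from the CO2 study.

```python
import numpy as np

# Toy PC surrogate on Uniform(-1, 1)^2 with Legendre basis P_i(x1) * P_j(x2);
# the coefficients below are invented for illustration
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 2): 0.5, (1, 1): 0.3}

def legendre_sq_norm(k):
    # E[P_k(xi)^2] = 1 / (2k + 1) for xi ~ Uniform(-1, 1)
    return 1.0 / (2 * k + 1)

# variance contribution of each basis term (the mean term carries none)
contrib = {(i, j): c**2 * legendre_sq_norm(i) * legendre_sq_norm(j)
           for (i, j), c in coeffs.items() if (i, j) != (0, 0)}
total_var = sum(contrib.values())

# first-order indices: terms involving only one variable; interaction: both
S1 = sum(v for (i, j), v in contrib.items() if i > 0 and j == 0) / total_var
S2 = sum(v for (i, j), v in contrib.items() if i == 0 and j > 0) / total_var
S12 = sum(v for (i, j), v in contrib.items() if i > 0 and j > 0) / total_var
```

By construction the first-order and interaction indices partition the total variance, so they sum to one; no sampling is needed once the PC coefficients are known, which is the practical appeal of the non-intrusive approach.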
Brunner, S.
1997-08-01
Ion temperature gradient (ITG)-related instabilities are studied in tokamak-like plasmas with the help of a new global eigenvalue code. Ions are modelled in the frame of gyrokinetic theory so that finite Larmor radius effects of these particles are retained to all orders. Non-adiabatic trapped electron dynamics is taken into account through the bounce-averaged drift kinetic equation. Assuming electrostatic perturbations, the system is closed with the quasineutrality relation. Practical methods are presented which make this global approach feasible. These include a non-standard wave decomposition compatible with the curved geometry as well as adapting an efficient root finding algorithm for computing the unstable spectrum. These techniques are applied to a low pressure configuration given by a large aspect ratio torus with circular, concentric magnetic surfaces. Simulations from a linear, time evolution, particle in cell code provide a useful benchmark. Comparisons with local ballooning calculations for different parameter scans enable further validation while illustrating the limits of that representation at low toroidal wave numbers or for non-interchange-like instabilities. The stabilizing effect of negative magnetic shear is also considered, in which case the global results show not only an attenuation of the growth rate but also a reduction of the radial extent induced by a transition from the toroidal- to the slab-ITG mode. Contributions of trapped electrons to the ITG instability as well as the possible coupling to the trapped electron mode are clearly brought to the fore. (author) figs., tabs., 69 refs
Saad, Bilal Mohammed; Alexanderian, Alen; Prudhomme, Serge; Knio, Omar
2017-01-01
This work focuses on the simulation of CO2 storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO2 leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic model response. A non-intrusive approach is used to determine the PC coefficients. We establish the accuracy of the PC representations within a reasonable error threshold through systematic convergence studies. In addition to characterizing the distributions of model observables, we compute probabilities of excess CO2 leakage. Moreover, we consider the injection rate as a design parameter and compute an optimum injection rate that ensures that the risk of excess pressure buildup at the leaky well remains below acceptable levels. We also provide a comprehensive analysis of sensitivities of CO2 leakage, where we compute the contributions of the random parameters, and their interactions, to the variance by computing first, second, and total order Sobol’ indices.
Borghesi, Fabrizio; Migani, Francesca; Andreotti, Alessandro; Baccetti, Nicola; Bianchi, Nicola; Birke, Manfred; Dinelli, Enrico
2016-02-15
Assessing trace metal pollution using feathers has long attracted the attention of ecotoxicologists as a cost-effective and non-invasive biomonitoring method. In order to interpret the concentrations in feathers in light of the external contamination due to lithic residue particles, we adopted a novel geochemical approach. We analysed 58 element concentrations in feathers of wild Eurasian Greater Flamingo Phoenicopterus roseus fledglings, from 4 colonies in Western Europe (Spain, France, Sardinia, and North-eastern Italy) and one group of adults from a zoo. In addition, 53 elements were assessed in soil collected close to the nesting islets. This enabled us to compare a wide selection of metals among the colonies, highlighting environmental anomalies and tackling possible causes of misinterpretation of feather results. Most trace elements in feathers (Al, Ce, Co, Cs, Fe, Ga, Li, Mn, Nb, Pb, Rb, Ti, V, Zr, and REEs) were of external origin. Some elements could be constitutive (Cu, Zn) or significantly bioaccumulated (Hg, Se) in flamingos. For As, Cr, and to a lesser extent Pb, it seems that bioaccumulation could potentially be revealed by highly exposed birds, provided feathers are well cleaned. This comprehensive study provides a new dataset and confirms that Hg has been accumulated in feathers at all sites to some extent, with particular concern for the Sardinian colony, which should be studied further including Cr. The Spanish colony appears critical for As pollution and should be urgently investigated in depth. Feathers collected from North-eastern Italy were the hardest to clean, but our methods allowed biological interpretation of Cr and Pb. Our study highlights the importance of external contamination when analysing trace elements in feathers and advances methodological recommendations in order to reduce the presence of residual particles carrying elements of external origin. Geochemical data, when available, can represent a valuable tool for a correct
A multidimensional approach to assessing the elemental status of an organism
Akimov, S.; Vedeneev, P.; Kiyaeva, E.; Laryushina, I.; Notova, S.; Pishchukhin, A.
2017-10-01
Multidimensional space is a convenient means of representing large amounts of information. This fully applies to information on the elemental status of population groups. The novelty of the approach of this study rests on the fact that the weight parts of all elements of the periodic table together make up the weight of a person. In a multidimensional space, organisms with the same weight have the same sum of coordinates and are located on one hyperplane. Since for any norm it is important to have ratios between the quantities that reflect the content of the chemical elements, a ray becomes the standard, for each point of which the ratio between the coordinates is preserved. Large amounts of data will then adequately represent the proximity of an organism to one or another class, that is, increasing the accuracy of diagnosing the elemental status. The algorithm for diagnosing, therefore, should include finding the corresponding hyperplane, the point of intersection with the ray, and determining the proximity to this point.
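The hyperplane-and-ray construction described above can be sketched directly: a reference composition defines the standard ray, and an organism's deviation is its distance to the point where that ray crosses the organism's equal-total-weight hyperplane. The reference profile and subject values below are hypothetical.

```python
import numpy as np

def status_distance(sample, reference):
    """Distance from a composition vector to the reference ray, measured at
    the point where the ray meets the sample's equal-total-weight hyperplane
    (i.e. the ray point with the same coordinate sum as the sample).
    """
    sample = np.asarray(sample, float)
    reference = np.asarray(reference, float)
    # scale the reference ray onto the hyperplane sum(x) = sum(sample)
    point_on_ray = reference * (sample.sum() / reference.sum())
    return np.linalg.norm(sample - point_on_ray)

reference = np.array([60.0, 25.0, 10.0, 5.0])   # hypothetical normative profile
subject = np.array([58.0, 26.0, 11.0, 5.0])     # same total, shifted ratios
```

Any point on the reference ray scores zero regardless of total weight, since only the ratios between coordinates matter; deviations from those ratios give a positive distance.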
Simulations of singularity dynamics in liquid crystal flows: A C0 finite element approach
Lin Ping; Liu Chun
2006-01-01
In this paper, we present a C0 finite element method for a 2-d hydrodynamic liquid crystal model which is simpler than existing C1 element methods and mixed element formulations. The energy law is formally justified and the energy decay is used as a validation tool for our numerical computation. A splitting method, combined with only a few fixed-point iterations for the penalty term of the director field, is applied to reduce the size of the stiffness matrix and to keep the stiffness matrix time-independent. The latter avoids solving a linear system at every time step and largely reduces the computational time, especially when direct linear system solvers are used. Our approach is verified by comparing its computational results with those obtained by C1 elements and by the mixed formulation. Through numerical experiments with a few other splittings and explicit-implicit strategies, we recommend a fast and reliable algorithm for this model. A number of examples are computed to demonstrate the algorithm.
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
Jordi Marcé-Nogué
2017-10-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep
2017-01-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
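The variable-generation step of the intervals' method lends itself to a short sketch: bin per-element stress values into intervals and express each interval as a percentage of total area, which makes the result insensitive to mesh density. Interval edges and function names below are illustrative:

```python
import numpy as np

def interval_variables(stress, area, edges):
    """Intervals' method: percentage of total area within each stress interval.

    stress : per-element stress values from a finite element model
    area   : per-element areas (weighting by area is what makes the
             variables independent of the mesh characteristics)
    edges  : interval boundaries, e.g. [0, 10, 20, 30] -> 3 variables
    """
    stress = np.asarray(stress, dtype=float)
    area = np.asarray(area, dtype=float)
    idx = np.digitize(stress, edges[1:-1])   # interval index per element
    totals = np.array([area[idx == k].sum() for k in range(len(edges) - 1)])
    return 100.0 * totals / area.sum()
```

One such vector per specimen can then be fed into a PCA or another multivariate analysis, as the abstract describes.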
Cunha, Katia; Smith, Verne V.; Hasselquist, Sten; Souto, Diogo; Shetrone, Matthew D.; Allende Prieto, Carlos; Bizyaev, Dmitry; Frinchaboy, Peter; García-Hernández, D. Anibal; Holtzman, Jon; Johnson, Jennifer A.; Jönsson, Henrik; Majewski, Steven R.; Mészáros, Szabolcs; Nidever, David; Pinsonneault, Mark; Schiavon, Ricardo P.; Sobeck, Jennifer; Skrutskie, Michael F.; Zamora, Olga; Zasowski, Gail; Fernández-Trincado, J. G.
2017-08-01
Nine Ce II lines have been identified and characterized within the spectral window observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey (between 1.51 and 1.69 μm). At solar metallicities, cerium is an element that is produced predominantly as a result of the slow capture of neutrons (the s-process) during asymptotic giant branch stellar evolution. The Ce II lines were identified using a combination of a high-resolution (R = λ/Δλ = 100,000) Fourier Transform Spectrometer (FTS) spectrum of α Boo and an APOGEE spectrum (R = 22,400) of a metal-poor, but s-process enriched, red giant (2M16011638-1201525). Laboratory oscillator strengths are not available for these lines. Astrophysical gf-values were derived using α Boo as a standard star, with the absolute cerium abundance in α Boo set by using optical Ce II lines that have precise published laboratory gf-values. The near-infrared Ce II lines identified here are also analyzed, as consistency checks, in a small number of bright red giants using archival FTS spectra, as well as a small sample of APOGEE red giants, including two members of the open cluster NGC 6819, two field stars, and seven metal-poor N- and Al-rich stars. The conclusion is that this set of Ce II lines can be detected and analyzed in a large fraction of the APOGEE red giant sample and will be useful for probing chemical evolution of the s-process products in various populations of the Milky Way.
Nagaso, Masaru; Komatitsch, Dimitri; Moysan, Joseph; Lhuillier, Christian
2018-01-01
The ASTRID project, a French fourth-generation sodium-cooled nuclear reactor, is currently under development by the French Alternative Energies and Atomic Energy Commission (CEA). In this project, the development of techniques for monitoring the reactor during operation has been identified as a major issue for enhancing plant safety. Ultrasonic measurement techniques (e.g. thermometry, visualization of internal objects) are regarded as powerful inspection tools for sodium-cooled fast reactors (SFR), including ASTRID, because liquid sodium is opaque. Inside a sodium cooling circuit, the medium becomes heterogeneous owing to the complex flow state, especially during operation, and the effect of this heterogeneity on acoustic propagation is not negligible. Verification experiments are therefore necessary for the development of component technologies, but such experiments with liquid sodium tend to be relatively large-scale. This is why numerical simulation methods are essential, both to precede real experiments and to supplement the limited number of experimental results. Although various numerical methods have been applied to wave propagation in liquid sodium, none has yet been verified for three-dimensional heterogeneity. Moreover, since the inside of a reactor core is a complex acousto-elastic coupled region, it has also been difficult to simulate such problems with conventional methods. The objective of this study is to address these two points by applying the three-dimensional spectral element method. In this paper, our initial results of a three-dimensional simulation study on a heterogeneous medium (the first point) are shown. To represent the heterogeneity of liquid sodium, a four-dimensional temperature field (three spatial dimensions and one temporal) calculated by computational fluid dynamics (CFD) with Large-Eddy Simulation was applied instead of the conventional approach (i.e. a Gaussian random field). This three-dimensional numerical
Hopkins, A.; Stewart, C.; Cabble, K.
1994-01-01
The primary purpose of Project Chariot was to investigate the technical problems and assess the effects of the proposed harbor excavation using nuclear explosives in Alaska. However, no nuclear devices were ever brought to the Project Chariot site. Between 1959 and 1961, various environmental tests were conducted. During the course of these environmental studies, the U.S. Geological Survey (USGS) granted the use of up to 5 curies of radioactive material at the Chariot site in Cape Thompson, Alaska; however, only 26 millicuries were ever actually used. The tests were conducted in 12 test plots, which were later gathered together and mixed with in situ soils, generating approximately 1,600 cubic feet of soil. This area was then covered with four feet of clean soil, creating a mound. In 1962, the site was abandoned. A researcher at the University of Alaska at Fairbanks obtained information regarding the tests conducted and the materials left at the Project Chariot site. In response to concerns raised by the publication of this information, the Department of Energy (DOE) decided that total remediation of the mound should be completed within the year. During the summer of 1993, IT Corporation carried out the assessment and remediation of the Project Chariot site using a streamlined approach to waste-site decision making called the Observational Approach (OA), adding elements of the new DOE Streamlined Approach for Environmental Restoration (SAFER). This remediation and the approach used are described.
Dib, Julián R.; Wagenknecht, Martin; Farías, María E.; Meinhardt, Friedhelm
2015-01-01
The term plasmid was originally coined for circular, extrachromosomal genetic elements. Today, plasmids are widely recognized not only as important factors facilitating genome restructuring but also as vehicles for the dissemination of beneficial characters within bacterial communities. Plasmid diversity has been uncovered by means of culture-dependent or -independent approaches, such as endogenous or exogenous plasmid isolation as well as PCR-based detection or transposon-aided capture, respectively. High-throughput sequencing has made it possible to cover total plasmid populations in a given environment, i.e., the plasmidome, and to address the quality and significance of self-replicating genetic elements. Since such efforts were and still are rather restricted to circular molecules, here we put equal emphasis on the linear plasmids which, despite their frequent occurrence in a large number of bacteria, are largely neglected in prevalent plasmidome conceptions. PMID:26074886
Identification of tipping elements of the Indian Summer Monsoon using climate network approach
Stolbova, Veronika; Surovyatkina, Elena; Kurths, Jurgen
2015-04-01
Spatial and temporal variability of rainfall is a vital question for more than one billion people inhabiting the Indian subcontinent. Indian Summer Monsoon (ISM) rainfall is crucial for India's economy, social welfare and environment, and large efforts are being put into predicting it. For predictability of the ISM, it is crucial to identify tipping elements - regions over the Indian subcontinent that play a key role in the spatial organization of the Indian monsoon system. Here, we use the climate network approach to identify such tipping elements of the ISM. First, we build climate networks of extreme rainfall, surface air temperature and pressure over the Indian subcontinent for the pre-monsoon, monsoon and post-monsoon seasons. We construct the network of extreme rainfall events using observational satellite data from 1998 to 2012 from the Tropical Rainfall Measuring Mission (TRMM 3B42V7) and reanalysis gridded daily rainfall data for a period of 57 years (1951-2007) (Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources, APHRODITE). For the networks of surface air temperature and pressure fields, we use reanalysis data provided by the National Center for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR). Second, we filter the data by coarse-graining the network through network measures and identify tipping regions of the ISM. Finally, we compare the results of the network analysis with surface wind fields and show that the occurrence of the tipping elements is mostly caused by the monsoonal wind circulation, the migration of the Intertropical Convergence Zone (ITCZ) and the Westerlies. We conclude that the climate network approach makes it possible to select the most informative regions for the ISM, providing a realistic description of ISM dynamics with fewer data, and also helps to identify tipping regions of the ISM. Obtained tipping elements deserve a
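The first step (building a climate network from gridded series) can be sketched as follows. Note the hedge: studies of this kind typically link grid points by event synchronization of extreme rainfall; the plain correlation threshold used here is a simplifying assumption to keep the sketch short, and the threshold value is illustrative:

```python
import numpy as np

def climate_network(series, threshold=0.5):
    """Build an unweighted climate network from gridded time series.

    series : (n_sites, n_times) array, one series per grid point.
    Two sites are linked when the absolute correlation of their series
    exceeds `threshold`. High-degree sites are candidates for regions
    that organize the large-scale dynamics.
    """
    c = np.corrcoef(series)            # pairwise correlation matrix
    adj = np.abs(c) > threshold        # thresholded adjacency matrix
    np.fill_diagonal(adj, False)       # no self-links
    degree = adj.sum(axis=1)           # simple network measure per site
    return adj, degree
```

Coarse-graining then proceeds on network measures such as the degree field rather than on the raw data.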
Kaldvee, K.; Nefedova, A.V. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Fedorenko, S.G. [Voevodsky Institute of Chemical Kinetics and Combustion SB RAS, Novosibirsk 630090 (Russian Federation); Vanetsev, A.S. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); Orlovskaya, E.O. [Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); Puust, L.; Pärs, M.; Sildos, I. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Ryabova, A.V. [Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); National Research Nuclear University Moscow Engineering Physics Institute, Kashirskoe Highway, 31, Moscow 115409 (Russian Federation); Orlovskii, Yu.V., E-mail: orlovski@Lst.gpi.ru [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation)
2017-03-15
The fluorescence kinetics and fluorescence intensity ratio (FIR) methods for contactless optical temperature measurement in the NIR spectral range with Nd³⁺-doped YAG micro- and YPO₄ nanocrystals are considered and their problems are revealed. The requirements for a good temperature sensor based on RE-doped crystalline nanoparticles are formulated.
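For the FIR part, the standard two-level Boltzmann relation for thermally coupled emitting levels can be inverted for temperature. This is the textbook form of the method, not necessarily the exact calibration used in the paper, and the numerical values below are illustrative:

```python
import math

K_B = 0.695  # Boltzmann constant in cm^-1 per kelvin

def fir_temperature(ratio, delta_e_cm, c_const):
    """Invert the standard FIR relation R = C * exp(-dE / (kB * T)).

    ratio      : measured intensity ratio of two thermally coupled lines
    delta_e_cm : energy gap between the emitting levels, in cm^-1
    c_const    : pre-exponential constant from a one-point calibration
    Returns the absolute temperature in kelvin.
    """
    return delta_e_cm / (K_B * math.log(c_const / ratio))
```

The sensitivity of such a sensor grows with the energy gap, which is one of the trade-offs the abstract's "requirements" address.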
Mechanical spectral shift reactor
Sherwood, D.G.; Wilson, J.F.; Salton, R.B.; Fensterer, H.F.
1981-01-01
A mechanical spectral shift reactor comprises apparatus for inserting and withdrawing water displacer elements from the reactor core for selectively changing the water-moderator volume in the core, thereby changing the reactivity of the core. The apparatus includes drive mechanisms for moving the displacer elements relative to the core and guide mechanisms for guiding the displacer rods through the reactor vessel.
Mechanical spectral shift reactor
Sherwood, D.G.; Wilson, J.F.; Salton, R.B.; Fensterer, H.F.
1982-01-01
A mechanical spectral shift reactor comprises apparatus for inserting and withdrawing water displacer elements from the reactor core for selectively changing the water-moderator volume in the core thereby changing the reactivity of the core. The apparatus includes drive mechanisms for moving the displacer elements relative to the core and guide mechanisms for guiding the displacer rods through the reactor vessel. (author)
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce projection mass densities (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material-density volume. The decomposition is obtained by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing the data, which is why a new data fidelity term is used in this paper to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose the materials of a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material-density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
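The two data-fidelity terms under comparison can be written down compactly. This is a sketch only: the spectral forward model and the regularized Gauss-Newton loop are omitted, and the 1/y weighting in the WLS term is a common choice for counting data that we assume here:

```python
import numpy as np

def wls_fidelity(y, y_model):
    """Weighted least squares data term (Gaussian noise model).

    Weights 1/y approximate the inverse variance of photon-count data."""
    y, m = np.asarray(y, dtype=float), np.asarray(y_model, dtype=float)
    return 0.5 * np.sum((y - m) ** 2 / np.maximum(y, 1.0))

def kl_fidelity(y, y_model):
    """Kullback-Leibler distance between measured and modelled counts.

    Equals the negative Poisson log-likelihood up to a constant, which is
    why it is the natural data term for photon-limited acquisitions."""
    y, m = np.asarray(y, dtype=float), np.asarray(y_model, dtype=float)
    return np.sum(m - y + y * np.log(np.maximum(y, 1e-12) / np.maximum(m, 1e-12)))
```

For large counts the two terms behave similarly; the reported advantage of KL appears precisely in the low-count regime where the Gaussian approximation to Poisson noise breaks down.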
Kaiser, C.; Roll, K.; Volk, W.
2017-09-01
In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which is a disadvantage especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panel is initially split into individual segments. By parametrizing each segment and assigning basic geometric shapes, the outline of the part is approximated. Based on this, hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the basic geometric shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-similar formulation that includes a reference dataset of various basic geometric shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach into a planning system, an efficient optimization of designing hemming processes will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.
Gaetani, G.A.; Cohen, A.L.; Wang, Z.; Crusius, John
2011-01-01
This study presents a new approach to coral thermometry that deconvolves the influence of water temperature on skeleton composition from that of “vital effects”, and has the potential to provide estimates of growth temperatures that are accurate to within a few tenths of a degree Celsius from both tropical and cold-water corals. Our results provide support for a physico-chemical model of coral biomineralization, and imply that Mg2+ substitutes directly for Ca2+ in biogenic aragonite. Recent studies have identified Rayleigh fractionation as an important influence on the elemental composition of coral skeletons. Daily, seasonal and interannual variations in the amount of aragonite precipitated by corals from each “batch” of calcifying fluid can explain why the temperature dependencies of elemental ratios in coral skeleton differ from those of abiogenic aragonites, and are highly variable among individual corals. On the basis of this new insight into the origin of “vital effects” in coral skeleton, we developed a Rayleigh-based, multi-element approach to coral thermometry. Temperature is resolved from the Rayleigh fractionation signal by combining information from multiple element ratios (e.g., Mg/Ca, Sr/Ca, Ba/Ca) to produce a mathematically over-constrained system of Rayleigh equations. Unlike conventional coral thermometers, this approach does not rely on an initial calibration of coral skeletal composition to an instrumental temperature record. Rather, considering coral skeletogenesis as a biologically mediated, physico-chemical process provides a means to extract temperature information from the skeleton composition using the Rayleigh equation and a set of experimentally determined partition coefficients. Because this approach is based on a quantitative understanding of the mechanism that produces the “vital effect” it should be possible to apply it both across scleractinian species and to corals growing in vastly different environments. Where
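The "mathematically over-constrained system of Rayleigh equations" can be illustrated with a toy solver: each element ratio contributes one equation in the two unknowns, temperature T and the fraction F of aragonite precipitated from each batch of calcifying fluid. The functional form of the temperature dependence and all coefficients below are illustrative placeholders, not the paper's calibrated partition coefficients:

```python
import numpy as np

def rayleigh_temperature(measured, coeffs, t_grid=None, f_grid=None):
    """Grid-search solution of an over-constrained set of Rayleigh equations.

    For each element ratio i the assumed model is
        ln R_i(T, F) = a_i + b_i / T + (k_i - 1) * ln(1 - F)
    with (a_i, b_i, k_i) partition-coefficient parameters per ratio
    (e.g. Mg/Ca, Sr/Ca, Ba/Ca). Three or more ratios over-constrain the
    two unknowns T (kelvin) and F (mass fraction precipitated).
    """
    if t_grid is None:
        t_grid = np.linspace(270.0, 310.0, 401)
    if f_grid is None:
        f_grid = np.linspace(0.05, 0.95, 181)
    best = (None, None, np.inf)
    for T in t_grid:
        for F in f_grid:
            pred = [a + b / T + (k - 1.0) * np.log(1.0 - F)
                    for a, b, k in coeffs]
            err = sum((np.log(r) - p) ** 2 for r, p in zip(measured, pred))
            if err < best[2]:
                best = (T, F, err)
    return best[:2]
```

The key design point matches the abstract: no calibration against an instrumental temperature record is needed, only experimentally determined partition coefficients, because temperature and the Rayleigh (vital-effect) signal are separated by using several ratios at once.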
Enhanced phytoextraction of germanium and rare earth elements - a rhizosphere-based approach
Wiche, Oliver
2016-04-01
Germanium (Ge) and rare earth elements (REEs) are economically valuable raw materials that have become an integral part of our modern high-tech society. While most of these elements are not actually rare in terms of general amounts in the earth's crust, they are rarely found in sufficient abundances in single locations for their mining to be economically viable. The average concentration of Ge in soils is estimated at 1.6 μg g⁻¹. The REEs comprise a group of 16 elements, including La, the group of lanthanides and Y, that are abundant in the earth's crust with concentrations varying from 35 μg g⁻¹ (La), 40 μg g⁻¹ (Nd), 6 μg g⁻¹ (Gd) and 3.5 μg g⁻¹ (Er) to 0.5 μg g⁻¹ (Tm). Thus, a promising way to improve the supply of these elements could be phytomining. Unfortunately, the bioavailability of Ge and REEs in soils appears to be low, in particular in neutral or alkaline soils. A sequential dissolution analysis of 120 soil samples taken from the A-horizons of soils in the area of Freiberg (Saxony, Germany) revealed that only 0.2% of total Ge and about 0.5% of La, Nd, Gd and Er of bulk concentrations were easily accessible by leaching with NH₄-acetate (pH 7). Most of the investigated elements were bound to Fe-/Mn-oxides and silicates and were therefore only poorly available for plant uptake. Here we report an environmentally friendly approach for enhanced phytoextraction of Ge and REEs from soils using mixed cultures of plant species with efficient mechanisms for the acquisition of nutrients in the rhizosphere. The rhizosphere is characterized as the zone of soil surrounding a plant root that exhibits gradients in chemical, physical and biological soil properties driven by rhizodeposits like carboxylates and protons. Some species like white lupin (Lupinus albus) are able to excrete large amounts of organic acid anions (predominantly citrate and malate) and show a particularly high potential for the acidification of the rhizosphere. In our experiments, mixed cultures
Nathan, Usha; Premadas, A.
2013-01-01
A new approach to beryl mineral sample decomposition and solution preparation suitable for elemental analysis by ICP-AES and FAAS is described. For complete sample decomposition four different procedures are employed: (i) ammonium bifluoride alone, (ii) a mixture of ammonium bifluoride and ammonium sulphate, (iii) a powdered mixture of NaF and KHF₂ in a 1:3 ratio, and (iv) acid digestion with a hydrofluoric and nitric acid mixture, with the residue fused with a powdered mixture of NaF and KHF₂. Elements like Be, Al, Fe, Mn, Ti, Cr, Ca, Mg and Nb are determined by ICP-AES, and Na, K, Rb and Cs are determined by FAAS. Fusion with 2 g of ammonium bifluoride flux alone is sufficient for the complete decomposition of a 0.400 g sample. The values obtained by this decomposition procedure agree well with the reported method. Accuracy of the proposed method was checked by analyzing synthetic samples prepared in the laboratory by mixing high-purity oxides in a chemical composition similar to natural beryl mineral. The accuracy of the method is very good, and the reproducibility is characterized by an RSD of 1 to 4% for the elements studied. (author)
Mechanical spectral shift reactor
Wilson, J.F.; Sherwood, D.G.
1982-01-01
A mechanical spectral shift reactor comprises a reactor core having fuel assemblies accommodating both water displacer elements and neutron-absorbing control rods for selectively changing the volume of water-moderator in the core. The fuel assemblies with displacer and control rods are arranged in alternating fashion so that one displacer element drive mechanism may move displacer elements in more than one fuel assembly without interfering with the movement of control rods of the corresponding control rod drive mechanisms. (author)
Quality Assurance of Cancer Study Common Data Elements Using A Post-Coordination Approach.
Jiang, Guoqian; Solbrig, Harold R; Prud'hommeaux, Eric; Tao, Cui; Weng, Chunhua; Chute, Christopher G
2015-01-01
Domain-specific common data elements (CDEs) are emerging as an effective approach to standards-based clinical research data storage and retrieval. A limiting factor, however, is the lack of robust automated quality assurance (QA) tools for the CDEs in clinical study domains. The objectives of the present study are to prototype and evaluate a QA tool for the study of cancer CDEs using a post-coordination approach. The study starts by integrating the NCI caDSR CDEs and The Cancer Genome Atlas (TCGA) data dictionaries in a single Resource Description Framework (RDF) data store. We designed a compositional expression pattern based on the Data Element Concept model structure informed by ISO/IEC 11179, and developed a transformation tool that converts the pattern-based compositional expressions into the Web Ontology Language (OWL) syntax. Invoking reasoning and explanation services, we tested the system utilizing the CDEs extracted from two TCGA clinical cancer study domains. The system could automatically identify duplicate CDEs, and detect CDE modeling errors. In conclusion, compositional expressions not only enable reuse of existing ontology codes to define new domain concepts, but also provide an automated mechanism for QA of terminological annotations for CDEs.
Large-scaled biomonitoring of trace-element air pollution: goals and approaches
Wolterbeek, H.T.
2000-01-01
Biomonitoring is often used in multi-parameter approaches, especially in larger-scaled surveys. The information obtained may consist of thousands of data points, which can be processed in a variety of mathematical routines to permit a condensed and strongly-smoothed presentation of results and conclusions. Although reports on larger-scaled biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species and the necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. In what sense and to what extent these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out. Bearing in mind the above considerations, the present paper focuses on the goals, quality and approaches of larger-scaled biomonitoring surveys on trace element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)
Matrix Elements of One- and Two-Body Operators in the Unitary Group Approach (I): Formalism
DAI Lian-Rong; PAN Feng
2001-01-01
The tensor algebraic method is used to derive general one- and two-body operator matrix elements within the U(n) representations, which are useful in the unitary group approach to configuration interaction problems of quantum many-body systems.
Rompotis, Dimitrios
2016-02-01
In this work, a single-shot temporal metrology scheme operating in the vacuum-extreme ultraviolet spectral range has been designed and experimentally implemented. Utilizing an anti-collinear geometry, a second-order intensity autocorrelation measurement of a vacuum ultraviolet pulse can be performed by encoding temporal delay information in the beam propagation coordinate. An ion-imaging time-of-flight spectrometer offering micrometer resolution has been set up for this purpose. This instrument enables the detection of a magnified image of the spatial distribution of ions exclusively generated by direct two-photon absorption in the combined counter-propagating pulse focus, and thus the second-order intensity autocorrelation measurement on a single-shot basis. Additionally, an intense VUV light source based on high-harmonic generation has been experimentally realized. It delivers intense sub-20 fs Ti:Sa fifth-harmonic pulses utilizing a loose-focusing geometry in a long Ar gas cell. The VUV pulses centered at 161.8 nm reach pulse energies of 1.1 μJ per pulse, while the corresponding pulse duration is measured with a second-order, fringe-resolved autocorrelation scheme to be 18 ± 1 fs on average. Non-resonant two-photon ionization of Kr and Xe and three-photon ionization of Ne verify the fifth-harmonic pulse intensity and indicate the feasibility of multi-photon VUV pump/VUV probe studies of ultrafast atomic and molecular dynamics. Finally, the extended functionality of the counter-propagating pulse metrology approach is demonstrated by a single-shot VUV pump/VUV probe experiment aiming at the investigation of the ultrafast dissociation dynamics of O₂ excited in the Schumann-Runge continuum at 162 nm.
Cally, Paul S.; Xiong, Ming
2018-01-01
Fast sausage modes in solar magnetic coronal loops are only fully contained in unrealistically short dense loops. Otherwise they are leaky, losing energy to their surroundings as outgoing waves. This causes any oscillation to decay exponentially in time. Simultaneous observations of both period and decay rate therefore reveal the eigenfrequency of the observed mode, and potentially give insight into the tubes' nonuniform internal structure. In this article, a global spectral description of the oscillations is presented that results in an implicit matrix eigenvalue equation where the eigenvalues are associated predominantly with the diagonal terms of the matrix. The off-diagonal terms vanish identically if the tube is uniform. A linearized perturbation approach, applied with respect to a uniform reference model, is developed that makes the eigenvalues explicit. The implicit eigenvalue problem is easily solved numerically, though, and it is shown that knowledge of the real and imaginary parts of the eigenfrequency is sufficient to determine the width and density contrast of a boundary layer over which the tubes' enhanced internal densities drop to ambient values. Linearized density kernels are developed that show sensitivity only to the extreme outside of the loops for radial fundamental modes, especially for small density enhancements, with no sensitivity to the core. Higher radial harmonics do show some internal sensitivity, but these will be more difficult to observe. Only kink modes are sensitive to the tube centres. Variation in internal and external Alfvén speed along the loop is shown to have little effect on the fundamental dimensionless eigenfrequency, though the associated eigenfunction becomes more compact at the loop apex as stratification increases, or may even displace from the apex.
Sánchez-Sesma, Francisco J.
2017-07-01
Microtremor H/V spectral ratio (MHVSR) has gained popularity to assess the dominant frequency of soil sites. It requires measurement of ground motion due to seismic ambient noise at a site and relatively simple processing. Theory asserts that the ensemble average of the autocorrelation of motion components belonging to a diffuse field at a given receiver gives the directional energy densities (DEDs), which are proportional to the imaginary parts of the Green's function components when source and receiver are the same point and the directions of force and response coincide. Therefore, the MHVSR can be modeled as √(2 Im G11 / Im G33), where Im G11 and Im G33 are the imaginary parts of the Green's functions at the load point for the horizontal (sub-index 1) and vertical (sub-index 3) components, respectively. This connection has physical implications that emerge from the DED-force duality and allows understanding the behavior of the MHVSR. For a given model, the imaginary parts of the Green's functions are integrals along a radial wavenumber. To deal with these integrals, we have used either the popular discrete wavenumber method or Cauchy's residue theorem at the poles that account for surface-wave normal modes, giving the contributions of Rayleigh and Love waves. For the retrieval of the velocity structure, one can minimize the weighted differences between observations and calculated values within an inversion scheme. In this research we used simulated annealing, but other optimization techniques can be used as well. This last approach allows computing separately the contributions of different wave types. An example is presented for the mouth of the Andarax River at Almería, Spain.
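The forward model quoted above is a one-liner once the Green's-function imaginary parts are available per frequency (e.g. from the discrete wavenumber method, which this sketch does not implement):

```python
import numpy as np

def mhvsr(im_g11, im_g33):
    """Diffuse-field model of the microtremor H/V spectral ratio.

    Per frequency:  H/V = sqrt(2 * Im G11 / Im G33),
    where Im G11 and Im G33 are the imaginary parts of the Green's
    function at the load point for one horizontal and the vertical
    component; the factor 2 accounts for both horizontal components.
    """
    return np.sqrt(2.0 * np.asarray(im_g11, dtype=float)
                   / np.asarray(im_g33, dtype=float))
```

Inversion for velocity structure then amounts to minimizing the misfit between this modeled curve and the observed H/V ratio over the model parameters, e.g. with simulated annealing as in the abstract.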
Brian Johnson
2015-01-01
Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level, since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges or different image acquisition dates).
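The boundary down-weighting idea can be sketched as follows (the exponential weight function and the `scale` parameter are assumptions for illustration; the paper's actual SWSF weighting may differ):

```python
import numpy as np

def swsf_mean(pixel_values, dist_to_boundary, scale=1.0):
    """Spatially weighted mean of LSR pixels inside one segment.

    Pixels on or near the segment boundary (small distance) receive
    low weight, reducing the influence of mixed boundary pixels.
    Assumed weight: w = 1 - exp(-d/scale), ~0 at the boundary, ->1 inside.
    """
    d = np.asarray(dist_to_boundary, dtype=float)
    w = 1.0 - np.exp(-d / scale) + 1e-12   # epsilon avoids all-zero weights
    return np.average(np.asarray(pixel_values, dtype=float), weights=w)

# A boundary pixel (distance 0) barely affects the segment mean:
# swsf_mean([10.0, 0.0], [5.0, 0.0]) is close to 10.
```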
Elements of integrated care approaches for older people: a review of reviews.
Briggs, Andrew M; Valentijn, Pim P; Thiyagarajan, Jotheeswaran A; Araujo de Carvalho, Islene
2018-04-07
The World Health Organization (WHO) recently proposed an Integrated Care for Older People approach to guide health systems and services in better supporting functional ability of older people. A knowledge gap remains in the key elements of integrated care approaches used in health and social care delivery systems for older populations. The objective of this review was to identify and describe the key elements of integrated care models for elderly people reported in the literature. Review of reviews using a systematic search method. A systematic search was performed in MEDLINE and the Cochrane database in June 2017. Reviews of interventions aimed at care integration at the clinical (micro), organisational/service (meso) or health system (macro) levels for people aged ≥60 years were included. Non-Cochrane reviews published before 2015 were excluded. Reviews were assessed for quality using the Assessment of Multiple Systematic Reviews (AMSTAR) 1 tool. Fifteen reviews (11 systematic reviews, of which six were Cochrane reviews) were included, representing 219 primary studies. Three reviews (20%) included only randomised controlled trials (RCT), while 10 reviews (65%) included both RCTs and non-RCTs. The region where the largest number of primary studies originated was North America (n=89, 47.6%), followed by Europe (n=60, 32.1%) and Oceania (n=31, 16.6%). Eleven (73%) reviews focused on clinical 'micro' and organisational 'meso' care integration strategies. The most commonly reported elements of integrated care models were multidisciplinary teams, comprehensive assessment and case management. Nurses, physiotherapists, general practitioners and social workers were the most commonly reported service providers. Methodological quality was variable (AMSTAR scores: 1-11). Seven (47%) reviews were scored as high quality (AMSTAR score ≥8). Evidence of elements of integrated care for older people focuses particularly on micro clinical care integration processes, while there
Ezer, Muhsin; Elwood, Seth A.; Jones, Bradley T.; Simeonsson, Josef B.
2006-01-01
The analytical utility of a tungsten (W)-coil atomization-laser-induced fluorescence (LIF) approach has been evaluated for trace level measurements of elemental chromium (Cr), arsenic (As), selenium (Se), antimony (Sb), lead (Pb), tin (Sn), copper (Cu), thallium (Tl), indium (In), cadmium (Cd), zinc (Zn) and mercury (Hg). Measurements of As, Cr, In, Se, Sb, Pb, Tl, and Sn were performed by laser-induced fluorescence using a single dye laser operating near 460 nm whose output was converted by frequency doubling and stimulated Raman scattering to wavelengths ranging from 196 to 286 nm for atomic excitation. Absolute limits of detection (LODs) of 1, 0.3, 0.3, 0.2, 1, 6, 1, 0.2 and 0.8 pg and concentration LODs of 100, 30, 30, 20, 100, 600, 100, 20, and 80 pg/mL were achieved for As, Se, Sb, Sn, In, Cu, Cr, Pb and Tl, respectively. Determinations of Hg, Pb, Zn and Cd were performed using two-color excitation approaches and resulted in absolute LODs of 2, 30, 5 and 0.6 pg, respectively, and concentration LODs of 200, 3000, 500 and 60 pg/mL, respectively. The sensitivities achieved by the W-coil LIF approaches compare well with those reported by W-coil atomic absorption spectrometry, graphite furnace atomic absorption spectrometry, and graphite furnace electrothermal atomization-LIF approaches. The accuracy of the approach was verified through the analysis of a multielement reference solution containing Sb, Pb and Tl which each had certified performance acceptance limits of 19.6-20.4 μg/mL. The determined concentrations were 20.05 ± 2.60, 20.70 ± 2.27 and 20.60 ± 2.46 μg/mL, for Sb, Pb and Tl, respectively. The results demonstrate that W-coil LIF provides good analytical performance for trace analyses due to its high sensitivity, linearity, and capability to measure multiple elements using a single tunable laser and suggest that the development of portable W-coil LIF instrumentation using compact, solid-state lasers is feasible
A Practical Approach to Governance and Optimization of Structured Data Elements.
Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto
2015-01-01
Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.
Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata
2018-05-09
Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum-curvature point of the traction-versus-γ curve as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblast (MEF) cells and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selecting an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
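The L2 (Tikhonov) regularization step and the maximum-curvature choice of γ can be sketched as follows (toy forward matrix G; in the real problem G maps tractions to substrate displacements via the elastic half-space solution, and the curvature criterion here is a simple discrete proxy):

```python
import numpy as np

def reg_solve(G, u, gamma):
    """L2-regularized traction estimate: minimizes ||G t - u||^2 + gamma*||t||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + gamma * np.eye(n), G.T @ u)

def pick_gamma(G, u, gammas):
    """Pick gamma* at the maximum-curvature point of log||t(gamma)|| vs gamma."""
    lognorm = np.log([np.linalg.norm(reg_solve(G, u, g)) for g in gammas])
    curv = np.abs(np.diff(lognorm, 2))   # discrete 2nd difference as curvature proxy
    return gammas[1 + np.argmax(curv)]

# Sanity check: with noise-free data and tiny gamma, the true field is recovered.
rng = np.random.default_rng(1)
G = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
t_true = rng.standard_normal(6)
t_est = reg_solve(G, G @ t_true, 1e-12)
```

With noisy displacements, sweeping `gammas` over several decades and taking the maximum-curvature point trades off residual fit against amplification of displacement noise, which is the role γ* plays in the abstract.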
A time-domain finite element boundary integral approach for elastic wave scattering
Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.
2018-04-01
The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance such as those in non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region, and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields huge improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.
Novel Approaches for Mutual Coupling Reduction among Vertical and Planar Monopole Elements
Isaac, Ayman A.
Modern wireless systems such as 4G LTE-A, RFID, Wi-Fi, WiMAX, and GPS utilize miniaturized antenna array elements to improve performance and reliability through diversity and to increase throughput using the spatial multiplexing schemes of MIMO systems. One original contribution of this thesis is to significantly reduce the complexity of traditional design approaches targeting mutual coupling reduction, such as metamaterials, defected ground plane structures and soft electromagnetic surfaces, using novel design alternatives. A decoupling network is proposed, consisting of a rectangular metallic ring along with two tuning strips printed on a dielectric substrate, surrounding a two-element monopole antenna array fed by a coplanar waveguide or microstrip structure. The array design offers a reduction in mutual coupling level of around 20 dB at 2.4 GHz as compared to the same array in which the two monopoles share the same ground plane but without the decoupling network. The array achieves a -10 dB S11 bandwidth of 0.63 GHz (2.12 GHz - 2.75 GHz) and a 0.24 GHz (2.33 GHz - 2.57 GHz) bandwidth in which S21 is less than -20 dB. A total realized gain of 1.6 to 1.69 dB is obtained in the frequency range over which S11 and S21 are less than -10 dB and -20 dB, respectively. The boresights of the radiation patterns of two vertical monopole wire antennas operating at 2.4 GHz and separated by 8 mm are shown to be orthogonal and inclined by 45° with respect to the horizon while maintaining the shape of the isolated single antenna element. Hence, we denote this design the descattered and decoupled orthogonal MIMO antenna array, which is reported for the first time in this dissertation, providing the ideal far-field radiation characteristics theoretically deemed for handheld MIMO devices. Moreover, two new approaches for the reduction of mutual coupling between two rectangular planar monopole antennas printed on a dielectric substrate with a partial ground plane are presented in this
Finite element analysis of an extended end-plate connection using the T-stub approach
Muresan, Ioana Cristina; Balc, Roxana [Technical University of Cluj-Napoca, Faculty of Civil Engineering. 15 C Daicoviciu Str., 400020, Cluj-Napoca (Romania)]
2015-03-10
Beam-to-column end-plate bolted connections are usually used as moment-resistant connections in steel framed structures. For this joint type, the deformability is governed by the deformation capacity of the column flange and end-plate under tension and elongation of the bolts. All these elements around the beam tension flange form the tension region of the joint, which can be modeled by means of equivalent T-stubs. In this paper a beam-to-column end-plate bolted connection is substituted with a T-stub of appropriate effective length and it is analyzed using the commercially available finite element software ABAQUS. The performance of the model is validated by comparing the behavior of the T-stub from the numerical simulation with the behavior of the connection as a whole. The moment-rotation curve of the T-stub obtained from the numerical simulation is compared with the behavior of the whole extended end-plate connection, obtained by numerical simulation, experimental tests and analytical approach.
An approach for the design of closure bolts of spent fuel elements transportation packages
Mattar Neto, Miguel; Miranda, Carlos A.J.; Fainer, Gerson
2009-01-01
The spent fuel element transportation packages must be designed for severe conditions, including significant fire and impact loads corresponding to hypothetical accident conditions. In general, these packages have large flat lids connected to cylindrical bodies by closure bolts that can be the weak link in the containment system. The bolted closure design depends on the geometrical characteristics of the flat lid and the cylindrical body, including their flanges, on the type of the gaskets and their dimensions, and on the number, strength, and tightness of the bolts. There are well-established procedures for the design of closure bolts used in pressure vessels and piping. They cannot be used directly in the design of bolts for transportation packages. Prior to the use of these procedures, it is necessary to consider the differences in the main loads (pressure for pressure vessels and piping, impact loads for transportation packages) and in the geometry (large flat lids are not used in pressure vessels and piping). So, this paper presents an approach for the design of the closure bolts of spent fuel element transportation packages considering the impact loads and the typical geometrical configuration of the transportation packages. (author)
Sergiu Ciprian Catinas
2015-07-01
A detailed theoretical and practical investigation of reinforced concrete elements is warranted by the recent techniques and methods implemented in the construction market. Moreover, a theoretical study supporting a better and faster approach is in demand nowadays due to the rapid development of computational techniques. This paper presents a study implementing the direct stiffness matrix method in a static calculation, capable of addressing phenomena related to different stages of loading, rapid changes of cross-section area, and changes of physical properties. The method is in demand because nowadays the Finite Element Method (FEM) is the only alternative for such a calculation, and FEM is considered expensive in terms of time and computational resources. The main goal of such a method is to create the moment-curvature diagram for the cross-section being analyzed. The paper presents some of the most important techniques, as well as new ideas, for creating the moment-curvature graph in the cross-sections considered.
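The direct stiffness method the paper advocates can be illustrated with the simplest possible case, a 1D bar assembled from 2-node elements (illustrative only; the paper's target is reinforced concrete sections with moment-curvature behaviour, but the assembly pattern is the same):

```python
import numpy as np

def assemble_bar_stiffness(n_elem, EA, L):
    """Global stiffness matrix of a uniform bar of length L and axial
    rigidity EA, split into n_elem equal 2-node elements (direct stiffness
    assembly: scatter each element matrix into the global matrix)."""
    le = L / n_elem
    ke = (EA / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element matrix
    K = np.zeros((n_elem + 1, n_elem + 1))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += ke
    return K

# Fixed-free bar with unit end load P: tip displacement u = P*L/(EA) = 1.0
K = assemble_bar_stiffness(4, EA=1.0, L=1.0)
f = np.zeros(4); f[-1] = 1.0          # load at the free end
u = np.linalg.solve(K[1:, 1:], f)     # node 0 fixed (row/column removed)
```

For an end load the FE solution is exact, so `u[-1]` equals P·L/EA; nonlinear section behaviour would enter through an element matrix updated from the moment-curvature diagram.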
Rectangular spectral collocation
Driscoll, Tobin A.; Hale, Nicholas
2015-01-01
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon
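The conventional row-replacement approach the abstract refers to looks like this for a 1D Poisson problem on a Chebyshev grid (a standard sketch following Trefethen's `cheb` construction, not the rectangular method of the paper itself):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal via negative row sums
    return D, x

# Solve u'' = f on [-1,1] with u(-1) = u(1) = 0; exact solution u = sin(pi x)
N = 24
D, x = cheb(N)
A = D @ D
f = -np.pi ** 2 * np.sin(np.pi * x)
# Row replacement: overwrite the first/last rows to enforce the BCs
A[0, :] = 0.0;  A[0, 0] = 1.0;   f[0] = 0.0
A[-1, :] = 0.0; A[-1, -1] = 1.0; f[-1] = 0.0
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # spectrally small
```

Replacing rows discards two rows of the differential operator; the rectangular collocation idea in the abstract avoids that by mapping between grids of different sizes instead.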
Chillara, Vamshi Krishna; Ren, Baiyang; Lissenden, Cliff J
2016-04-01
This article describes the use of the frequency domain finite element (FDFE) technique for guided wave mode selection in inhomogeneous waveguides. Problems with Rayleigh-Lamb and shear-horizontal mode excitation in isotropic homogeneous plates are first studied to demonstrate the application of the approach. Then, two specific cases of inhomogeneous waveguides are studied using FDFE. Finally, an example of guided wave mode selection for inspecting disbonds in composites is presented, demonstrating the identification of modes sensitive and insensitive to the defect. As the discretization parameters affect the accuracy of the results obtained from FDFE, the effects of spatial discretization and of the length of the domain used for the spatial fast Fourier transform are studied, and some recommendations regarding the choice of these parameters are provided.
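The spatial-FFT step used to separate guided-wave modes by wavenumber can be sketched on a synthetic snapshot (the wavenumbers k1, k2 and amplitudes are illustrative, not values from the article):

```python
import numpy as np

# Synthetic wavefield snapshot along the waveguide: two "modes" with
# different wavenumbers (in cycles per unit length).
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
k1, k2 = 20.0, 55.0
u = np.sin(2 * np.pi * k1 * x) + 0.5 * np.sin(2 * np.pi * k2 * x)

# Spatial FFT: each mode appears as a peak at its wavenumber bin, so modes
# can be identified (and selected or suppressed) by wavenumber content.
spec = np.abs(np.fft.rfft(u))
dominant_bins = np.sort(np.argsort(spec)[-2:])    # -> bins 20 and 55
```

The wavenumber resolution is the reciprocal of the spatial window length, which is why the abstract flags the FFT domain length as a parameter that must be chosen with care.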
Priemetz, O.; Samoilov, K.; Mukasheva, M.
2017-11-01
Ornament is a living phenomenon in modern architectural theory and a common element in design and construction practice; it has been an important aspect of shaping for millennia. Descriptions of the methods of its application occupy a large place in studies on the theory and practice of architecture. However, the saturation of compositions with ornamentation, and the specificity of its themes and forms, have not yet been sufficiently studied; this aspect requires the accumulation of additional knowledge. Applying quantitative methods to the types of plastic solutions and to the thematic diversity of facade compositions of buildings constructed in different periods creates another tool for an objective analysis of ornament development. This approach is demonstrated in a study of the features of architectural development in Kazakhstan from the end of the XIX century to the XXI century.
Finite-element blunt-crack propagation: a modified J-integral approach
Pan, Y.C.; Marchertas, A.H.; Kennedy, J.M.
1983-01-01
In assessing the safety of a liquid metal fast breeder reactor (LMFBR), a major concern is the behavior of concrete structures subjected to high temperatures. The potential for concrete cracking is an important parameter which could significantly influence the safety assessment of thermally attacked concrete. A new modified J-integral approach for the blunt crack model has been derived to provide a general procedure for accurately predicting the direction of crack growth. This formulation has been incorporated into the coupled heat transfer-stress analysis finite element code TEMP-STRESS. A description of the formulation is presented in this paper. Results are presented for Mode I and mixed-mode crack problems in a plate, using regular and slanted meshes, under uniaxial and shear loading
A bottom-up approach to estimating cost elements of REDD+ pilot projects in Tanzania
Merger Eduard
2012-08-01
Background Several previous global REDD+ cost studies have been conducted, demonstrating that payments for maintaining forest carbon stocks have significant potential to be a cost-effective mechanism for climate change mitigation. These studies have mostly followed highly aggregated top-down approaches without estimating the full range of REDD+ cost elements, thus underestimating the actual costs of REDD+. Based on three REDD+ pilot projects in Tanzania, representing an area of 327,825 ha, this study explicitly adopts a bottom-up approach to data assessment. By estimating opportunity, implementation, transaction and institutional costs of REDD+ we develop a practical and replicable methodological framework to consistently assess REDD+ cost elements. Results Based on historical land use change patterns, current region-specific economic conditions and carbon stocks, project-specific opportunity costs ranged between US$ -7.8 and 28.8 tCO2 for deforestation and forest degradation drivers such as agriculture, fuel wood production, unsustainable timber extraction and pasture expansion. The mean opportunity costs for the three projects ranged between US$ 10.1 – 12.5 tCO2. Implementation costs comprised between 89% and 95% of total project costs (excluding opportunity costs), ranging between US$ 4.5 - 12.2 tCO2 for a period of 30 years. Transaction costs for measurement, reporting, verification (MRV), and other carbon market related compliance costs comprised a minor share, between US$ 0.21 - 1.46 tCO2. Similarly, the institutional costs comprised around 1% of total REDD+ costs, in a range of US$ 0.06 – 0.11 tCO2. Conclusions The use of bottom-up approaches to estimate REDD+ economics by considering regional variations in economic conditions and carbon stocks has been shown to be an appropriate approach to provide policy and decision-makers robust economic information on REDD+. The assessment of opportunity costs is a crucial first step to
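The core opportunity-cost arithmetic behind such bottom-up estimates can be sketched as follows (simplified; the study discounts land-use returns over 30 years and uses region-specific carbon stocks, and the function name and example figures are illustrative):

```python
def opportunity_cost_per_tco2(npv_alternative_use, npv_forest_use, tco2_per_ha):
    """Opportunity cost in US$/tCO2 of keeping one hectare forested.

    npv_* : net present value (US$/ha) of the competing land use and of
            the standing forest; tco2_per_ha : avoided emissions per hectare.
    Negative values (conservation already more profitable) are possible,
    consistent with the study's range of US$ -7.8 to 28.8 per tCO2.
    """
    return (npv_alternative_use - npv_forest_use) / tco2_per_ha

# e.g. agriculture worth US$ 500/ha NPV vs US$ 100/ha from forest products,
# with 40 tCO2/ha at stake -> US$ 10 per tCO2
```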
MAIN LAND USE PLANNING APPROACHES TO STRUCTURAL ELEMENTS LOCAL ECOLOGICAL NETWORK
Tretiak V.M.
2016-08-01
In modern conditions of social development and of changing economic relations in land use in Ukraine, the problem of providing conditions for sustainable land use and the creation of protected areas has become especially urgent. The ideology of establishing ecological networks is a logical continuation of environmental thought in general. Considering the methodological approach to the establishment of ecological networks, it can be stated that an ecological network is an environmental frame of spatial infrastructure made up of land conservation and environmental areas, whose land forms the basis of the structural elements of the ecological network. An ecological network is designed through regional schemes of Econet formation, and through regional and local schemes for establishing ecological network areas in settlements and other territories. Land-management design of the structural elements of the ecological network within a village council, as a rule, begins with ecological and landscape micro-zonation of the council territory, carried out during the preparatory work for land drafting, and finishes with the formation of environmentally homogeneous regions representing the tied system of ecological network components: environmental measures in the form of local environmental restrictions (encumbrances) on the use of land and other natural resources. Additionally, there are project, organizational and territorial measures that increase the sustainability of the area, such as the key, connecting, buffer and renewable areas of the ecological network. Land-management projects on the formation of structural elements of the ecological network as territorial restrictions (encumbrances) on land use determine, within the territories of the councils, the location and size of land for: - Protection zones around especially valuable natural objects, objects of cultural heritage, meteorological stations, etc., in order to protect them from adverse human impacts; - Protection zones along telecommunication lines, power
Festa, G.; Vilotte, J.; Scala, A.
2012-12-01
The M 9.0, 2011 Tohoku earthquake, along the North American-Pacific plate boundary east of Honshu Island, yielded a complex broadband rupture extending southwards over 600 km along strike and triggering a large tsunami that ravaged the east coast of northern Japan. Strong motion and high-rate continuous GPS data, recorded all along the Japanese archipelago by the national seismic networks K-Net and Kik-net and the geodetic network Geonet, together with teleseismic data, indicated a complex frequency-dependent rupture. Low-frequency signals revealed a large slip asperity extending along-dip over about 100 km, between the hypocenter and the trench, and 150 to 200 km along strike. This slip asperity was likely the cause of the localized tsunami source and of the large amplitude tsunami waves. High-frequency signals (f>0.5 Hz) were instead generated close to the coast in the deeper part of the subduction zone, by at least four smaller-size asperities with possible repeated slip, and were mostly the cause of the ground shaking felt in the eastern part of Japan. The deep origin of the high-frequency radiation was also confirmed by teleseismic high-frequency back-projection analysis. Intermediate-frequency analysis showed a transition between the shallow and deeper parts of the fault, with the rupture almost confined in a small stripe containing the hypocenter before propagating southward along strike, indicating a predominant in-plane rupture mechanism in the initial stage of the rupture. We numerically investigate the role of the geometry of the subduction interface and of the structural properties of the subduction zone on the broadband dynamic rupture and radiation of the Tohoku earthquake. Based upon the almost in-plane behavior of the rupture in its initial stage, 2D non-smooth spectral element dynamic simulations of the earthquake rupture propagation are performed, including the non-planar and kink geometry of the subduction interface, together with bi-material interfaces
Estimation of spectral kurtosis
Sutawanir
2017-03-01
Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, since bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike the faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and provides information additional to the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd; it is a model-free, plug-in estimator. Some numerical simulation studies are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis is its extension to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier transform relating the autocovariance and the power spectrum, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
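The spectrogram-based SK estimator described above can be sketched directly: compute an STFT, then form the fourth-to-second moment ratio per frequency bin (window length and overlap are illustrative choices):

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256, step=128):
    """Estimate spectral kurtosis from a short-time Fourier transform.

    SK(f) = E|X(t,f)|^4 / (E|X(t,f)|^2)^2 - 2, which is ~0 for stationary
    Gaussian noise and large in bands dominated by impulsive (fault) content.
    """
    win = np.hanning(nperseg)
    frames = [np.fft.rfft(x[s:s + nperseg] * win)
              for s in range(0, len(x) - nperseg + 1, step)]
    X = np.array(frames)                    # shape (n_frames, n_freq_bins)
    s2 = np.mean(np.abs(X) ** 2, axis=0)
    s4 = np.mean(np.abs(X) ** 4, axis=0)
    return s4 / s2 ** 2 - 2.0

# Stationary Gaussian noise -> SK near 0 away from the DC/Nyquist bins
rng = np.random.default_rng(0)
sk = spectral_kurtosis(rng.standard_normal(200_000))
```

Bands where SK rises well above zero are candidates for envelope analysis of the simulated inner/outer ring fault signatures.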
A unidirectional approach for d-dimensional finite element methods for higher order on sparse grids
Bungartz, H.J. [Technische Universitaet Muenchen (Germany)
1996-12-31
In recent years, sparse grids have turned out to be a very interesting approach to the efficient iterative numerical solution of elliptic boundary value problems. In comparison to standard (full grid) discretization schemes, the number of grid points can be reduced significantly, from O(N{sup d}) to O(N(log{sub 2}(N)){sup d-1}) in the d-dimensional case, whereas the accuracy of the approximation to the finite element solution is only slightly deteriorated: for piecewise d-linear basis functions, e.g., an accuracy of the order O(N{sup -2}(log{sub 2}(N)){sup d-1}) with respect to the L{sub 2}-norm and of the order O(N{sup -1}) with respect to the energy norm has been shown. Furthermore, regular sparse grids can be extended in a very simple and natural manner to adaptive ones, which makes the hierarchical sparse grid concept applicable to problems that require adaptive grid refinement, too. In this paper, an approach is presented for the Laplacian on a unit domain.
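The O(N{sup d}) versus O(N(log{sub 2}(N)){sup d-1}) comparison can be made concrete by counting grid points; the sketch below assumes the standard regular sparse grid built from hierarchical subspaces, counting interior points only (no boundary points):

```python
from math import comb

def full_grid_points(n, d):
    # Regular full grid of level n: (2**n - 1) interior points per dimension
    return (2 ** n - 1) ** d

def sparse_grid_points(n, d):
    # Interior points of a regular sparse grid of level n in d dimensions:
    # sum over multi-indices l with |l|_1 = k of prod 2^(l_i - 1), k = d .. n+d-1.
    # There are comb(k-1, d-1) such multi-indices, each contributing 2^(k-d) points.
    return sum(comb(k - 1, d - 1) * 2 ** (k - d) for k in range(d, n + d))

for d in (2, 3):
    for n in (5, 10):
        print(d, n, full_grid_points(n, d), sparse_grid_points(n, d))
```

For d = 3 and level n = 10 the full grid needs about 10^9 interior points while the sparse grid needs fewer than 5 x 10^4, at the cost of the slightly weaker accuracy noted above.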
Calio, I.; Cannizzaro, F.; Marletta, M.; Panto, B.; D'Amore, E.
2008-01-01
In the present study a new discrete-element approach for the evaluation of the seismic resistance of composite reinforced concrete-masonry structures is presented. In the proposed model, unreinforced masonry panels are modelled by means of two-dimensional discrete elements, conceived by the authors for modelling masonry structures, whereas the reinforced concrete elements are modelled by lumped plasticity elements interacting with the masonry panels through nonlinear interface elements. The proposed procedure was adopted for the assessment of the seismic response of a case-study confined-masonry building, conceived to be typical of a wide class of residential buildings designed according to the requirements of the 1909 issue of the Italian seismic code and widely adopted in the aftermath of the 1908 earthquake for the reconstruction of the cities of Messina and Reggio Calabria.
Beyond Housing First: Essential Elements of a System-Planning Approach to Ending Homelessness
Alina Turner
2014-10-01
The concept of “Housing First” has taken on a powerful status in the complex of government, non-profit and academic systems that study and seek to eliminate homelessness. It is a compelling concept, in that it has brought our society to the realization that housing instability itself is often the culmination of various underlying and intersecting issues, ranging from mental health and addiction issues to domestic abuse and poverty. The “Housing First” principle holds that homeless individuals stand a far poorer chance of improving their condition while they remain homeless; that the stability of a permanent home provides the foundation that allows individuals to begin addressing the issues that led to their housing instability in the first place. However, the elegance of the fundamental principle behind “Housing First” also risks creating an illusion, wherein agencies and governments might too easily conclude that the entirety of this approach to ending homelessness is merely to begin housing the homeless. While that is a step in the process, it is but a piece of the Housing First approach. And unless all the various elements of the approach are also included in the actual work done on the ground, the success observed so far in communities that have tried the Housing First approach will not necessarily be replicated. This can lead to disappointment for those trying to implement new strategies, undermine the effectiveness of Housing First, and most importantly, fail to fully help those individuals in need. Housing First encompasses a strategic application of key principles across the entire homeless-serving system. When it is introduced into a new jurisdiction, it must be accompanied by an overhaul of the current approach to social policy and service delivery. The implementation of Housing First requires a difficult and systematic process, beginning with planning and strategy development that recognizes how every part of the homeless
Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli
2018-01-01
Uncertainty in modal parameter estimation appears to a significant extent in the structural health monitoring (SHM) practice of civil engineering, due to environmental influences and modeling errors, and reasonable methodologies are needed for processing this uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purpose of SHM. However, there are relatively few studies on the application of the Bayesian spectral method to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The posterior most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes. On the other hand, the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and the modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibited significant correlation with the spectral intensities of the corresponding modal forces.
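The "t location-scale" distribution mentioned above is Student's t with fitted location and scale parameters; a minimal sketch of such a fit using scipy, on synthetic (assumed) frequency data rather than the bridge's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic daily estimates of a lower-mode natural frequency (Hz); the 0.32 Hz
# center, heavy-tailed t(4) scatter and 0.004 Hz scale are assumed for illustration
freq = 0.32 + 0.004 * rng.standard_t(df=4, size=1000)

# "t location-scale" fit: Student's t with maximum-likelihood df, loc and scale
df, loc, scale = stats.t.fit(freq)
print(df, loc, scale)
```

In an SHM setting the same fit would be applied to the time series of identified modal frequencies; the heavier-than-Gaussian tails are what make the t location-scale family a better candidate for the lower modes.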
Teimoorinia, H.
2012-12-01
The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 < z < 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ∼0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ∼1200-7500 Å. Then a series of neural networks are trained, with average performance ∼90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test the networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.
Finite Element Analysis of the Cingulata Jaw: An Ecomorphological Approach to Armadillo's Diets.
Sílvia Serrano-Fochs
Finite element analyses (FEA) were applied to assess the lower jaw biomechanics of cingulate xenarthrans: 14 species of armadillos as well as one Pleistocene pampathere (11 extant taxa and the extinct forms Vassallia, Eutatus and Macroeuphractus). The principal goal of this work is to comparatively assess the biomechanical capabilities of the mandible based on FEA and to relate the obtained stress patterns to diet preferences and variability, in extant and extinct species, through an ecomorphological approach. The results of the FEA showed that omnivorous species have stronger mandibles than insectivorous species. Moreover, the latter group showed high variability, with some biomechanical features of the insectivorous Tolypeutes matacus and Chlamyphorus truncatus resembling those of omnivorous species, in agreement with reported diets that include items other than insects. The reasons behind the stronger than expected lower jaw of Dasypus kappleri remain unclear. On the other hand, the very strong mandible of the fossil taxon Vassallia maxima agrees well with the proposed herbivorous diet. Moreover, Eutatus seguini yielded a stress pattern similar to that of Vassallia in the posterior part of the lower jaw, but resembling that of the stoutly built Macroeuphractus outesi in the anterior part. The results highlight the need for more detailed studies on the natural history of extant armadillos. FEA proved a powerful tool for biomechanical studies in a comparative framework.
Sara Gabriela Pacichana-Quinayáz
2016-06-01
Due to the limited supply of mental health services for Afro-Colombian victims of violence, a Common Elements Treatment Approach (CETA) intervention has been implemented in the Colombian Pacific. Given the importance of improving mental health interventions for this population, it is necessary to characterize this process. This article describes the implementation of CETA for Afro-Colombian victims of violence in Buenaventura and Quibdó, Colombia, through case studies with individual in-depth interviews with Lay Psychosocial Community Workers (LPCW), supervisors, and coordinators responsible for implementing CETA. From this, six core categories were obtained: 1. Effect of armed conflict and poverty; 2. Trauma severity; 3. Perceived changes with CETA; 4. Characteristics and performance of the LPCW; 5. Afro-Colombian cultural approach; and 6. Strategies to promote users' well-being. The Colombian Pacific scenario involves several factors, such as the active armed conflict, economic crisis, and lack of mental health care resources, that affect the implementation process and the intervention effects. This implies the need to establish and strengthen partnerships between institutions in order to provide the necessary mental health care for victims of violence in the Colombian Pacific.
A new approach to elastography using mutual information and finite elements
Miga, Michael I
2003-01-01
Historically, increased mechanical stiffness during tissue palpation exams has been associated with assessing organ health as well as with detecting the growth of a potentially life-threatening cell mass. As such, techniques to image elasticity parameters (i.e., elastography) have recently become of great interest to scientists. In this work, a new method of elastography is introduced within the context of mammographic imaging. The proposed elastography method is a non-rigid iterative image registration algorithm that varies material properties within a finite element model to improve registration. More specifically, regional measures of image similarity are used within an objective function minimization framework to reconstruct elasticity images of tissue stiffness. Numerical simulations illustrate: (1) the encoding of stiffness information within the context of a regional image similarity criterion, (2) the methodology for an iterative elastographic imaging framework and (3) elasticity reconstruction simulations. The real strength of this approach is that images from any modality (e.g., magnetic resonance, computed tomography, ultrasound) that have sufficient anatomically based intensity heterogeneity and remain consistent from a pre- to a post-deformed state could be used in this paradigm.
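A one-dimensional toy version of this model-based elastography idea, assuming a two-layer tissue column under prescribed compression and using a simple sum-of-squared-differences similarity in place of the paper's regional mutual-information measure (all geometry and stiffness values are made up for illustration):

```python
import numpy as np

def deform(x, E1, E2, u=0.1):
    # Two homogeneous layers in series under total compression u (toy model):
    # a uniform stress sigma produces strains sigma/E1 and sigma/E2 in each half
    sigma = u / (0.5 / E1 + 0.5 / E2)
    e1, e2 = sigma / E1, sigma / E2
    return np.where(x <= 0.5, x * (1 - e1), 0.5 * (1 - e1) + (x - 0.5) * (1 - e2))

x = np.linspace(0.0, 1.0, 2001)
pre = np.sin(23 * x) + 0.5 * np.sin(57 * x)       # pre-deformation "image" intensities

E1, E2_true = 1.0, 3.0
post = np.interp(x, deform(x, E1, E2_true), pre)  # simulated post-deformation image

# Reconstruct the unknown stiffness E2 by maximizing image similarity
# (here: minimizing SSD) over a grid of candidate values
candidates = np.linspace(0.5, 6.0, 111)
ssd = [np.sum((np.interp(x, deform(x, E1, E2), pre) - post) ** 2) for E2 in candidates]
print(candidates[int(np.argmin(ssd))])
```

The grid search recovers the true stiffness because only the correct material properties make the model-deformed pre-image match the observed post-image; the paper's method follows the same logic with a finite element model, regional similarity measures and an iterative optimizer.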
An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture
Wenke Huang
2016-01-01
This paper presents an image-based micromechanical modeling approach to predict the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the OTSU thresholding operation was employed to reduce the beam-hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of asphalt mixture from the processed CT images. In this 3D model, the aggregate phase and air voids were treated as elastic materials while the asphalt mastic phase was treated as a linear viscoelastic material. The viscoelastic constitutive model of the asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT). An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameters, comparisons between the numerical predictions and the observed laboratory results of bending and compression tests were conducted. Finally, the verified digital sample of asphalt mixture was used to predict its viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture once the mechanical properties of the different phases become available.
A Novel Approach for Earthing System Design Using Finite Element Method
Sajad Samadinasab
2017-04-01
Protection of equipment, safety of persons and continuity of power supply are the main objectives of a grounding system. For its accurate design, it is essential to determine the potential distribution on the earth surface and the equivalent resistance of the system. Knowledge of such parameters allows checking the security offered by the grounding system when there is a failure in the power system. A new method to design earthing systems using the Finite Element Method (FEM) is presented in this article. In this approach, the influence of moisture and temperature on the behavior of soil resistivity is considered in the earthing system design. The earthing system is considered to be a rod electrode and a plate-type electrode buried vertically in the ground. The resistance of the system, which is a very important factor in the design process, is calculated using FEM, which estimates the solution of the partial differential equation that governs the system's behavior. COMSOL Multiphysics 4.4, one of the packages implementing FEM, is used as a tool in this design. Finally, the values of the resistance obtained by COMSOL Multiphysics are compared with the values from the proven analytical formula for the ground resistance, in order to validate the work done with COMSOL Multiphysics.
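The analytical formula used for such comparisons is typically Dwight's expression for a single vertical rod; a minimal sketch with assumed soil and rod parameters:

```python
import math

def rod_resistance(rho, L, d):
    """Ground resistance of a single vertical rod (Dwight's formula):
    R = rho / (2*pi*L) * (ln(8*L/d) - 1),
    with rho the soil resistivity (ohm-m), L the rod length (m), d its diameter (m)."""
    return rho / (2 * math.pi * L) * (math.log(8 * L / d) - 1)

# Illustrative (assumed) values: 100 ohm-m soil, 3 m rod, 16 mm diameter
print(rod_resistance(100, 3.0, 0.016))
```

For these values the formula gives about 33.5 ohm. Moisture and temperature enter through the soil resistivity rho, so in an FEM design their influence can be explored simply by re-solving with the corresponding resistivity values.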
A finite element model of myocardial infarction using a composite material approach.
Haddad, Seyyed M H; Samani, Abbas
2018-01-01
Computational models are effective tools to study cardiac mechanics under normal and pathological conditions. They can be used to gain insight into the physiology of the heart under these conditions while being adaptable to computer-assisted patient-specific clinical diagnosis and therapeutic procedures. Realistic cardiac mechanics models incorporate tissue active/passive response in conjunction with hyperelasticity and anisotropy. Conventional formulation of such models leads to mathematically complex problems usually solved by custom-developed non-linear finite element (FE) codes. With a few exceptions, such codes are not available to the research community. This article describes a computational cardiac mechanics model developed such that it can be implemented using off-the-shelf FE solvers while tissue pathologies can be introduced in the model in a straightforward manner. The model takes into account myocardial hyperelasticity, anisotropy, and active contraction forces. It follows a composite tissue modeling approach where the cardiac tissue is decomposed into two major parts: background and myofibers. The latter are modelled as rebars under initial stresses mimicking the contraction forces. The model was applied in silico to study the mechanics of the infarcted left ventricle (LV) of a canine. End-systolic strain components, ejection fraction, and stress distribution obtained using this LV model were compared quantitatively and qualitatively to corresponding data obtained from measurements as well as to other corresponding LV mechanics models. This comparison showed very good agreement.
Pulcini, C; Binda, F; Lamkang, A S; Trett, A; Charani, E; Goff, D A; Harbarth, S; Hinrichsen, S L; Levy-Hara, G; Mendelson, M; Nathwani, D; Gunturu, R; Singh, S; Srinivasan, A; Thamlikitkul, V; Thursky, K; Vlieghe, E; Wertheim, H; Zeng, M; Gandra, S; Laxminarayan, R
2018-04-03
With increasing global interest in hospital antimicrobial stewardship (AMS) programmes, there is a strong demand for core elements of AMS to be clearly defined on the basis of principles of effectiveness and affordability. To date, efforts to identify such core elements have been limited to Europe, Australia, and North America. The aim of this study was to develop a set of core elements and their related checklist items for AMS programmes that should be present in all hospitals worldwide, regardless of resource availability. A literature review was performed by searching Medline and relevant websites to retrieve a list of core elements and items that could have global relevance. These core elements and items were evaluated by an international group of AMS experts using a structured modified Delphi consensus procedure, with two rounds of online in-depth questionnaires. The literature review identified seven core elements and their related 29 checklist items from 48 references. Fifteen experts from 13 countries in six continents participated in the consensus procedure. Ultimately, all seven core elements were retained, as well as 28 of the initial checklist items plus one that was newly suggested, all with ≥80% agreement; 20 elements and items were rephrased. This consensus on core elements for hospital AMS programmes is relevant to both high- and low-to-middle-income countries and could facilitate the development of national AMS guidelines and their adoption by healthcare settings worldwide. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. All rights reserved.
An approach to unfold the response of a multi-element system using an artificial neural network
Cordes, E.; Fehrenbacher, G.; Schuetz, R.; Sprunck, M.; Hahn, K.; Hofmann, R.; Wahl, W.
1998-01-01
An unfolding procedure is proposed which aims at obtaining spectral information about a neutron radiation field by analysing the response of a multi-element system consisting of converter-type semiconductors. For the unfolding procedure an artificial neural network (feed-forward network), trained by the back-propagation method, was used. The response functions of the single elements to neutron radiation were calculated by applying a computational model over an energy range from 10^-2 eV to 10 MeV. The training of the artificial neural network was based on the computation of the responses of a six-element system for a set of 300 neutron spectra and the application of the back-propagation method. The validation was performed by unfolding 100 computed responses. Two unfolding examples were worked out for the determination of the neutron spectra. The spectra resulting from the unfolding procedure agree well with the original spectra used for the response computation.
Surovyatkina, Elena; Stolbova, Veronika; Kurths, Jurgen
2017-04-01
The monsoon is the season of rain caused by a seasonal reversal in wind direction and a change in pressure distribution; the southwest winds bring the summer monsoon to India. The economy of India is able to maintain its GDP in the wake of a good monsoon. However, if the monsoon is delayed by even two weeks, it can spell disaster, because a large share of the population depends on agriculture: 70% of the people are directly connected to farming, and agriculture, in turn, is dependent on the monsoon. Although the rainy season occurs annually between June and September, the time of the monsoon season's onset and withdrawal varies within a month from year to year. An important feature of the monsoon is that it starts and ends suddenly. Hence, despite enormous progress having been made in predicting the monsoon since 1886, it remains a significant scientific challenge. To make predictions of monsoon timing in 2016, we applied our recently developed method [1]. Our approach is based on a teleconnection between the Eastern Ghats (EG) and North Pakistan (NP) - Tipping Elements of the Indian Summer Monsoon. Both our predictions - for monsoon onset and withdrawal - were made for the Eastern Ghats region (EG-20N,80E) in the central part of India, while the Indian Meteorological Department forecasts the monsoon over Kerala - a state at the southern tip of the Indian subcontinent. Our prediction for monsoon onset was published on May 6th, 2016 [2]. We predicted the monsoon arrival at the EG on the 13th of June with a deviation of +/-4 days. In fact, monsoon onset was on June 17th, as confirmed by information from meteorological stations located around the EG region. Hence, our prediction of monsoon onset (made 40 days in advance) was correct. We delivered the prediction of monsoon withdrawal on July 27, 2016 [3], announcing the monsoon withdrawal from the EG on October 5th with a deviation of +/-5 days. The actual monsoon withdrawal started on October 10th, when the relative humidity in the region
Study of the behaviour of trace elements in estuaries: experimental approaches and modeling
Dange, Catherine
2002-01-01
Most trace elements behave non-conservatively in estuarine environments. This is the case for cadmium, cobalt and caesium, whose fate in estuarine and coastal zones is largely controlled by their distribution between water and suspended particles, which generally have long residence times or can be permanently deposited in these areas. Metallic contaminants and radionuclides can be present as various species: dissolved (mineral and organic complexes), colloidal and particulate forms (adsorbed, precipitated), or integrated by various mechanisms into organisms. Such distributions are the result of physical, chemical and biological processes controlled by many factors (ionic strength, pH, E_h, major cation concentrations, nature and concentration of suspended matter, primary production, ...). Geochemical modeling is a very useful approach for understanding the dynamics of this type of contaminant, especially in complex systems such as estuaries. A speciation model was used to simulate the measurements of dissolved and particulate Cd, Co and Cs taken during various cruises carried out in the Seine, Loire, Gironde and Rhone estuaries. The model is able to reproduce the distribution of the metals between the dissolved and particulate phases, and also to evaluate the concentrations of the various chemical species (especially the most bio-available ones). The approach presented treats adsorption processes as the formation of inner-sphere complexes with functional surface groups (surface complexation model) or as a cation-exchange reaction. The calculation of chemical species takes into account the presence of dissolved ligands and the major cations of seawater, which compete with the metal for the surface sites. The model can consider the various components of natural particles (metal oxy-hydroxides, organic matter) as individual adsorbent phases or treat natural particles in a 'global' manner. The choice of modeled processes is based on studies of
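The dissolved/particulate partitioning that such speciation models reproduce is often summarized by a distribution coefficient Kd; a minimal sketch (the Kd and suspended-matter values are assumed, roughly in the range reported for Cd in turbid estuaries):

```python
def particulate_fraction(kd_l_per_kg, spm_mg_per_l):
    """Equilibrium fraction of a trace metal bound to suspended particles,
    from the distribution coefficient Kd (L/kg) and the suspended particulate
    matter concentration SPM: f = Kd*SPM / (1 + Kd*SPM)."""
    spm_kg_per_l = spm_mg_per_l * 1e-6   # mg/L -> kg/L
    x = kd_l_per_kg * spm_kg_per_l
    return x / (1 + x)

# Assumed illustrative values: Kd = 1e5 L/kg, SPM from 10 mg/L (coastal water)
# up to 1000 mg/L (estuarine turbidity maximum)
for spm in (10, 100, 1000):
    print(spm, particulate_fraction(1e5, spm))
```

The sharp increase of the particle-bound fraction with suspended-matter load illustrates why the fate of Cd, Co and Cs in estuaries is controlled largely by particle dynamics.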
Divanoglou, A; Tasiemski, T; Augutis, M; Trok, K
2017-06-01
Active Rehabilitation (AR) is a community peer-based approach that started in Sweden in 1976. As a key component of the approach, AR training camps provide intensive, goal-oriented, intentional, group-based, customised training and peer-support opportunities in a community environment for individuals with spinal cord injury. Prospective cross-sectional study. To describe the profile of the organisations that use components of the AR approach, and to explore the characteristics and the international variations of the approach. Twenty-two organisations from 21 countries from Europe, Asia and Africa reported using components of the AR approach during the past 10 years. An electronic survey was developed and distributed through a personalised email. Sampling involved a prospective identification of organisations that met the inclusion criteria and snowball strategies. While there were many collaborating links between the organisations, RG Active Rehabilitation from Sweden and Motivation Charitable Trust from the United Kingdom were identified as key supporting organisations. The 10 key elements of the AR approach were found to be used uniformly across the participating organisations. Small variations were associated with variations in country income and key supporting organisation. This is the first study to describe the key elements and international variations of the AR approach. This will provide the basis for further studies exploring the effectiveness of the approach, it will likely facilitate international collaboration on research and operational aspects and it could potentially support higher integration in the health-care system and long-term funding of these programmes.
Mirza, Anwar M.; Iqbal, Shaukat; Rahman, Faizur
2007-01-01
A spatially adaptive grid-refinement approach has been investigated to solve the even-parity Boltzmann transport equation. A residual based a posteriori error estimation scheme has been utilized for checking the approximate solutions for various finite element grids. The local particle balance has been considered as an error assessment criterion. To implement the adaptive approach, a computer program ADAFENT (adaptive finite elements for neutron transport) has been developed to solve the second order even-parity Boltzmann transport equation using the K{sup +} variational principle for slab geometry. The program has a core K{sup +} module which employs Lagrange polynomials as spatial basis functions for the finite element formulation and Legendre polynomials for the directional dependence of the solution. The core module is called in by the adaptive grid generator to determine local gradients and residuals to explore the possibility of grid refinements in appropriate regions of the problem. The a posteriori error estimation scheme has been implemented in the outer grid refining iteration module. Numerical experiments indicate that local errors are large in regions where the flux gradients are large. A comparison of the spatially adaptive grid-refinement approach with that of the uniform meshing approach for various benchmark cases confirms its superiority in greatly enhancing the accuracy of the solution without increasing the number of unknown coefficients. A reduction in the local errors of the order of 10{sup 2} has been achieved using the new approach in some cases.
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
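The barrier strategy discussed above can be illustrated with a small Monte Carlo sketch in the compound-Poisson (Cramér-Lundberg) risk model, a special case of a spectrally negative Lévy process: any surplus above the barrier b is paid out as dividends, discounted at rate delta, until ruin. All parameters below are illustrative, not taken from the cited papers.

```python
# Monte Carlo sketch of a barrier dividend strategy: surplus earns premiums
# at rate c, loses exponential claims at Poisson rate lam, and everything
# above the barrier b is skimmed off as a discounted dividend until ruin.
import math
import random

def discounted_dividends(b, x0=5.0, c=1.5, lam=1.0, mean_claim=1.0,
                         delta=0.05, horizon=200.0, dt=0.01, rng=None):
    rng = rng or random.Random(0)
    x, total = x0, 0.0
    for k in range(int(horizon / dt)):
        t = k * dt
        x += c * dt                              # premium income
        if rng.random() < lam * dt:              # claim arrival (prob. lam*dt)
            x -= rng.expovariate(1.0 / mean_claim)
        if x < 0:                                # ruin: dividend stream stops
            break
        if x > b:                                # barrier strategy: skim excess
            total += math.exp(-delta * t) * (x - b)
            x = b
    return total

# Crude comparison of two barrier levels (averages over simulated paths).
def estimate(b, n=500):
    return sum(discounted_dividends(b, rng=random.Random(i))
               for i in range(n)) / n

print(round(estimate(4.0), 2), round(estimate(12.0), 2))
```

Optimality results of the kind proved in the cited papers identify the barrier level maximizing this expected discounted value; the simulation only illustrates the objective being maximized.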
2015-06-04
a faster and more scalable code, which is of vital importance as we approach the exascale range of computing. Keywords: residual-based stabilization, Variational Multiscale Method (VMS, VMM), crosswind diffusion, discontinuity capturing (DC).
Šesnic, S.; Dorić, V.; Poljak, D.; Šušnjara, A.; Artaud, J.F.
2018-01-01
Roč. 46, č. 4 (2018), s. 1027-1034 ISSN 0093-3813 R&D Projects: GA MŠk(CZ) 8D15001 Institutional support: RVO:61389021 Keywords : Finite element analysis * Tokamaks * current diffusion equation (CDE) * finite-element method (FEM) Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.052, year: 2016
Multidisciplinary approach and multi-scale elemental analysis and separation chemistry
Mariet, Clarisse
2014-01-01
The development of methods for the analysis of trace elements is an important component of my research activities, whether for radiometric measurement or mass spectrometric detection. Many studies raise the question of the chemical signature of a sample or a process: eruptive behaviour of a volcano, indicators of pollution, ion exchange in vesicle vectors of active principles, etc. Each time, highly sensitive, accurate and multi-element analytical procedures, as well as the development of specific protocols, were needed. Neutron activation analysis has often been used as a reference procedure and allowed validation of the chemical lixiviation and of the measurement by ICP-MS. Analysis of radioactive samples requires skills in trace analysis but also in separation chemistry. Two separation methods occupy an important place in the separation chemistry of radionuclides: chromatography and liquid-liquid extraction. The study of the extraction of lanthanide(III) by octyl(phenyl)-N,N-diisobutylcarbamoylmethyl phosphine oxide (CMPO) and a calixarene-CMPO led to a better understanding and quantification of the influence of operating conditions on their extraction performance and selectivity. The high concentration of salts in aqueous solutions required reasoning in terms of thermodynamic activities, relying on a comprehensive approach to the quantification of deviations from ideality. In order to reduce the amount of waste generated and the costs, alternatives to hydrometallurgical extraction processes were considered, using ionic liquids at low temperatures as alternative solvents in biphasic processes. Remaining in this logic of effluent reduction, miniaturization of liquid-liquid extraction is also studied so as to exploit the characteristics of the microscopic scale (very large specific surface, short diffusion distances). The miniaturization of chromatographic separations carries the same ambitions of reducing the volumes of wastes and reagents. The miniaturization of the separation of uranium
Buravlev, Yu.M.; Zamarajev, V.P.; Chernyavskaya, N.V.
1989-01-01
The experimental technique consists in estimating the mutual arrangement of the calibration curves obtained using standard reference materials of low-alloyed and high-alloyed (high-chromium, stainless, high-speed) steels, as well as of the curves for carbon steels and cast iron differing in their structure. ARL-31000 and Polyvac E-1000 quantometers with U=1300 V, I=0.12 A and an argon pressure of ∼1 kPa are used. The influence of third elements shows up as shifts and slope changes of the curves for the above-mentioned high-alloyed steels in comparison with those for low-alloyed steels taken as the reference. The magnitude of the influence runs up to 10-30 relative percent and more in the analysis of carbon, phosphorus, sulfur, silicon and other elements, and depends on the type of the element and on the alloy composition. It is shown that the contribution of the structure factor, caused by different alloy thermal treatments, makes up 10 to 20 relative percent. The experiments showed that both an increase of the influence of these factors, caused by their superposition, and a weakening of this influence, caused by their counteraction, are possible. When the analyzed alloys differ in composition and manufacturing technology it is necessary to take the influence of these effects into consideration. (author)
Photovoltaic spectral responsivity measurements
Emery, K.; Dunlavy, D.; Field, H.; Moriarty, T. [National Renewable Energy Lab., Golden, CO (United States)
1998-09-01
This paper discusses the various elemental random and nonrandom error sources in typical spectral responsivity measurement systems. The authors focus specifically on the filter- and grating-monochromator-based spectral responsivity measurement systems used by the Photovoltaic (PV) performance characterization team at NREL. A variety of subtle measurement errors can occur that arise from a finite photo-current response time, the bandwidth of the monochromatic light, the waveform of the monochromatic light, and the spatial uniformity of the monochromatic and bias lights; the errors depend on the light source, PV technology, and measurement system. The quantum efficiency can be a function of the voltage bias, light bias level, and, for some structures, the spectral content of the bias light or the location on the PV device. This paper compares the advantages and problems associated with semiconductor-detector-based calibrations and pyroelectric-detector-based calibrations. Different current-to-voltage conversion and ac photo-current detection strategies employed at NREL are compared and contrasted.
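The quantity underlying such measurements is the spectral responsivity SR (A/W) at wavelength λ, related to external quantum efficiency by QE = SR·h·c/(q·λ). A minimal sketch of this standard conversion (not NREL's code; the 550 nm example wavelength is arbitrary):

```python
# Relation between spectral responsivity SR (A/W) and external quantum
# efficiency: SR = QE * q * lambda / (h * c). Standard physics, not NREL code.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def quantum_efficiency(sr_amps_per_watt, wavelength_m):
    return sr_amps_per_watt * H * C / (Q * wavelength_m)

def responsivity(qe, wavelength_m):
    return qe * Q * wavelength_m / (H * C)

# An ideal (QE = 1) detector at 550 nm has a responsivity of about 0.44 A/W.
print(round(responsivity(1.0, 550e-9), 3))
```

Errors in the measured photocurrent or in the monochromatic power calibration propagate linearly through this relation, which is why the paper's error sources matter for reported QE.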
McNab, Fiona; Hillebrand, Arjan; Swithenby, Stephen J; Rippon, Gina
2012-01-01
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics, low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows, 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows, 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in time and frequency domains.
Radiochemical approaches to the migration of elements from a radwaste repository
Guillaumont, R.
1993-01-01
Underground high-level or intermediate-level alpha radwaste repositories will contain tons to hundreds of tons of anthropogenic elements. It is predicted that the release of the elements into the far field will be mainly dependent upon the 'solubilities', i.e. the sums of the concentrations of soluble species and colloidal/pseudo-colloidal forms, of the expected near-field compounds. On the other hand, safety assessments based on computational models of migration show that elements in the water discharged into the biosphere will be at tracer level or, at least, at trace levels. Kinetic and thermodynamic aspects of the processes in which the elements will be involved during their migration are discussed, together with the change in their concentrations over several orders of magnitude. It is shown that special attention must be given to predicting the behaviour of the elements in the far field from what we know from classical chemistry, and that more experimental data must be obtained to improve the models. (author). 31 refs., 5 figs., 4 tabs
Sabeerali, C. T.; Ajayamohan, R. S.; Giannakis, Dimitrios; Majda, Andrew J.
2017-11-01
An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISOs) is introduced using the recently developed nonlinear Laplacian spectral analysis (NLSA) technique. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions are extracted from unfiltered daily rainfall data from the Global Precipitation Climatology Project over the south Asian monsoon region. Two modes representing the full life cycle of the northeastward-propagating boreal summer MISO are identified from the hierarchy of LB eigenfunctions. These modes have a number of advantages over MISO modes extracted via extended empirical orthogonal function analysis including higher memory and predictability, stronger amplitude and higher fractional explained variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions, and more realistic representation of the regional heat sources over the Indian and Pacific Oceans. Real-time prediction of NLSA-derived MISO indices is demonstrated via extended-range hindcasts based on NCEP Coupled Forecast System version 2 operational output. It is shown that in these hindcasts the NLSA MISO indices remain predictable out to ˜3 weeks.
Pogorelov, A.G.; Pogorelova, V.N.; Khrenova, E.V.; Gol'dshtejn, D.V.; Aksirov, A.M.; Kantor, G.M.
2005-01-01
The details of quantitative X-ray spectral microanalysis performed with a wavelength-dispersive spectrometer are described. Hydration of biological tissues, light-element composition, low concentrations of the analyzed elements and their nonuniform distribution are the specific features of bioorganic films and tissue sections. This paper discusses the general approaches to both the preparation technique and the principles of quantitative analysis.
Modeling approach for annular-fuel elements using the ASSERT-PV subchannel code
Dominguez, A.N.; Rao, Y.
2012-01-01
Internally and externally cooled annular fuel (hereafter called annular fuel) is under consideration at Atomic Energy of Canada Limited (AECL) for a new high burn-up fuel bundle design for its current and its Generation IV reactors. An assessment of different options to model a bundle fuelled with annular fuel elements is presented. Two options are discussed: 1) modify the subchannel code ASSERT-PV to handle multiple types of elements in the same bundle, or 2) couple ASSERT-PV with an external application. Based on this assessment, the selected option is to couple ASSERT-PV with the thermalhydraulic system code CATHENA. (author)
Global and local approaches to population analysis: Bonding patterns in superheavy element compounds
Oleynichenko, Alexander; Zaitsevskii, Andréi; Romanov, Stepan; Skripnikov, Leonid V.; Titov, Anatoly V.
2018-03-01
Relativistic effective atomic configurations of superheavy elements Cn, Nh and Fl and their lighter homologues (Hg, Tl and Pb) in their simple compounds with fluorine and oxygen are determined using the analysis of local properties of molecular Kohn-Sham density matrices in the vicinity of heavy nuclei. The difference in populations of atomic spinors with the same orbital angular momentum and different total angular momenta is demonstrated to be essential for understanding the peculiarities of chemical bonding in superheavy element compounds. The results are fully compatible with those obtained by the relativistic iterative version of conventional projection analysis of global density matrices.
A historical approach to teaching the concept of the chemical element
Cachapuz, António; Paixão, Fátima
2005-01-01
A novel teaching strategy is described, which was developed to introduce the key notion of chemical elements to 15-year-old Portuguese chemistry pupils. The strategy started from the analysis of the so-called ‘Lavoisier law’ and explored the relationships between macro- and micro-level chemistry in an innovative way. The key idea was first to explore the macro level (mass conservation) to help pupils consider the existence of indestructible units (elements, micro level) as a logical necessity ...
Lakdawalla, Darius N; Doshi, Jalpa A; Garrison, Louis P; Phelps, Charles E; Basu, Anirban; Danzon, Patricia M
2018-02-01
The third section of our Special Task Force report identifies and defines a series of elements that warrant consideration in value assessments of medical technologies. We aim to broaden the view of what constitutes value in health care and to spur new research on incorporating additional elements of value into cost-effectiveness analysis (CEA). Twelve potential elements of value are considered. Four of them-quality-adjusted life-years, net costs, productivity, and adherence-improving factors-are conventionally included or considered in value assessments. Eight others, which would be more novel in economic assessments, are defined and discussed: reduction in uncertainty, fear of contagion, insurance value, severity of disease, value of hope, real option value, equity, and scientific spillovers. Most of these are theoretically well understood and available for inclusion in value assessments. The two exceptions are equity and scientific spillover effects, which require more theoretical development and consensus. A number of regulatory authorities around the globe have shown interest in some of these novel elements. Augmenting CEA to consider these additional elements would result in a more comprehensive CEA in line with the "impact inventory" of the Second Panel on Cost-Effectiveness in Health and Medicine. Possible approaches for valuation and inclusion of these elements include integrating them as part of a net monetary benefit calculation, including elements as attributes in health state descriptions, or using them as criteria in a multicriteria decision analysis. Further research is needed on how best to measure and include them in decision making. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
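One of the approaches mentioned, folding additional value elements into a net monetary benefit calculation, amounts to simple arithmetic and can be sketched in a few lines. The element names and dollar values below are hypothetical placeholders, not ISPOR-endorsed figures.

```python
# Net monetary benefit (NMB) augmented with monetized novel value elements:
# NMB = WTP * QALYs - net cost + sum(element values).
# The novel-element names and amounts are hypothetical placeholders.
def net_monetary_benefit(qalys, net_cost, wtp_per_qaly, novel_elements=None):
    base = wtp_per_qaly * qalys - net_cost
    return base + sum((novel_elements or {}).values())

conventional = net_monetary_benefit(qalys=0.8, net_cost=30_000, wtp_per_qaly=50_000)
augmented = net_monetary_benefit(
    qalys=0.8, net_cost=30_000, wtp_per_qaly=50_000,
    novel_elements={"insurance_value": 2_000,          # illustrative values
                    "value_of_hope": 1_500,
                    "reduction_in_uncertainty": 500})
print(conventional, augmented)
```

As the report notes, the harder research question is how to measure such element values, not how to add them once monetized.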
Petr Koňas
2009-01-01
Full Text Available Paper presents the new original application WOOD3D in the form of program code assembly. The work extends the previous article “Part I – Theoretical approach” with a detailed description of the implemented C++ classes of the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK) and MIMX. The code is written in CMake style and is available as a multiplatform application; currently GNU/Linux (32/64 bit) and MS Windows (32/64 bit) platforms have been released. The article discusses various filter classes for image filtering; mainly the Otsu and binary threshold filters are classified for thresholding of anatomical wood samples. Registration of image series is emphasized, and compensation for differences of colour spaces is included. The resulting workflow of image analysis is a new methodological approach to image processing through composition, visualization, filtering, registration and finite element mesh formation. The application generates a script in the ANSYS Parametric Design Language (APDL) which is fully compatible with the ANSYS finite element solver and designer environment. The script includes the whole definition of the unstructured finite element mesh formed by individual elements and nodes. Due to the simple notation, the same script can be used for the generation of geometrical entities at element positions. Such formed volumetric entities are prepared for further geometry approximation (e.g. by boolean or more advanced methods). Hexahedral and tetrahedral types of mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character. A modified octree method for hexahedral meshes with anisotropic character is implemented in the application. Multicore CPUs are supported in the application for fast image analysis. Visualization of the image series and of the consequent 3D image is realized in the VTK format, a sufficiently known and public format, visualized in the GPL application ParaView. Future work based on mesh
McCoy, Michael L.; Moradi, Rasoul; Lankarani, Hamid M.
2011-01-01
This paper examines the effectiveness of analyzing impact events in mechanical systems for design purposes using simple, low-order finite elements. Traditional impact dynamics analyses of mechanical systems, namely the stereomechanics, energy method, stress-wave propagation and contact mechanics approaches, are limited to very simplified geometries and provide only basic analyses for making predictions and understanding the dominant features of the impact in a mechanical system. In engineering practice, impacted systems present a complexity of geometry, stiffness, mass distribution, contact areas and impact angles that is impossible to analyze and design for with the traditional impact dynamics methods. In real cases, the effective tool is the finite element (FE) method. High-end FEA codes, though, may not be available to the typical engineer or designer. This paper provides information on whether impact events in mechanical systems can be successfully modeled using simple, low-order finite elements. FEA models using simple elements are benchmarked against theoretical impact problems and published experimental impact results. As a case study, an FE model using simple plastic beam elements is further tested to predict stresses and deflections in an experimental structural impact
Llobet, X.; Appert, K.; Bondeson, A.; Vaclavik, J.
1990-01-01
Finite difference and finite element approximations of eigenvalue problems under certain circumstances exhibit spectral pollution, i.e. the appearance of eigenvalues that do not converge to the correct value when the mesh density is increased. In the present paper this phenomenon is investigated in a homogeneous case by means of discrete dispersion relations: the polluting modes belong to a branch of the dispersion relation that is strongly distorted by the discretization method employed, or to a new, spurious branch. The analysis is applied to finite difference methods and to finite element methods, and some indications about how to avoid polluting schemes are given. (author) 5 figs., 10 refs
Carlson, W.R.; Piplica, E.J.
1982-01-01
A spectral shift pressurized water reactor comprising apparatus for inserting and withdrawing water displacer elements having differing neutron absorbing capabilities for selectively changing the water-moderator volume in the core, thereby changing the reactivity of the core. The displacer elements comprise substantially hollow cylindrical low-neutron-absorbing rods and substantially hollow cylindrical thick-walled stainless steel rods. Since the stainless steel displacer rods have greater neutron absorbing capability, they can effect a greater reactivity change per rod. However, by arranging fewer stainless steel displacer rods in a cluster, the reactivity worth of the stainless steel displacer rod cluster can be made less than that of a low-neutron-absorbing displacer rod cluster. (author)
丸山, 哲央
2002-01-01
"The main purpose of this paper is to propose a conceptual scheme for the analysis of cultural globalization in accordance with the Parsonian notion of culture, and to point out some problematic issues caused by cultural globalization: the unbalanced development of cultural elements and the prevalence of the pseudo universality of a dominant particular culture. With respect to the recent trend of ‘cultural turn’ in social sciences, this is a tentative approach to a theorization of culture as ...
Two approaches to form antibacterial surface: Doping with bactericidal element and drug loading
Sukhorukova, I.V.; Sheveyko, A.N.; Kiryukhantsev-Korneev, Ph.V. [National University of Science and Technology “MISIS”, Leninsky pr. 4, Moscow 119049 (Russian Federation); Anisimova, N.Y.; Gloushankova, N.A.; Zhitnyak, I.Y. [N.N Blokhin Russian Cancer Research Center of RAMS, Kashirskoe shosse 24, Moscow 115478 (Russian Federation); Benesova, J. [Institute of Experimental Medicine of the ASCR, Vídenska 1083, Prague 14220 (Czech Republic); Institute of Biophysics, 2nd Faculty of Medicine, Charles University in Prague, V Uvalu 84, Prague 15006 (Czech Republic); Amler, E. [Institute of Experimental Medicine of the ASCR, Vídenska 1083, Prague 14220 (Czech Republic); Faculty of Biomedical Engineering, Czech Technical University in Prague (Czech Republic); Shtansky, D.V., E-mail: shtansky@shs.misis.ru [National University of Science and Technology “MISIS”, Leninsky pr. 4, Moscow 119049 (Russian Federation)
2015-03-01
Graphical abstract: - Highlights: • Bioactive materials with rate-controlled release of antibacterial agent. • Ag{sup +} ion release from TiCaPCON-Ag films depended on Ag content. • TiCaPCON-coated Ti network structure with blind pores loaded with co-amoxiclav. • Strong bactericidal effect of drug-loaded samples. • Antibacterial yet biocompatible and bioactive surfaces. - Abstract: Two approaches (surface doping with bactericidal element and loading of antibiotic into specially formed surface microcontainers) to the fabrication of antibacterial yet biocompatible and bioactive surfaces are described. A network structure with square-shaped blind pores of 2.6 ± 0.6 × 10{sup −3} mm{sup 3} for drug loading was obtained by selective laser sintering (SLS). The SLS-fabricated samples were loaded with 0.03, 0.3, 2.4, and 4 mg/cm{sup 2} of co-amoxiclav (amoxicillin and clavulanic acid). Ag-doped TiCaPCON films with 0.4, 1.2, and 4.0 at.% of Ag were obtained by co-sputtering of composite TiC{sub 0.5}-Ca{sub 3}(PO{sub 4}){sub 2} and metallic Ag targets. The surface structure of SLS-prepared samples and cross-sectional morphology of TiCaPCON-Ag films were studied by scanning electron microscopy. The through-thickness of Ag distribution in the TiCaPCON-Ag films was obtained by glow discharge optical emission spectroscopy. The kinetics of Ag ion release in normal saline solution was studied using inductively coupled plasma mass spectrometry. Bacterial activity of the samples was evaluated against S. epidermidis, S. aureus, and K. pneum. ozaenae using the agar diffusion test and photometric method by controlling the variation of optical density of the bacterial suspension over time. Cytocompatibility of the Ag-doped TiCaPCON films was observed in vitro using chondrocytic and MC3T3-E1 osteoblastic cells. The viability and proliferation of chondrocytic cells were determined using the MTS assay and PicoGreen assay tests, respectively. The alkaline phosphatase (ALP
Some new approaches to the synthesis of heavy and superheavy elements
Flerov, G.N.
1980-01-01
The results of work on the synthesis of heavy and superheavy elements are considered. It is shown that the new regularity of the systematics of spontaneous-fission half-lives, established for heavy nuclei at Dubna, has made it possible to extend the region of the nuclei being synthesized. In particular, it becomes possible to produce relatively long-lived heavy isotopes of Z>=107. The results of experiments to study the emission of energetic α-particles in the collision of heavy nuclei are presented. It is noted that such reactions can be used to produce atomic nuclei with low excitation energy and large angular momentum. The possible use of similar reactions in the synthesis of heavy and superheavy elements is discussed. In case the existence of a naturally occurring superheavy element has been established, a possibility will arise to synthesize in nuclear reactions a number of isotopes belonging to the island of stability, and to investigate their properties. The present state of work on the search for superheavy elements in nature is briefly described
Microscopic approach of the spectral property of 1+ and high-spin states in 124Te nucleus
Shi Zhuyi; Ni Shaoyong; Tong Hong; Zhao Xingzhi
2004-01-01
Using a microscopic sdIBM-2+2q·p· approach, the spectra of the low-spin and partial high-spin states in {sup 124}Te are calculated with reasonable success. In particular, the 1{sub 1}{sup +}, 1{sub 2}{sup +}, 3{sub 1}{sup +}, 3{sub 2}{sup +} and 5{sub 1}{sup +} states are successfully reproduced, and the energy relationships resulting from this approach identify the 6{sub 1}{sup +}, 8{sub 1}{sup +} and 10{sub 1}{sup +} states as aligned states of two protons. This can explain the recent experimental finding that collective structures may coexist with single-particle states. The approach thus becomes a powerful tool for describing the spectra of general nuclei without clear symmetry and of isotopes located in transitional regions. Finally, the aligned-state structure and the broken-pair energy of the two quasi-particles are discussed
Eshel, Gil, E-mail: eshelgil@gmail.com [Soil Erosion Research Station, Ministry of Agriculture and Rural Development, HaMaccabim Road, Rishon-Lezion. P.O.B. 30, Beit-Dagan, 50250 (Israel); Lin, Chunye [School of Environment, Beijing Normal University, 19 Xinjiekouwaidajie St., Beijing, 100875 (China); Banin, Amos [Department of Soil and Water Sciences, Faculty of Agricultural, Food and Environmental Quality Sciences, The Hebrew University of Jerusalem, P.O. Box 12, Rehovot (Israel)
2015-01-01
We investigated changes in element content and distribution in soil profiles in a study designed to monitor the geochemical changes accruing in soil due to long-term secondary effluent recharge, and their impact on the sustainability of the Soil Aquifer Treatment (SAT) system. Since the initial elemental contents of the soils at the studied site were not available, we reconstructed them using scandium (Sc) as a conservative tracer. With this approach, we were able to produce a mass balance for 18 elements and evaluate the geochemical changes resulting from 19 years of effluent recharge. The approach also provides a better understanding of the role of soils as an adsorption filter for the heavy metals contained in the effluent. The soil mass balance suggests that 19 years of effluent recharge caused significant enrichment in Cu, Cr, Ni, Zn, Mg, K, Na, S and P contents in the upper 4 m of the soil profile. Combining the element load records over the 19 years suggests that Cr, Ni, and P inputs may not reach the groundwater (20 m deep), whereas the other elements may. Conversely, we found that 58, 60, and 30% of the initial content of Mn, Ca and Co, respectively, leached from the upper 2 m of the soil profile. These high percentages of Mn and Ca depletion from the basin soils may reduce the soil's ability to buffer decreases in redox potential (pe) and pH, respectively, which could initiate a reduction in the soil's holding capacity for heavy metals. - Highlights: • Sc proved to be a reliable tracer for reconstructing the initial soil elemental contents. • A mass balance for 18 elements resulting from 19 years of SAT operation is presented. • After 19 years of operation Cr, Ni, and P inputs may not reach the groundwater. • The inputs of the other 15 elements may reach the groundwater. • 58, 60, and 30% of the initial soil content of Mn, Ca, and Co, respectively, leached from the upper 2 m.
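The conservative-tracer reconstruction works by assuming the Sc inventory is unchanged, so gains and losses of any element j can be expressed through the standard open-system mass-transfer coefficient τ_j = (C_j,now/C_j,initial)·(C_Sc,initial/C_Sc,now) − 1. A hedged sketch with made-up concentrations (not the paper's data):

```python
# Open-system mass-transfer coefficient relative to an immobile tracer (Sc):
# tau > 0 means enrichment, tau < 0 depletion, tau = 0 no change.
# The mg/kg values below are hypothetical, for illustration only.
def tau(c_elem_now, c_elem_init, c_sc_now, c_sc_init):
    return (c_elem_now / c_elem_init) * (c_sc_init / c_sc_now) - 1.0

cu_tau = tau(45.0, 30.0, 10.0, 10.0)    # hypothetical Cu: enriched (tau = 0.5)
mn_tau = tau(120.0, 300.0, 10.0, 10.0)  # hypothetical Mn: depleted (tau = -0.6)
print(round(cu_tau, 2), round(mn_tau, 2))
```

Normalizing by the tracer corrects for compaction or dilution of the soil column, which is what lets the study report true elemental gains and losses rather than raw concentration changes.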
Ustinov, Eugene A.
2005-01-01
An approach to formulation of inversion algorithms for remote sensing in the thermal spectral region in the case of a scattering planetary atmosphere, based on the adjoint equation of radiative transfer (Ustinov (JQSRT 68 (2001) 195; JQSRT 73 (2002) 29); referred to as Papers 1 and 2, respectively, in the main text), is applied to the general case of retrievals of atmospheric and surface parameters for the scattering atmosphere with nadir viewing geometry. Analytic expressions for corresponding weighting functions for atmospheric parameters and partial derivatives for surface parameters are derived. The case of pure atmospheric absorption with a scattering underlying surface is considered and convergence to results obtained for the non-scattering atmospheres (Ustinov (JQSRT 74 (2002) 683), referred to as Paper 3 in the main text) is demonstrated
Figueroa, M. C.; Gregory, D. D.; Lyons, T. W.; Williford, K. H.
2017-12-01
Life processes affect trace element abundances in pyrite such that sedimentary and hydrothermal pyrite have significantly different trace element signatures. Thus, we propose that these biogeochemical data could be used to identify pyrite that formed biogenically either early in our planet's history or on other planets, particularly Mars. The potential for this approach is elevated because pyrite is common in diverse sedimentary settings, and its trace element content can be preserved despite secondary overprints up to greenschist facies, thus minimizing the concerns about remobilization that can plague traditional whole-rock studies. We are also including in-situ sulfur isotope analysis to further refine our understanding of the complex signatures of ancient pyrite. Sulfur isotope data can point straightforwardly to the involvement of life, because pyrite in sediments is inextricably linked to bacterial sulfate reduction and its diagnostic isotopic expressions. In addition to analyzing pyrite of known biological origin formed in the modern and ancient oceans under a range of conditions, we are building a data set for pyrite formed by hydrothermal and metamorphic processes to minimize the risk of false positives in life detection. We have used Random Forests (RF), a machine learning statistical technique with proven efficiency for classifying large geological datasets, to classify pyrite into biotic and abiotic end members. Coupling the trace element and sulfur isotope data from our analyses with a large existing dataset from diverse settings has yielded 4500 analyses with 18 different variables. Our initial results reveal the promise of the RF approach, correctly identifying biogenic pyrite 97 percent of the time. We will continue to couple new in-situ S-isotope and trace element analyses of biogenic pyrite grains from modern and ancient environments, using cutting-edge microanalytical techniques, with new data from high temperature settings. Our ultimate goal
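The biotic/abiotic classification step can be illustrated with a toy bagged ensemble of one-feature decision stumps, a stripped-down stand-in for the Random Forest technique (real work would use a full RF library). The two "trace element" features and labels below are synthetic, not measured pyrite data.

```python
# Toy bagged ensemble of decision stumps: each stump is trained on a random
# subsample (bagging) and the ensemble classifies by majority vote, which is
# the core idea behind Random Forests. All data here are synthetic.
import random

def train_stump(data):
    # Exhaustively pick the feature/threshold split with most correct labels.
    best = None
    for f in range(len(data[0][0])):
        for x, _ in data:
            thr = x[f]
            left = [y for xx, y in data if xx[f] <= thr]
            right = [y for xx, y in data if xx[f] > thr]
            score = (max(left.count(0), left.count(1))
                     + max(right.count(0), right.count(1)))
            if best is None or score > best[0]:
                lmaj = 1 if left.count(1) >= left.count(0) else 0
                rmaj = (1 if right.count(1) >= right.count(0) else 0) if right else 1 - lmaj
                best = (score, f, thr, lmaj, rmaj)
    return best[1:]

def predict(ensemble, x):
    votes = sum(lmaj if x[f] <= thr else rmaj for f, thr, lmaj, rmaj in ensemble)
    return 1 if votes * 2 >= len(ensemble) else 0

rng = random.Random(0)
# Label 1 = "biogenic-like" cluster, label 0 = "hydrothermal-like" cluster.
data = ([([rng.gauss(2.0, 0.3), rng.gauss(5.0, 0.5)], 1) for _ in range(40)]
        + [([rng.gauss(0.5, 0.3), rng.gauss(1.0, 0.5)], 0) for _ in range(40)])
ensemble = [train_stump(rng.sample(data, 40)) for _ in range(15)]  # bagging
accuracy = sum(predict(ensemble, x) == y for x, y in data) / len(data)
print(accuracy)
```

A production Random Forest additionally randomizes the features considered at each split and grows full trees, but the bagging-plus-voting structure shown here is the same.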
The Elements of Language Curriculum: A Systematic Approach to Program Development.
Brown, James Dean
A systematic approach to second language curriculum development is outlined, enumerating the phases and activities involved in developing and implementing a sound and effective language program. The first chapter describes a system whereby all language teaching activities can be classified into approaches, syllabuses, techniques, exercises, or…
Discrete element modeling approach to porosimetry for durability risk estimation of concrete
Stroeven, P.; Le, N.L.B.; Stroeven, M.; Sluys, L.J.
2011-01-01
The paper introduces a novel approach to porosimetry in virtual concrete, denoted as random node structuring (RNS). The fresh state of this particulate material is produced by the DEM system HADES. Hydration simulation is a hybrid approach making use of well-known discretization and vector methods.
Anderson, M.B. [Renewable Energy Systems Ltd., Hemel Hempstead (United Kingdom)
1996-09-01
It is possible to compute the aeroelastic response of a horizontal axis wind turbine comprising: Structural: rotor substructure 144 dof, tower substructure 48 dof, induction, synchronous or variable speed, and gearbox. Aerodynamic: 3 blades (10 elements per blade), dynamic stall, and 6 different aerofoil types with a combination of fixed or pitching elements. Control: stall or power regulation or speed control and shutdowns, wind shear, and tower shadow. Turbulence: 8 radial points, 32 circumferential, and 3 components. On a DEC Alpha workstation the code will simulate the response in close to real-time. As presently formulated, deflections from the initial starting point have to be small, and therefore the code's ability to fully analyse very flexible structures is limited. (EG)
Jurowski, Claudia Anne
1994-01-01
Recent research in the field of tourism has demonstrated that the endorsement of the indigenous population is essential for the development, successful operation and sustainability of tourism. Achieving the goal of favorable community support for the tourism industry will require an understanding of how residents formulate their perceptions of the impact of tourism and their attitudes toward tourism. The purpose of this study was to examine the interplay of elements that affect host co...
Investigation of High-Speed Cryogenic Machining Based on Finite Element Approach
Pooyan Vahidi Pashaki
The simulation of the cryogenic machining process has rarely been studied, because the finite element method requires a three-dimensional model and long process durations. In this study, to overcome this limitation, a 2.5D finite element model of the cryogenic machining process was developed using the commercial finite element software ABAQUS, and the chip formation procedure was investigated under more realistic assumptions. In the proposed method, liquid nitrogen is used as the coolant. Friction during tool-chip interaction is modelled with the Coulomb law. The Johnson-Cook model is used to describe the plastic behavior and the failure criterion, and, unlike previous investigations, the thermal and mechanical properties of the materials were supplied to the software as functions of temperature. After verifying the accuracy of the model against available experimental data, the effects of parameters such as rake angle and cutting speed, as well as dry machining of the aluminum alloy, were studied using the coupled dynamic temperature solution. Results indicated that at a cutting velocity of 10 m/s, cryogenic cooling decreased the tool temperature by 60 percent in comparison with dry machining. Furthermore, chips produced by cryogenic machining were continuous and free of fracture, in contrast to dry machining.
Manufacturing of 37-element fuel bundles for PHWR 540 - new approach
Arora, U.K.; Sastry, V.S.; Banerjee, P.K.; Rao, G.V.S.H.; Jayaraj, R.N. [Nuclear Fuel Complex, Dept. Atomic Energy, Government of India, Hyderabad (India)
2003-07-01
Nuclear Fuel Complex (NFC), established in the early seventies, is a major industrial unit of the Department of Atomic Energy. NFC is responsible for the supply of fuel bundles to all the 220 MWe PHWRs presently in operation. For supplying fuel bundles for the forthcoming 540 MWe PHWRs, NFC is dovetailing 37-element fuel bundle manufacturing facilities in the existing plants. In tune with the philosophy of self-reliance, emphasis is given to technology upgradation, higher customer satisfaction and application of modern quality control techniques. With the experience gained over the years in manufacturing 19-element fuel bundles, NFC has introduced resistance welding of appendages on fuel tubes prior to loading of UO2 pellets, use of bio-degradable cleaning agents, simple diagnostic tools for checking the equipment condition, on-line monitoring of variables, built-in process control methods and total productive maintenance concepts in the new manufacturing facility. Simple material handling systems have been contemplated for handling of the fuel bundles. This paper highlights the flow-sheet adopted for the process, design features of critical equipment and the methodology for fabricating the 37-element fuel bundles, 'RIGHT FIRST TIME'. (author)
Arunava Maity
2015-01-01
This paper considers an infinite-buffer queueing system with a birth-death modulated Markovian arrival process (BDMMAP) and arbitrary service time distribution. The BDMMAP is an excellent representation of arrival processes in which fractal behavior such as burstiness, correlation, and self-similarity is observed, for example in Ethernet LAN traffic systems. This model was first studied by Nishimura (2003), who proposed a twofold spectral theory approach to analyze it. It appears from the investigations that Nishimura's approach is tedious and difficult to employ for practical purposes. The objective of this paper is to analyze the same model with an alternative methodology proposed by Chaudhry et al. (2013) (referred to as the CGG method). The CGG method appears to be rather simple, mathematically tractable, and easy to implement in comparison with Nishimura's approach. The crux of the CGG method is the roots of the characteristic equation associated with the probability generating function (pgf) of the queue length distribution, which dispenses with any eigenvalue algebra and iterative analysis. Both methods are presented in a stepwise manner for easy accessibility, followed by some illustrative examples in accordance with the context.
Alejandro Morales
2017-11-01
This paper presents a new approach for energetic analyses of traffic accidents against fixed road elements using close-range photogrammetry. The main contributions of the developed approach are related to the quality of the 3D photogrammetric models, which enable objective and accurate energetic analyses through the in-house tool CRASHMAP. As a result, security forces can reconstruct the accident in a simple and comprehensive way without requiring spreadsheets or external tools, and thus avoid the subjectivity and imprecisions of the traditional protocol. The tool has already been validated, and is being used by the Local Police of Salamanca (Salamanca, Spain) for the resolution of numerous accidents. In this paper, a real accident of a car against a fixed metallic pole is analysed, and significant discrepancies are obtained between the new approach and the traditional protocol of data acquisition regarding collision speed and absorbed energy.
Pask, J.E.; Klein, B.M.; Fong, C.Y.; Sterne, P.A.
1999-01-01
We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. copyright 1999 The American Physical Society
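The sparsity argument above (strictly local basis functions produce sparse, structured matrices) can be illustrated with a toy 1D assembly. The model problem (the 1D Laplacian with piecewise-linear "hat" functions on a uniform grid) and the grid size are illustrative choices, not taken from the paper:

```python
# Toy illustration: assemble the 1D Laplacian stiffness matrix from
# strictly local, piecewise-linear basis functions on [0, 1]. Each basis
# function overlaps only its neighbours, so the matrix is tridiagonal.
import numpy as np

n = 10                        # number of elements
h = 1.0 / n                   # uniform mesh spacing
K = np.zeros((n - 1, n - 1))  # interior nodes only (Dirichlet ends)
ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

for e in range(n):            # loop over elements, scatter into global K
    for a in range(2):
        for b in range(2):
            i, j = e - 1 + a, e - 1 + b   # map local to interior indices
            if 0 <= i < n - 1 and 0 <= j < n - 1:
                K[i, j] += ke[a, b]

# Bandwidth of the assembled matrix: 1 (tridiagonal)
bandwidth = max(abs(i - j) for i in range(n - 1)
                for j in range(n - 1) if K[i, j] != 0.0)
```

The same locality carries over to higher-order polynomial bases and to 3D, where it yields the sparse, structured matrices the abstract refers to.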
J. Czerny
2013-05-01
Recent studies on the impacts of ocean acidification on pelagic communities have identified changes in carbon-to-nutrient dynamics with related shifts in elemental stoichiometry. In principle, mesocosm experiments provide the opportunity of determining the temporal dynamics of all relevant carbon and nutrient pools and, thus, of calculating elemental budgets. In practice, attempts to budget mesocosm enclosures are often hampered by uncertainties in some of the measured pools and fluxes, in particular due to uncertainties in constraining air-sea gas exchange, particle sinking, and wall growth. In an Arctic mesocosm study on ocean acidification applying KOSMOS (Kiel Off-Shore Mesocosms for future Ocean Simulation), all relevant element pools and fluxes of carbon, nitrogen and phosphorus were measured, using an improved experimental design intended to narrow down the mentioned uncertainties. Water-column concentrations of particulate and dissolved organic and inorganic matter were determined daily. New approaches for quantitative estimates of material sinking to the bottom of the mesocosms and of gas exchange at 48 h temporal resolution, as well as estimates of wall growth, were developed to close the gaps in the element budgets. However, losses of elements from the budgets into a sum of insufficiently determined pools were detected, and these are in principle unavoidable in mesocosm investigations. The comparison of variability patterns across all measured datasets revealed analytic precision to be the main issue in the determination of budgets. Uncertainties in dissolved organic carbon (DOC), nitrogen (DON) and particulate organic phosphorus (POP) were much higher than the summed error in the determination of the same elements in all other pools. With estimates provided for all other major elemental pools, mass balance calculations could be used to infer the temporal development of the DOC, DON and POP pools. Future elevated pCO2 was found to enhance net autotrophic community carbon
A Multi-Element Approach to Location Inference of Twitter: A Case for Emergency Response
Farhad Laylavi
2016-04-01
Since its inception, Twitter has played a major role in real-world events—especially in the aftermath of disasters and catastrophic incidents, and has been increasingly becoming the first point of contact for users wishing to provide or seek information about such situations. The use of Twitter in emergency response and disaster management opens up avenues of research concerning different aspects of Twitter data quality, usefulness and credibility. A real challenge that has attracted substantial attention in the Twitter research community exists in the location inference of twitter data. Considering that less than 2% of tweets are geotagged, finding location inference methods that can go beyond the geotagging capability is undoubtedly the priority research area. This is especially true in terms of emergency response, where spatial aspects of information play an important role. This paper introduces a multi-elemental location inference method that puts the geotagging aside and tries to predict the location of tweets by exploiting the other inherently attached data elements. In this regard, textual content, users’ profile location and place labelling, as the main location-related elements, are taken into account. Location-name classes in three granularity levels are defined and employed to look up the location references from the location-associated elements. The inferred location of the finest granular level is assigned to a tweet, based on a novel location assignment rule. The location assigned by the location inference process is considered to be the inferred location of a tweet, and is compared with the geotagged coordinates as the ground truth of the study. The results show that this method is able to successfully infer the location of 87% of the tweets at the average distance error of 12.2 km and the median distance error of 4.5 km, which is a significant improvement compared with that of the current methods that can predict the location
Finite element approach to global gyrokinetic particle-in-cell simulations using magnetic coordinate
Fivaz, M.; Brunner, S.; Ridder, G. de; Sauter, O.; Tran, T.M.; Vaclavik, J.; Villard, L.; Appert, K.
1997-08-01
We present a fully-global linear gyrokinetic simulation code (GYGLES) aimed at describing the unstable spectrum of ion-temperature-gradient modes in toroidal geometry. We formulate the Particle-In-Cell method with finite elements defined in magnetic coordinates, which provides excellent numerical convergence properties. The poloidal mode structure corresponding to k∥ = 0 is extracted without approximation from the equations, which drastically reduces the numerical resolution needed. The code can routinely simulate modes with both very long and very short toroidal wavelengths, can treat realistic (MHD) equilibria of any size, and runs on a massively parallel computer. (author) 10 figs., 28 refs
Reem Yassine
2016-12-01
The frequency response function is a quantitative measure used in structural analysis and engineering design; hence, it is targeted for accuracy. For a large structure, a high number of substructures, also called cells, must be considered, which leads to a high computational time. In this paper, the recursive method, a finite element method, is used for computing the frequency response function independently of the number of cells and at much lower time cost. The fundamental principle is to eliminate the internal degrees of freedom at the interface between a cell and its successor. The method is applied solely for free (no-load) nodes. Based on the boundary and interior degrees of freedom, the global dynamic stiffness matrix is computed by means of products and inverses, resulting in a matrix of the same dimension as that of one cell. The recursive method is demonstrated on periodic structures (cranes and buildings) under harmonic vibrations. The method yielded a satisfying time decrease, with a maximum time ratio of 1/18 and a percentage difference of 19% in comparison with the conventional finite element method. Close values were attained at low and very high frequencies; the analysis is supported for two types of materials (steel and plastic). The method maintained its efficiency with a high number of forces, excluding the case when all of the nodes are under loads.
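The interface-elimination idea can be sketched as below, using invented spring-mass cell matrices rather than the paper's crane and building models: condensing out the shared internal degree of freedom keeps the dynamic stiffness matrix at the size of a single cell, which is what makes the recursion independent of the cell count.

```python
# Sketch: condense internal (interface) DOFs out of a dynamic stiffness
# matrix, assuming no loads act on the eliminated DOFs. Cell matrices
# below are an illustrative spring-mass model, not the paper's structures.
import numpy as np

def condense(D, keep, drop):
    """Eliminate DOFs `drop` from dynamic stiffness D (free nodes only)."""
    Dkk = D[np.ix_(keep, keep)]
    Dkd = D[np.ix_(keep, drop)]
    Ddk = D[np.ix_(drop, keep)]
    Ddd = D[np.ix_(drop, drop)]
    return Dkk - Dkd @ np.linalg.solve(Ddd, Ddk)  # Schur complement

# One cell: spring k between two boundary DOFs, lumped masses m/2,
# dynamic stiffness D = K - omega^2 * M at circular frequency omega.
k, m, omega = 1000.0, 1.0, 5.0
Dcell = (np.array([[k, -k], [-k, k]])
         - omega**2 * np.diag([m / 2, m / 2]))

# Join two cells (3 DOFs), then condense out the shared middle DOF:
D2 = np.zeros((3, 3))
D2[:2, :2] += Dcell
D2[1:, 1:] += Dcell
Dred = condense(D2, keep=[0, 2], drop=[1])  # same size as one cell
```

Repeating the join-and-condense step cell by cell gives the recursion: at every stage the working matrix stays 2x2, regardless of how many cells have been absorbed.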
Michael J. Leamy
2011-12-01
Dispersion calculations are presented for cylindrical carbon nanotubes using a manifold-based continuum-atomistic finite element formulation combined with Bloch analysis. The formulated finite elements allow any (n,m) chiral nanotube, or mixed tubes formed by periodically-repeating heterojunctions, to be examined quickly and accurately using only three input parameters (radius, chiral angle, and unit cell length) and a trivial structured mesh, thus avoiding the tedious geometry generation and energy minimization tasks associated with ab initio and lattice dynamics-based techniques. A critical assessment of the technique is pursued to determine the validity range of the resulting dispersion calculations, and to identify any dispersion anomalies. Two small anomalies in the dispersion curves are documented, which can be easily identified and therefore rectified. They include difficulty in achieving a zero energy point for the acoustic twisting phonon, and a branch veering in nanotubes with nonzero chiral angle. The twisting mode quickly restores its correct group velocity as wavenumber increases, while the branch veering is associated with a rapid exchange of eigenvectors at the veering point, which also lessens its impact. By taking into account the two noted anomalies, accurate predictions of acoustic and low-frequency optical branches can be achieved out to the midpoint of the first Brillouin zone.
Anuruddh Kumar
2015-03-01
This paper examines the selection and performance evaluation of a variety of piezoelectric materials for cantilever-based sensor applications. The finite element analysis method is implemented to evaluate the relative importance of material properties such as Young's modulus (E), piezoelectric stress constant (e31), dielectric constant (ε) and Poisson's ratio (υ) for cantilever-based sensor applications. An analytic hierarchy process (AHP) is used to assign weights to the properties studied for the sensor structure under consideration. A technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the performance of the piezoelectric materials in the context of sensor voltage outputs. The ranking achieved by the TOPSIS analysis is in good agreement with the results obtained from finite element method simulation. The numerical simulations show that the K0.5Na0.5NbO3–LiSbO3 (KNN–LS) materials family is important for sensor applications. Young's modulus (E) is the most influential material property, followed by the piezoelectric constant (e31), dielectric constant (ε) and Poisson's ratio (υ) for cantilever-based piezoelectric sensor applications.
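The TOPSIS ranking step can be sketched as follows. The decision matrix, the AHP-style weights, and the benefit/cost labels are made-up placeholders for illustration, not the paper's values:

```python
# Sketch of TOPSIS: score alternatives (rows) on weighted criteria (cols)
# by closeness to the ideal solution. All numbers below are illustrative.
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness scores in [0, 1]; benefit[j] marks larger-is-better cols."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector-normalize
    v = norm * weights                                   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))      # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Rows: three hypothetical materials; cols: E, e31, eps, Poisson's ratio
M = np.array([[120.0, 10.0, 1500.0, 0.30],
              [ 90.0, 12.5, 2200.0, 0.32],
              [150.0,  8.0, 1100.0, 0.28]])
w = np.array([0.4, 0.3, 0.2, 0.1])                 # AHP-style weights (made up)
scores = topsis(M, w, benefit=np.array([True, True, True, False]))
ranking = np.argsort(scores)[::-1]                 # best alternative first
```

The highest closeness score wins; in the paper the weights come from the AHP step rather than being chosen by hand.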
Guney, Mert; Zagury, Gerald J.
2014-01-01
Highlights: • Risk for children up to 3 years old was characterized considering oral exposure. • Saliva mobilization, ingestion of parts and of scraped-off material were considered. • Ingestion of parts caused hazard index (HI) values >1 for Cd, Ni, and Pb exposure (up to 75, 5.8, and 43, respectively). • HI for the saliva mobilization and scraped-off material scenarios were lower, but exceeded 1 in three samples (two for Cd, one for Ni). - Abstract: Risk characterization identified different potentially hazardous items compared to United States, Canadian, and European Union approaches. A comprehensive approach was also developed to deal with the complexity and drawbacks caused by various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). Research is recommended on metals bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ
Guney, Mert; Zagury, Gerald J
2014-04-30
The contamination problem in jewelry and toys and children's exposure possibility have been previously demonstrated. For this study, risk from oral exposure has been characterized for highly contaminated metallic toys and jewelry ((MJ), n=16) considering three scenarios. Total and bioaccessible concentrations of Cd, Cu, Ni, and Pb were high in selected MJ. The first scenario (ingestion of parts or pieces) caused unacceptable risk for eight items for Cd, Ni, and/or Pb (hazard index (HI)>1, up to 75, 5.8, and 43, respectively). HI for the ingestion of scraped-off material scenario was always <1 except in three samples (two for Cd, one for Ni). Risk characterization identified different potentially hazardous items compared to United States, Canadian, and European Union approaches. A comprehensive approach was also developed to deal with the complexity and drawbacks caused by various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). Research is recommended on metals bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ. Copyright © 2014 Elsevier B.V. All rights reserved.
Sustainable development, tourism and territory. Previous elements towards a systemic approach
Pierre TORRENTE
2009-01-01
Today, tourism is one of the major challenges for many countries and territories. The balance of payments, an ever-increasing number of visitors and the significant development of the tourism offer clearly illustrate the booming trend in this sector. This macro-economic approach is often used by the organizations in charge of tourism, WTO for instance. Quantitative assessments which consider the satisfaction of customers’ needs as an end in itself have prevailed both in tourism development schemes and in prospective approaches since the sixties.
Brown, Koshonna Dinettia
X-ray Fluorescence Microscopy (XFM) is a useful technique for the study of biological samples. XFM was used to map and quantify endogenous biological elements as well as exogenous materials in biological samples, such as the distribution of titanium dioxide (TiO2) nanoparticles. TiO2 nanoparticles are produced for many different purposes, including development of therapeutic and diagnostic particles for cancer detection and treatment, drug delivery, and induction of DNA breaks. Delivery of such nanoparticles can be targeted to specific cells and subcellular structures. In this work, we develop two novel approaches to stain TiO2 nanoparticles for optical microscopy and confirm the staining by XFM. The first approach utilizes fluorescent biotin and fluorescent streptavidin to label the nanoparticles before and after cellular uptake; the second approach is based on the copper-catalyzed azide-alkyne cycloaddition, the so-called CLICK chemistry, for labeling of azide-conjugated TiO2 nanoparticles with "clickable" dyes such as alkyne Alexa Fluor dyes with a high fluorescent yield. To confirm that the optical fluorescence signals of nanoparticles stained in situ match the distribution of the Ti element, we used high resolution synchrotron X-ray Fluorescence Microscopy (XFM) using the Bionanoprobe instrument at the Advanced Photon Source at Argonne National Laboratory. Titanium-specific X-ray fluorescence showed excellent overlap with the location of Alexa Fluor optical fluorescence detected by confocal microscopy. In this work XFM was also used to investigate native elemental differences between two different types of head and neck cancer, one associated with human papilloma virus infection, the other virus free. Future work may see a cross between these themes, for example, exploration of TiO2 nanoparticles as anticancer treatment for these two different types of head and neck cancer.
Kwadwo S. Agyepong
2013-01-01
Time-course expression profiles and methods for spectrum analysis have been applied for detecting transcriptional periodicities, which are valuable patterns for unravelling genes associated with cell cycle and circadian rhythm regulation. However, most of the proposed methods suffer from restrictions and large false positives to a certain extent. Additionally, in some experiments, arbitrarily irregular sampling times as well as the presence of high noise and small sample sizes make accurate detection a challenging task. A novel scheme for detecting periodicities in time-course expression data is proposed, in which a real-valued iterative adaptive approach (RIAA), originally proposed for signal processing, is applied for periodogram estimation. The inferred spectrum is then analyzed using Fisher’s hypothesis test. With a proper p-value threshold, periodic genes can be detected. A periodic signal, two nonperiodic signals, and four sampling strategies were considered in the simulations, including both bursts and drops. In addition, two real yeast datasets were used for validation. The simulations and real data analysis reveal that RIAA can perform competitively with the existing algorithms. The advantage of RIAA is manifested when the expression data are highly irregularly sampled, and when the number of cycles covered by the sampling time points is very small.
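The detection pipeline (periodogram estimation followed by Fisher's test) can be sketched as below. A plain FFT periodogram stands in for the RIAA estimator, and the test signals are illustrative:

```python
# Sketch: periodogram + Fisher's exact g-test for a single periodic
# component. The FFT periodogram is a stand-in for the RIAA estimator.
import numpy as np
from math import comb

def fisher_g_pvalue(x):
    """Fisher's g-test p-value: is one spectral peak unusually dominant?"""
    spec = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2  # drop the DC term
    m = len(spec)
    g = spec.max() / spec.sum()                        # Fisher's g statistic
    # Exact null distribution of g under Gaussian white noise (Fisher, 1929)
    p = sum((-1.0) ** (k - 1) * comb(m, k) * (1.0 - k * g) ** (m - 1)
            for k in range(1, int(1.0 / g) + 1))
    return min(max(p, 0.0), 1.0)

rng = np.random.default_rng(1)
t = np.arange(100)
periodic = np.sin(2 * np.pi * t / 10) + 0.3 * rng.normal(size=100)
p_periodic = fisher_g_pvalue(periodic)     # dominant peak: tiny p-value
p_noise = fisher_g_pvalue(rng.normal(size=100))
```

With a p-value threshold (e.g. after multiple-testing correction across genes), the periodic series is flagged while pure noise generally is not; RIAA's contribution is a better periodogram when sampling is irregular, which a plain FFT cannot handle.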
Territorial community: a systematic approach to advance functions of individual elements
T. V. Serohina
2017-03-01
It is established that, under the conditions of the administrative-territorial reform, there is a need to change the approach to the basic concepts, in particular to the category of territorial communities as well as to the new category of amalgamated territorial community. New categories need to be identified and enshrined in the legal framework.
Elements and rationale for a common approach to assess and report soil disturbance.
Mike Curran; Doug Maynard; Ron Heninger; Tom Terry; Steve Howes; Doug Stone; Tom Niemann; Richard E. Miller
2008-01-01
Soil disturbance from forest practices ranges from barely perceptible to very obvious, and from positive to nil to negative effects on forest productivity and/or hydrologic function. Currently, most public and private landholders and various other interested parties have different approaches to describing this soil disturbance. More uniformity is needed to describe,...
The 4th Missing Element of the ITO Systemic Approach to Safety
Smetnik, A.; Murlis, D.
2016-01-01
According to the IAEA Report the Fukushima Daiichi accident was a wake-up call for the nuclear community to recognise the complexity of safety and to respect the entire systems interaction of ITOs. The complexity of nuclear organizations is increasing, and different and more unique approaches are needed to ensure that safety is maintained. The Fukushima Daiichi accident was avoidable, according to the presentations of experts from Japan. Taking into account the ongoing interaction between all the individual, technical and organizational (ITO) factors reveals the complexity and non-linearity of the operations at a nuclear power plant. It is necessary to better examine how the weaknesses and strengths of all these factors influence one another and to facilitate the proactive elimination of risks. The International Experts Meeting (IEM) participants emphasised that an integrated approach to safety through consideration of the interaction of ITO systems is needed to complement the more traditional approach to safety. The concept of a systemic approach to safety represents a new way of thinking about safety for some Member States and even for some IAEA activities and services.
A novel approach to the island of stability of super-heavy elements search
Wieloch A.
2016-01-01
It is expected that the cross section for production of super-heavy nuclei with Z > 118 drops into the region of tens of femtobarns. This creates a serious limitation for the complete fusion technique used so far. Moreover, the available combinations of the neutron-to-proton ratio of stable projectiles and targets are quite limited, and it can be difficult to reach the island of stability of super-heavy elements using complete fusion reactions with stable projectiles. In this context, a new experimental investigation of mechanisms other than complete fusion of heavy nuclei and a novel experimental technique have been devised for our search for super- and hyper-nuclei. This contribution focuses on that technique.
New approach to description of fusion-fission dynamics in super-heavy element formation
Zagrebaev, V.I.
2002-01-01
A new mechanism of the fusion-fission process for a heavy nuclear system is proposed, which takes place in the (A1, A2) space, where A1 and A2 are two nuclei surrounded by a certain number of shared nucleons ΔA. The nuclei A1 and A2 gradually lose (or acquire) their individualities as the number of collectivized nucleons ΔA increases (or decreases). The driving potential in the (A1, A2) space is derived, which allows the calculation of both the probability of compound nucleus formation and the mass distribution of fission and quasi-fission fragments in heavy-ion fusion reactions. The cross sections of super-heavy element formation in the 'hot' and 'cold' fusion reactions have been calculated up to Z_CN = 118. (author)
Numerical investigation of a shot peening process by a finite element approach
Liu, Hongsheng; Zhang, Xiaodan; Hansen, Niels
2014-01-01
Shot peening is a surface impact treatment widely used to improve the performance of a metal or a component. The better performance of the shot-peened part is controlled by compressive residual stresses resulting from the plastic deformation of the surface layers by impacts of shot. The compressive residual stress is generally measured by X-ray diffraction. However, considerable cost and time are needed for such measurements. For this reason, in this work a 3D finite element (FE) model is introduced for a shot peening process. Through the FE simulations, the effect of process parameters such as damping ratio of material, friction coefficient, shot velocity and shot angle on the magnitude and distribution of the compressive residual stress is examined.
Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying
2016-01-01
Bearing failure is one of the dominant causes of failure and breakdowns in rotating machinery, leading to huge economic loss. Aiming at the nonstationary and nonlinear characteristics of bearing vibration signals as well as the complexity of condition-indicating information distribution in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory was proposed in the paper. Firstly, a generalized multifractal dimension algorithm was developed to extract the characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health status in comparison with conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement an automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately.
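The matching step of the proposed diagnosis method can be sketched with Deng's gray relational grade. The feature vectors standing in for multifractal-dimension signatures are invented for illustration, and Δmin/Δmax are taken per comparison pair here rather than over all sequences, a simplification of the classical formulation:

```python
# Sketch: match a test feature vector (standing in for generalized
# multifractal dimensions of a vibration signal) against reference
# signatures of known fault types via the gray relational grade.
import numpy as np

def gray_relational_grade(ref, test, rho=0.5):
    """Deng's gray relational grade between reference and test sequences.
    Simplification: delta-min/max computed per pair, not over all refs."""
    diff = np.abs(ref - test)
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)  # relational coefficients
    return coeff.mean()

templates = {                       # hypothetical fault-type signatures
    "normal":     np.array([1.2, 1.1, 1.0, 0.9]),
    "inner-race": np.array([1.8, 1.5, 1.2, 0.8]),
    "outer-race": np.array([1.6, 1.4, 1.1, 0.7]),
}
measured = np.array([1.75, 1.52, 1.18, 0.82])   # features of unknown signal
diagnosis = max(templates,
                key=lambda k: gray_relational_grade(templates[k], measured))
```

The class with the highest grade is reported as the diagnosis; in the paper the feature vectors come from the generalized multifractal dimension spectrum rather than being hand-picked.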
A discontinuous finite element approach to cracking in coupled poro-elastic fluid flow models
Wilson, C. R.; Spiegelman, M. W.; Evans, O.; Ulven, O. I.; Sun, W.
2016-12-01
Reaction-driven cracking is a coupled process whereby fluid-induced reactions drive large volume changes in the host rock which produce stresses leading to crack propagation and failure. This in turn generates new surface area and fluid-flow pathways for subsequent reaction in a potentially self-sustaining system. This mechanism has been proposed for the pervasive serpentinization and carbonation of peridotite, as well as applications to mineral carbon sequestration and hydrocarbon extraction. The key computational issue in this problem is implementing algorithms that adequately model the formation of discrete fractures. Here we present models using a discontinuous finite element method for modeling fracture formation (Radovitsky et al., 2011). Cracks are introduced along facets of the mesh by the relaxation of penalty parameters once a failure criterion is met. The method is fully described in the weak form of the equations, requiring no modification of the underlying mesh structure and allowing fluid properties to be easily adjusted along cracked facets. To develop and test the method, we start by implementing the algorithm for the simplified Biot equations for poro-elasticity using the finite element model assembler TerraFERMA. We consider hydro-fracking around a borehole (Grassl et al., 2015), where elevated fluid pressure in the poro-elastic solid causes it to fail radially in tension. We investigate the effects of varying the Biot coefficient and adjusting the fluid transport properties in the vicinity of the crack, and compare our results to related dual-graph models (Ulven & Sun, submitted). We discuss issues arising from this method, including the formation of null spaces and appropriate preconditioning and solution strategies. Initial results suggest that this method provides a promising way to incorporate cracking into our reactive fluid flow models, and future work aims to integrate the mechanical and chemical aspects of this process.
Macho, Gabriele A; Shimizu, Daisuke; Jiang, Yong; Spears, Iain R
2005-04-01
Australopithecus anamensis is the stem species of all later hominins and exhibits the suite of characters traditionally associated with hominins, i.e., bipedal locomotion when on the ground, canine reduction, and thick-enameled teeth. The functional consequences of its thick enamel are, however, unclear. Without appropriate structural reinforcement, these thick-enameled teeth may be prone to failure. This article investigates the mechanical behavior of A. anamensis enamel and represents the first in a series that will attempt to determine the functional adaptations of hominin teeth. First, the microstructural arrangement of enamel prisms in A. anamensis teeth was reconstructed using recently developed software and was compared with that of extant hominoids. Second, a finite-element model of a block of enamel containing one cycle of prism deviation was reconstructed for Homo, Pan, Gorilla, and A. anamensis, and the behavior of these tissues under compressive stress was determined. Despite similarities in enamel microstructure between A. anamensis and the African great apes, the structural arrangement of prismatic enamel in A. anamensis appears to be more effective in load dissipation under these compressive loads. The findings may imply that this hominin species was well adapted to puncture crushing and are in some respects contrary to expectations based on the macromorphology of teeth. Taken together, information obtained from both finite-element analyses and dental macroanatomy leads us to suggest that A. anamensis was probably adapted for habitually consuming a hard-tough diet. However, additional tests are needed to understand the functional adaptations of A. anamensis teeth fully.
Allison, Stuart A; Xin, Yao
2005-08-15
A boundary element (BE) procedure is developed to numerically calculate the electrophoretic mobility of highly charged, rigid model macroions in the thin double layer regime, based on the continuum primitive model. The procedure is based on that of O'Brien (R.W. O'Brien, J. Colloid Interface Sci. 92 (1983) 204). The advantage of the present procedure over existing BE methodologies applicable to rigid model macroions in general (S. Allison, Macromolecules 29 (1996) 7391) is that computationally time-consuming integrations over a large number of volume elements surrounding the model particle are completely avoided. The procedure is tested by comparing the mobilities derived from it with independent theory for the mobility of spheres of radius a in a salt solution with Debye-Hückel screening parameter κ. The procedure is shown to yield accurate mobilities provided κa exceeds approximately 50. The methodology is most relevant to model macroions of mean linear dimension L with 1000 > κL > 100 and reduced absolute zeta potential (q|ζ|/k_BT) greater than 1.0. The procedure is then applied to the compact form of high molecular weight, duplex DNA that is formed in the presence of the trivalent counterion spermidine under low salt conditions. For T4 DNA (166,000 base pairs), the compact form is modeled as a sphere (diameter = 600 nm) and as a toroid (largest linear dimension = 600 nm). In order to reconcile experimental and model mobilities, approximately 95% of the DNA phosphates must be neutralized by bound counterions. This interpretation, based on electrokinetics, is consistent with independent studies.
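As a point of reference for the thin-double-layer regime discussed above, here is a minimal sketch (ours, not the paper's BE code) of the Smoluchowski limit that sphere mobilities approach as κa grows large; water-like permittivity and viscosity are assumed:

```python
# Smoluchowski electrophoretic mobility, the kappa*a -> infinity limit
# against which thin-double-layer BE results can be sanity-checked.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def smoluchowski_mobility(zeta_volts, eps_r=78.5, eta=0.89e-3):
    """Mobility (m^2 V^-1 s^-1) of a particle with zeta potential zeta_volts
    in a medium of relative permittivity eps_r and viscosity eta (Pa s)."""
    return eps_r * EPS0 * zeta_volts / eta

# A -50 mV particle in water: mobility is about -3.9e-8 m^2/V/s.
print(smoluchowski_mobility(-0.05))
```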
Mottese, Antonio Francesco; Naccari, Clara; Vadalà, Rossella; Bua, Giuseppe Daniel; Bartolomeo, Giovanni; Rando, Rossana; Cicero, Nicola; Dugo, Giacomo
2018-01-01
Opuntia ficus-indica L. Miller fruits, particularly 'Ficodindia dell'Etna' of Biancavilla (PDO), 'Fico d'india tradizionale di Roccapalumba' with protected brand, and samples from an experimental field in Pezzolo (Sicily), were analyzed by inductively coupled plasma mass spectrometry in order to determine their multi-element profile. A multivariate chemometric approach, specifically principal component analysis (PCA), was applied to determine how mineral elements may represent a marker of geographic origin, which would be useful for traceability. PCA allowed us to verify that the geographical origin of prickly pear fruits is significantly influenced by trace element content, and the results found in the Biancavilla PDO samples were linked to the geological composition of this volcanic area. It was observed that two principal components accounted for 72.03% of the total variance in the data; in more detail, PC1 explains 45.51% and PC2 26.52%. This study demonstrated that PCA is an integrated tool for the traceability of food products and, at the same time, a useful method of authentication of typical local fruits such as prickly pear. © 2017 Society of Chemical Industry.
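The PCA bookkeeping behind statements like "PC1 explains 45.51%" can be illustrated on the simplest possible case, two standardized variables with correlation r (toy numbers, not the published element data):

```python
# For two standardized variables, the principal components are the
# eigenvectors of the 2x2 correlation matrix [[1, r], [r, 1]], whose
# eigenvalues are 1 + |r| and 1 - |r|; the explained-variance ratios
# are the eigenvalues divided by their sum.

def pca_2d_explained_variance(r):
    """Explained-variance ratios (PC1, PC2) for correlation r."""
    lam1, lam2 = 1.0 + abs(r), 1.0 - abs(r)
    total = lam1 + lam2
    return lam1 / total, lam2 / total

pc1, pc2 = pca_2d_explained_variance(0.6)
print(pc1, pc2)   # 0.8 and 0.2: PC1 carries most of the shared variance
```

With 14 elements the same idea applies to the 14x14 correlation matrix, which is why two components can summarize most of the variance.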
Nie, Jing-Bao; Fitzgerald, Ruth P
From the outset, cross-cultural and transglobal bioethics has constituted a potent arena for a dynamic public discourse and academic debate alike. But prominent bioethical debates on such issues as the notion of common morality and a distinctive "Asian" bioethics in contrast to a "Western" one reveal some deeply rooted and still popular but seriously problematic methodological habits in approaching cultural differences, most notably, radically dichotomizing the East and the West, the local and the universal. In this paper, a "transcultural" approach to bioethics and cultural studies is proposed. It takes seriously the challenges offered by social sciences, anthropology in particular, towards the development of new methodologies for comparative and global bioethics. The key methodological elements of "transculturalism" include acknowledging the great internal plurality within every culture; highlighting the complexity of cultural differences; upholding the primacy of morality; incorporating a reflexive theory of social power; and promoting changes or progress towards shared and sometimes new moral values.
Riza, Yose; Cheris, Rika; Repi
2017-12-01
The development of Pekanbaru City is very rapid; as a consequence, buildings, areas and cultural objects that need to be preserved are constantly being disrupted and replaced by commercially oriented economic development. The contradiction inherent in metropolitan construction will be the beginning of the problem for urban areas. Kampong Bandar Senapelan is the early town of Pekanbaru, located on the banks of the Siak River. The settlement has a typology of Malay and vernacular Malay architecture. The existence of these villages is a cause for concern, as the city's development toward a metropolis has degraded the historical value of urban development in this region. This study was conducted to assess the importance of preserving Kampung Bandar Senapelan as the oldest area and of its great influence on the development of the metropolis. Preservation of historical and cultural heritage through conservation and preservation measures is one of the urban design elements to be considered by all city stakeholders to safeguard the civilization of a generation. The considerations that will become benchmarks are history, conservation, and urban development toward the metropolis. Awareness of conservation and preservation in this area can bring new characters and values to the building and its environment and will create an atmosphere different from rapid modern-style development. In addition, this preservation will be evident in a harmonious life with a high tolerance among the multi-ethnic communities that have co-existed since the past.
Vora, H.; Morgan, J.
2017-12-01
Brittle failure in rock under confined biaxial conditions is accompanied by a release of seismic energy, known as acoustic emissions (AE). The objective of our study is to understand the influence of the elastic properties of rock and its stress state on deformation patterns and the associated seismicity in granular rocks. Discrete Element Modeling is used to simulate biaxial tests on granular rocks of defined grain size distribution. Acoustic energy and seismic moments are calculated from microfracture events as the rock is taken to failure under different confining pressure states. Dimensionless parameters such as the seismic b-value and the fractal parameter for deformation, the D-value, are used to quantify the seismic character and the distribution of damage in the rock. Initial results suggest that confining pressure has the largest control on the distribution of induced microfracturing, while fracture energy and seismic magnitudes are highly sensitive to the elastic properties of the rock. At low confining pressures, localized deformation (low D-values) and high seismic b-values are observed. Deformation at high confining pressures is distributed in nature (high D-values) and exhibits low seismic b-values as shearing becomes the dominant mode of microfracturing. Seismic b-values and fractal D-values obtained from microfracturing exhibit a linear inverse relationship, similar to trends observed in earthquakes. The modes of microfracturing in our simulations of biaxial compression tests show mechanistic similarities to the propagation of fractures and faults in nature.
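The seismic b-value mentioned above is typically estimated from event magnitudes with Aki's maximum-likelihood formula; here is a minimal sketch with made-up AE magnitudes (not model output):

```python
# Aki (1965) maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc)
# for events with magnitude M at or above the completeness cutoff Mc.
import math

def b_value(magnitudes, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value above cutoff m_c."""
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)

mags = [1.0, 1.2, 1.1, 1.5, 2.0, 1.3, 1.8, 1.1]  # illustrative magnitudes
print(round(b_value(mags, 1.0), 2))   # ~1.16 for this toy catalog
```

High b-values mean small events dominate (distributed microcracking); low b-values mean relatively more large events, consistent with the shearing-dominated regime at high confinement.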
Marco Gonzalez
The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages, but the use of numerical methods has become very popular. Several schemes for numerical SIF calculation have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
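One common displacement-based scheme, displacement correlation, recovers K_I from the computed crack-opening displacement near the tip via du = (8/E') K_I sqrt(r/(2π)); the paper's exact scheme may differ, so treat this as a generic sketch:

```python
# Displacement-correlation estimate of the mode-I SIF from the
# crack-opening displacement du sampled at a small distance r behind
# the tip (plane strain: E' = E / (1 - nu^2)).
import math

def k_i_from_cod(du, r, E, nu, plane_strain=True):
    """Mode-I stress intensity factor (Pa*sqrt(m)) from opening du at r."""
    e_eff = E / (1.0 - nu**2) if plane_strain else E
    return e_eff * du / 8.0 * math.sqrt(2.0 * math.pi / r)

# Round-trip check: impose K_I, generate du from the asymptotic field,
# and recover the same K_I.
E, nu, r, K_exact = 200e9, 0.3, 1e-4, 10e6
du = 8.0 * K_exact / (E / (1 - nu**2)) * math.sqrt(r / (2.0 * math.pi))
print(k_i_from_cod(du, r, E, nu))   # recovers the imposed 10 MPa*sqrt(m)
```

In practice du comes from the BEM crack-face displacements, and several sample radii are extrapolated to r -> 0 to reduce the error of the truncated asymptotic field.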
Finite element (FEM) Kohn-Sham density functional approach to lighter dimers
Kolb, D.; Kopylow, A.V.; Duesterhoft, C.; Heinemann, D.
1998-01-01
The very accurate Finite Element Method has been employed for a comparative study of various combinations of frequently used exchange and correlation density functionals, both local and non-local. We also investigated the properties of the Colle-Salvetti orbital functional in the KLI approximation. All these studies were done for atoms and dimers of the sp-shell, which exhibits a rich variety of system-dependent properties. Moving through the sp-shell, we compare binding energies, radii and vibrational frequencies for ground state and excited configurations, and also compute potential energy surfaces (curves) as a function of internuclear distance. The dependence of total energies on occupation number variations of the Kohn-Sham orbitals provides inferences on polarisation and alignment. An interesting question is how to incorporate, at least approximately, strict non-relativistic physical conservation laws such as spin (S^2 and S_z), angular momentum (L^2 and L_z) and parity, and how to allow for the symmetry breaking necessary for the dissociation of, e.g., N2. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)
A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling
Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit
2014-09-01
When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.
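For orientation, the classical diffuse-field mass law gives the single-panel baseline that such transmission-loss predictions are often compared against; this is a textbook approximation (ours), not the hybrid FE-SEA model:

```python
# Diffuse-field mass law: TL ~ 20*log10(f * m) - 47 dB, where f is the
# frequency in Hz and m the panel's mass per unit area in kg/m^2.
# Valid roughly between the panel's first resonance and its critical
# frequency; the hybrid model also delivers the variance, which this
# deterministic formula cannot.
import math

def mass_law_tl(f_hz, mass_per_area):
    """Approximate diffuse-field transmission loss (dB) of a single panel."""
    return 20.0 * math.log10(f_hz * mass_per_area) - 47.0

print(round(mass_law_tl(1000.0, 10.0), 1))   # ~33 dB for a 10 kg/m^2 panel
```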
Li, Yubo; Wang, Pengtao; Hua, Fei; Zhan, Shijie; Wang, Xiaozhi; Luo, Jikui; Yang, Hangsheng
2018-03-01
Electronic properties of cubic boron nitride (c-BN) doped with group IIA elements were systematically investigated using first-principles calculations based on density functional theory. The electronic bandgap of c-BN was found to be narrowed when the impurity atom substituted either the B (IIA→B) or the N (IIA→N) atom. For IIA→B, a shallow acceptor level degenerated into the valence band (VB), while for IIA→N, a shallow donor level degenerated into the conduction band (CB). In the cases of Be→N and Mg→N, deep donor levels were also induced. Moreover, a zigzag bandgap-narrowing pattern was found, which is consistent with the variation pattern of the radii of the dopants' electron-occupied outer s-orbitals. From the viewpoint of formation energy, substitution of the B atom under N-rich conditions and substitution of the N atom under B-rich conditions were energetically favored. Our simulation results suggest that Mg and Ca are good candidates for p-type dopants, and Ca is the best candidate for an n-type dopant.
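The site-preference statements rest on the standard substitutional formation-energy bookkeeping; the sketch below uses hypothetical total energies and chemical potentials (in eV), not the paper's DFT values:

```python
# Substitutional formation energy:
#   E_f = E(doped cell) - E(host cell) + mu(removed atom) - mu(dopant),
# where the chemical potentials mu shift between B-rich and N-rich
# growth limits. All numbers below are made up for illustration.

def formation_energy(e_doped, e_host, mu_removed, mu_dopant):
    """Formation energy (eV) of substituting one host atom by a dopant."""
    return e_doped - e_host + mu_removed - mu_dopant

# Example: a dopant on a B site (remove one B, add one dopant atom).
e_f = formation_energy(e_doped=-350.2, e_host=-352.0,
                       mu_removed=-6.5, mu_dopant=-1.5)
print(round(e_f, 3))
```

Raising mu(removed) in the corresponding "rich" limit lowers E_f, which is why B-site substitution is favored under N-rich conditions and vice versa.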
Material Substitution For The Supporting Frame of Power Tiller With Finite Element Analysis Approach
Midian Shite
2006-08-01
Due to its advantageous characteristics, aluminum is considered as a substitute for the existing steel in the supporting frame of a power tiller, to meet strength and environmental concerns. The investigation emphasized the comparison of both materials in terms of stress and deformation. In this study, both experimental tests and finite element (FE) analysis were employed to address the research concern. Comparison between the experimental test and the numerical analysis indicated acceptable differences of about 7-33%, which is lower than in previous research. Substitution with aluminum was confirmed using a material index showing that aluminum has better performance in strength and stiffness than steel when minimum weight is prescribed. The FE analysis revealed that the aluminum model was capable of sustaining loads about equal to those of the steel model, based on its maximum von Mises stress, which was insignificantly lower than that of the steel model. In terms of strength characteristics, the strength ratio of the aluminum model was higher than that of the steel model. Furthermore, the substitution also resulted in redistributing stress over a wider area and a mass reduction of about 36%.
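The von Mises comparison at the heart of such a study can be sketched directly from the stress components; the stress state and yield strengths below are made up for illustration:

```python
# Equivalent (von Mises) stress from the six Cauchy stress components,
# and the strength ratio (yield / equivalent stress) used to compare
# candidate materials. Units: MPa. Numbers are hypothetical.
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

sigma_vm = von_mises(80.0, 20.0, 0.0, 15.0, 0.0, 0.0)
ratio_aluminum = 160.0 / sigma_vm   # assumed Al alloy yield strength
ratio_steel = 250.0 / sigma_vm      # assumed mild steel yield strength
print(round(sigma_vm, 1))           # ~76.6 MPa for this stress state
```

Per unit mass, aluminum's lower density (~2.7 vs ~7.8 g/cm^3) is what tilts the material index in its favor even when its absolute strength ratio is lower.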
Wysocka-Lisek, J.; Paszkowska, B.; Mularczyk, K.
1976-01-01
First, the influence of Sn, Pb, Sb, Bi, Cu, Ag, Zn and Cd on the light rare earth spectral lines was studied, using Ni as the internal standard, during intermittent-current arc excitation between carbon electrodes. On the basis of spectral line intensity measurements, it was established that Ni may be applied as the internal standard for the quantitative determination of Sn, Pb, Sb, Bi, Zn and Cd in light rare earth mixtures containing one of the above. A great influence of the individually studied metal on the spectral line intensities of the rare earth elements and nickel was also observed. Differences in the nature of the thermo-chemical reactions between the excited elements and the carbon of the electrodes may cause this influence. (author)
Matsunaga, Takeshi
2002-10-01
Concerning the study subject of the transport of trace, toxic chemicals and radioactive elements in a river watershed, which has been developed in the Research Group for Terrestrial Environment, its aims and methodological approaches have been discussed in the light of related social and technological aspects of today. It is stressed that a study of the transport of radionuclides originating from a nuclear installation is needed to assess the physiological impact and to provide appropriate countermeasures in case of an accident. A numerical model is a prerequisite for these objectives and is to be keenly developed. The outcome of the modeling will also be important for a quantitative analysis of the cycling of trace toxic elements in the atmosphere-lithosphere-hydrosphere, and also of the mechanisms of contamination of the surface aquatic environment. Accordingly, the study will contribute to the key issues stated in national programs of science and technology, such as conservation of the natural and living environment. The present large consumption of metals and metalloids may cause extensive contamination in the future. The study can provide solutions to the problems associated with metals and metalloids, because their environmental behavior resembles that of radionuclides. From a methodological aspect, the importance of a direct investigation of the physico-chemical forms of trace, toxic elements must be stressed. A simultaneous use of experimental methods and chemical modeling to study the physico-chemical forms will be a good example to be realized hereafter. Experimentally, partitioning between solid and liquid phases using radioisotopes, and identification of solid species using various X-ray spectrometric techniques, for example, have been recognized as very promising for investigating the physico-chemical forms of trace elements. These techniques owe much to the nuclear sciences, suggesting further possible contributions of the nuclear sciences to the questions of
Sarparandeh, Mohammadali; Hezarkhani, Ardeshir
2017-12-01
The use of efficient methods for data processing has always been of interest to researchers in the field of earth sciences. Pattern recognition techniques are appropriate methods for high-dimensional data such as geochemical data. Evaluation of the geochemical distribution of rare earth elements (REEs) requires the use of such methods. In particular, the multivariate nature of REE data makes them a good target for numerical analysis. The main subject of this paper is application of unsupervised pattern recognition approaches in evaluating geochemical distribution of REEs in the Kiruna type magnetite-apatite deposit of Se-Chahun. For this purpose, 42 bulk lithology samples were collected from the Se-Chahun iron ore deposit. In this study, 14 rare earth elements were measured with inductively coupled plasma mass spectrometry (ICP-MS). Pattern recognition makes it possible to evaluate the relations between the samples based on all these 14 features, simultaneously. In addition to providing easy solutions, discovery of the hidden information and relations of data samples is the advantage of these methods. Therefore, four clustering methods (unsupervised pattern recognition) - including a modified basic sequential algorithmic scheme (MBSAS), hierarchical (agglomerative) clustering, k-means clustering and self-organizing map (SOM) - were applied and results were evaluated using the silhouette criterion. Samples were clustered in four types. Finally, the results of this study were validated with geological facts and analysis results from, for example, scanning electron microscopy (SEM), X-ray diffraction (XRD), ICP-MS and optical mineralogy. The results of the k-means clustering and SOM methods have the best matches with reality, with experimental studies of samples and with field surveys. Since only the rare earth elements are used in this division, a good agreement of the results with lithology is considerable. It is concluded that the combination of the proposed
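Of the four clustering methods applied above, k-means is the easiest to sketch; here is a toy one-dimensional Lloyd iteration (illustrative concentrations, not the Se-Chahun data, which is 14-dimensional per sample):

```python
# Plain Lloyd's k-means in 1-D: assign each point to the nearest center,
# then move each center to the mean of its assigned points, and repeat.

def kmeans_1d(xs, centers, iters=20):
    """Return converged cluster centers for 1-D data."""
    for _ in range(iters):
        groups = {i: [] for i in range(len(centers))}
        for x in xs:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in groups.items()]
    return centers

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]      # e.g. ppm of one element
print(sorted(kmeans_1d(data, [0.0, 6.0])))  # centers near 1.03 and 5.03
```

The silhouette criterion then scores how much closer each sample is to its own cluster than to the nearest other cluster, which is how the number and quality of the four sample types can be judged.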
Survey of meshless and generalized finite element methods: A unified approach
Babuška, Ivo; Banerjee, Uday; Osborn, John E.
In the past few years meshless methods for numerically solving partial differential equations have come into the focus of interest, especially in the engineering community. This class of methods was essentially stimulated by difficulties related to mesh generation. Mesh generation is delicate in many situations, for instance, when the domain has complicated geometry; when the mesh changes with time, as in crack propagation, and remeshing is required at each time step; when a Lagrangian formulation is employed, especially with nonlinear PDEs. In addition, the need for flexibility in the selection of approximating functions (e.g., the flexibility to use non-polynomial approximating functions), has played a significant role in the development of meshless methods. There are many recent papers, and two books, on meshless methods; most of them are of an engineering character, without any mathematical analysis. In this paper we address meshless methods and the closely related generalized finite element methods for solving linear elliptic equations, using variational principles. We give a unified mathematical theory with proofs, briefly address implementational aspects, present illustrative numerical examples, and provide a list of references to the current literature. The aim of the paper is to provide a survey of a part of this new field, with emphasis on mathematics. We present proofs of essential theorems because we feel these proofs are essential for the understanding of the mathematical aspects of meshless methods, which has approximation theory as a major ingredient. As always, any new field is stimulated by and related to older ideas. This will be visible in our paper.
3D finite element model of the diabetic neuropathic foot: a gait analysis driven approach.
Guiotto, Annamaria; Sawacha, Zimi; Guarneri, Gabriella; Avogaro, Angelo; Cobelli, Claudio
2014-09-22
Diabetic foot is an invalidating complication of diabetes that can lead to foot ulcers. Three-dimensional (3D) finite element analysis (FEA) allows characterizing the loads developed in the different anatomical structures of the foot in dynamic conditions. The aim of this study was to develop a subject-specific 3D foot FE model (FEM) of a diabetic neuropathic (DNS) and a healthy (HS) subject, whose subject specificity can be found in terms of foot geometry and boundary conditions. Kinematics, kinetics and plantar pressure (PP) data were extracted from the gait analysis trials of the two subjects for this purpose. The FEMs were developed by segmenting bones, cartilage and skin from MRI and drawing a horizontal plate as ground support. Material properties were adopted from previous literature. FE simulations were run with the kinematics and kinetics data of four different phases of the stance phase of gait (heel strike, loading response, midstance and push off). The FEMs were then driven by group gait data of 10 neuropathic and 10 healthy subjects. Model validation focused on the agreement between FEM-simulated and experimental PP. The peak values and the total distribution of the pressures were compared for this purpose. Results showed that the models were less robust when driven from group data and underestimated the PP in each foot subarea. In particular, in the case of the neuropathic subject's model, the mean errors between experimental and simulated data were around 20% of the peak values. This knowledge is crucial in understanding the aetiology of the diabetic foot. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fortuny, Josep; Marcé-Nogué, Jordi; Konietzko-Meier, Dorota
2017-06-01
The Late Triassic freshwater ecosystems were occupied by different tetrapod groups including large-sized anamniotes, such as metoposaurids. Most members of this group of temnospondyls acquired gigantic sizes (up to 5 m long) with a nearly worldwide distribution. The paleoecology of metoposaurids is controversial; they have been historically considered passive, bottom-dwelling animals, waiting for prey on the bottom of rivers and lakes, or they have been suggested to be active mid-water feeders. The present study aims to expand upon the paleoecological interpretations of these animals using 3D finite element analyses (FEA). Skulls from two taxa, Metoposaurus krasiejowensis, a gigantic taxon from Europe, and Apachesaurus gregorii, a non-gigantic taxon from North America, were analyzed under different biomechanical scenarios. Both 3D models of the skulls were scaled to allow comparisons between them and reveal that the general stress distribution pattern found in both taxa is clearly similar in all scenarios. In light of our results, both previous hypotheses about the paleoecology of these animals can be partly merged: metoposaurids probably were ambush and active predators, but not the top predators of these aquatic environments. The FEA results demonstrate that they were particularly efficient at bilateral biting, and together with their characteristically anteropositioned orbits, optimal for an ambush strategy. Nonetheless, the results also show that these animals were capable of lateral strikes of the head, suggesting active hunting of prey. Regarding the important skull size differences between the taxa analyzed, our results suggest that the size reduction in the North American taxon could be related to drastic environmental changes or the increase of competitors. The size reduction might have helped them expand into new ecological niches, but they likely remained fully aquatic, as are all other metoposaurids. © 2017 Anatomical Society.
Wang, Songlin; Matsuda, Isamu; Long, Fei; Ishii, Yoshitaka, E-mail: yishii@uic.edu [University of Illinois at Chicago, Department of Chemistry (United States)
2016-02-15
This study demonstrates a novel spectral editing technique for protein solid-state NMR (SSNMR) to simplify the spectrum drastically and to reduce the ambiguity of protein main-chain signal assignments under fast magic-angle-spinning (MAS) conditions over a wide frequency range of 40–80 kHz. The approach, termed HIGHLIGHT (Wang et al., in Chem Comm 51:15055–15058, 2015), combines the reverse ¹³C, ¹⁵N-isotope labeling strategy and selective signal quenching using the frequency-selective REDOR pulse sequence under fast MAS. The scheme allows one to selectively observe the signals of “highlighted” labeled amino-acid residues that precede or follow unlabeled residues, through selectively quenching ¹³CO or ¹⁵N signals for a pair of consecutively labeled residues by recoupling ¹³CO–¹⁵N dipolar couplings. Our numerical simulation results showed that the scheme yielded only ∼15% loss of signal for the highlighted residues while quenching as much as ∼90% of the signal for non-highlighted residues. For lysine-reverse-labeled micro-crystalline GB1 protein, the 2D ¹⁵N/¹³Cα correlation and 2D ¹³Cα/¹³CO correlation SSNMR spectra obtained by the HIGHLIGHT approach yielded signals only for the six residues following and preceding the unlabeled lysine residues, respectively. The experimental dephasing curves agreed reasonably well with the corresponding simulation results for highlighted and quenched residues at spinning speeds of 40 and 60 kHz. The compatibility of the HIGHLIGHT approach with fast MAS allows for sensitivity enhancement by paramagnetic assisted data collection (PACC) and ¹H detection. We also discuss how the HIGHLIGHT approach facilitates signal assignments using ¹³C-detected 3D SSNMR by demonstrating full sequential assignments of lysine-reverse-labeled micro-crystalline GB1 protein (∼300 nmol), for which data collection required only 11 h. The HIGHLIGHT approach offers valuable
How to finance energy transition? Elements of analysis for a strategic approach
Ruedinger, Andreas
2015-01-01
If regulatory and economic signals are the first determining factors for the launching of energy transition projects, financing tools are a major stake. But financing these projects also faces two complementary challenges: the mobilisation of additional capital resources to meet the needs, and the re-orientation of a part of this financing towards more efficient projects. In order to assess the consistency of financing tools, this study identifies three determining financing stakes: an intermediation with capital markets to mobilise capital at low cost, a calibration of project financing mechanisms to meet the needs of the different actors and sectors and to limit transaction costs, and a better articulation between financial tools and regulatory tools. The authors thus propose an integrated approach to the stakes of transition financing.
Przygoda, K; Napieralski, A; Grecki, M
2010-01-01
Linear accelerators such as Free Electron Lasers (FELs) use superconducting (SC) resonant cavities to accelerate the electron beam to high energies. TESLA-type resonators are extremely sensitive to detuning induced by mechanical deformations, known as Lorentz force detuning (LFD), mainly due to the extremely high quality factor (Q) of the 1.3 GHz resonance mode, in the range of 10^6. The resulting modulation of the resonance frequency of the cavity makes the power consumption and stability performance of the Low-Level Radio Frequency (LLRF) control more critical. In order to minimize the RF control effort and achieve the desired stabilities, fast piezoelectric actuators with digital control systems are commonly used. The paper presents a novel approach to the automatic control of piezoelectric actuators used for compensation of Lorentz force detuning, its practical application, and tests carried out on the accelerating module ACC6 of the Free-Electron Laser in Hamburg (FLASH).
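The scale of the problem can be seen from the static Lorentz-force detuning law Δf = -K_L·E_acc², with K_L of order 1 Hz/(MV/m)² for TESLA-type cavities; the coefficient used below is a typical textbook value, assumed for illustration:

```python
# Static Lorentz-force detuning: the radiation pressure deforms the cavity
# walls, shifting the resonance by df = -K_L * Eacc^2. At high Q the cavity
# half-bandwidth is only a few hundred Hz, so this shift must be actively
# compensated (e.g. by piezo actuators).

def lfd_detuning(eacc_mv_per_m, k_l=1.0):
    """Static detuning in Hz for gradient in MV/m, K_L in Hz/(MV/m)^2."""
    return -k_l * eacc_mv_per_m**2

print(lfd_detuning(25.0))   # -625 Hz at 25 MV/m: comparable to, or larger
                            # than, the bandwidth of a Q ~ 1e6 cavity
```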
Ricoeur, Andreas; Lange, Stephan; Avakian, Artjom
2015-04-01
Magnetoelectric (ME) coupling is an inherent property of only a few crystals, which exhibit very low coupling coefficients at low temperatures. On the other hand, such materials are desirable for many promising applications, e.g. as efficient data storage devices or medical or geophysical sensors. Efficient coupling of magnetic and electric fields in materials can only be achieved in composite structures. Here, ferromagnetic (FM) and ferroelectric (FE) phases are combined, e.g. by including FM particles in a FE matrix or embedding fibers of the one phase into a matrix of the other. The ME coupling is then accomplished indirectly via strain fields, exploiting magnetostrictive and piezoelectric effects. This requires a poling of the composite, where the structure is exposed to both large magnetic and electric fields. The efficiency of ME coupling strongly depends on the poling process. Besides the alignment of local polarization and magnetization, poling is accompanied by cracking, which is also decisive for the coupling properties. Nonlinear ferroelectric and ferromagnetic constitutive equations have been developed and implemented within the framework of a multifield, two-scale FE approach. The models are microphysically motivated, accounting for domain and Bloch wall motions. A second, so-called condensed approach is presented which does not require the implementation of a spatial discretisation scheme, while still considering grain interactions and residual stresses. A micromechanically motivated continuum damage model is established to simulate degradation processes. The goal of the simulation tools is to predict the different constitutive behaviors, ME coupling properties and lifetime of smart magnetoelectric devices.
Uma, B.; Swaminathan, T. N.; Ayyaswamy, P. S.; Eckmann, D. M.; Radhakrishnan, R.
2011-09-01
A direct numerical simulation (DNS) procedure is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian stationary fluid medium with the generalized Langevin approach. We consider both the Markovian (white noise) and non-Markovian (Ornstein-Uhlenbeck noise and Mittag-Leffler noise) processes. Initial locations of the particle are at various distances from the bounding wall to delineate wall effects. At thermal equilibrium, the numerical results are validated by comparing the calculated translational and rotational temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical results. Numerical predictions of wall interactions with the particle in terms of mean square displacements are compared with analytical results. In the non-Markovian Langevin approach, an appropriate choice of colored noise is required to satisfy the power-law decay in the velocity autocorrelation function at long times. The results obtained by using non-Markovian Mittag-Leffler noise simultaneously satisfy the equipartition theorem and the long-time behavior of the hydrodynamic correlations for a range of memory correlation times. The Ornstein-Uhlenbeck process does not provide the appropriate hydrodynamic correlations. Comparing our DNS results to the solution of a one-dimensional generalized Langevin equation, it is observed that, when the thermostat adheres to the equipartition theorem, the characteristic memory time of the noise is consistent with the inherent time scale of the memory kernel. The performance of the thermostat with respect to equilibrium and dynamic properties for various noise schemes is discussed.
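As a minimal sketch of the Markovian (white-noise) limit of the approach above, the following integrates a one-dimensional Langevin equation with an Euler-Maruyama scheme and applies the equipartition check the abstract describes. All parameter values and the function name are illustrative, not the authors' DNS code:

```python
import numpy as np

def kinetic_temperature(m=1.0, gamma=4.0, kT=1.0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of the 1D Markovian Langevin equation
    m*dv = -gamma*v*dt + sqrt(2*gamma*kT)*dW, followed by the equipartition
    check: at thermal equilibrium, m*<v^2> should equal kT."""
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps)
    v[0] = 0.0
    sigma = np.sqrt(2.0 * gamma * kT * dt) / m   # per-step noise std dev
    for i in range(1, n_steps):
        v[i] = v[i - 1] - (gamma / m) * v[i - 1] * dt + sigma * rng.standard_normal()
    v_eq = v[n_steps // 10:]                     # discard the initial transient
    return m * np.mean(v_eq ** 2)                # kinetic temperature, ~kT
```

Reproducing the long-time power-law tail of the velocity autocorrelation, as the abstract notes, would require replacing the white noise by a colored (e.g. Mittag-Leffler) process.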
Eguchi, Yuzuru
2005-07-01
The report is concerned with evaluating the applicability of numerical modelling methods for predicting gas entrainment in the upper plenum of a sodium-cooled fast breeder reactor (FBR). Special attention was paid to the applicability of variational multiscale (VMS) modelling in the context of the Finite Element Method. Two flow problems, which were experimentally shown to induce gas entrainment, are solved by a VMS code (MISTRAL). First, computing a benchmark problem of a gas-entraining swirl flow in a cylindrical vessel has led to the following results: (1) the VMS solution resolves the precise vortex core structure more accurately than the non-VMS solution computed by Smart-fem; the circumferential velocity obtained from the VMS computation is almost double that of the non-VMS solution, though it still underestimates the experimental values; (2) the half-value radius of the negative region of the second invariant of the velocity gradient agrees well between the VMS and non-VMS solutions; (3) the negative/positive boundary of the second invariant of the velocity gradient obtained from the VMS solution is closer to the vortex core radius observed in the experiment than that of the non-VMS solution, though the vortex dip length computed from the VMS result is shorter than the experimental value. Second, computing a benchmark problem of open channel flow with a square pillar and a downstream suction pipe has led to the following results: (4) 2Δx-type spatial oscillation was observed due to a lack of mesh subdivisions; (5) the distributional profile of the second invariant of the velocity gradient is similar to that of the first problem (swirl flow in a cylindrical vessel), characterized by a strong negative region surrounded by a weak positive region. As a possible future plan, it may be necessary to analyze more precisely the features of the unsteady vortices obtained in the second benchmark problem and to identify the difference (if any) from the
Comparative risk assessment: an element for a more rational and ethical approach to radiation risk
Danesi, P.R.
2006-01-01
Peaceful nuclear technologies are still perceived by a large fraction of the general public and the media, as well as by some decision makers, as more risky than many other 'conventional' technologies. One of the approaches that can help bring more rationality and ethics into the picture is to present the risk associated with radiation and nuclear technologies in the frame of correctly conducted comparative risk assessments. However, comparing different risks is not straightforward, because quantifying different risks on a comparable scale requires overcoming both conceptual and practical difficulties. Risk (R) can be expressed as the product of the probability (P) that a given undesired event will occur times the consequences of this event (C), i.e. R = P x C. Although in principle risks could be compared by simply ranking them according to the different values of R, this simplistic approach is not always possible, because to correctly compare risks all factors, circumstances and assumptions should be mutually equivalent and quantified, and the (often large) uncertainties taken into proper account. In the case of radiation risk, ICRP has assumed the LNT model as valid (probability coefficient of 5% per sievert for attributable death from cancer) and selected the present equivalent dose limit of 1 mSv per year for public exposure. This dose corresponds to 50 lethal cancers among 1 million people exposed and is approximately equivalent (in terms of probability of death) to the risk of bicycling for 600 km, driving for 3200 km, crossing a busy road twice a day for 1 year, smoking 2.5 packets of cigarettes, or being X-rayed once for kidney metabolism. However, according to many scientists, on the basis of both epidemiological and biological results and considerations, the actual risk is far lower than that predicted by the LNT model. Nevertheless, the policies and myths that were created about half a century ago still persist and have led the general
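The arithmetic behind R = P x C and the quoted ICRP figures can be checked directly; a minimal sketch (the numbers come from the abstract, the variable names are ours):

```python
# LNT arithmetic from the abstract: R = P x C, with the ICRP probability
# coefficient of 5% attributable fatal cancers per sievert and the
# 1 mSv/year public dose limit.
risk_coefficient_per_sv = 0.05   # fatal cancer probability per Sv (ICRP, LNT)
dose_limit_sv = 0.001            # 1 mSv annual public dose limit
population = 1_000_000

individual_risk = risk_coefficient_per_sv * dose_limit_sv  # R = P x C per person
expected_cancers = individual_risk * population            # about 50 per million
print(expected_cancers)
```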
Boris Jesús Goenaga
2017-01-01
Pavement roughness is the main variable producing vertical excitation in vehicles. Pavement profiles are the main determinant of (i) discomfort perceived by users and (ii) dynamic loads generated at the tire-pavement interface; hence their evaluation constitutes an essential step in a Pavement Management System. The present document evaluates two specific techniques used to simulate pavement profiles, the shaping filter and the sinusoidal approach, both based on the Power Spectral Density. Pavement roughness was evaluated using the International Roughness Index (IRI), the most widely used index to characterize longitudinal road profiles. Appropriate parameters were defined in the simulation process to obtain pavement profiles with specific ranges of IRI values using both simulation techniques. The results suggest that using a sinusoidal approach one can generate random profiles with IRI values that are representative of different road types; therefore, one could generate a profile for a paved or an unpaved road, representing all the categories defined by the ISO 8608 standard. On the other hand, obtaining similar results with the shaping-filter approximation requires a modification of the simulation parameters. The newly proposed values allow one to generate pavement profiles with high levels of roughness, covering a wider range of surface types. Finally, the results of the current investigation could be used to further improve our understanding of the effect of pavement roughness on tire-pavement interaction. The evaluated methodologies could be used to generate random profiles with specific levels of roughness to assess their effect on the dynamic loads generated at the tire-pavement interface and on users' perception of road condition.
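The sinusoidal approach described above can be sketched as superposing cosine harmonics whose amplitudes follow the ISO 8608 displacement PSD. A minimal sketch, assuming the common waviness exponent of 2 and illustrative parameter names (not the paper's code):

```python
import numpy as np

def sinusoidal_profile(Gd_n0, length=100.0, dx=0.1, n0=0.1, seed=0):
    """Random road elevation profile by the sinusoidal (PSD-based) approach:
    superpose harmonics with random phases whose amplitudes follow the
    ISO 8608 displacement PSD Gd(n) = Gd(n0) * (n/n0)**-2.
    Gd_n0 in m^3 (e.g. 16e-6 is near the ISO class A/B boundary)."""
    x = np.arange(0.0, length, dx)               # positions along the road (m)
    dn = 1.0 / length                            # spatial frequency resolution
    n = np.arange(dn, 1.0 / (2.0 * dx), dn)      # spatial frequencies (cycles/m)
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, n.size)
    amp = np.sqrt(2.0 * Gd_n0 * (n / n0) ** -2 * dn)   # harmonic amplitudes
    profile = np.sum(
        amp[:, None] * np.cos(2.0 * np.pi * n[:, None] * x[None, :] + phases[:, None]),
        axis=0,
    )
    return x, profile
```

Computing the IRI from such a profile would additionally require running the standard quarter-car model over it, which is beyond this sketch.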
Validation of a Methodology to Predict Micro-Vibrations Based on Finite Element Model Approach
Soula, Laurent; Rathband, Ian; Laduree, Gregory
2014-06-01
This paper presents the second part of the ESA R&D study called "METhodology for Analysis of structure-borne MICro-vibrations" (METAMIC). After defining an integrated analysis and test methodology to help predict micro-vibrations [1], a full-scale validation test campaign has been carried out. It is based on a bread-board representative of a typical spacecraft (S/C) platform, consisting of a versatile structure made of aluminium sandwich panels equipped with different disturbance sources and a dummy payload made of a silicon carbide (SiC) bench. The bread-board has been instrumented with a large set of sensitive accelerometers, and tests have been performed including background noise measurement, modal characterization and micro-vibration tests. The results provided responses to the perturbations coming from a reaction wheel or cryo-cooler compressors, operated independently and then simultaneously with different operation modes. Using consistent modelling and associated experimental characterization techniques, a correlation status has been assessed by comparing test results with predictions based on the FEM approach. Very good results have been achieved, particularly for the case of a wheel in sweeping-rate operation, with test results over-predicted within a reasonable margin of less than a factor of two. Some limitations of the methodology have also been identified for sources operating at a fixed rate or exhibiting a small number of dominant harmonics, and recommendations have been issued in order to deal with model uncertainties and stay conservative.
Thermodynamic approach of the poly-azine - f element ions interaction in aqueous conditions
Miguirditchian, M.; Guillaumont, D.; Moisy, P.; Guillaneux, D.; Madic, C.
2004-01-01
2-Amino-4,6-di-(pyridin-2-yl)-1,3,5-triazine (Adptz) was considered as a model compound for selective aromatic nitrogen extractants (poly-azines) of minor actinides. Thermodynamic data (ΔG⁰, ΔH⁰, ΔS⁰) were systematically acquired for the complexation of lanthanide(III) ions as well as yttrium(III) and americium(III) in hydro-alcoholic medium. Two complementary experimental approaches were followed. Stability constants for the formation of the 1:1 complexes were evaluated from UV-visible spectrophotometric titration experiments, whereas enthalpies and entropies of reaction were obtained consistently from either temperature-dependence experiments or micro-calorimetry. The interaction of Adptz with lanthanide(III) and yttrium(III) ions was found to be essentially ionic and dependent upon the hydration and size of the ion. For the americium(III) ion, the stability constant and enthalpy of complexation were significantly larger. This was attributed to a partial electron transfer from the ligand to empty orbitals of the cation. DFT calculations support this interpretation. (authors)
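The temperature-dependence route mentioned above rests on the van't Hoff relation ln K = -ΔH⁰/(RT) + ΔS⁰/R, so a linear fit of ln K against 1/T yields the enthalpy and entropy of reaction. A minimal sketch on synthetic data (the numerical values are illustrative, not the paper's):

```python
import numpy as np

def vant_hoff(T, lnK):
    """Extract reaction enthalpy and entropy from the temperature dependence
    of a stability constant: ln K = -dH/(R*T) + dS/R, so a linear fit of
    ln K vs 1/T gives dH = -R*slope and dS = R*intercept.
    Returns (dH in J/mol, dS in J/(mol*K))."""
    R = 8.314  # gas constant, J/(mol*K)
    slope, intercept = np.polyfit(1.0 / np.asarray(T, float),
                                  np.asarray(lnK, float), 1)
    return -R * slope, R * intercept
```

In practice the stability constants at each temperature would come from the spectrophotometric titrations, and micro-calorimetry provides an independent check on ΔH⁰.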
Dörr, Dominik; Faisst, Markus; Joppich, Tobias; Poppe, Christian; Henning, Frank; Kärger, Luise
2018-05-01
Finite Element (FE) forming simulation offers the possibility of a detailed analysis of thermoforming processes by means of constitutive modelling of intra- and inter-ply deformation mechanisms, which makes manufacturing defects predictable. Inter-ply slippage is a deformation mechanism, which influences the forming behaviour and which is usually assumed to be isotropic in FE forming simulation so far. Thus, the relative (fibre) orientation between the slipping plies is neglected for modelling of frictional behaviour. Characterization results, however, reveal a dependency of frictional behaviour on the relative orientation of the slipping plies. In this work, an anisotropic model for inter-ply slippage is presented, which is based on an FE forming simulation approach implemented within several user subroutines of the commercially available FE solver Abaqus. This approach accounts for the relative orientation between the slipping plies for modelling frictional behaviour. For this purpose, relative orientation of the slipping plies is consecutively evaluated, since it changes during forming due to inter-ply slipping and intra-ply shearing. The presented approach is parametrized based on characterization results with and without relative orientation for a thermoplastic UD-tape (PA6-CF) and applied to forming simulation of a generic geometry. Forming simulation results reveal an influence of the consideration of relative fibre orientation on the simulation results. This influence, however, is small for the considered geometry.
Kim, Ji Hoon; Lee, M.G.; Kim, D.; Matlock, D.K.; Wagoner, R.H.
2010-01-01
Research highlights: → Robust microstructure-based FE mesh generation technique was developed. → Local deformation behavior near phase boundaries could be quantitatively understood. → Macroscopic failure could be connected to microscopic deformation behavior of multi-phase steel. - Abstract: A qualitative analysis was carried out on the formability of dual-phase (DP) steels by introducing a realistic microstructure-based finite element approach. The present microstructure-based model was constructed using a mesh generation process with a boundary-smoothing algorithm after proper image processing. The developed model was applied to hole-expansion formability tests for DP steel sheets having different volume fractions and morphological features. On the basis of the microstructural inhomogeneity observed in the scanning electron micrographs of the DP steel sheets, it was inferred that the localized plastic deformation in the ferritic phase might be closely related to the macroscopic formability of DP steel. The experimentally observed difference between the hole-expansion formability of two different microstructures was reasonably explained by using the present finite element model.
Park, Junhong; Palumbo, Daniel L.
2004-01-01
The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics, which are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to the invariant points when based on den Hartog's invariant-points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.
Abbott, T.I.; Jones, C.G.
1984-01-01
Radiographic elements are disclosed comprised of first and second silver halide emulsion layers separated by an interposed support capable of transmitting radiation to which the second imaging portion is responsive. At least the first imaging portion contains a silver halide emulsion in which thin tabular silver halide grains of intermediate aspect ratios (from 5:1 to 8:1) are present. Spectral sensitizing dye is adsorbed to the surface of the tabular grains. Increased photographic speeds can be realized at comparable levels of crossover. (author)
Camera, S. [Jodrell Bank Centre for Astrophysics, The University of Manchester, Alan Turing Building, Oxford Road, Manchester M13 9PL (United Kingdom); Fornasa, M. [School of Physics and Astronomy, University of Nottingham, University Campus, Nottingham NG7 2RD (United Kingdom); Fornengo, N.; Regis, M., E-mail: stefano.camera@manchester.ac.uk, E-mail: fornasam@gmail.com, E-mail: fornengo@to.infn.it, E-mail: regis@to.infn.it [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, 10125 Torino (Italy)
2015-06-01
We recently proposed to cross-correlate the diffuse extragalactic γ-ray background with the gravitational lensing signal of cosmic shear. This represents a novel and promising strategy to search for annihilating or decaying particle dark matter (DM) candidates. In the present work, we demonstrate the potential of a tomographic-spectral approach: measuring the cross-correlation in separate bins of redshift and energy significantly improves the sensitivity to a DM signal. Indeed, the technique proposed here takes advantage of the different scaling of the astrophysical and DM components with redshift and, simultaneously, of their different energy spectra and different angular extensions. The sensitivity to a particle DM signal is extremely promising even when the DM-induced emission is quite faint. We first quantify the prospects of detecting DM by cross-correlating the Fermi Large Area Telescope (LAT) diffuse γ-ray background with the cosmic shear expected from the Dark Energy Survey. Under the hypothesis of a significant subhalo boost, such a measurement can deliver a 5σ detection of DM, if the DM particle is lighter than 300 GeV and has a thermal annihilation rate. We then forecast the capability of the European Space Agency Euclid satellite (whose launch is planned for 2020), in combination with a hypothetical future γ-ray detector with slightly improved specifications compared to current telescopes. We predict that the cross-correlation of their data will allow a measurement of the DM mass with an uncertainty of a factor of 1.5–2, even for moderate subhalo boosts, for DM masses up to a few hundred GeV and thermal annihilation rates.
Pasquariello, Vito; Hammerl, Georg; Örley, Felix; Hickel, Stefan; Danowski, Caroline; Popp, Alexander; Wall, Wolfgang A.; Adams, Nikolaus A.
2016-02-01
We present a loosely coupled approach for the solution of fluid-structure interaction problems between a compressible flow and a deformable structure. The method is based on staggered Dirichlet-Neumann partitioning. The interface motion in the Eulerian frame is accounted for by a conservative cut-cell Immersed Boundary method. The present approach enables sub-cell resolution by considering individual cut-elements within a single fluid cell, which guarantees an accurate representation of the time-varying solid interface. The cut-cell procedure inevitably leads to non-matching interfaces, demanding special treatment. A Mortar method is chosen in order to obtain a conservative and consistent load transfer. We validate our method by investigating two-dimensional test cases comprising a shock-loaded rigid cylinder and a deformable panel. Moreover, the aeroelastic instability of a thin plate structure is studied with a focus on the prediction of flutter onset. Finally, we propose a three-dimensional fluid-structure interaction test case of a flexible inflated thin shell interacting with a shock wave involving large and complex structural deformations.
Koch, Stephan
2009-03-30
problem-tailored discretization approach is based on a geometrical modeling of reduced spatial dimension inside the respective domains of symmetry. For the approximation of the electromagnetic fields, orthogonal polynomials along the direction of symmetry are combined with finite element shape functions on the remaining cross-section. This leads to an efficient method providing high accuracy. The domains of symmetry are embedded into the surrounding region by means of a strong coupling at the discrete level in terms of a domain decomposition approach. Using this strategy, for certain examples a level of accuracy corresponding to numerical models featuring several million degrees of freedom in classical finite element methods can be achieved with only one hundred thousand unknowns. This is demonstrated for different examples, e.g., a cylindrical power transformer and the already mentioned accelerator magnet. (orig.)
Steiner, Hans-Georg
1988-01-01
Describes two kinds of elements in mathematics: Euclid's and Bourbaki's. Discusses some criticisms on the two concepts of elements from a philosophical, methodological, and didactical point of view. Suggests a complementarist view and several implications for mathematics education. (YP)
Spectral Imaging by Upconversion
Dam, Jeppe Seidelin; Pedersen, Christian; Tidemand-Lichtenberg, Peter
2011-01-01
We present a method to obtain spectrally resolved images using upconversion. By this method an image is spectrally shifted from one spectral region to another wavelength. Since the process is spectrally sensitive, it allows for a tailored spectral response. We believe this will allow standard silicon-based cameras designed for visible/near-infrared radiation to be used for spectral imaging in the mid-infrared. This can lead to much lower costs for such imaging devices, and a better performance.
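The spectral shift underlying such upconversion follows energy conservation in sum-frequency mixing: 1/λ_up = 1/λ_signal + 1/λ_pump. A minimal sketch with illustrative wavelengths (not necessarily those used in the paper):

```python
def upconverted_wavelength(lambda_signal_nm, lambda_pump_nm):
    """Sum-frequency upconversion obeys energy conservation:
    1/lambda_up = 1/lambda_signal + 1/lambda_pump.
    Returns the upconverted wavelength in nm."""
    return 1.0 / (1.0 / lambda_signal_nm + 1.0 / lambda_pump_nm)

# e.g. a 3000 nm mid-IR signal mixed with a 1064 nm pump lands near 785 nm,
# within the sensitivity range of standard silicon cameras
```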
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Substitution dynamical systems spectral analysis
Queffélec, Martine
2010-01-01
This volume mainly deals with the dynamics of finitely valued sequences, and more specifically, of sequences generated by substitutions and automata. Those sequences demonstrate fairly simple combinatorial and arithmetical properties and naturally appear in various domains. As the title suggests, the aim of the initial version of this book was the spectral study of the associated dynamical systems: the first chapters consisted of a detailed introduction to the mathematical notions involved, and the description of the spectral invariants followed in the closing chapters. This approach, combined with new material added to the new edition, results in a nearly self-contained book on the subject. New tools - which have also proven helpful in other contexts - had to be developed for this study. Moreover, its findings can be concretely applied, the method providing an algorithm to exhibit the spectral measures and the spectral multiplicity, as is demonstrated in several examples. Beyond this advanced analysis, many...
Deetjen, Ulrike; Powell, John A
2016-05-01
This research examines the extent to which informational and emotional elements are employed in online support forums for 14 purposively sampled chronic medical conditions and the factors that influence whether posts are of a more informational or emotional nature. Large-scale qualitative data were obtained from Dailystrength.org. Based on a hand-coded training dataset, all posts were classified into informational or emotional using a Bayesian classification algorithm to generalize the findings. Posts that could not be classified with a probability of at least 75% were excluded. The overall tendency toward emotional posts differs by condition: mental health (depression, schizophrenia) and Alzheimer's disease consist of more emotional posts, while informational posts relate more to nonterminal physical conditions (irritable bowel syndrome, diabetes, asthma). There is no gender difference across conditions, although prostate cancer forums are oriented toward informational support, whereas breast cancer forums rather feature emotional support. Across diseases, the best predictors for emotional content are lower age and a higher number of overall posts by the support group member. The results are in line with previous empirical research and unify empirical findings from single/2-condition research. Limitations include the analytical restriction to predefined categories (informational, emotional) through the chosen machine-learning approach. Our findings provide an empirical foundation for building theory on informational versus emotional support across conditions, give insights for practitioners to better understand the role of online support groups for different patients, and show the usefulness of machine-learning approaches to analyze large-scale qualitative health data from online settings. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 
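The classification step described above (a hand-coded training set, a Bayesian classifier, and a 75% posterior cut-off) can be sketched with a toy multinomial naive Bayes. The data and API below are illustrative, not the study's actual pipeline:

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (text, label) pairs, e.g. labels 'informational' vs
    'emotional'. Returns per-label word counts, word totals, and priors."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        for word in text.lower().split():
            counts.setdefault(label, Counter())[word] += 1
            totals[label] += 1
    return counts, totals, priors

def classify(text, counts, totals, priors, threshold=0.75):
    """Return (label, posterior); posts whose best posterior falls below the
    threshold are excluded (label None), mirroring the 75% cut-off above."""
    vocab = {w for c in counts.values() for w in c}
    log_post = {}
    for label in priors:
        lp = math.log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        log_post[label] = lp
    best = max(log_post, key=log_post.get)
    m = max(log_post.values())
    z = sum(math.exp(v - m) for v in log_post.values())
    posterior = math.exp(log_post[best] - m) / z
    return (best, posterior) if posterior >= threshold else (None, posterior)
```

With a handful of hand-labeled posts, unseen posts can then be classified and low-confidence ones dropped, generalizing the hand-coded labels to the full corpus.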
Nigel Cook
2016-10-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has rapidly established itself as the method of choice for the generation of multi-element datasets for specific minerals, with broad applications in Earth science. Variation in the absolute concentrations of different trace elements within common, widely distributed phases, such as pyrite and iron oxides (magnetite and hematite), and key accessory minerals, such as apatite and titanite, can be particularly valuable for understanding processes of ore formation and, when trace element distributions vary systematically within a mineral system, for a vector approach in mineral exploration. LA-ICP-MS trace element data can assist in element deportment and geometallurgical studies, providing proof of which minerals host key elements of economic relevance, or elements that are deleterious to various metallurgical processes. This contribution reviews recent advances in LA-ICP-MS methodology, reference standards, the application of the method to new mineral matrices, outstanding analytical uncertainties that impact the quality and usefulness of trace element data, and future applications of the technique. We illustrate how data interpretation is highly dependent on an adequate understanding of prevailing mineral textures, geological history and, in some cases, crystal structure.
Dehghani, Hamid; Brooksby, Ben; Vishwanath, Karthik; Pogue, Brian W; Paulsen, Keith D
2003-01-01
Near-infrared (NIR) tomography is a technique used to measure light propagation through tissue and generate images of internal optical property distributions from boundary measurements. Most popular applications have concentrated on female breast imaging, neonatal and adult head imaging, as well as muscle and small animal studies. In most instances a highly scattering medium with a homogeneous refractive index is assumed throughout the imaging domain. Using these assumptions, it is possible to simplify the model to the diffusion approximation. However, biological tissue contains regions of varying optical absorption and scatter, as well as varying refractive index. In this work, we introduce an internal boundary constraint in the finite element method approach to modelling light propagation through tissue that accounts for regions of different refractive indices. We have compared the results to data from a Monte Carlo simulation and show that for a simple two-layered slab model of varying refractive index, the phase of the measured reflectance data is significantly altered by the variation in internal refractive index, whereas the amplitude data are affected only slightly
Martin, M.L.; Martin, G.J.; Guillou, C.
1991-01-01
A strategy is presented for the characterization of sugars according to their botanical origin. Samples fermented under standardized conditions can be described in the multi-dimensional space of the overall carbon isotope ratio of ethanol, measured by isotope ratio mass spectrometry (IRMS), and of the specific hydrogen isotope parameters of the methyl and methylene sites derived from nuclear magnetic resonance investigation of site-specific natural isotope fractionation (the SNIF-NMR method). In the comparison of natural juices, the deuterium and oxygen-18 parameters of water extracted from the juice and from the end fermentation medium also contain information on the origin of the product. The isotopic effects of the concentration processes leading to concentrated juices, musts and syrups can be estimated and taken into account in interpreting the data. The classification power of this multi-element and multi-site approach is illustrated by discriminant analyses involving selected isotopic variables associated with pineapple, apple and barley sugars, compared with beet and cane sugars, which are common sources of enrichment. The ability of the method to detect adulteration by exogenous sugars is improved when environmental conditions can be taken into account. (authors)
Busk, Peter Kamp; Hallin, Peter Fischer; Salomon, Jesper
...cis-regulatory elements. We have developed a method for identifying short, conserved motifs in biological sequences such as proteins, DNA and RNA. This method was used for analysis of approximately 2000 Arabidopsis thaliana promoters that have been shown by DNA array analysis to be induced by abscisic acid. These promoters were compared to 28000 promoters that are not induced by abscisic acid. The analysis identified previously described ABA-inducible promoter elements such as ABRE, CE3 and CRT1, but new cis-elements were also found. Furthermore, the list of DNA elements could be used to predict ABA...
Joyce, Walter G; Werneburg, Ingmar; Lyson, Tyler R
2013-01-01
The hooked element in the pes of turtles was historically identified by most palaeontologists and embryologists as a modified fifth metatarsal, and often used as evidence to unite turtles with other reptiles possessing a hooked element. Some recent embryological studies, however, revealed that this element might represent an enlarged fifth distal tarsal. We herein provide extensive new myological and developmental observations on the hooked element of turtles, and re-evaluate its primary and secondary homology using all available lines of evidence. Digit count and timing of development are uninformative. However, extensive myological, embryological and topological data are consistent with the hypothesis that the hooked element of turtles represents a fusion of the fifth distal tarsal with the fifth metatarsal, but that the fifth distal tarsal dominates the hooked element in pleurodiran turtles, whereas the fifth metatarsal dominates the hooked element of cryptodiran turtles. The term ‘ansulate bone’ is proposed to refer to hooked elements that result from the fusion of these two bones. The available phylogenetic and fossil data are currently insufficient to clarify the secondary homology of hooked elements within Reptilia. PMID:24102560
Spectral dimension in causal set quantum gravity
Eichhorn, Astrid; Mizera, Sebastian
2014-01-01
We evaluate the spectral dimension in causal set quantum gravity by simulating random walks on causal sets. In contrast to other approaches to quantum gravity, we find an increasing spectral dimension at small scales. This observation can be connected to the nonlocality of causal set theory that is deeply rooted in its fundamentally Lorentzian nature. Based on its large-scale behaviour, we conjecture that the spectral dimension can serve as a tool to distinguish causal sets that approximate manifolds from those that do not. As a new tool to probe quantum spacetime in different quantum gravity approaches, we introduce a novel dimensional estimator, the causal spectral dimension, based on the meeting probability of two random walkers, which respect the causal structure of the quantum spacetime. We discuss a causal-set example, where the spectral dimension and the causal spectral dimension differ, due to the existence of a preferred foliation. (paper)
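The spectral dimension discussed above is commonly estimated from the return probability of a diffusion process, via d_s = -2 d ln P(t)/d ln t. A minimal sketch of this estimator on a simple discrete geometry (a one-dimensional ring, where d_s should come out near 1; the graph and all parameters are illustrative, not the authors' causal-set ensembles):

```python
import numpy as np

# Lazy random walk on a ring of N sites (laziness avoids bipartite parity effects).
N = 501
steps = 200
p = np.zeros(N)
p[0] = 1.0                        # walker starts at site 0
ret = []                          # return probabilities P(t)
for t in range(1, steps + 1):
    p = 0.5 * p + 0.25 * (np.roll(p, 1) + np.roll(p, -1))
    ret.append(p[0])

# Spectral dimension from the scaling P(t) ~ t^(-d_s/2):
ts = np.arange(1, steps + 1)
mask = ts >= 16                   # discard the short-time transient
slope = np.polyfit(np.log(ts[mask]), np.log(np.array(ret)[mask]), 1)[0]
d_s = -2.0 * slope
print(round(d_s, 2))              # close to 1 for a one-dimensional geometry
```

The same return-probability fit applies to random walks on causal sets or quantum geometries; only the adjacency structure driving the walk changes.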
Tanabe, M; Wakui, H; Sogabe, M; Matsumoto, N; Tanabe, Y
2010-01-01
A combined multibody and finite element approach is given to solve the dynamic interaction of a Shinkansen train (high-speed train in Japan) and the railway structure including post-derailment during an earthquake effectively. The motion of the train is expressed in multibody dynamics. Efficient mechanical models to express interactions between wheel and track structure including post-derailment are given. Rail and track elements expressed in multibody dynamics and FEM are given to solve contact problems between wheel and long railway components effectively. The motion of a railway structure is modeled with various finite elements and rail and track elements. The computer program has been developed for the dynamic interaction analysis of a Shinkansen train and railway structure including post derailment during an earthquake. Numerical examples are demonstrated.
Equeter, Lucas; Ducobu, François; Rivière-Lorphèvre, Edouard; Abouridouane, Mustapha; Klocke, Fritz; Dehombreux, Pierre
2018-05-01
Industrial concerns arise regarding the significant cost of cutting tools in the machining process. In particular, an improper replacement policy can lead either to scrap, or to early tool replacements that waste still-serviceable tools. ISO 3685 provides the flank-wear end-of-life criterion. Flank wear is also the nominal type of wear for the longest tool lifetimes in optimal cutting conditions. Its consequences include poor surface roughness and dimensional discrepancies. In order to aid the replacement decision process, several tool condition monitoring techniques have been suggested. Force signals were shown in the literature to be strongly linked with tool flank wear. It can therefore be assumed that force signals are highly relevant for monitoring the condition of cutting tools and providing decision-aid information in the framework of their maintenance and replacement. The objective of this work is to correlate tool flank wear with numerically computed force signals. The present work uses a Finite Element Model with a Coupled Eulerian-Lagrangian approach. The geometry of the tool is changed for different runs of the model, in order to obtain results that are specific to a certain level of wear. The model is assessed by comparison with experimental data gathered earlier on fresh tools. Using the model at constant cutting parameters, force signals under different tool wear states are computed for each studied tool geometry. These signals are qualitatively compared with relevant data from the literature. At this point, no quantitative comparison could be performed on worn tools because the reviewed literature failed to provide similar studies for this material, either numerical or experimental. Therefore, further development of this work should include experimental campaigns aimed at collecting cutting force signals and assessing the numerical results that were achieved through this work.
Samimi, M.; Dommelen, van J.A.W.; Geers, M.G.D.
2011-01-01
Oscillations observed in the load–displacement response of brittle interfaces modeled by cohesive zone elements in a quasi-static finite element framework are artifacts of the discretization. The typical limit points in this oscillatory path can be traced by application of path-following techniques,
Su Tingzhi; Guan Xiaohong; Tang Yulin; Gu Guowei; Wang Jianmin
2010-01-01
Toxic anionic elements such as arsenic, selenium, and vanadium often co-exist in groundwater. These elements may impact each other when adsorption methods are used to remove them. In this study, we investigated the competitive adsorption behavior of As(V), Se(IV), and V(V) onto activated alumina under different pH and surface loading conditions. Results indicated that these anionic elements interfered with each other during adsorption. A speciation-based model was developed to quantify the competitive adsorption behavior of these elements. This model could predict the adsorption data well over the pH range of 1.5-12 for various surface loading conditions, using the same set of adsorption constants obtained from single-sorbate systems. This model has great implications in accurately predicting the field capacity of activated alumina under various local water quality conditions when multiple competitive anionic elements are present.
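The study's speciation-based surface model is beyond a short sketch, but the core competitive effect it quantifies, each anion's uptake being suppressed by the others, can be illustrated with a multi-sorbate Langmuir isotherm. All constants below are invented for illustration, not fitted values from the study:

```python
def competitive_langmuir(q_max, K, C):
    """Surface loading q_i = q_max * K_i*C_i / (1 + sum_j K_j*C_j).

    q_max: shared site capacity (mg/g); K: affinity constants (L/mg);
    C: equilibrium concentrations (mg/L). All values are illustrative.
    """
    denom = 1.0 + sum(K[s] * C[s] for s in C)
    return {s: q_max * K[s] * C[s] / denom for s in C}

K = {"As(V)": 0.8, "Se(IV)": 0.5, "V(V)": 0.3}
single = competitive_langmuir(10.0, K, {"As(V)": 2.0, "Se(IV)": 0.0, "V(V)": 0.0})
mixed = competitive_langmuir(10.0, K, {"As(V)": 2.0, "Se(IV)": 2.0, "V(V)": 2.0})
# As(V) uptake drops when the competing anions share the surface sites:
print(single["As(V)"], ">", mixed["As(V)"])
```

The shared denominator is what couples the sorbates: adding Se(IV) and V(V) at fixed As(V) concentration necessarily lowers the As(V) loading.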
Spectral dimension of quantum geometries
Calcagni, Gianluca; Oriti, Daniele; Thürigen, Johannes
2014-01-01
The spectral dimension is an indicator of geometry and topology of spacetime and a tool to compare the description of quantum geometry in various approaches to quantum gravity. This is possible because it can be defined not only on smooth geometries but also on discrete (e.g., simplicial) ones. In this paper, we consider the spectral dimension of quantum states of spatial geometry defined on combinatorial complexes endowed with additional algebraic data: the kinematical quantum states of loop quantum gravity (LQG). Preliminarily, the effects of topology and discreteness of classical discrete geometries are studied in a systematic manner. We look for states reproducing the spectral dimension of a classical space in the appropriate regime. We also test the hypothesis that in LQG, as in other approaches, there is a scale dependence of the spectral dimension, which runs from the topological dimension at large scales to a smaller one at short distances. While our results do not give any strong support to this hypothesis, we can however pinpoint when the topological dimension is reproduced by LQG quantum states. Overall, by exploring the interplay of combinatorial, topological and geometrical effects, and by considering various kinds of quantum states such as coherent states and their superpositions, we find that the spectral dimension of discrete quantum geometries is more sensitive to the underlying combinatorial structures than to the details of the additional data associated with them. (paper)
Ringwelski, S; Gabbert, U
2010-01-01
A recently developed approach for the simulation and design of a fluid-loaded lightweight structure with surface-mounted piezoelectric actuators and sensors capable of actively reducing the sound radiation and the vibration is presented. The objective of this paper is to describe the theoretical background of the approach, in which the FEM is applied to model the actively controlled shell structure. The FEM is also employed to model finite fluid domains around the shell structure as well as fluid domains that are partially or totally bounded by the structure. Boundary elements are used to characterize the unbounded acoustic pressure fields. The approach presented is based on the coupling of piezoelectric and acoustic finite elements with boundary elements. A coupled finite element–boundary element model is derived by introducing coupling conditions at the fluid–fluid and fluid–structure interfaces. Because of the possibility of using piezoelectric patches as actuators and sensors, feedback control algorithms can be implemented directly into the multi-coupled structural–acoustic approach to provide a closed-loop model for the design of active noise and vibration control. In order to demonstrate the applicability of the approach developed, a number of test simulations are carried out and the results are compared with experimental data. As a test case, a box-shaped shell structure with surface-mounted piezoelectric actuators, four sensors and an open rearward end is considered. A comparison between the measured values and those predicted by the coupled finite element–boundary element model shows good agreement.
Information-efficient spectral imaging sensor
Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.
2003-01-01
A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.
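The two-channel split described above applies a signed spectral basis vector optically; numerically, the detection statistic it realizes is the inner product of the pixel spectrum with that basis vector, with the positive and negative elements handled in separate channels whose detectors are then differenced. A sketch with invented target and background signatures (none of these spectra come from the patent text):

```python
import numpy as np

n_bins = 16
bins = np.arange(n_bins)
target = np.exp(-0.5 * ((bins - 5.0) / 1.5) ** 2)      # invented target signature
background = np.linspace(1.0, 0.5, n_bins)             # invented background signature

# Signed spectral basis vector: positive on the target, negative on the background.
basis = target / np.linalg.norm(target) - background / np.linalg.norm(background)
pos, neg = np.clip(basis, 0.0, None), np.clip(-basis, 0.0, None)

def detect(pixel_spectrum):
    # Attenuate each spectral bin in the two channels, then difference the detectors.
    return np.sum(pixel_spectrum * pos) - np.sum(pixel_spectrum * neg)

with_target = detect(background + 0.5 * target)
without_target = detect(background)
print(with_target > without_target)   # the pixel containing the target scores higher
```

Doing this attenuation in hardware, per pixel and per bin, is what lets the sensor output the detection statistic directly instead of a full hyperspectral data cube.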
Bonnail, Estefanía; Pérez-López, Rafael; Sarmiento, Aguasanta M; Nieto, José Miguel; DelValls, T Ángel
2017-09-15
The lanthanide series has been used as a record of water-rock interaction and as a tool for identifying impacts of acid mine drainage (leachate derived from sulphide oxidation). Applying North American Shale Composite (NASC)-normalized rare earth element patterns to these minority elements allows the origin of the contamination to be determined. In the current study, geochemical patterns were applied to rare earth elements bioaccumulated in the soft tissue of the freshwater clam Corbicula fluminea after exposure to different acid mine drainage contaminated environments. Results show significant bioaccumulation of rare earth elements in the soft tissue of the clam after 14 days of exposure to acid mine drainage contaminated sediment (ΣREE = 1.3-8 μg/g dw). Furthermore, it was possible to biomonitor different degrees of contamination based on rare earth elements in tissue. The pattern of this type of contamination describes a particular curve characterized by enrichment in the middle rare earth elements; a homologous pattern (E_MREE = 0.90) was also observed when NASC normalization was applied to clam tissues. Results for lanthanides found in clams were contrasted with the paucity of toxicity studies, determining the risk posed by light rare earth elements in the Odiel River close to the estuary. The current study proposes the use of the clam as an innovative "bio-tool" for the biogeochemical monitoring of pollution inputs in networks affected by acid mine drainage.
Improved documentation of spectral lines for inductively coupled plasma emission spectrometry
Doidge, Peter S.
2018-05-01
An approach to improving the documentation of weak spectral lines falling near the prominent analytical lines used in inductively coupled plasma optical emission spectrometry (ICP-OES) is described. Measurements of ICP emission spectra in the regions around several hundred prominent lines, using concentrated solutions (up to 1% w/v) of some 70 elements, and comparison of the observed spectra with both recent published work and with the output of a computer program that allows calculation of transitions between the known energy levels, show that major improvements can be made in the coverage of spectral atlases for ICP-OES, with respect to "classical" line tables. It is argued that the atomic spectral data (wavelengths, energy levels) required for the reliable identification and documentation of a large majority of the weak interfering lines of the elements detectable by ICP-OES now exist, except for most of the observed lines of the lanthanide elements. In support of this argument, examples are provided from a detailed analysis of a spectral window centered on the prominent Pb II 220.353 nm line, and from a selected line-rich spectrum (W). Shortcomings in existing analyses are illustrated with reference to selected spectral interferences due to Zr. This approach has been used to expand the spectral-line library used in commercial ICP-ES instruments (Agilent 700-ES/5100-ES). The precision of wavelength measurements is evaluated in terms of the shot-noise limit, while the absolute accuracy of wavelength measurement is characterised through comparison with a small set of precise Ritz wavelengths for Sb I, and illustrated through the identification of Zr III lines; it is further shown that fractional-pixel absolute wavelength accuracies can be achieved. Finally, problems with the wavelengths and classifications of certain Au I lines are discussed.
Kiel, Nikolaj; Andersen, Lars Vabbersgaard; Niu, Bin
2012-01-01
...With the number of modules in the three axial directions defined, wall and floor panels are constructed, placed and coupled in the global model. The core of this modular finite element model consists of connecting the different panels to each other in a rational manner, where the accuracy is as high as possible..., with as many applications as possible, for the least possible computational cost. The coupling method of the structural panels in the above-mentioned modular finite element model is discussed and evaluated in this paper. The coupling of the panels is performed using the commercial finite element program... In this way a well-defined master geometry is present onto which all panels can be tied. But as the skeleton is an element itself, it has a physical mass and a corresponding stiffness to be included in the linear system of equations. This means that the skeleton will influence the structure...
Templeton, D.M.; Ariese, F.; Cornelis, R.; Danielsson, L.G.; Muntau, H.; Leeuwen, van H.P.; Lobínski, R.
2000-01-01
This paper presents definitions of concepts related to speciation of elements, more particularly speciation analysis and chemical species. Fractionation is distinguished from speciation analysis, and a general outline of fractionation procedures is given. We propose a categorization of species
Nicholson, Caroline; Hepworth, Julie; Burridge, Letitia; Marley, John; Jackson, Claire
2018-01-31
Against a paucity of evidence, a model describing the elements of health governance best suited to achieving integrated care internationally was developed. The aim of this study was to explore how meso-level health organisations used, or planned to use, the governance elements. A case study design was used to offer two contrasting contexts of health governance. Semi-structured interviews were conducted with participants who held senior governance roles. Data were thematically analysed to identify whether the elements of health governance were being used, or were intended to be in the future. While all participants agreed that the ten elements were essential to developing future integrated care, most were not used. Three major themes were identified: (1) organisational versus system focus, (2) leadership and culture, and (3) community (dis)engagement. Several barriers and enablers to the use of the elements were identified and would require addressing in order to make evidence-based changes. Despite a clear international policy direction in support of integrated care, this study identified a number of significant barriers to its implementation. The study reconfirmed that a focus on all ten elements of health governance is essential to achieve integrated care.
Li, Lin
2008-12-01
Partial least squares (PLS) regressions were applied to lunar highland and mare soil data characterized by the Lunar Soil Characterization Consortium (LSCC) for spectral estimation of the abundance of the lunar soil chemical constituents FeO and Al2O3. The LSCC data set was split into a number of subsets, including the total highland, Apollo 16, Apollo 14, and total mare soils, and PLS was then applied to each to investigate the effect of nonlinearity on the performance of the PLS method. The weight-loading vectors resulting from PLS were analyzed to identify the mineral species responsible for spectral estimation of the soil chemicals. The results from PLS modeling indicate that the PLS performance depends on the correlation of the constituents of interest to their major mineral carriers, and the Apollo 16 soils are responsible for the large errors of the FeO and Al2O3 estimates when these soils were modeled along with other types of soils. These large errors are attributed primarily to the degraded correlation of FeO to pyroxene for the relatively mature Apollo 16 soils as a result of space weathering, and secondarily to the interference of olivine. PLS consistently yields very accurate fits to the two soil chemicals when applied to mare soils. Although Al2O3 has no spectrally diagnostic characteristics, this chemical can be predicted for all subset data by PLS modeling at high accuracy because of its correlation to FeO. This correlation is reflected in the symmetry of the PLS weight-loading vectors for FeO and Al2O3, which prove to be very useful for qualitative interpretation of the PLS results. However, this qualitative interpretation of PLS modeling cannot be achieved using principal component regression loading vectors.
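PLS regression of the kind used above projects high-dimensional spectra onto a few latent components that are maximally correlated with the chemistry. A minimal PLS1 (single response, NIPALS-style) sketch on synthetic "spectra"; the endmember spectra, abundances and noise level are invented for illustration and are not the LSCC data:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS-style PLS1: regression coefficients B for centered data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                       # weight vector
        t = Xc @ w                                   # scores
        p = Xc.T @ t / (t @ t)                       # X loadings
        qk = yc @ t / (t @ t)                        # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t    # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)           # coefficients B

rng = np.random.default_rng(1)
endmembers = rng.random((3, 50))                     # invented mineral spectra
abund = rng.random((40, 3))                          # invented abundances
X = abund @ endmembers + 0.001 * rng.standard_normal((40, 50))
y = abund[:, 0]                                      # chemical tied to endmember 0

B = pls1_fit(X, y, n_comp=3)
y_hat = (X - X.mean(0)) @ B + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))                                  # near 1 on this synthetic set
```

The weight vectors collected in W play the role of the weight-loading vectors the abstract interprets mineralogically.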
Lin, Junfang; Cao, Wenxi; Wang, Guifeng; Hu, Shuibo
2013-06-20
Using a data set of 1333 samples, we assess the spectral absorption relationships of different wave bands for phytoplankton (ph) and particles. We find that a nonlinear model (second-order quadratic equations) delivers good performance in describing their spectral characteristics. Based on these spectral relationships, we develop a method for partitioning the total absorption coefficient into the contributions attributable to phytoplankton [a(ph)(λ)], colored dissolved organic material [CDOM; a(CDOM)(λ)], and nonalgal particles [NAP; a(NAP)(λ)]. This method is validated using a data set that contains 550 simultaneous measurements of phytoplankton, CDOM, and NAP from the NASA bio-Optical Marine Algorithm Dataset. We find that our method is highly efficient and robust, with significant accuracy: the relative root-mean-square errors (RMSEs) are 25.96%, 38.30%, and 19.96% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. The performance is still satisfactory when the method is applied to water samples from the northern South China Sea as a regional case. The computed and measured absorption coefficients (167 samples) agree well with the RMSEs, i.e., 18.50%, 32.82%, and 10.21% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively. Finally, the partitioning method is applied directly to an independent data set (1160 samples) derived from the Bermuda Bio-Optics Project that contains relatively low absorption values, and we also obtain good inversion accuracy [RMSEs of 32.37%, 32.57%, and 11.52% for a(ph)(443), a(CDOM)(443), and the CDOM exponential slope, respectively]. Our results indicate that this partitioning method delivers satisfactory performance for the retrieval of a(ph), a(CDOM), and a(NAP). Therefore, this may be a useful tool for extracting absorption coefficients from in situ measurements or remotely sensed ocean-color data.
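One ingredient of the partitioning above is the CDOM exponential slope. As a sketch of the standard model behind that parameter (not the authors' full partitioning algorithm), a_CDOM(λ) = a_CDOM(443)·exp(-S·(λ-443)) can be fitted by log-linear regression; the reference values below are invented:

```python
import numpy as np

# Illustrative CDOM absorption model: a_CDOM(lam) = a0 * exp(-S * (lam - 443)).
lam = np.arange(400, 501, 10, dtype=float)     # wavelengths, nm
a0_true, S_true = 0.12, 0.018                  # invented reference values (m^-1, nm^-1)
a_cdom = a0_true * np.exp(-S_true * (lam - 443.0))

# Recover the exponential slope S and a_CDOM(443) by log-linear regression.
slope, intercept = np.polyfit(lam - 443.0, np.log(a_cdom), 1)
S_fit, a0_fit = -slope, np.exp(intercept)
print(round(S_fit, 4), round(a0_fit, 3))       # recovers 0.018 and 0.12
```

In practice a_CDOM is not observed in isolation, which is why the paper's method constrains the phytoplankton and non-algal particle shapes simultaneously before extracting S.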
Spectral gamuts and spectral gamut mapping
Rosen, Mitchell R.; Derhak, Maxim W.
2006-01-01
All imaging devices have two gamuts: the stimulus gamut and the response gamut. The response gamut of a print engine is typically described in CIE colorimetry units, a system derived to quantify human color response. More fundamental than colorimetric gamuts are spectral gamuts, based on radiance, reflectance or transmittance units. Spectral gamuts depend on the physics of light and on how materials interact with light, and do not involve human photoreceptor integration or brain processing. Methods for visualizing a spectral gamut raise challenges, as do considerations of how to utilize such a data set for producing superior color reproductions. Recent work has described a transformation of spectra reduced to six dimensions called LabPQR. LabPQR was designed as a hybrid space with three explicit colorimetric axes and three additional spectral reconstruction axes. In this paper spectral gamuts are discussed making use of LabPQR. Also, spectral gamut mapping is considered in light of the colorimetric-spectral duality of the LabPQR space.
Rectangular spectral collocation
Driscoll, Tobin A.
2015-02-06
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions. Then, boundary and interface conditions can be adjoined to yield a square system. The resulting method is both flexible and robust, and avoids ambiguities that arise when applying the classical row deletion method outside of two-point scalar boundary-value problems. The new method is the basis for ordinary differential equation solutions in Chebfun software, and is demonstrated for a variety of boundary-value, eigenvalue and time-dependent problems.
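For contrast with the rectangular approach, the classical "row deletion/replacement" imposition of boundary conditions that the abstract refers to can be sketched for a Chebyshev collocation solve of u'' = e^x with u(±1) = 0. This uses the standard differentiation-matrix construction, not Chebfun's rectangular implementation:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points (standard construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 16
D, x = cheb(n)
D2 = D @ D                           # second-derivative operator
# Classical approach: delete boundary rows/columns to enforce u(+-1) = 0.
u = np.zeros(n + 1)
u[1:n] = np.linalg.solve(D2[1:n, 1:n], np.exp(x[1:n]))

exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
print(np.max(np.abs(u - exact)))     # spectrally small error
```

The rectangular method replaces the row deletion with resampling onto a lower-degree grid, so boundary and interface conditions can be appended to an already-rectangular operator without ad hoc choices about which rows to sacrifice.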
Hegazy, Maha Abdel Monem; Fayez, Yasmin Mohammed
2015-04-01
Two different methods of manipulating spectrophotometric data have been developed, validated and compared. One is capable of removing the signal of any interfering components at the selected wavelength of the component of interest (univariate). The other includes more variables and extracts maximum information to determine the component of interest in the presence of other components (multivariate). The applied methods are simple, accurate, sensitive, precise and capable of determining the spectrally overlapped antihypertensives hydrochlorothiazide (HCT), irbesartan (IRB) and candesartan (CAN). Mean centering of ratio spectra (MCR) and the concentration residual augmented classical least-squares method (CRACLS) were developed and their efficiency was compared. CRACLS is a simple method that is capable of extracting the pure spectral profile of each component in a mixture. Correlation between the estimated and pure spectra was calculated and found to be 0.9998, 0.9987 and 0.9992 for HCT, IRB and CAN, respectively. The methods successfully determined the three components in bulk powder, laboratory-prepared mixtures and combined dosage forms. The results obtained were compared statistically with each other and with those of the official methods.
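The mean-centering-of-ratio-spectra idea can be sketched for a binary mixture: dividing the mixture spectrum by the spectrum of one component turns that component's contribution into a constant, which mean centering removes, leaving a signal proportional to the other component's concentration. The Gaussian "spectra" below are invented for illustration and are not the drug spectra from the study:

```python
import numpy as np

wl = np.linspace(200.0, 400.0, 201)                    # wavelengths, nm
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
s_a = band(250.0, 15.0)                                # pure spectrum of A (invented)
s_b = 0.5 + band(330.0, 20.0)                          # pure spectrum of B (invented)

def mcr_signal(c_a, c_b):
    mixture = c_a * s_a + c_b * s_b
    ratio = mixture / s_b              # = c_a*(s_a/s_b) + c_b (a constant)
    centered = ratio - ratio.mean()    # mean centering removes the constant c_b
    return centered[np.argmax(np.abs(centered))]       # peak amplitude

# The MCR amplitude tracks c_a and is unaffected by c_b:
a1, a2 = mcr_signal(1.0, 3.0), mcr_signal(2.0, 5.0)
print(round(a2 / a1, 3))               # 2.0: proportional to c_a only
```

Doubling the concentration of A doubles the mean-centered ratio signal even though the concentration of B also changed, which is why the measured amplitude can be read against a calibration curve for A alone.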
Meson spectral functions at finite temperature
Wetzorke, I.; Karsch, F.; Laermann, E.; Petreczky, P.; Stickan, S.
2001-10-01
The Maximum Entropy Method provides a Bayesian approach to reconstruct the spectral functions from discrete points in Euclidean time. The applicability of the approach at finite temperature is probed with the thermal meson correlation function. Furthermore the influence of fuzzing/smearing techniques on the spectral shape is investigated. We present first results for meson spectral functions at several temperatures below and above T_c. The correlation functions were obtained from quenched calculations with Clover fermions on large isotropic lattices of size (24-64)^3 x 16. We compare the resulting pole masses with the ones obtained from standard 2-exponential fits of spatial and temporal correlation functions at finite temperature and in the vacuum. The deviation of the meson spectral functions from free spectral functions is examined above the critical temperature. (orig.)
3D Discrete element approach to the problem on abutment pressure in a gently dipping coal seam
Klishin, S. V.; Revuzhenko, A. F.
2017-09-01
Using the discrete element method, the authors have carried out a 3D implementation of the problem of strength loss in the surrounding rock mass in the vicinity of a production heading and of abutment pressure in a gently dipping coal seam. The calculation of forces at the contacts between particles accounts for friction, rolling resistance and viscosity. Between the discrete particles modeling the coal seam, surrounding rock mass and broken rocks, an elastic connecting element is introduced to allow coherent materials to be simulated. The paper presents the kinematic patterns of rock mass deformation, the stresses in particles and a graph of the abutment pressure behavior in the coal seam.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved within the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate them, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
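The core construction — viewing a finite-element mesh as an undirected graph and building wavelets from the spectrum of its Laplacian — can be sketched as follows. The band-pass kernel g(x) = x e^{-x} is one common choice from spectral graph theory; the paper's actual kernel, scales and meshes are not specified here:

```python
import numpy as np

def graph_wavelet_basis(adjacency, scale=1.0):
    """Spectral graph wavelet operator psi_s = U diag(g(s*lambda)) U^T,
    built from the combinatorial Laplacian L = D - A of an undirected graph.
    Column i is the wavelet centered at node i."""
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)              # spectral decomposition of L
    g = scale * lam * np.exp(-scale * lam)  # band-pass kernel g(s*lambda)
    return U @ np.diag(g) @ U.T
```

Because g(0) = 0, the operator annihilates constant signals, so the wavelets respond only to local variation — the property exploited for scale-dependent smoothness in the reconstruction.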
Farhat Ghada
2008-04-01
Background: Recent advances in genomics and proteomics and the increasing demand for biomarker validation studies have catalyzed changes in the landscape of cancer research, fueling the development of tissue banks for translational research. A result of this transformation is the need for sufficient quantities of clinically annotated and well-characterized biospecimens to support the growing needs of the cancer research community. Clinical annotation allows samples to be better matched to the research question at hand and ensures that experimental results are better understood and can be verified. To facilitate and standardize such annotation in bio-repositories, we have combined three accepted and complementary sets of data standards: the College of American Pathologists (CAP) Cancer Checklists, the protocols recommended by the Association of Directors of Anatomic and Surgical Pathology (ADASP) for pathology data, and the North American Association of Central Cancer Registries (NAACCR) elements for epidemiology, therapy and follow-up data. Combining these approaches creates a set of International Organization for Standardization (ISO)-compliant Common Data Elements (CDEs) for the mesothelioma tissue banking initiative supported by the National Institute for Occupational Safety and Health (NIOSH) of the Centers for Disease Control and Prevention (CDC). Methods: The purpose of the project is to develop a core set of data elements for annotating mesothelioma specimens, following the standards established by the CAP checklist, the ADASP cancer protocols, and the NAACCR elements. We have associated these elements with a modeling architecture to enhance both syntactic and semantic interoperability. The system has a Java-based multi-tiered architecture based on the Unified Modeling Language (UML). Results: Common Data Elements were developed using controlled vocabulary, ontology and semantic modeling methodology. The CDEs for each case are of different types: demographic…
Pavlů, J.; Řehák, Petr; Vřešťál, Jan; Šob, Mojmír
2015-01-01
Roč. 51, č. 1 (2015), s. 161-171 ISSN 0364-5916 Institutional support: RVO:68081723 Keywords: Einstein temperature * Heat capacity * Low temperature * Pure elements * SGTE data * Zero Kelvin Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.129, year: 2015
Janoušek, V.; Navrátil, Tomáš; Trubač, J.; Strnad, J.; Laufek, F.; Minařík, Luděk
2014-01-01
Roč. 65, č. 4 (2014), s. 257-271 ISSN 1335-0552 Institutional research plan: CEZ:AV0Z30130516 Institutional support: RVO:67985831 Keywords: modal analyses * trace-element residence * ICP-MS * Central Bohemian Plutonic Complex * Říčany granite Subject RIV: DD - Geochemistry Impact factor: 0.761, year: 2014
A novel and compact spectral imaging system based on two curved prisms
Nie, Yunfeng; Bin, Xiangli; Zhou, Jinsong; Li, Yang
2013-09-01
As a novel detection approach which simultaneously acquires a two-dimensional visual picture and one-dimensional spectral information, spectral imaging offers promising applications in biomedical imaging, conservation and identification of artworks, surveillance of food safety, and so forth. A novel moderate-resolution spectral imaging system consisting of merely two optical elements is illustrated in this paper. It realizes the function of a relay imaging system as well as a spectroscope with 10 nm spectral resolution. Compared to conventional prismatic imaging spectrometers, this design is compact and concise, with only two specially curved prisms utilizing two reflective surfaces. In contrast to spectral imagers based on diffractive gratings, the use of a compound prism offers higher energy utilization and a wider free spectral range. The Seidel aberration theory and dispersive principle of this special prism are analyzed first. According to the results, the optical system of this design is simulated, and a performance evaluation including the spot diagram, MTF and distortion is presented. Finally, considering the difficulty and particularity of manufacture and alignment, an available method for fabrication and measurement is proposed.
Kragh, Helge
2009-01-01
…of the nineteenth century. In the modest form of a yellow spectral line known as D3, 'helium' was sometimes supposed to exist in the Sun's atmosphere, an idea which is traditionally ascribed to J. Norman Lockyer. Did Lockyer discover helium as a solar element? How was the suggestion received by chemists, physicists and astronomers in the period until the spring of 1895, when William Ramsay serendipitously found the gas in uranium minerals? The hypothetical element helium was fairly well known, yet Ramsay's discovery owed little or nothing to Lockyer's solar element. Indeed, for a brief while it was thought that the two elements might be different. The complex story of how helium became established as both a solar and terrestrial element involves precise observations as well as airy speculations. It is a story that is unique among the discovery histories of the chemical elements.
Spectral parameters for scattering amplitudes in N=4 super Yang-Mills theory
Ferro, Livia; Łukowski, Tomasz; Meneghelli, Carlo; Plefka, Jan; Staudacher, Matthias
2014-01-01
Planar N=4 Super Yang-Mills theory appears to be a quantum integrable four-dimensional conformal theory. This has been used to find equations believed to describe its exact spectrum of anomalous dimensions. Integrability seemingly also extends to the planar space-time scattering amplitudes of the N=4 model, which show strong signs of Yangian invariance. However, in contradistinction to the spectral problem, this has not yet led to equations determining the exact amplitudes. We propose that the missing element is the spectral parameter, ubiquitous in integrable models. We show that it may indeed be included into recent on-shell approaches to scattering amplitude integrands, providing a natural deformation of the latter. Under some constraints, Yangian symmetry is preserved. Finally we speculate that the spectral parameter might also be the regulator of choice for controlling the infrared divergences appearing when integrating the integrands in exactly four dimensions
Cozer, Thamara C.; Conceicao, Andre L.C.; Paschuk, Sergei A.; Rocha, Anna S.S. da; Fagundes, Alana C.F.; Maciel, Karla F.R.; Pimentel, Gustavo R.O.; Badelli, Juliana C., E-mail: thamara.cozer@gmail.com, E-mail: alconceicao@utfpr.edu.br, E-mail: sergei@utfpr.edu.br, E-mail: anna@utfpr.edu.br, E-mail: alanacarolinef@gmail.com, E-mail: karla_rimanski@hotmail.com, E-mail: g_rop@hotmail.com, E-mail: jubadellin@gmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Lab. de Espectroscopia de Raio-X
2015-07-01
Studies performed with canines indicate that one of the main neoplasias affecting these animals is the breast tumor, representing from 25% to 50% of all tumors; moreover, half of them are classified as malignant. Recent research on humans has associated the presence of certain trace elements with the development of breast neoplasia, and since the breast tissue composition of canines is very similar to that of humans, the same behavior is expected. A very effective technique to identify and determine trace element concentrations is EDXRF; however, studies in this area are scarce in the literature. Therefore, in this work an approach was developed to quantify the main trace elements present in these tumors with high sensitivity. For this purpose, calibration curves were determined from standard samples diluted in water, with concentrations of Ca, Fe, Cu and Zn ranging from 400 mg/kg to 35 mg/kg, 20 mg/kg to 2 mg/kg, 10 mg/kg to 1 mg/kg and 100 mg/kg to 10 mg/kg, respectively. All calibration curves were fitted linearly, and on the basis of this behavior the sensitivity of the approach to quantify the concentrations of the trace elements mentioned above was determined. Studies in this area have great potential, because EDXRF represents a quick, practical and non-destructive alternative for quantifying trace elements. (author)
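The calibration-curve procedure described above amounts to a linear least-squares fit of measured intensity against known concentration, which is then inverted to predict unknown concentrations. A minimal sketch, using synthetic standards in the abstract's Zn range (the intensities and slope are invented, not measured data):

```python
import numpy as np

def calibrate(conc_std, intensity_std):
    """Fit the calibration line I = a*C + b through standard samples.
    Returns slope a (the sensitivity), intercept b, and an inverse
    predictor mapping measured intensity back to concentration."""
    a, b = np.polyfit(conc_std, intensity_std, 1)
    def predict_conc(intensity):
        return (np.asarray(intensity, float) - b) / a
    return a, b, predict_conc

# Hypothetical Zn standards, 10-100 mg/kg as in the abstract's range
c = np.array([10, 25, 50, 75, 100], float)
i = 3.2 * c + 1.5            # synthetic, perfectly linear detector response
a, b, inv = calibrate(c, i)
```

The slope a plays the role of the sensitivity discussed in the abstract: a steeper calibration line means a smaller concentration change is resolvable per unit of measured intensity.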
Christensen, Max la Cour; Villa, Umberto; Engsig-Karup, Allan Peter
2017-01-01
We study the application of a finite element numerical upscaling technique to the incompressible two-phase porous media total velocity formulation. Specifically, an element agglomeration based Algebraic Multigrid (AMGe) technique with improved approximation properties [37] is used, for the first… associated with non-planar interfaces between agglomerates, the coarse velocity space has guaranteed approximation properties. The employed AMGe technique provides coarse spaces with desirable local mass conservation and stability properties analogous to the original pair of Raviart-Thomas and piecewise discontinuous polynomial spaces, resulting in strong mass conservation for the upscaled systems. Due to the guaranteed approximation properties and the generic nature of the AMGe method, recursive multilevel upscaling is automatically obtained. Furthermore, this technique works for both structured…
El-Zein, Abbas; Carter, John P.; Airey, David W.
2006-06-01
A three-dimensional finite-element model of contaminant migration in fissured clays or contaminated sand which includes multiple sources of non-equilibrium processes is proposed. The conceptual framework can accommodate a regular network of fissures in 1D, 2D or 3D and immobile solutions in the macro-pores of aggregated topsoils, as well as non-equilibrium sorption. A Galerkin weighted-residual statement for the three-dimensional form of the equations in the Laplace domain is formulated. Equations are discretized using linear and quadratic prism elements. The system of algebraic equations is solved in the Laplace domain and solution is inverted to the time domain numerically. The model is validated and its scope is illustrated through the analysis of three problems: a waste repository deeply buried in fissured clay, a storage tank leaking into sand and a sanitary landfill leaching into fissured clay over a sand aquifer.
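The workflow above — solve the discretized equations in the Laplace domain, then invert the solution numerically to the time domain — can be illustrated with the Gaver-Stehfest algorithm, one common real-axis inversion scheme. The paper does not state which inversion method it uses, so this is a generic stand-in:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(s),
    evaluated at time t > 0. N must be even; F is sampled only on the
    positive real axis, at s = k*ln(2)/t."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (alternating, large-magnitude coefficients)
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2 / t)
    return ln2 / t * total
```

For example, F(s) = 1/(s+1) inverts to f(t) = e^{-t} to several digits. The alternating large weights make the scheme sensitive to noise in F, which is why smooth Laplace-domain finite-element solutions are a good fit for it.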
Optical phonons in cubic AlxGa1-xN approached by the modified random element isodisplacement model
Liu, M.S.; Bursill, L.A.; Prawer, S.
1998-01-01
The behaviour of longitudinal and transverse optical phonons in cubic Al{sub x}Ga{sub 1-x}N is derived theoretically as a function of the concentration x (0≤x≤1). The calculation is based on a Modified Random Element Isodisplacement model which considers the interactions from the nearest-neighbor and second-neighbor atoms. We find one-mode behavior in Al{sub x}Ga{sub 1-x}N, where the phonon frequency in general varies continuously and approximately linearly with x. (author)
Thien, Bruno M.J.; Kulik, Dmitrii A.; Curti, Enzo
2014-01-01
Highlights: • There are several models able to describe trace element partitioning in growing minerals. • To describe complex systems, those models must be embedded in a geochemical code. • We merged two models into a unified one suitable for implementation in a geochemical code. • This unified model was tested against coprecipitation experimental data. • We explored how our model reacts to solution depletion effects. - Abstract: Thermodynamics alone is usually not sufficient to predict growth-rate dependencies of trace element partitioning into host mineral solid solutions. In this contribution, two uptake kinetic models were analyzed that are promising in terms of mechanistic understanding and potential for implementation in geochemical modelling codes. The growth Surface Entrapment Model (Watson, 2004) and the Surface Reaction Kinetic Model (DePaolo, 2011) were shown to be complementary, and under certain assumptions were merged into a single analytical expression. This Unified Uptake Kinetics Model was implemented in the GEMS3K and GEM-Selektor codes (http://gems.web.psi.ch), a Gibbs energy minimization package for geochemical modelling. This implementation extends the applicability of the unified uptake kinetics model to account for non-trivial factors influencing the trace element partitioning into solid solutions, such as changes in aqueous solution composition and speciation, or depletion effects in closed geochemical systems
Computer-assisted spectral design and synthesis
Vadakkumpadan, Fijoy; Wang, Qiqi; Sun, Yinlong
2005-01-01
In this paper, we propose a computer-assisted approach to spectral design and synthesis. This approach starts with some initial spectrum, modifies it interactively, evaluates the change, and decides on an optimal spectrum. Given a requested change as a function of wavelength, we model the change function using a Gaussian function. Under a metameric constraint, we propose a method to generate, from the Gaussian function of the requested change, a change function such that the resulting spectrum has the same color as the initial spectrum. We have tested the proposed method with different initial spectra and change functions, and implemented an interactive graphics environment for spectral design and synthesis. The proposed approach and its graphics implementation can be helpful for a number of applications such as lighting of building interiors, textile coloration, pigment development for automobile paints, and spectral computer graphics.
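The metameric constraint above can be enforced by projecting the requested Gaussian change onto the "metameric black" space — the null space of the color-matching matrix — so the tristimulus values are unchanged. This is one standard way to realize the constraint and may differ from the paper's exact construction; the Gaussian color-matching functions below are stand-ins for real CIE curves:

```python
import numpy as np

def metameric_change(change, cmf):
    """Project a requested spectral change onto the null space of the
    color-matching matrix cmf (3 x N), so adding the result to a spectrum
    leaves its tristimulus values (and hence its color) unchanged."""
    n = cmf.shape[1]
    P = np.eye(n) - np.linalg.pinv(cmf) @ cmf   # null-space projector
    return P @ change

wl = np.linspace(400, 700, 31)                  # wavelength grid, nm
# Gaussian stand-ins for CIE x,y,z color-matching functions (assumption)
cmf = np.stack([np.exp(-0.5 * ((wl - m) / 40.0) ** 2) for m in (600, 550, 450)])
change = np.exp(-0.5 * ((wl - 520) / 15.0) ** 2)  # requested Gaussian change
delta = metameric_change(change, cmf)
```

Adding `delta` to any spectrum alters its shape at around 520 nm while leaving the projected color coordinates untouched, which is exactly the design freedom the abstract describes.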
Adaptive Spectral Doppler Estimation
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence… The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set…
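The baseline against which the adaptive estimators are compared, Welch's averaged periodogram, can be sketched in a few lines. The 100 Hz tone below is a synthetic stand-in for a Doppler blood signal, and the scaling omits the sampling-rate factor since only relative spectral shape matters here:

```python
import numpy as np

def welch_psd(x, nperseg=64, noverlap=32):
    """Averaged modified periodogram (Welch's method) with a Hann window.
    Returns an unnormalized PSD estimate over the rfft bins, adequate
    for locating spectral peaks."""
    w = np.hanning(nperseg)
    step = nperseg - noverlap
    segs = [x[i:i + nperseg] * w
            for i in range(0, len(x) - nperseg + 1, step)]
    scale = 1.0 / (np.sum(w ** 2) * len(segs))
    return scale * sum(np.abs(np.fft.rfft(s)) ** 2 for s in segs)

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 100 * t)        # 100 Hz tone as a mock blood signal
psd = welch_psd(x)
f = np.fft.rfftfreq(64, 1.0 / fs)      # frequency axis for the 64-point segments
```

The trade-off the abstract targets is visible here: the 64-sample window fixes the frequency resolution at fs/64, which is what the adaptive BPC and BAPES estimators aim to beat for short observation windows.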
Analysing flow structures around a blade using spectral/hp method and HPIV
Stoevesandt, Bernhard; Steigerwald, Christian; Shishkin, Andrei; Wagner, Claus; Peinke, Joachim
2007-01-01
A still difficult yet pressing task for blade manufacturers and turbine producers is the correct prediction of the effects of turbulent winds on the blade. Reynolds-Averaged Numerical Simulations (RANS) are a limited tool for calculating these effects, and for large eddy simulations (LES) boundary layer calculations are still difficult; therefore the spectral element method seems to be an approach to improve numerical calculations of flow separation. The flow field around an fx79-w151a airfoil has been calculated with the spectral element code NεκTαr using a direct numerical simulation (DNS) solver. In a first step, laminar inflow on the airfoil at an angle of attack of α = 12° and a Reynolds number of Re = 33000 was simulated using the 2D version of the code. The flow pattern was compared to measurements made using holographic particle image velocimetry (HPIV) in a wind tunnel
Petr Koňas
2009-01-01
The work summarizes algorithms created for the formation of a finite element (FE) mesh derived from a bitmap pattern. The process of registration, segmentation and meshing is described in detail. The C++ STL library from the Insight Toolkit (ITK) project, together with the Visualization Toolkit (VTK), was used for base processing of images. Several methods for appropriate mesh output are discussed. A multiplatform application, WOOD3D, was assembled for the task under the GNU GPL license. Several methods of segmentation, and in particular different ways of contouring, were included. Tetrahedral and rectilinear types of mesh were programmed, and some simple ways of improving mesh quality are mentioned. Testing and verification of the final program on wood anatomy samples of spruce and walnut were carried out, and the methods of preparing microscopic anatomy samples are depicted. Finally, the formed mesh was used in a simple structural analysis. The article discusses the main problems in image analysis due to incompatible colour spaces, sample preparation, thresholding and the final conversion into a finite element mesh. Assembling the mentioned tasks together and evaluating the application are the main original results of the presented work. Two thresholding filters from ITK were used: an Otsu-based filter and a binary filter. The most problematic task was the production of wood anatomy samples under the unique light conditions with minimal or zero colour space shift, and the subsequent appropriate definition of thresholds (the corresponding thresholding parameters and connected methods, prefiltering + registration) which influence the continuity and especially the separation of the wood anatomy structure. A solution based on staining the samples is suggested, followed by quick image analysis. A further original result of the work is a complex, fully automated application which offers three types of finite element mesh
Spectral-Product Methods for Electronic Structure Calculations (Preprint)
Langhoff, P. W; Mills, J. E; Boatz, J. A
2006-01-01
…The spectral-product approach to molecular electronic structure avoids the repeated evaluations of the one- and two-electron integrals required in construction of polyatomic Hamiltonian matrices…
Spectral-Product Methods for Electronic Structure Calculations (Postprint)
Langhoff, P. W; Hinde, R. J; Mills, J. D; Boatz, J. A
2007-01-01
…The spectral-product approach to molecular electronic structure avoids the repeated evaluations of the one- and two-electron integrals required in construction of polyatomic Hamiltonian matrices…
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017-01-01
A finite element method for the evolution of a two-phase membrane in a sharp interface formulation is introduced. The evolution equations are given as an $L^2$--gradient flow of an energy involving an elastic bending energy and a line energy. In the two phases Helfrich-type evolution equations are prescribed, and on the interface, an evolving curve on an evolving surface, highly nonlinear boundary conditions have to hold. Here we consider both $C^0$-- and $C^1$--matching conditions for the su...
Extracting attosecond delays from spectrally overlapping interferograms
Jordan, Inga; Wörner, Hans Jakob
2018-02-01
Attosecond interferometry is becoming an increasingly popular technique for measuring the dynamics of photoionization in real time. Whereas early measurements focused on atomic systems with very simple photoelectron spectra, the technique is now being applied to more complex systems including isolated molecules and solids. The increase in complexity translates into an augmented spectral congestion, unavoidably resulting in spectral overlap in attosecond interferograms. Here, we discuss currently used methods for phase retrieval and introduce two new approaches for determining attosecond photoemission delays from spectrally overlapping photoelectron spectra. We show that the previously used technique, consisting in the spectral integration of the areas of interest, does in general not provide reliable results. Our methods resolve this problem, thereby opening the technique of attosecond interferometry to complex systems and fully exploiting its specific advantages in terms of spectral resolution compared to attosecond streaking.
Introduction to spectral theory
Levitan, B M
1975-01-01
This monograph is devoted to the spectral theory of the Sturm-Liouville operator and to the spectral theory of the Dirac system. In addition, some results are given for nth order ordinary differential operators. Those parts of this book which concern nth order operators can serve as simply an introduction to this domain, which at the present time has already become very broad. For the convenience of the reader who is not familiar with abstract spectral theory, the authors have inserted a chapter (Chapter 13) in which they discuss this theory, concisely and in the main without proofs, and indicate various connections with the spectral theory of differential operators.
Holonomy loops, spectral triples and quantum gravity
Johannes, Aastrup; Grimstrup, Jesper Møller; Nest, Ryszard
2009-01-01
We review the motivation, construction and physical interpretation of a semi-finite spectral triple obtained through a rearrangement of central elements of loop quantum gravity. The triple is based on a countable set of oriented graphs and the algebra consists of generalized holonomy loops...
Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael
2017-07-18
With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from luminous efficiency to color quality. However, most of the currently available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source over the range of visible wavelengths, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.
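The prediction stage described above rests on a particle filter applied to a nonlinear state-space model of the drifting color feature. A minimal bootstrap particle filter on a synthetic drifting coordinate is sketched below; the paper's actual state-space model, SPD features and self-adaptive scheme are not reproduced here:

```python
import numpy as np

def particle_filter(obs, f, h, q, r, n_particles=2000, seed=1):
    """Bootstrap particle filter for a scalar state-space model
    x_k = f(x_{k-1}) + N(0, q),  y_k = h(x_k) + N(0, r).
    Returns the posterior-mean state estimate at each step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = []
    for y in obs:
        x = f(x) + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (y - h(x)) ** 2 / r)               # likelihood weights
        w /= w.sum()
        x = rng.choice(x, n_particles, p=w)                  # resample
        means.append(x.mean())
    return np.array(means)

# Synthetic drifting "color coordinate" observed with noise (invented data)
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.02, 0.01, 50))      # slow monotone drift
obs = truth + rng.normal(0.0, 0.05, 50)
est = particle_filter(obs, f=lambda x: x + 0.02, h=lambda x: x,
                      q=0.01 ** 2, r=0.05 ** 2)
```

Once the filter tracks the drifting feature, extrapolating the propagated particles forward until they cross a color-shift threshold gives a residual-life estimate, which is the spirit of the abstract's approach.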
Takata, Hyoe, E-mail: takata@kaiseiken.or.jp [Marine Ecology Research Institute, Central Laboratory, Onjuku, Chiba (Japan); National Institute of Radiological Sciences, Chiba City, Chiba (Japan); Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo [National Institute of Radiological Sciences, Chiba City, Chiba (Japan)
2016-02-01
In numerical models to simulate the dispersion of anthropogenic radionuclides in the marine environment, the sediment–seawater distribution coefficient (K{sub d}) for various elements is an important parameter. In coastal regions, K{sub d} values are largely dependent on hydrographic conditions and physicochemical characteristics of sediment. Here we report K{sub d} values for 36 elements (Na, Mg, Al, K, Ca, V, Mn, Fe, Co, Ni, Cu, Se, Rb, Sr, Y, Mo, Cd, I, Cs, rare earth elements, Pb, {sup 232}Th and {sup 238}U) in seawater and sediment samples from 19 Japanese coastal regions, and we examine the factors controlling the variability of these K{sub d} values by investigating their relationships to hydrographic conditions and sediment characteristics. There was large variability in K{sub d} values for Al, Mn, Fe, Co, Ni, Cu, Se, Cd, I, Pb and Th. Variations of K{sub d} for Al, Mn, Fe, Co, Pb and Th appear to be controlled by hydrographic conditions. Although K{sub d} values for Ni, Cu, Se, Cd and I depend mainly on grain size, organic matter content, and the concentrations of hydrous oxides/oxides of Fe and Mn in sediments, heterogeneity in the surface characteristics of sediment particles appears to hamper evaluation of the relative importance of these factors. Thus, we report a new approach to evaluate the factors contributing to variability in K{sub d} for an element. By this approach, we concluded that the K{sub d} values for Cu, Se, Cd and I are controlled by grain size and organic matter in sediments, and the K{sub d} value for Ni is dependent on grain size and on hydrous oxides/oxides of Fe and Mn. - Highlights: • K{sub d}s for 36 elements were determined in 19 Japanese coastal regions. • K{sub d}s for several elements appeared to be controlled by multiple factors in sediments. • We evaluated these factors based on physico-chemical characteristics of sediments.
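The K{sub d} parameter at the heart of this study is a simple concentration ratio. A sketch of the definition, assuming the usual units of activity or mass concentration per kg of dry sediment over the same quantity per L of seawater, giving K{sub d} in L/kg:

```python
def kd(c_sediment, c_seawater):
    """Sediment-seawater distribution coefficient K_d.
    c_sediment: concentration per kg dry sediment (e.g. Bq/kg or mg/kg);
    c_seawater: concentration per L of seawater (e.g. Bq/L or mg/L);
    returns K_d in L/kg. Units here are our assumption, not the paper's."""
    return c_sediment / c_seawater
```

A strongly particle-reactive element such as Th would show a K{sub d} orders of magnitude larger than a conservative element such as Na, which is why the abstract's element-by-element variability matters for dispersion modelling.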
Bridge element deterioration rates.
2008-10-01
This report describes the development of bridge element deterioration rates from the NYSDOT bridge inspection database using Markov chains and Weibull-based approaches. It is observed that the Weibull-based approach is more reliable for developing b...
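The Markov-chain side of such deterioration modeling can be sketched as a transition matrix applied repeatedly to a condition-state distribution. The four states and yearly stay-probabilities below are hypothetical illustrations, not NYSDOT values:

```python
import numpy as np

def condition_distribution(p_stay, state0, years):
    """Evolve a bridge element's condition-state distribution under a
    stationary Markov chain. p_stay[i] is the yearly probability of
    remaining in state i; otherwise the element drops one state per
    year, and the worst state is absorbing."""
    n = len(p_stay)
    P = np.zeros((n, n))
    for i in range(n - 1):
        P[i, i] = p_stay[i]
        P[i, i + 1] = 1.0 - p_stay[i]   # deteriorate by one condition state
    P[-1, -1] = 1.0                     # worst state is absorbing
    return state0 @ np.linalg.matrix_power(P, years)

# Hypothetical 4-state element starting in new condition
d = condition_distribution([0.95, 0.90, 0.85, 1.0],
                           np.array([1.0, 0.0, 0.0, 0.0]), 10)
```

Fitting the stay-probabilities to inspection histories is what "developing deterioration rates" amounts to in the Markov framework; the Weibull alternative instead models the sojourn time in each state directly, relaxing the chain's memoryless assumption.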
Guney, Mert; Zagury, Gerald J., E-mail: gerald.zagury@polymtl.ca
2014-04-01
Highlights: • Risk for children up to 3 years old was characterized considering oral exposure. • Saliva mobilization, ingestion of parts and ingestion of scraped-off material were considered. • Ingestion of parts caused hazard index (HI) values >>1 for Cd, Ni, and Pb exposure. • HI values were lower (but >1) for saliva mobilization and <1 for scraped-material ingestion. • A comprehensive approach aims to deal with drawbacks of current toy safety approaches. - Abstract: The contamination problem in jewelry and toys and the possibility of children's exposure have been previously demonstrated. For this study, risk from oral exposure was characterized for highly contaminated metallic toys and jewelry ((MJ), n = 16) considering three scenarios. Total and bioaccessible concentrations of Cd, Cu, Ni, and Pb were high in the selected MJ. The first scenario (ingestion of parts or pieces) caused unacceptable risk for eight items for Cd, Ni, and/or Pb (hazard index (HI) > 1, up to 75, 5.8, and 43, respectively). HI for the ingestion of scraped-off material scenario was always <1. Finally, the saliva mobilization scenario caused HI > 1 in three samples (two for Cd, one for Ni). Risk characterization identified different potentially hazardous items compared to the United States, Canadian, and European Union approaches. A comprehensive approach was also developed to deal with the complexity and drawbacks caused by the various toy/jewelry definitions, test methods, exposure scenarios, and elements considered in different regulatory approaches. It includes bioaccessible limits for eight priority elements (As, Cd, Cr, Cu, Hg, Ni, Pb, and Sb). Research is recommended on metals bioaccessibility determination in toys/jewelry, in vitro bioaccessibility test development, estimation of material ingestion rates and frequency, the presence of hexavalent Cr and organic Sn, and assessment of prolonged exposure to MJ.
Investigation of Drag Force on Fibres of Bonded Spherical Elements using a Coupled CFD-DEM Approach
Jensen, Anna Lyhne; Sørensen, Henrik; Rosendahl, Lasse Aistrup
2016-01-01
Clogging in wastewater pumps is often caused by flexible, stringy objects. Therefore, simulation of clogging effects in wastewater pumps entails simulation of such flexible objects and the interaction between these objects and fluid in the pump. Using a coupled CFD-DEM approach, the flexible obje...
Hallquist, Michael N; Hwang, Kai; Luna, Beatriz
2013-11-15
Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n=117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r=.10-.35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. Copyright © 2013 Elsevier Inc. All rights reserved.
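The recommended fix can be sketched directly: apply the same bandpass filter to the nuisance regressors as to the time series before regressing, so the regression cannot reintroduce out-of-band nuisance variance. The toy data and crude FFT filter below are stand-ins, not a clinical preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                              # timepoints
ts = rng.standard_normal(n)          # toy fMRI time series
nuis = rng.standard_normal((n, 2))   # toy nuisance regressors (e.g. motion)

def bandpass(x, lo=5, hi=40):
    """Crude FFT bandpass keeping frequency bins lo..hi (illustration only)."""
    X = np.fft.rfft(x, axis=0)
    keep = np.zeros(X.shape[0], dtype=bool)
    keep[lo:hi] = True
    X[~keep] = 0
    return np.fft.irfft(X, n=x.shape[0], axis=0)

# Simultaneous approach: filter data AND regressors identically, then regress.
ts_f = bandpass(ts)
nuis_f = bandpass(nuis)
beta, *_ = np.linalg.lstsq(nuis_f, ts_f, rcond=None)
clean = ts_f - nuis_f @ beta         # residual: nuisance removed in-band
```

The residual is orthogonal to the filtered regressors by construction; in the bandpass-then-regress-unfiltered order criticized by the authors, that orthogonality does not hold in the retained frequency band.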
Dogra, Sugandha; Singh, Jasveer; Lodh, Abhishek; Sharma, Nita Dilawar; Bandyopadhyay, A K
2011-01-01
This paper reports the behavior of a well-characterized pneumatic piston gauge in the pressure range up to 8 MPa through simulation using the finite element method (FEM). Experimentally, the effective area of this piston gauge has been estimated by cross-floating to obtain A0 and λ. The FEM technique addresses this problem through simulation and optimization with standard commercial software (ANSYS), where the material properties of the piston and cylinder, dimensional measurements, etc are used as the input parameters. The simulation provides the effective area Ap as a function of pressure in the free deformation mode. From these data, one can estimate Ap versus pressure and thereby A0 and λ. Further, we have carried out a similar theoretical calculation of Ap using the conventional method involving Dadson's as well as the Johnson–Newhall equations. A comparison of these results with the experimental results has been carried out
Dogra, Sugandha; Singh, Jasveer; Lodh, Abhishek; Dilawar Sharma, Nita; Bandyopadhyay, A. K.
2011-02-01
This paper reports the behavior of a well-characterized pneumatic piston gauge in the pressure range up to 8 MPa through simulation using the finite element method (FEM). Experimentally, the effective area of this piston gauge has been estimated by cross-floating to obtain A0 and λ. The FEM technique addresses this problem through simulation and optimization with standard commercial software (ANSYS), where the material properties of the piston and cylinder, dimensional measurements, etc are used as the input parameters. The simulation provides the effective area Ap as a function of pressure in the free deformation mode. From these data, one can estimate Ap versus pressure and thereby A0 and λ. Further, we have carried out a similar theoretical calculation of Ap using the conventional method involving Dadson's as well as the Johnson–Newhall equations. A comparison of these results with the experimental results has been carried out.
Chegel, Raad; Behzad, Somayeh
2014-02-01
We have studied the electronic structure and dipole matrix element, D, of carbon nanotubes (CNTs) under a magnetic field, using the third-nearest-neighbor tight-binding model. It is shown that the 1NN and 3NN-TB band structures show differences such as the spacing and mixing of neighboring subbands. Applying the magnetic field breaks the degeneracy of the D transitions and creates new allowed transitions corresponding to the band modifications. It is found that |D| is proportional to the inverse tube radius and chiral angle. Our numerical results show that the amount of field-induced splitting of the first optical peak is proportional to the magnetic field, with splitting rate ν11. It is shown that ν11 changes linearly and parabolically with the chiral angle and radius, respectively.
Hansen, Elo Harald; Miró, Manuel; Long, Xiangbao
2006-01-01
The determination of trace-level concentrations of elements, such as metal species, in complex matrices by atomic absorption or emission spectrometric methods often requires appropriate pretreatments comprising separation of the analyte from interfering constituents and analyte preconcentration...... are presented as based on the exploitation of micro-sequential injection (μSI-LOV) using hydrophobic as well as hydrophilic bead materials. The examples given comprise the presentation of a universal approach for SPE-assays, front-end speciation of Cr(III) and Cr(VI) in a fully automated and enclosed set...
Becker, P.; Idelsohn, S. R.; Oñate, E.
2015-06-01
This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.
M.A. Khanday
2015-10-01
The human body is a complex structure where the balance of mass and heat transport in all tissues is necessary for its normal functioning. The stabilities of intracellular and extracellular fluids are important physiological factors responsible for homoeostasis. To estimate the effects of thermal stress on the behavior of extracellular fluid concentration in human dermal regions, a mathematical model based on diffusion equation along with appropriate boundary conditions has been formulated. Atmospheric temperature, evaporation rate, moisture concentration and other factors affecting the fluid concentration were taken into account. The variational finite element approach has been employed to solve the model and the results were interpreted graphically.
Learning theory of distributed spectral algorithms
Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan
2017-01-01
Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)
On Longitudinal Spectral Coherence
Kristensen, Leif
1979-01-01
It is demonstrated that the longitudinal spectral coherence differs significantly from the transversal spectral coherence in its dependence on displacement and frequency. An expression for the longitudinal coherence is derived and it is shown how the scale of turbulence, the displacement between ...... observation sites and the turbulence intensity influence the results. The limitations of the theory are discussed....
Katiyar, Prateek; Divine, Mathew R; Kohlhofer, Ursula; Quintanilla-Martinez, Leticia; Schölkopf, Bernhard; Pichler, Bernd J; Disselhorst, Jonathan A
2017-04-01
In this study, we described and validated an unsupervised segmentation algorithm for the assessment of tumor heterogeneity using dynamic 18F-FDG PET. The aim of our study was to objectively evaluate the proposed method and make comparisons with compartmental modeling parametric maps and SUV segmentations using simulations of clinically relevant tumor tissue types. Methods: An irreversible 2-tissue-compartment model was implemented to simulate clinical and preclinical 18F-FDG PET time-activity curves using population-based arterial input functions (80 clinical and 12 preclinical) and the kinetic parameter values of 3 tumor tissue types. The simulated time-activity curves were corrupted with different levels of noise and used to calculate the tissue-type misclassification errors of spectral clustering (SC), parametric maps, and SUV segmentation. The utility of the inverse noise variance- and Laplacian score-derived frame weighting schemes before SC was also investigated. Finally, the SC scheme with the best results was tested on a dynamic 18F-FDG measurement of a mouse bearing subcutaneous colon cancer and validated using histology. Results: In the preclinical setup, the inverse noise variance-weighted SC exhibited the lowest misclassification errors (8.09%-28.53%) at all noise levels, in contrast to the Laplacian score-weighted SC (16.12%-31.23%), unweighted SC (25.73%-40.03%), parametric maps (28.02%-61.45%), and SUV (45.49%-45.63%) segmentation. The classification efficacy of both weighted SC schemes in the clinical case was comparable to the unweighted SC. When applied to the dynamic 18F-FDG measurement of colon cancer, the proposed algorithm accurately identified densely vascularized regions from the rest of the tumor. In addition, the segmented regions and cluster-wise average time-activity curves showed excellent correlation with the tumor histology. Conclusion: The promising results of SC mark its position as a robust tool for quantification of tumor
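The core of the inverse noise variance-weighted SC idea can be sketched on toy curves: weight each frame by 1/σ² when computing distances between time-activity curves, so noisy late frames count less, then cluster via the graph Laplacian. The curves and noise model below are invented, and the two-cluster Fiedler-vector split is a simplification of a full SC pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for dynamic PET: 40 voxels x 20 frames, two tissue types with
# different time-activity curves; later frames are noisier (invented model).
t = np.linspace(0.1, 1.0, 20)
curves = np.vstack([np.tile(t, (20, 1)),          # type A uptake curve
                    np.tile(t + 1.0, (20, 1))])   # type B: higher plateau
sigma = 0.05 + 0.2 * t                            # frame-wise noise s.d.
data = curves + rng.standard_normal(curves.shape) * sigma

w = 1.0 / sigma**2                                # inverse-noise-variance weights
w = w / w.sum()

# Frame-weighted affinity between voxel time-activity curves
d2 = ((data[:, None, :] - data[None, :, :]) ** 2 * w).sum(-1)
A = np.exp(-d2 / d2.mean())
np.fill_diagonal(A, 0)

# Spectral clustering into two groups via the Fiedler vector of L = D - A
D = np.diag(A.sum(1))
vals, vecs = np.linalg.eigh(D - A)
labels = (vecs[:, 1] > 0).astype(int)  # sign of second-smallest eigenvector
```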
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-05
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
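The mechanics of the two decompositions can be sketched as follows. Note that with a plain Euclidean squared-distance matrix, the dissimilarity route (double-centering, as in classical MDS) reproduces PCA exactly, which is why the choice of dissimilarity measure matters; the paper's measure differs from the Euclidean one used in this toy illustration, and the data below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "spectra" for two classes sharing a dominant common shape (invented)
n, p = 30, 50
base = rng.standard_normal(p)
class1 = base + 0.1 * rng.standard_normal((n, p))
class2 = base + 0.1 * rng.standard_normal((n, p)); class2[:, :10] += 0.5
X = np.vstack([class1, class2])

# Covariance-based decomposition (conventional PCA)
Xc = X - X.mean(0)
cov_vals, cov_vecs = np.linalg.eigh(Xc.T @ Xc)
scores_pca = Xc @ cov_vecs[:, -1]                 # first PC scores

# Dissimilarity-based decomposition: eigendecompose the double-centered
# pairwise squared-distance matrix (classical MDS form)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
J = np.eye(2 * n) - np.ones((2 * n, 2 * n)) / (2 * n)
B = -0.5 * J @ d2 @ J
mds_vals, mds_vecs = np.linalg.eigh(B)
scores_mds = mds_vecs[:, -1] * np.sqrt(mds_vals[-1])
```

For Euclidean d2 the matrix B equals Xc Xcᵀ, so the two score sets coincide up to sign; a class-aware dissimilarity replaces d2 and changes the leading factors.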
The spectral dimension of random trees
Destri, Claudio; Donetti, Luca
2002-01-01
We present a simple yet rigorous approach to the determination of the spectral dimension of random trees, based on the study of the massless limit of the Gaussian model on such trees. As a by-product, we obtain evidence in favour of a new scaling hypothesis for the Gaussian model on generic bounded graphs and in favour of a previously conjectured exact relation between spectral and connectivity dimensions on more general tree-like structures
Biagini, Francesca
2016-01-01
This book provides an introduction to elementary probability and to Bayesian statistics using de Finetti's subjectivist approach. One of the features of this approach is that it does not require the introduction of sample space – a non-intrinsic concept that makes the treatment of elementary probability unnecessarily complicated – but introduces as fundamental the concept of random numbers directly related to their interpretation in applications. Events become a particular case of random numbers, and probability a particular case of expectation when it is applied to events. The subjective evaluation of expectation and of conditional expectation is based on an economic choice of an acceptable bet or penalty. The properties of expectation and conditional expectation are derived by applying a coherence criterion that the evaluation has to follow. The book is suitable for all introductory courses in probability and statistics for students in Mathematics, Informatics, Engineering, and Physics.
Prinz, Victor Ya.; Naumova, Elena V.; Golod, Sergey V.; Seleznev, Vladimir A.; Bocharov, Andrey A.; Kubarev, Vitaliy V.
2017-01-01
Electromagnetic metamaterials opened the way to extraordinary manipulation of radiation. Terahertz (THz) and optical metamaterials are usually fabricated by traditional planar-patterning approaches, while the majority of practical applications require metamaterials with 3D resonators. Making arrays of precise 3D micro- and nanoresonators is still a challenging problem. Here we present a versatile set of approaches to the fabrication of metamaterials with 3D resonators rolled up from strained films, demonstrate novel THz metamaterials/systems, and show giant polarization rotation by several chiral metamaterials/systems. The polarization spectra of chiral metamaterials on semiconductor substrates exhibit ultrasharp quasiperiodic peaks. The application of 3D printing allowed the assembly of more complex systems, including a bianisotropic system with optimal microhelices, which showed an extreme polarization azimuth rotation of 85° with a drop of 150° at a frequency shift of 0.4%. We attribute the quasiperiodic peaks in the polarization spectra of metamaterial systems to the interplay of different resonances, including a peculiar chiral waveguide resonance. The formed metamaterials cannot be made by any other presently available technology. All steps of the presented fabrication approaches are parallel, IC-compatible and allow mass fabrication, with scaling of rolled-up resonators up to visible frequencies. We anticipate that rolled-up meta-atoms will be ideal building blocks for future generations of commercial metamaterials, devices and systems on their basis. PMID:28256587
Tanuma, T.; Oneda, S.; Terasaki, K.
1984-01-01
A new approach to nonleptonic weak interactions is presented. It is argued that the presence and violation of the |ΔI| = 1/2 rule as well as those of the quark-line selection rules can be explained in a unified way, along with other fundamental physical quantities [such as the value of g_A(0) and the smallness of the isoscalar nucleon magnetic moments], in terms of a single dynamical asymptotic ansatz imposed at the level of observable hadrons. The ansatz prescribes a way in which asymptotic flavor SU(N) symmetry is secured levelwise for a certain class of chiral algebras in the standard QCD model. It yields severe asymptotic constraints upon the two-particle hadronic matrix elements of nonleptonic weak Hamiltonians as well as QCD currents and their charges. It produces for weak matrix elements the asymptotic |ΔI| = 1/2 rule and its charm counterpart for the ground-state hadrons, while for strong matrix elements quark-line-like approximate selection rules. However, for the less important weak two-particle vertices involving higher excited states, the |ΔI| = 1/2 rule and its charm counterpart are in general violated, providing us with an explicit source of the violation of these selection rules in physical processes
Zhang, Chi; Fang, Xin; Qiu, Haopu; Li, Ning
2015-01-01
Real-time PCR amplification of mitochondrial genes could not be used for DNA quantification, and that of single-copy DNA did not allow ideal sensitivity. Moreover, cross-reactions among similar species were commonly observed in published methods amplifying repetitive sequences, which hindered their further application. The purpose of this study was to establish a short interspersed nuclear element (SINE)-based real-time PCR approach with high specificity for species detection that could be used in DNA quantification. After massive screening of candidate Sus scrofa SINEs, one optimal combination of primers and probe was selected, which had no cross-reaction with other common meat species. The LOD of the method was 44 fg DNA/reaction. Further, quantification tests showed this approach was practical in DNA estimation without tissue variance. Thus, this study provides a new tool for the qualitative detection of porcine components, which could be promising in the QC of meat products.
Rejman, Marek; Wiesner, Wojciech; Silakiewicz, Piotr; Klarowicz, Andrzej; Abraldes, J. Arturo
2012-01-01
The aim of this study was an analysis of the time required to swim to a victim and tow them back to shore while performing the flutter-kick and the dolphin-kick using fins. It was hypothesized that using the dolphin-kick with fins when swimming leads to reduced rescue time. Sixteen lifeguards took part in the study. The main tasks performed were to approach and tow (double armpit) a dummy a distance of 50 m while applying either the flutter-kick or the dolphin-kick with fins. The analysis of the temporal parameters of both kicking techniques demonstrates that, during the approach to the victim, neither the dolphin-kick (tmean = 32.9 s) nor the flutter-kick (tmean = 33.0 s) was significantly faster than the other. However, when used for towing a victim, the flutter-kick (tmean = 47.1 s) was significantly faster than the dolphin-kick (tmean = 52.8 s). An assessment of the level of technical skills in competitive swimming, and in approaching and towing the victim, was also conducted. Towing time was significantly correlated with the parameter that linked the temporal and technical dimensions of towing and swimming (difference between flutter-kick towing time and dolphin-kick towing time, 100 m medley time and the four swimming strokes evaluation). No similar interdependency was discovered in flutter-kick towing time. These findings suggest that the dolphin-kick is a more difficult skill to perform when towing the victim than the flutter-kick. Since the stated hypothesis was not confirmed, postulates were formulated on how to improve dolphin-kick technique with fins, in order to reduce swimming rescue time. Key points: The source of reduction of swimming rescue time was researched. The time required to approach and to tow the victim while doing the flutter-kick and the dolphin-kick with fins was analyzed. The propulsion generated by the dolphin-kick did not make the approach and tow faster than the flutter-kick. More difficult skill to realize of
Patra, Asim
2018-03-01
This paper displays the approach of the time-splitting Fourier spectral (TSFS) technique for the linear Riesz fractional Schrödinger equation (RFSE) in the semi-classical regime. The splitting technique is shown to be unconditionally stable. Further, a suitable implicit finite difference discretization of second order has been manifested for the RFSE, where the Riesz derivative has been discretized via an approach of fractional centered differences. Moreover, the stability analysis for the implicit scheme has also been presented via von Neumann analysis. The L2-norm and L∞-norm errors are calculated for |u(x,t)|², Re(u(x,t)) and Im(u(x,t)) for various cases. The results obtained by the methods are further tabulated as absolute errors for |u(x,t)|². Furthermore, graphs are depicted comparing |u(x,t)|² between the two techniques. The derivatives are taken here in the context of the Riesz fractional sense. Apart from that, the comparative study put forth via tables and graphs between the implicit second-order finite difference method (IFDM) and the TSFS method serves the purpose of investigating the efficiency of the results obtained. Moreover, the stability analysis of the presented techniques, manifesting their unconditional stability, makes the proposed approach more competitive and accurate.
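A kinetic-potential splitting step of a TSFS scheme for an equation of this type can be sketched as below: the Riesz fractional Laplacian acts diagonally in Fourier space as multiplication by |k|^α. The potential, constants, and grid are invented, and the paper's semi-classical scaling parameter is omitted; this is an illustration of the splitting idea, not the paper's exact scheme.

```python
import numpy as np

# Strang-splitting TSFS sketch for  i u_t = (-Δ)^{α/2} u + V(x) u  on a
# periodic grid (all parameters invented for illustration).
N, Lx = 256, 20.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)   # angular wavenumbers
alpha, dt = 1.5, 1e-3
V = 0.5 * x**2                                # harmonic potential (illustrative)
u = np.exp(-x**2).astype(complex)             # Gaussian initial condition

def tsfs_step(u):
    u = np.exp(-0.5j * dt * V) * u            # half potential step
    u = np.fft.ifft(np.exp(-1j * dt * np.abs(k)**alpha) * np.fft.fft(u))  # kinetic
    return np.exp(-0.5j * dt * V) * u         # half potential step

for _ in range(100):
    u = tsfs_step(u)

mass = float(np.sum(np.abs(u)**2) * (Lx / N))  # discrete L2 norm is conserved
```

Because every factor is a unimodular multiplier (in x- or k-space) and the FFT is unitary up to normalization, the discrete L2 norm of u is conserved to machine precision, which is one way to see the scheme's unconditional stability.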
Hsu, Yu-Chun; Gung, Yih-Wen; Shih, Shih-Liang; Feng, Chi-Kuang; Wei, Shun-Hwa; Yu, Chung-Huang; Chen, Chen-Sheng
2008-08-01
Plantar heel pain is a commonly encountered orthopedic problem and is most often caused by plantar fasciitis. In recent years, different shapes of insole have been used to treat plantar fasciitis. However, little research has been focused on the junction stress between the plantar fascia and the calcaneus when wearing different shapes of insole. Therefore, this study aimed to employ a finite element (FE) method to investigate the relationship between different shapes of insole and the junction stress, and accordingly to design an optimal insole to lower fascia stress. A detailed 3D foot FE model was created using ANSYS 9.0 software. The FE model calculation was compared to Pedar device measurements to validate the FE model. After the FE model validation, this study conducted a parametric analysis of six different insoles and used optimization analysis to determine the optimal insole that minimized the junction stress between the plantar fascia and calcaneus. This FE analysis found that the plantar fascia stress and peak pressure when using the optimal insole were lower by 14% and 38.9%, respectively, than those when using the flat insole. In addition, the stress variation in the plantar fascia was associated with the different shapes of insole.
Martínez-Fernández, Domingo; Bingöl, Deniz; Komárek, Michael
2014-07-15
Two experiments were carried out to study the competition for adsorption between trace elements (TEs) and nutrients following the application of nano-maghemite (NM) (iron nano-oxide, Fe2O3) to a soil solution (the 0.01 mol L-1 CaCl2 extract of a TEs-contaminated soil). In the first, the nutrients K, N, and P were added to create a set of combinations: the potential availability of TEs during their interaction with NM and nutrients was studied. In the second, response surface methodology was used to develop predictive models by central composite design (CCD) for competition between TEs and the nutrients K and N for adsorption onto NM. The addition of NM to the soil solution specifically reduced the concentrations of available As and Cd, but the TE-adsorption capacity of NM decreased as the P concentration increased. The CCD provided more concise and valuable information, appropriate for estimating the behavior of NM sequestering TEs: according to the suggested models, K+ and NH4+ were important factors for Ca, Fe, Mg, Mn, Na, and Zn adsorption (R2adj = 95%, except for Zn with R2adj = 87%). The obtained information and models can be used to predict the effectiveness of NM for the stabilization of TEs, crucial during the phytoremediation of contaminated soils. Copyright © 2014 Elsevier B.V. All rights reserved.
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-01-01
Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. This is, to our knowledge, the first study that demonstrates the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
Khambampati, Anil Kumar; Kim, Sin; Lee, Bo An; Kim, Kyung Youn
2012-01-01
This paper is about locating the boundary of a moving cavity within a homogeneous background from the voltage measurements recorded on the outer boundary. An inverse boundary problem of a moving cavity is formulated by considering a two-phase vapor–liquid flow in a pipe. The conductivity of the flow components (vapor and liquid) is assumed to be constant and known a priori while the location and shape of the inclusion (vapor) are the unknowns to be estimated. The forward problem is solved using the boundary element method (BEM) with the integral equations solved analytically. A special situation is considered such that the cavity changes its location and shape during the time taken to acquire a full set of independent measurement data. The boundary of a cavity is assumed to be elliptic and is parameterized with Fourier series. The inverse problem is treated as a state estimation problem with the Fourier coefficients that represent the center and radii of the cavity as the unknowns to be estimated. An extended Kalman filter (EKF) is used as an inverse algorithm to estimate the time varying Fourier coefficients. Numerical experiments are shown to evaluate the performance of the proposed method. Through the results, it can be noticed that the proposed BEM with EKF method is successful in estimating the boundary of a moving cavity. (paper)
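The state-estimation viewpoint can be sketched if the BEM/voltage forward model is replaced by direct noisy boundary observations (an illustration only, with invented dynamics and noise levels). Because an ellipse parameterized as x(θ) = cx + rx cos θ, y(θ) = cy + ry sin θ is linear in the state [cx, cy, rx, ry], the EKF reduces here to a plain Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Observation model: boundary coordinates at 8 angles, linear in the state
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
H = np.zeros((16, 4))
H[:8, 0] = 1;  H[:8, 2] = np.cos(theta)    # x-coordinates
H[8:, 1] = 1;  H[8:, 3] = np.sin(theta)    # y-coordinates

s_true = np.array([0.2, -0.1, 1.0, 0.6])   # [cx, cy, rx, ry]
s_hat = np.zeros(4)                        # initial guess
P = np.eye(4)                              # initial covariance
R = 0.01 * np.eye(16)                      # measurement noise covariance
Q = 1e-4 * np.eye(4)                       # random-walk process noise

for _ in range(30):
    s_true = s_true + 0.005                       # cavity drifts slowly
    z = H @ s_true + 0.1 * rng.standard_normal(16)
    P = P + Q                                     # predict (random-walk model)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    s_hat = s_hat + K @ (z - H @ s_hat)           # measurement update
    P = (np.eye(4) - K @ H) @ P
```

In the paper the measurement map goes through the BEM solution of the voltage problem, so it is nonlinear in the Fourier coefficients and the gain uses the Jacobian of that forward model, which is what makes the filter an EKF.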
Huang, Jyh-Jaan; Lin, Sheng-Chi; Löwemark, Ludvig; Liou, Ya-Hsuan; Chang, Queenie; Chang, Tsun-Kuo; Wei, Kuo-Yen; Croudace, Ian W.
2016-04-01
X-ray fluorescence (XRF) core-scanning is a fast and convenient technique to assess elemental variations for a wide variety of research topics. However, the XRF scanning counts are often considered a semi-quantitative measurement due to possible absorption or scattering caused by down-core variability in physical properties. To overcome this problem and extend the applications of XRF-scanning to water pollution studies, we propose to use cation exchange resin (IR-120) as an "elemental carrier" and to analyze the resins using the Itrax-XRF core scanner. The use of resin minimizes matrix effects during the measurements, and it can be deployed in the field in great numbers due to its low price. Therefore, the fast, non-destructive XRF-scanning technique can provide a quick and economical method to analyze environmental pollution via absorption in the resin. Five standard resin samples were scanned by the Itrax-XRF core scanner at different exposure times (1 s, 5 s, 15 s, 30 s, 100 s) to allow comparisons of scanning counts with the absolute concentrations. The regression lines and correlation coefficients of elements that are generally used in pollution studies (Ca, Ti, Cr, Ni, Cu, Zn, and Pb) were examined for the different exposure times. The results show that within the tested range (from a few ppm to thousands of ppm), the correlation coefficients are all higher than 0.97, even at the shortest exposure time (1 s). Therefore, we propose to use this method in the field to monitor, for example, sewage disposal events. The low price of resin and the fast, precise, multi-element XRF-scanning technique provide a viable, cost- and time-effective approach that allows large sample numbers to be processed. In this way, the properties and sources of wastewater pollution can be traced for the purpose of environmental monitoring and environmental forensics.
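The calibration step implied by the standard-resin scans amounts to fitting a counts-versus-concentration line per element and inverting it to convert field measurements to concentrations; all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration: XRF scanning counts for five standard resins
# vs known concentrations (detector response and noise are invented).
conc = np.array([5.0, 50.0, 200.0, 800.0, 2000.0])      # ppm
counts = 3.2 * conc + 40 + rng.normal(0, 25, 5)         # toy counts

slope, intercept = np.polyfit(conc, counts, 1)          # regression line
r = float(np.corrcoef(conc, counts)[0, 1])              # correlation coefficient

def to_ppm(c):
    """Invert the calibration line: counts -> estimated concentration."""
    return (c - intercept) / slope
```

A correlation coefficient above 0.97, as reported for all seven elements even at 1 s exposure, is what justifies treating the inverted line as a quantitative conversion.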
Standard elements; Elements standards
Blanc, B [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]
1958-07-01
Following his own experience, the author recalls the various advantages, especially in the laboratory, of having pre-fabricated vacuum-line components at his disposal. (author)