An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
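The comparison described in the two records above, analytic bootstrap averages versus Monte-Carlo resampling, can be illustrated on the simplest estimator, the sample mean, whose bootstrap moments are known in closed form. This is an illustrative sketch, not the papers' Gaussian-process setting; the data set and sample sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)          # illustrative data set
n = len(x)

# Closed-form bootstrap moments of the sample mean: resampling n points with
# replacement leaves the expected resampled mean at the sample mean and gives
# it variance sum((x_i - xbar)^2) / n^2 -- no actual resampling required.
analytic_mean = x.mean()
analytic_var = ((x - x.mean()) ** 2).sum() / n ** 2

# Brute-force Monte-Carlo bootstrap for comparison.
B = 20000
idx = rng.integers(0, n, size=(B, n))    # B resamples with replacement
means = x[idx].mean(axis=1)

print(abs(means.mean() - analytic_mean), abs(means.var() - analytic_var))
```

For estimators that require retraining a model on every resample, the Monte-Carlo loop is the expensive part; avoiding it is the point of the analytic averaging machinery.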
Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar
2018-05-01
In this paper, we present the SMKA18 analysis, which is a first attempt to extract the set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties determined through the Laplace transform technique and the Jacobi polynomial approach. Using the Laplace transformation, we present an analytical solution for the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations at the NNLO approximation. The results are extracted using a wide range of proton g1^p(x, Q^2), neutron g1^n(x, Q^2), and deuteron g1^d(x, Q^2) spin-dependent structure function data sets, including the most recent high-precision measurements from COMPASS16 experiments at CERN, which play an increasingly important role in global spin-dependent fits. Careful estimation of the uncertainties has been done using standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data sets and with other results for the spin-dependent PDFs in the literature. The results obtained for the spin-dependent PDFs, as well as the spin-dependent structure functions, are clearly explained at both small and large values of x.
International Nuclear Information System (INIS)
Basak, K C; Ray, P C; Bera, R K
2009-01-01
The aim of the present analysis is to apply the Adomian decomposition method and He's variational method for the approximate analytical solution of a nonlinear ordinary fractional differential equation. The solutions obtained by the above two methods have been numerically evaluated and presented in the form of tables and also compared with the exact solution. It was found that the results obtained by the above two methods are in excellent agreement with the exact solution. Finally, a surface plot of the approximate solutions of the fractional differential equation by the above two methods is drawn for 0≤t≤2 and 1<α≤2.
Analytical Ballistic Trajectories with Approximately Linear Drag
Directory of Open Access Journals (Sweden)
Giliam J. P. de Carpentier
2014-01-01
Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
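The closed form behind such trajectories is the standard linear-drag solution; a minimal sketch follows, assuming a parameterization by terminal velocity and a drag time constant (the paper's own parameterization in terms of wind and gravity may differ):

```python
import numpy as np

def linear_drag_state(p0, v0, v_inf, tau, t):
    """Position and velocity at time t under dv/dt = (v_inf - v)/tau, where
    v_inf is the terminal velocity (gravity plus wind combined) and tau the
    drag time constant. Works componentwise for 2D or 3D vectors."""
    decay = np.exp(-t / tau)
    v = v_inf + (v0 - v_inf) * decay
    p = p0 + v_inf * t + tau * (v0 - v_inf) * (1.0 - decay)
    return p, v

# Illustrative 2D shot: launch at 45 degrees into a strong downward v_inf.
p, v = linear_drag_state(np.zeros(2), np.array([10.0, 10.0]),
                         np.array([2.0, -30.0]), tau=1.5, t=0.7)
print(p, v)
```

Because the state at any time is a direct evaluation rather than an integration loop, planners can query many candidate trajectories cheaply, which is the property the paper exploits.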
Analytical approximation of neutron physics data
International Nuclear Information System (INIS)
Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.
1984-01-01
A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Pade approximation, is suggested. It is shown that the specific behaviour of the Pade approximation near poles is an extremely favourable analytical property, essentially extending the convergence range and increasing the convergence rate as compared with polynomial approximation. The Pade approximation is a particularly natural instrument for resonance curve processing, as resonances correspond to the complex poles of the approximant. Even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (the BOSPOR constant library) in the form of rational functions led to an approximately twenty-fold reduction of the stored numerical information as compared with point-by-point representation at the same accuracy
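A Pade approximant of the kind described can be built directly from Taylor coefficients by solving a small linear system for the denominator; the sketch below is a generic construction (not the BOSPOR fitting code), demonstrated on the [2/2] approximant of exp(x):

```python
import numpy as np

def pade_from_taylor(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M]
    (assumes L >= M - 1 so all indices below are valid)."""
    c = np.asarray(c, dtype=float)
    # Denominator b (b[0] = 1) from the linear system that cancels the
    # Taylor terms of order L+1 .. L+M.
    A = np.array([[c[L + j - k] for k in range(1, M + 1)]
                  for j in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator a by convolving the series with the denominator.
    a = np.array([sum(b[k] * c[i - k] for k in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b  # low-order coefficients first

# [2/2] approximant of exp(x) from its first five Taylor coefficients.
c = [1.0, 1.0, 1 / 2, 1 / 6, 1 / 24]
a, b = pade_from_taylor(c, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)
print(approx)  # close to e = 2.71828..., far better than the degree-4 Taylor sum
```

The denominator comes out as 1 - x/2 + x^2/12, reproducing the textbook [2/2] approximant of the exponential.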
An analytical approximation for resonance integral
International Nuclear Information System (INIS)
Magalhaes, C.G. de; Martinez, A.S.
1985-01-01
A method is developed which allows an analytical solution for the resonance integral to be obtained. The problem formulation is completely theoretical and based on physical concepts of a general character. The analytical expression for the integral does not involve any empirical correlation or parameter. Results of the approximation are compared with benchmark values for each individual resonance and for the sum of all resonances. (M.C.K.) [pt
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with feces, urine or through bites of infected rodents, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
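The abstract does not reproduce the model equations. As a generic stand-in, a classical SIR compartmental model integrated by Runge-Kutta shows the kind of numerical baseline that approximate analytical solutions of epidemic models are checked against; all parameter values here are illustrative assumptions, not the paper's leptospirosis model:

```python
import numpy as np

def sir_step(s, i, r, beta, gamma, dt):
    """One RK4 step for dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I (fractions of a closed population)."""
    def f(y):
        s_, i_, r_ = y
        return np.array([-beta * s_ * i_, beta * s_ * i_ - gamma * i_,
                         gamma * i_])
    y = np.array([s, i, r])
    k1 = f(y); k2 = f(y + dt / 2 * k1)
    k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.99, 0.01, 0.0])   # illustrative initial fractions
for _ in range(1000):              # integrate to t = 100 with dt = 0.1
    y = sir_step(*y, beta=0.5, gamma=0.2, dt=0.1)
print(y, y.sum())  # total population fraction is conserved
```

With these rates (basic reproduction number 2.5) most of the population ends up in the recovered compartment, and the conserved total provides a quick sanity check on the integrator.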
Analytic approximate radiation effects due to Bremsstrahlung
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to near-by magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Analytic approximate radiation effects due to Bremsstrahlung
International Nuclear Information System (INIS)
Ben-Zvi, I.
2012-01-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to near-by magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Approximate analytic theory of the multijunction grill
International Nuclear Information System (INIS)
Hurtak, O.; Preinhaelter, J.
1991-03-01
An approximate analytic theory of the general multijunction grill is developed. Omitting the evanescent modes in the subsidiary waveguides both at the junction and at the grill mouth, and neglecting multiple wave reflection, simple formulae are derived for the reflection coefficient, the amplitudes of the incident and reflected waves and the spectral power density. These quantities are expressed through the basic grill parameters (the electric length of the structure and the phase shift between adjacent waveguides) and two sets of reflection coefficients describing wave reflections in the subsidiary waveguides at the junction and at the plasma. Approximate expressions for these coefficients are also given. The results are compared with a numerical solution of two specific examples and are shown to be useful for the optimization and design of multijunction grills. For the JET structure it is shown that, in the case of a dense plasma, many results can be obtained from the simple formulae for a two-waveguide multijunction grill. (author) 12 figs., 12 refs
Analytic approximations for inside-outside interferometry
Energy Technology Data Exchange (ETDEWEB)
Padula, S.S.; Gyulassy, M. (Lawrence Berkeley Lab., CA (USA). Nuclear Science Div.)
1990-07-30
Analytical expressions for pion interferometry are derived illustrating the competing effects of various non-ideal aspects of inside-outside cascade dynamics at energies ~200 AGeV. (orig.).
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
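A common form for this kind of uniform small-angle approximation is d^j_{m1 m2}(θ) ≈ J_{m1-m2}((j + 1/2)θ); the paper's exact expression may differ, so treat the sketch below as an assumption-laden check against the exactly known element d^1_{00}(θ) = cos θ:

```python
import math

def bessel_j(n, x, terms=30):
    # Ascending series J_n(x) = sum_k (-1)^k / (k! (k+n)!) (x/2)^(2k+n),
    # adequate for the small arguments used here.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

# Compare the exact low-j element d^1_{00}(theta) = cos(theta) with the
# assumed Bessel form J_{m1-m2}((j + 1/2) theta) for j = 1, m1 = m2 = 0.
theta = 0.2
exact = math.cos(theta)
approx = bessel_j(0, (1 + 0.5) * theta)
print(exact, approx)  # agree to a few parts in a thousand at this angle
```

Even this crude check shows the characteristic behaviour: agreement is excellent at small θ and degrades gradually as θ grows.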
Analytic Approximation to Radiation Fields from Line Source Geometry
International Nuclear Information System (INIS)
Michieli, I.
2000-01-01
Line sources with slab shields represent a typical source-shield configuration in gamma-ray attenuation problems. Such shielding problems often lead to generalized Secant integrals of a specific form. Besides the numerical integration approach, various expansions and rational approximations with limited applicability are in use for computing the values of such integral functions. Recently, the author developed a rapidly convergent infinite series representation of generalized Secant integrals involving incomplete Gamma functions. Validity of this representation was established for zero and positive values of the integral parameter a (a ≥ 0). In this paper recurrence relations for generalized Secant integrals are derived, allowing a simple approximate analytic calculation of the integral for arbitrary a values. It is demonstrated how the truncated series representation can be used as the basis for such calculations when possibly negative a values are encountered. (author)
International Nuclear Information System (INIS)
Chen Changyuan; Sun Dongsheng; Lu Falin
2007-01-01
Using the exponential function transformation approach along with an approximation for the centrifugal potential, the radial Klein-Gordon equation with the vector and scalar Hulthen potential is transformed to a hypergeometric differential equation. The approximate analytical solutions of bound states are attained for different l. The analytical energy equation and the unnormalized radial wave functions expressed in terms of hypergeometric polynomials are given
A new analytical approximation to the Duffing-harmonic oscillator
International Nuclear Information System (INIS)
Fesanghary, M.; Pirbodaghi, T.; Asghari, M.; Sojoudi, H.
2009-01-01
In this paper, a novel analytical approximation to the nonlinear Duffing-harmonic oscillator is presented. The variational iteration method (VIM) is used to obtain some accurate analytical results for frequency. The accuracy of the results is excellent in the whole range of oscillation amplitude variations.
Nonlinear ordinary differential equations analytical approximation and numerical methods
Hermann, Martin
2016-01-01
The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strong nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-depending ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...
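Among the numerical methods the book lists, simple shooting is the easiest to sketch: a two-point boundary value problem is reduced to root-finding on the unknown initial slope. The example below (my own illustration, not from the book) solves y'' = -y with y(0) = 0 and y(π/2) = 1, whose exact solution is y = sin x:

```python
import math

def shoot(yp0, n=2000):
    """RK4-integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = yp0;
    return y at x = pi/2."""
    h = (math.pi / 2) / n
    y, yp = 0.0, yp0
    for _ in range(n):
        k1y, k1p = yp, -y
        k2y, k2p = yp + h / 2 * k1p, -(y + h / 2 * k1y)
        k3y, k3p = yp + h / 2 * k2p, -(y + h / 2 * k2y)
        k4y, k4p = yp + h * k3p, -(y + h * k3y)
        y, yp = (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
                 yp + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p))
    return y

# Bisect on the unknown slope so the boundary value y(pi/2) = 1 is hit.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2
print(slope)  # exact solution y = sin(x) has initial slope 1
```

Multiple shooting, also covered in the book, subdivides the interval and matches segments, which tames the sensitivity single shooting suffers on stiff or long intervals.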
Analytical approximations to seawater optical phase functions of scattering
Haltrin, Vladimir I.
2004-11-01
This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. Three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to the Petzold phase functions; and analytical representations that take into account dependencies between the inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.
Precise analytic approximations for the Bessel function J1 (x)
Maass, Fernando; Martin, Pablo
2018-03-01
Precise and straightforward analytic approximations for the Bessel function J1 (x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between the two expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with an error of less than 0.04 percent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
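The two ingredients such approximations bridge, the ascending power series and the leading asymptotic expansion, can each be coded in a few lines; the sketch below shows the two regimes for J1, not the authors' quasirational bridge function itself:

```python
import math

def j1_series(x, terms=40):
    # Ascending power series J_1(x) = sum_k (-1)^k/(k! (k+1)!) (x/2)^(2k+1),
    # accurate for small and moderate x.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (2 * k + 1) for k in range(terms))

def j1_asymptotic(x):
    # Leading large-x asymptotic form sqrt(2/(pi x)) * cos(x - 3 pi / 4).
    return math.sqrt(2 / (math.pi * x)) * math.cos(x - 3 * math.pi / 4)

print(j1_series(1.0))                        # tabulated J_1(1) = 0.44005...
print(j1_series(20.0), j1_asymptotic(20.0))  # the two regimes meet at large x
```

A quasirational approximation replaces the pair with a single expression valid everywhere, which is exactly what makes the quoted 0.01 maximum absolute error notable.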
Approximation of Analytic Functions by Bessel's Functions of Fractional Order
Directory of Open Access Journals (Sweden)
Soon-Mo Jung
2011-01-01
Full Text Available We solve the inhomogeneous Bessel differential equation x^2 y''(x) + x y'(x) + (x^2 - ν^2) y(x) = Σ_{m=0}^∞ a_m x^m, where ν is a positive nonintegral number, and apply this result to the approximation of analytic functions of a special type by Bessel functions of fractional order.
Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels
DEFF Research Database (Denmark)
Khorunzhina, Natalia; Richard, Jean-Francois
The objective of the paper is to construct finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at a time and the mixture is fully re-optimized at each step using a distance measure that approxima...
Analytic bounds and approximations for annuities and Asian options
Vanduffel, S.; Shang, Z.; Henrard, L.; Dhaene, J.; Valdez, E.A.
2008-01-01
Even for Brownian motion, the most natural rate-of-return model, it appears too difficult to obtain analytic expressions for most risk measures of constant continuous annuities. In the literature, so-called comonotonic approximations have been proposed, but these still require the evaluation...
A unified approach to the Darwin approximation
International Nuclear Information System (INIS)
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-01-01
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
Approximate analytical methods for solving ordinary differential equations
Radhika, TSL; Rani, T Raja
2015-01-01
Approximate Analytical Methods for Solving Ordinary Differential Equations (ODEs) is the first book to present all of the available approximate methods for solving ODEs, eliminating the need to wade through multiple books and articles. It covers both well-established techniques and recently developed procedures, including the classical series solution method, diverse perturbation methods, pioneering asymptotic methods, and the latest homotopy methods.The book is suitable not only for mathematicians and engineers but also for biologists, physicists, and economists. It gives a complete descripti
Analytical Approximation of Spectrum for Pulse X-ray Tubes
International Nuclear Information System (INIS)
Vavilov, S; Fofanof, O; Koshkin, G; Udod, V
2016-01-01
Among the main characteristics of pulsed X-ray apparatuses, the spectral energy characteristics are the most important: the spectral distribution of photon energy, and the effective and maximum energy of quanta. Knowledge of the spectral characteristics of the radiation of pulsed sources is very important for their practical use in non-destructive testing. We have attempted an analytical approximation of the pulsed X-ray apparatus spectra obtained in different experimental papers. The results of the analytical approximation of the energy spectrum for a pulsed X-ray tube are presented. The formulas obtained are consistent with experimental data and can be used in designing pulsed X-ray apparatuses. (paper)
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
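The computational virtue being exploited is that circulant matrices are diagonalized by the DFT, so a kernel-matrix product drops from O(n^2) to O(n log n). A one-level sketch follows (the paper uses multilevel circulant matrices; the stationary RBF kernel and its width here are illustrative assumptions):

```python
import numpy as np

n = 256
t = np.arange(n)
# Stationary kernel sampled on a regular grid, wrapped onto a circle so the
# resulting matrix is exactly circulant (circulant embedding of an RBF kernel).
first_col = np.exp(-0.5 * (np.minimum(t, n - t) / 8.0) ** 2)

# Dense circulant matrix: column k is the first column rolled by k.
C = np.array([np.roll(first_col, k) for k in range(n)]).T

x = np.random.default_rng(1).normal(size=n)
dense = C @ x                                   # O(n^2) reference
# FFT matvec: C @ x = ifft(fft(c) * fft(x)) because the DFT diagonalizes C.
fast = np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)).real

print(np.max(np.abs(dense - fast)))  # agreement to machine precision
```

Multilevel circulant matrices apply the same trick dimension by dimension with a multidimensional FFT, which is what yields the quasi-linear complexity claimed in the abstract.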
Analytical Evaluation of Beam Deformation Problem Using Approximate Methods
DEFF Research Database (Denmark)
Barari, Amin; Kimiaeifar, A.; Domairry, G.
2010-01-01
The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Perturbation method, Homotopy Perturbation Method (HPM), Homotopy Analysis Method (HAM) and Variational Iteration Method (VIM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.
An analytic distorted wave approximation for intermediate energy proton scattering
International Nuclear Information System (INIS)
Di Marzio, F.; Amos, K.
1982-01-01
An analytic Distorted Wave approximation has been developed for use in analyses of intermediate energy proton inelastic scattering from nuclei. Applications are made to analyse the 402 and 800 MeV data from the isoscalar and isovector 1+ and 2+ states in C-12 and the 800 MeV data from the excitation of the 2- (8.88 MeV) state in O-16. Comparisons of predictions made using different model two-nucleon t-matrices and different models of nuclear structure are given
Approximate analytical solution of two-dimensional multigroup P-3 equations
International Nuclear Information System (INIS)
Matausek, M.V.; Milosevic, M.
1981-01-01
Iterative solution of multigroup spherical harmonics equations reduces, in the P-3 approximation and in two-dimensional geometry, to a problem of solving an inhomogeneous system of eight ordinary first order differential equations. With appropriate boundary conditions, these equations have to be solved for each energy group and in each iteration step. The general solution of the corresponding homogeneous system of equations is known in analytical form. The present paper shows how the right-hand side of the system can be approximated in order to derive a particular solution and thus an approximate analytical expression for the general solution of the inhomogeneous system. This combined analytical-numerical approach was shown to have certain advantages compared to the finite-difference method or the Lie-series expansion method, which have been used to solve similar problems. (orig./RW) [de
Approximate analytical solution of two-dimensional multigroup P-3 equations
International Nuclear Information System (INIS)
Matausek, M.V.; Milosevic, M.
1981-01-01
Iterative solution of multigroup spherical harmonics equations reduces, in the P-3 approximation and in two-dimensional geometry, to a problem of solving an inhomogeneous system of eight ordinary first order differential equations. With appropriate boundary conditions, these equations have to be solved for each energy group and in each iteration step. The general solution of the corresponding homogeneous system of equations is known in analytical form. The present paper shows how the right-hand side of the system can be approximated in order to derive a particular solution and thus an approximate analytical expression for the general solution of the inhomogeneous system. This combined analytical-numerical approach was shown to have certain advantages compared to the finite-difference method or the Lie-series expansion method, which have been used to solve similar problems. (author)
Proteomics - new analytical approaches
International Nuclear Information System (INIS)
Hancock, W.S.
2001-01-01
Full text: Recent developments in the sequencing of the human genome have indicated that the number of coding gene sequences may be as few as 30,000. It is clear, however, that the complexity of the human species is dependent on the much greater diversity of the corresponding protein complement. Estimates of the diversity (discrete protein species) of the human proteome range from 200,000 to 300,000 at the lower end to 2,000,000 to 3,000,000 at the high end. In addition, proteomics (the study of the protein complement to the genome) has been subdivided into two main approaches. Global proteomics refers to a high throughput examination of the full protein set present in a cell under a given environmental condition. Focused proteomics refers to a more detailed study of a restricted set of proteins that are related to a specified biochemical pathway or subcellular structure. While many of the advances in proteomics will be based on the sequencing of the human genome, de novo characterization of protein microheterogeneity (glycosylation, phosphorylation and sulfation as well as the incorporation of lipid components) will be required in disease studies. To characterize these modifications it is necessary to digest the protein mixture with an enzyme to produce the corresponding mixture of peptides. In a process analogous to sequencing of the genome, shot-gun sequencing of the proteome is based on the characterization of the key fragments produced by such a digest. Thus, a glycopeptide and hence a specific glycosylation motif will be identified by a unique mass and then a diagnostic MS/MS spectrum. Mass spectrometry will be the preferred detector in these applications because of the unparalleled information content provided by one or more dimensions of mass measurement. In addition, highly efficient separation processes are an absolute requirement for advanced proteomic studies. For example, a combination of the orthogonal approaches, HPLC and HPCE, can be very powerful
ANALYTIC APPROXIMATION OF CARBON CONDENSATION ISSUES IN TYPE II SUPERNOVAE
Energy Technology Data Exchange (ETDEWEB)
Clayton, Donald D., E-mail: claydonald@gmail.com [Department of Physics and Astronomy, Clemson University, Clemson, SC (United States)
2013-01-01
I present analytic approximations for some issues related to condensation of graphite, TiC, and silicon carbide in oxygen-rich cores of supernovae of Type II. Increased understanding, which mathematical analysis can support, renders researchers more receptive to condensation in O-rich supernova gases. Taking SN 1987A as typical, my first analysis shows why the abundance of CO molecules reaches an early maximum in which free carbon remains more abundant than CO. This analysis clarifies why O-rich gas cannot oxidize C if Co-56 radioactivity is as strong as in SN 1987A. My next analysis shows that the CO abundance could be regarded as being in chemical equilibrium if the CO molecule is given an effective binding energy rather than its laboratory dissociation energy. The effective binding energy makes the thermal dissociation rate of CO equal to its radioactive dissociation rate. This preserves possible relevance for the concept of chemical equilibrium. My next analysis shows that the observed abundances of CO and SiO molecules in SN 1987A rule out frequent suggestions that equilibrium condensation of SUNOCONs has occurred following atomic mixing of the He-burning shell with more central zones in such a way as to reproduce roughly the observed spectrum of isotopes in SUNOCONs while preserving C/O > 1. He atoms admixed along with the excess carbon would destroy CO and SiO molecules, leaving their observed abundances unexplained. The final analysis argues that a chemical quasiequilibrium among grains (but not gas) may exist approximately during condensation, so that its computational use is partially justified as a guide to which mineral phases would be stable against reactions with gas. I illustrate this point with quasiequilibrium calculations by Ebel and Grossman that have shown that graphite is stable even when O/C > 1 if prominent molecules are justifiably excluded from the calculation of chemical equilibrium.
Interpretation of plasma impurity deposition probes. Analytic approximation
Stangeby, P. C.
1987-10-01
Insertion of a probe into the plasma induces a high speed flow of the hydrogenic plasma to the probe which, by friction, accelerates the impurity ions to velocities approaching the hydrogenic ion acoustic speed, i.e., higher than the impurity ion thermal speed. A simple analytic theory based on this effect provides a relation between the impurity flux to the probe Γ_imp and the undisturbed impurity ion density n_imp, with the hydrogenic temperature and density as input parameters. Probe size also influences the collection process, and large probes are found to attract a higher flux density than small probes in the same plasma. The quantity actually measured, c_imp, the impurity atom surface density (m^-2) net-deposited on the probe, is related to Γ_imp and thus to n_imp by taking into account the partial removal of deposited material caused by sputtering and the redeposition process.
Random phase approximation in relativistic approach
International Nuclear Information System (INIS)
Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang
2009-01-01
Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA in the description of dynamical properties of finite nuclei. The fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations be calculated from the same effective Lagrangian, with a consistent treatment of the Dirac sea of negative-energy states. The proper treatment of the single-particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single-particle Green's function, and the relativistic continuum RPA is thus established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)
Approximating the Analytic Fourier Transform with the Discrete Fourier Transform
Axelrod, Jeremy
2015-01-01
The Fourier transform is approximated over a finite domain using a Riemann sum. This Riemann sum is then expressed in terms of the discrete Fourier transform, which allows the sum to be computed with a fast Fourier transform algorithm more rapidly than via a direct matrix multiplication. Advantages and limitations of using this method to approximate the Fourier transform are discussed, and prototypical MATLAB codes implementing the method are presented.
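The construction summarized above can be sketched in a few lines. The sketch below (Python/NumPy rather than the paper's MATLAB; the Gaussian test function and grid parameters are illustrative assumptions) expresses the finite-domain Riemann sum in terms of the DFT and compares against the known analytic transform:

```python
import numpy as np

def approx_ft(f, x0, dx, n):
    """Riemann-sum approximation of F(k) = integral f(x) exp(-i k x) dx,
    evaluated at the DFT frequencies via a fast Fourier transform."""
    x = x0 + dx * np.arange(n)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # angular frequencies
    # Riemann sum: dx * sum_n f(x_n) e^{-i k x_n} = dx e^{-i k x0} * DFT(f)
    F = dx * np.exp(-1j * k * x0) * np.fft.fft(f(x))
    return np.fft.fftshift(k), np.fft.fftshift(F)

# Gaussian test: f(x) = exp(-x^2/2) has F(k) = sqrt(2*pi) * exp(-k^2/2)
k, F = approx_ft(lambda x: np.exp(-x**2 / 2.0), x0=-20.0, dx=40.0 / 1024, n=1024)
err = np.max(np.abs(F - np.sqrt(2.0 * np.pi) * np.exp(-k**2 / 2.0)))
```

The phase factor exp(-i k x0) accounts for the offset of the sampling grid, so the FFT output reproduces the Riemann sum of the continuous transform at the DFT frequencies.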
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors with temporally cyclical biting patterns, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
Approximate analytical relationships for linear optimal aeroelastic flight control laws
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum-phase behavior, stability, performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to obtain from numerically based sensitivity analysis.
Dataset concerning the analytical approximation of the Ae3 temperature
Directory of Open Access Journals (Sweden)
B.L. Ennis
2017-02-01
The dataset includes the terms of the function and the values of the polynomial coefficients for the major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full-length article entitled “The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel”, 10.1016/j.actamat.2016.05.046 (Ennis et al., 2016) [1].
Directory of Open Access Journals (Sweden)
Giorgos Minas
2017-07-01
In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA), remaining uniformly accurate for long times while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and the Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for the integration of diffusion equations. Stochastic versions of published models of the circadian clock and the NF-κB system are used to illustrate our results.
Approximate, analytic solutions of the Bethe equation for charged particle range
Swift, Damian C.; McNaney, James M.
2009-01-01
By either performing a Taylor expansion or making a polynomial approximation, the Bethe equation for charged particle stopping power in matter can be integrated analytically to obtain the range of charged particles in the continuous deceleration approximation. Ranges match reference data to the expected accuracy of the Bethe model. In the non-relativistic limit, the energy deposition rate was also found analytically. The analytic relations can be used to complement and validate numerical solutions.
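As an illustration of the kind of integration involved (not the paper's own derivation), consider a toy non-relativistic Bethe-like stopping power S(E) = (k/E) ln(aE), with k and a illustrative constants. The substitution u = ln(aE) turns the continuous-deceleration range integral into an exponential integral, which can be checked against direct quadrature:

```python
import numpy as np
from scipy.special import expi

# Toy non-relativistic Bethe-like stopping power S(E) = (k/E) * ln(a*E);
# k and a are illustrative constants, not values from the paper.
k, a = 2.0, 3.0
E1, E2 = 1.0, 50.0          # limits chosen so that a*E > 1 (positive log)

def range_analytic(Elo, Ehi):
    """CSDA range R = integral E dE / (k ln(aE)); substituting u = ln(aE)
    gives (1/(k a^2)) * integral e^{2u}/u du = Ei(2u)/(k a^2) exactly."""
    return (expi(2.0 * np.log(a * Ehi)) - expi(2.0 * np.log(a * Elo))) / (k * a * a)

# Direct trapezoidal quadrature of the same integral for comparison.
E = np.linspace(E1, E2, 200001)
g = E / (k * np.log(a * E))
range_numeric = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(E))

R = range_analytic(E1, E2)
```

The closed form via the exponential integral Ei plays the role of the reference here; a polynomial approximation of 1/ln(aE), as in the paper, would make the integral elementary.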
A new way of obtaining analytic approximations of Chandrasekhar's H function
International Nuclear Information System (INIS)
Vukanic, J.; Arsenovic, D.; Davidovic, D.
2007-01-01
Applying the mean value theorem for definite integrals to the non-linear integral equation for Chandrasekhar's H function describing conservative isotropic scattering, we have derived a new, simple analytic approximation for it, with a maximal relative error below 2.5%. With this new function as a starting point, after a single iteration in the corresponding integral equation, we have obtained a new, highly accurate analytic approximation for the H function. As its maximal relative error is below 0.07%, it significantly surpasses the accuracy of other analytic approximations.
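The effect of a single iteration can be reproduced with a simpler starting point (the paper's own approximation is not given here). Starting from the crude linear form H0(μ) = 1 + 2μ and applying one pass of the reciprocal form of the conservative isotropic H equation, 1/H(μ) = ∫0^1 (μ'/2) H(μ')/(μ + μ') dμ', moves H(1) from 3.0 to about 2.885, within roughly 1% of the exact value 2.90781:

```python
import numpy as np

# Chandrasekhar's reciprocal form for conservative isotropic scattering:
#   1/H(mu) = integral_0^1 (mu'/2) H(mu') / (mu + mu') dmu'
# The starting guess H0(mu) = 1 + 2*mu is an illustrative choice, not the
# approximation derived in the paper.
t, w = np.polynomial.legendre.leggauss(64)   # nodes/weights on (-1, 1)
mu_p = 0.5 * (t + 1.0)                       # mapped to (0, 1)
w_p = 0.5 * w

def one_iteration(H, mu):
    """One pass of the reciprocal integral equation at angle mu."""
    return 1.0 / np.sum(w_p * 0.5 * mu_p * H(mu_p) / (mu + mu_p))

H0 = lambda mu: 1.0 + 2.0 * mu               # H0(1) = 3.0, about 3% high
H1_at_1 = one_iteration(H0, 1.0)             # exact H(1) = 2.90781...
```

This mirrors the paper's strategy: a cheap analytic starting approximation plus one iteration of the integral equation sharply reduces the error.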
Analytic approximation for the modified Bessel function I -2/3(x)
Martin, Pablo; Olivares, Jorge; Maass, Fernando
2017-12-01
In the present work an analytic approximation to the modified Bessel function of negative fractional order I_{-2/3}(x) is presented. The approximation is valid for every positive value of the independent variable. The accuracy is high in spite of the small number (4) of parameters used. The approximation is a combination of elementary functions with rational ones. Power series and asymptotic expansions are used simultaneously to obtain the approximation.
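The two ingredients that such approximations join, the ascending power series and the large-x asymptotic expansion, can be sketched directly (the paper's 4-parameter rational form itself is not reproduced here):

```python
import math

def I_series(nu, x, terms=60):
    """Ascending power series: I_nu(x) = sum (x/2)^(2k+nu) / (k! Gamma(k+nu+1))."""
    return sum((x / 2.0) ** (2 * k + nu)
               / (math.factorial(k) * math.gamma(k + nu + 1.0))
               for k in range(terms))

def I_asymptotic(nu, x):
    """Large-x form I_nu(x) ~ e^x / sqrt(2 pi x), with the first
    correction term -(4 nu^2 - 1)/(8 x) included."""
    return math.exp(x) / math.sqrt(2.0 * math.pi * x) \
        * (1.0 - (4.0 * nu ** 2 - 1.0) / (8.0 * x))

nu = -2.0 / 3.0
# At moderate x the two representations already agree to a few parts in 10^4.
rel_gap = abs(I_series(nu, 10.0) - I_asymptotic(nu, 10.0)) / I_series(nu, 10.0)
```

A global approximation of the kind described above interpolates between these two regimes with a single expression.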
Directory of Open Access Journals (Sweden)
Xiao-Ying Qin
2014-01-01
An Adomian decomposition method (ADM) is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
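The full two-phase Stefan solution is lengthy, but the Adomian machinery itself can be illustrated on the scalar problem y' = y², y(0) = 1, whose exact solution is 1/(1 − t). This stand-in example is an assumption for illustration, not taken from the paper:

```python
import numpy as np

# Minimal Adomian decomposition demo on y' = y^2, y(0) = 1, with exact
# solution y(t) = 1/(1 - t).  An illustrative stand-in, not the Stefan
# problem from the paper.
t = np.linspace(0.0, 0.5, 2001)
dt = t[1] - t[0]

def cumulative_integral(f):
    """Cumulative trapezoidal integral of samples f over the grid t."""
    inc = 0.5 * (f[1:] + f[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(inc)))

components = [np.ones_like(t)]      # y_0 = y(0) = 1
for n in range(8):
    # Adomian polynomial for the nonlinearity N(y) = y^2:
    #   A_n = sum_{k=0}^{n} y_k * y_{n-k}
    A_n = sum(components[k] * components[n - k] for k in range(n + 1))
    components.append(cumulative_integral(A_n))   # y_{n+1}(t) = ∫_0^t A_n ds

y_approx = sum(components)
err = np.max(np.abs(y_approx - 1.0 / (1.0 - t)) / (1.0 / (1.0 - t)))
```

Here the components come out as y_n = t^n, so the partial sums reproduce the geometric series of the exact solution, illustrating the fast convergence the abstract reports.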
Analytic approaches to relativistic hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Hatta, Yoshitaka
2016-12-15
I summarize our recent work towards finding and utilizing analytic solutions of relativistic hydrodynamics. In the first part I discuss various exact solutions of second-order conformal hydrodynamics. In the second part I compute the flow harmonics v_n analytically using the anisotropically deformed Gubser flow and discuss their dependence on n, p_T, viscosity, the chemical potential and the charge.
International Nuclear Information System (INIS)
Liu Chunliang; Xie Xi; Chen Yinbao
1991-01-01
The universal nonlinear dynamic system equation is equivalent to a nonlinear Volterra integral equation. An approximate analytical solution of any order is obtained for this integral equation by an exact analytical method, giving an alternative derivation procedure as well as an alternative computation algorithm for the solution of the universal nonlinear dynamic system equation.
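The equivalence can be made concrete on a scalar example (an illustration, not from the paper): y' = −y, y(0) = 1 is equivalent to the Volterra equation y(t) = 1 − ∫0^t y(s) ds, and successive substitution reproduces the partial sums of e^{−t}:

```python
import numpy as np

# The ODE y' = -y, y(0) = 1 is equivalent to the Volterra integral
# equation y(t) = 1 - ∫_0^t y(s) ds; successive substitution (Picard
# iteration) yields successively better approximate solutions.
t = np.linspace(0.0, 2.0, 4001)
dt = t[1] - t[0]

def cumint(f):
    """Cumulative trapezoidal integral over the grid t."""
    inc = 0.5 * (f[1:] + f[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(inc)))

y = np.ones_like(t)                 # zeroth approximation y_0(t) = 1
for _ in range(25):
    y = 1.0 - cumint(y)             # y_{n+1}(t) = 1 - ∫_0^t y_n ds

err = np.max(np.abs(y - np.exp(-t)))
```

Each iterate adds one more term of the exponential series, which is the simplest instance of an any-order approximate solution built from the integral-equation form.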
Bakker, Mark
2001-05-01
An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer locally into a number of layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over a small area around the well only; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision in three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.
International Nuclear Information System (INIS)
Chernov, V; Barboza-Flores, M; Chernov, G
2012-01-01
In this work we propose an analytical approach describing the dose distribution around an NP embedded in a medium. The approach describes the following sequence of events: the homogeneous and isotropic creation of secondary electrons under incident photon fluence; the travel of the created electrons toward the NP surface and their escape from the NP with different energies and angles; and the deposition of energy in the surrounding medium. The radial dose distribution around the NP was found as the average energy deposited by the escaped electrons in a spherical shell at a distance r from the NP center, normalized to its mass. The continuous slowing down approximation and the assumption that created electrons travel in a straight-line path were used. As a result, a set of analytical expressions describing the dose distribution was derived. The expressions were applied to the calculation of the dose distribution around spherical gold NPs of different sizes embedded in water. It was shown that the dose distribution is close to a 1/r^2 dependence and practically independent of the NP radius.
Nikitin, E E; Troe, J
2010-09-16
Approximate analytical expressions are derived for the low-energy rate coefficients of capture of two identical dipolar polarizable rigid rotors in their lowest nonresonant (j(1) = 0 and j(2) = 0) and resonant (j(1) = 0,1 and j(2) = 1,0) states. The considered range extends from the quantum, ultralow energy regime, characterized by s-wave capture, to the classical regime described within flywheel and adiabatic channel approaches, respectively. This is illustrated by the table of contents graphic (available on the Web) that shows the scaled rate coefficients for the mutual capture of rotors in the resonant state versus the reduced wave vector between the Bethe zero-energy (left arrows) and classical high-energy (right arrow) limits for different ratios δ of the dipole-dipole to dispersion interaction.
Fast and Analytical EAP Approximation from a 4th-Order Tensor.
Ghosh, Aurobrata; Deriche, Rachid
2012-01-01
Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.
Faculty Workload: An Analytical Approach
Dennison, George M.
2012-01-01
Recent discussions of practices in higher education have tended toward muck-raking and self-styled exposure of cynical self-indulgence by faculty and administrators at the expense of students and their families, as usually occurs during periods of economic duress, rather than toward analytical studies designed to foster understanding. This article…
Number-conserving random phase approximation with analytically integrated matrix elements
International Nuclear Information System (INIS)
Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.
1990-01-01
In the present paper a number-conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry-projected quasiparticle mean fields. All the occurring integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number-projected variational equation for the number-conserving pairing problem.
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
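The idea behind a polynomial least-squares method can be sketched on a linear pantograph equation (illustrative coefficients, not an example taken from the paper): with the ansatz y(t) = 1 + Σ c_k t^k, the residual of y'(t) = −y(t) + ½ y(t/2), y(0) = 1, is linear in the coefficients, so minimizing it over collocation points is an ordinary linear least-squares problem:

```python
import numpy as np

# Polynomial least-squares sketch for a linear pantograph equation
#   y'(t) = -y(t) + 0.5*y(t/2),  y(0) = 1,  on [0, 1]
# (illustrative coefficients, not an example taken from the paper).
# The ansatz y(t) = 1 + sum_{k=1}^m c_k t^k satisfies y(0) = 1 exactly,
# and the residual y' + y - 0.5*y(t/2) is linear in the coefficients.
m = 10
tc = np.linspace(0.0, 1.0, 200)      # collocation points

# Column k-1 holds the residual contribution of the basis function t^k.
A = np.column_stack([
    k * tc ** (k - 1) + tc ** k - 0.5 * (tc / 2.0) ** k
    for k in range(1, m + 1)
])
b = -0.5 * np.ones_like(tc)          # residual of the constant term y = 1
c, *_ = np.linalg.lstsq(A, b, rcond=None)

residual = np.max(np.abs(A @ c - b)) # maximal equation residual on the grid
```

For genuinely nonlinear delay equations, as treated in the paper, the same residual-minimization idea applies, but the least-squares problem becomes nonlinear in the coefficients.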
Poirier, M.
2015-06-01
Density effects in ionized matter require particular attention since they modify energies, wavefunctions and transition rates with respect to the isolated-ion situation. The approach chosen in this paper is based on the ion-sphere model involving a Thomas-Fermi-like description for free electrons, the bound electrons being described by a full quantum mechanical formalism. This makes it possible to deal with plasmas out of local thermal equilibrium, assuming only a Maxwell distribution for free electrons. For H-like ions, such a theory provides simple and rather accurate analytical approximations for the potential created by free electrons. Emphasis is put on the plasma potential rather than on the electron density, since the energies and wavefunctions depend directly on this potential. Beyond the uniform electron gas model, temperature effects may be analyzed. In the case of H-like ions, this formalism provides analytical perturbative expressions for the energies, wavefunctions and transition rates. Explicit expressions are given for the case of maximum orbital quantum number, and compare satisfactorily with results from a direct integration of the radial Schrödinger equation. Some formulas for lower orbital quantum numbers are also proposed.
Analytical approach for the Floquet theory of delay differential equations.
Simmendinger, C; Wunderlin, A; Pelster, A
1999-05-01
We present an analytical approach to deal with nonlinear delay differential equations close to instabilities of time periodic reference states. To this end we start with approximately determining such reference states by extending the Poincaré-Lindstedt and the Shohat expansions, which were originally developed for ordinary differential equations. Then we systematically elaborate a linear stability analysis around a time periodic reference state. This allows us to approximately calculate the Floquet eigenvalues and their corresponding eigensolutions by using matrix valued continued fractions.
Nonlinear optics an analytical approach
Mandel, Paul
2010-01-01
Based on the author's extensive teaching experience and lecture notes, this textbook provides a substantially analytical rather than descriptive presentation of nonlinear optics. Divided into five parts, with most chapters corresponding to a two-hour lecture, the book begins with a unique account of the historical development from Kirchhoff's law for black-body radiation to Planck's quantum hypothesis and Einstein's discovery of spontaneous emission, providing all the explicit proofs. The subsequent sections deal with matter quantization, ultrashort pulse propagation in 2-level media, cavity nonlinear optics, and chi(2) and chi(3) media. The book is intended for graduate and PhD students in nonlinear optics or photonics, while also representing a valuable reference for researchers in these fields.
International Nuclear Information System (INIS)
Kurnia, W; Tan, P C; Yeo, S H; Wong, M
2008-01-01
Theoretical models have been used to predict process performance measures in electrical discharge machining (EDM), namely the material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR). However, these contributions are mainly applicable to conventional EDM due to limits on the range of energy and pulse-on-time adopted by the models. This paper proposes an analytical approximation of micro-EDM performance measures, based on crater prediction using a developed theoretical model. The results show that the analytical approximation of the MRR and TWR agrees closely with the experimental data. The approximated MRR and TWR are found to deviate by up to 30% and 24%, respectively, from their associated experimental values. Since the voltage and current inputs used in the computation are captured in real time, the method can be applied as a reliable online monitoring system for the micro-EDM process.
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean
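The decomposition mentioned above rests on the fact that crosswind integration of the Gaussian point-source kernel has a closed form. A minimal sketch with textbook Gaussian-plume formulas (ground-level source and receptor, constant wind; the parameter values are illustrative, and this is not the AERMOD/ADMS parameterization):

```python
import numpy as np

# Ground-level Gaussian plume: numerically integrating point sources along
# an infinite crosswind line reproduces the analytic line-source result
#   C_line = q * sqrt(2/pi) / (u * sigma_z).
# u, sigma_y, sigma_z, q are illustrative values, not model output.
u, sigma_y, sigma_z, q = 5.0, 30.0, 15.0, 1.0

def point_kernel(y):
    """Ground-level point-source concentration per unit emission rate
    (ground reflection included) at crosswind offset y."""
    return np.exp(-y ** 2 / (2.0 * sigma_y ** 2)) / (np.pi * u * sigma_y * sigma_z)

# Trapezoidal crosswind integration of point sources of strength q per metre
y = np.linspace(-10.0 * sigma_y, 10.0 * sigma_y, 200001)
f = q * point_kernel(y)
C_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

C_analytic = q * np.sqrt(2.0 / np.pi) / (u * sigma_z)
```

The crosswind Gaussian integrates out exactly, which is why the line-source result no longer depends on sigma_y; the finite line and area sources of the paper require the more general (hypergeometric) treatment.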
Consumer energy conservation policy. An analytical approach
Energy Technology Data Exchange (ETDEWEB)
McDougall, G.H.G.; Ritchie, J.R.B.
1984-06-01
To capture the potential energy savings available in the consumer sector, an analytical approach to conservation policy is proposed. A policy framework is described and the key constructs, including a payoff matrix analysis and a consumer impact analysis, are discussed. Implications derived from the considerable amount of prior consumer research are provided to illustrate the effect on the design and implementation of future programmes. The results of this analytical approach to conservation policy - economic stability and economic security - are goals well worth pursuing.
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-01-01
This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number
Analytic approximations for the elastic moduli of two-phase materials
DEFF Research Database (Denmark)
Zhang, Z. J.; Zhu, Y. K.; Zhang, P.
2017-01-01
Based on the models of series and parallel connections of the two phases in a composite, analytic approximations are derived for the elastic constants (Young's modulus, shear modulus, and Poisson's ratio) of elastically isotropic two-phase composites containing second phases of various volume...
Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method
Directory of Open Access Journals (Sweden)
De-Gang Wang
2012-01-01
A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations, and it is easy to implement and extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.
International Nuclear Information System (INIS)
Lublinsky, Michael
2004-01-01
A simple analytic expression for the nonsinglet structure function f_NS is given. The expression is derived from the result of Ermolaev, Manaenkov, and Ryskin obtained by low-x resummation of the quark ladder diagrams in the double logarithmic approximation of perturbative QCD.
International Nuclear Information System (INIS)
Tellier, C.R.; Tosser, A.J.
1977-01-01
In the usual thickness range of sputtered metallic films, analytical linearized approximate expressions for the polycrystalline film resistivity and its t.c.r. are deduced from the Mayadas-Shatzkes theoretical equations. A good experimental fit is observed for rf-sputtered Al metal films. (orig.)
Delay in a tandem queueing model with mobile queues: An analytical approximation
Al Hanbali, Ahmad; de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.
In this paper, we analyze the end-to-end delay performance of a tandem queueing system with mobile queues. Due to state-space explosion, there is no hope for a numerical exact analysis for the joint-queue-length distribution. For this reason, we present an analytical approximation that is based on
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.
Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-09-12
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention. Methods: Inspired by the Delphi method, we introduced a novel methodology - group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow-up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods. Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making. Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.
International Nuclear Information System (INIS)
Marrero, S. I.; Turibus, S. N.; Assis, J. T. De; Monin, V. I.
2011-01-01
Data processing in most diffraction experiments is based on determining the diffraction line position and measuring the broadening of the diffraction profile. These procedures can be carried out with high precision, in digital form, by approximating experimental diffraction profiles with analytical functions. Various functions exist for this purpose: simple ones, like the Gauss function, which are not suitable for a wide range of experimental profiles, and good approximating functions, like the Voigt or Pearson VII functions, which are complicated for practical use. The proposed analytical function is a modified Cauchy function with two adjustable parameters, allowing it to describe any experimental diffraction profile. In the present paper the modified function is applied to the approximation of diffraction lines of steels after various physical and mechanical treatments, and to the simulation of diffraction profiles used in the study of stress gradients and distortions of the crystal structure. (Author)
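The paper's exact parameterization is not given above; a common modified-Cauchy (Pearson VII-type) profile with an adjustable width w and shape exponent m can nevertheless illustrate the fitting procedure on synthetic data (an assumed form for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Cauchy (Pearson VII-type) profile with adjustable width w and
# shape exponent m; an assumed form for illustration, the paper's exact
# parameterization is not reproduced here.
def modified_cauchy(x, I0, x0, w, m):
    return I0 / (1.0 + ((x - x0) / w) ** 2) ** m

# Synthetic diffraction peak: known parameters plus Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 401)
y = modified_cauchy(x, 100.0, 0.3, 0.8, 1.6) + rng.normal(0.0, 0.5, x.size)

# Nonlinear least-squares fit recovers the line position and broadening
popt, _ = curve_fit(modified_cauchy, x, y, p0=[90.0, 0.0, 1.0, 1.0])
I0_fit, x0_fit, w_fit, m_fit = popt
```

With m = 1 the profile reduces to a pure Cauchy (Lorentzian) shape, and increasing m moves it toward a Gaussian, which is what lets a two-parameter shape span a wide range of experimental profiles.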
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Silva, Adilson C. da; Goncalves, Alessandro C.; Martinez, Aquilino S.
2009-01-01
The analytical solution of the point kinetics equations with one group of delayed neutrons is useful in predicting neutron density variation during the operation of a nuclear reactor. Although different approximate solutions for the system of point kinetics equations with temperature feedback may be found in the literature, some of them do not present an explicit time dependence, which makes computational implementation difficult and, as a result, limits their applicability in practical cases. The present paper uses the polynomial adjustment technique to overcome this problem in the analytical approximation proposed by Nahla. In a systematic comparison with other existing approximations it is concluded that the method is adequate, presenting small deviations relative to the values obtained from the reference numerical method. (author)
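For context, the reference against which such analytical approximations are typically judged is a direct numerical integration of the point kinetics equations. The sketch below integrates one-delayed-group point kinetics with a linear temperature feedback using classic fourth-order Runge-Kutta; all parameter values are illustrative assumptions, not taken from the paper.

```python
# One-delayed-group point kinetics with a linear temperature feedback,
# integrated by classic RK4 as a reference numerical solution.
# All parameter values below are illustrative assumptions.
BETA, LAM, GEN = 0.0065, 0.08, 1.0e-4  # delayed fraction, decay const (1/s), generation time (s)
RHO0, ALPHA = 0.003, 1.0e-5            # step reactivity, feedback coefficient

def rhs(state):
    n, c, w = state                    # neutron density, precursor density, integrated power
    rho = RHO0 - ALPHA * w             # temperature feedback proportional to released energy
    return ((rho - BETA) / GEN * n + LAM * c,
            BETA / GEN * n - LAM * c,
            n)

def rk4_integrate(state, dt, steps):
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs([s + 0.5 * dt * k for s, k in zip(state, k1)])
        k3 = rhs([s + 0.5 * dt * k for s, k in zip(state, k2)])
        k4 = rhs([s + dt * k for s, k in zip(state, k3)])
        state = [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
    return state

n0 = 1.0
initial = [n0, BETA * n0 / (GEN * LAM), 0.0]  # precursors start in equilibrium
final = rk4_integrate(initial, dt=1.0e-4, steps=20000)
print(final[0])  # neutron density after 2 s of the transient
```

With a positive reactivity below prompt critical, the density exhibits the prompt jump followed by slow delayed-neutron growth, which the feedback term gradually limits.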
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A.P. [Instituto Federal do Rio de Janeiro, Nilopolis, RJ (Brazil)], e-mail: dpalmaster@gmail.com; Silva, Adilson C. da; Goncalves, Alessandro C.; Martinez, Aquilino S. [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: asilva@con.ufrj.br, e-mail: agoncalves@con.ufrj.br, e-mail: aquilino@lmp.ufrj.br
2009-07-01
The analytical solution of the point kinetics equations with one group of delayed neutrons is useful in predicting neutron density variation during the operation of a nuclear reactor. Although different approximate solutions for the system of point kinetics equations with temperature feedback may be found in the literature, some of them do not present an explicit time dependence, which makes computational implementation difficult and, as a result, limits their applicability in practical cases. The present paper uses the polynomial adjustment technique to overcome this problem in the analytical approximation proposed by Nahla. In a systematic comparison with other existing approximations it is concluded that the method is adequate, presenting small deviations relative to the values obtained from the reference numerical method. (author)
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
The problem of the determination of the nuclear surface energy is addressed within the framework of the extended Thomas-Fermi (ETF) approximation using Skyrme functionals. We propose an analytical model for the density profiles with variationally determined diffuseness parameters. In this first paper, we consider the case of symmetric nuclei. In this situation, the ETF functional can be exactly integrated, leading to an analytical formula expressing the surface energy as a function of the couplings of the energy functional. The importance of non-local terms is stressed, and it is shown that they cannot be deduced simply from the local part of the functional, as was suggested in previous works.
Energy Technology Data Exchange (ETDEWEB)
Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be
2009-06-19
The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.
International Nuclear Information System (INIS)
Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien
2009-01-01
The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.
Lin, Yezhi; Liu, Yinping; Li, Zhibin
2013-01-01
The Adomian decomposition method (ADM) is one of the most effective methods to construct analytic approximate solutions for nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials, Rach (2008) [22], the Adomian decomposition method and the Padé approximants technique, a new algorithm is proposed to construct analytic approximate solutions for nonlinear fractional differential equations with initial or boundary conditions. Furthermore, a MAPLE software package is developed to implement this new algorithm, which is user-friendly and efficient. One only needs to input the system equation, initial or boundary conditions and several necessary parameters, and our package will automatically deliver the analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the scope and demonstrate the validity of our package, especially for non-smooth initial value problems. Our package provides a helpful and easy-to-use tool in science and engineering simulations. Program summary: Program title: ADMP. Catalogue identifier: AENE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12011. No. of bytes in distributed program, including test data, etc.: 575551. Distribution format: tar.gz. Programming language: MAPLE R15. Computer: PCs. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Constructing analytic approximate solutions of nonlinear fractional differential equations with initial or boundary conditions. Non-smooth initial value problems can be solved by this program. Solution method: Based on the new definition of the Adomian polynomials [1], the Adomian decomposition method and the Padé approximants technique.
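The core of the ADM is the recursive computation of Adomian polynomials. The small sketch below (an independent illustration in Python/SymPy, not the MAPLE package described above) builds them via the standard parametrization trick for the ordinary, non-fractional test problem y' = y², y(0) = 1, whose ADM series reproduces the Taylor expansion of the exact solution 1/(1 - t).

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adm_series(f, y0, n_terms):
    # Adomian decomposition for y' = f(y), y(0) = y0:
    # y_k = integral from 0 to t of the Adomian polynomial A_{k-1}.
    comps = [sp.Integer(y0)]
    for k in range(1, n_terms):
        # A_{k-1} = (1/(k-1)!) d^{k-1}/dlam^{k-1} f(sum_i lam^i y_i) at lam = 0
        u = sum(lam**i * comps[i] for i in range(k))
        a_poly = sp.diff(f(u), lam, k - 1).subs(lam, 0) / sp.factorial(k - 1)
        comps.append(sp.integrate(a_poly, (t, 0, t)))
    return sp.expand(sum(comps))

# y' = y**2, y(0) = 1 has exact solution 1/(1 - t); the ADM partial sum
# equals its Taylor polynomial 1 + t + t**2 + t**3 + t**4 term by term.
series = adm_series(lambda y: y**2, 1, 5)
print(series)
```

In the full method the partial sum would then be converted to a Padé approximant to enlarge the region of convergence.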
Analytical approximations to the Hotelling trace for digital x-ray detectors
Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.
2001-06-01
The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal and the background.
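A minimal numerical illustration of this figure of merit: for a signal known exactly on a stationary background, the Hotelling trace is SNR^2 = s^T K^{-1} s, and the infinite-detector approximation replaces the Toeplitz covariance by its circulant (Fourier) counterpart. The covariance kernel and all numbers below are invented for the demonstration.

```python
import numpy as np

n = 64
x = np.arange(n)
signal = np.exp(-0.5 * ((x - n / 2) / 3.0) ** 2)  # signal known exactly

def cov(d, sigma2=1.0, ell=2.0):
    # Stationary background: covariance depends only on pixel separation d
    return sigma2 * np.exp(-np.abs(d) / ell)

K = cov(x[:, None] - x[None, :])
snr2_exact = float(signal @ np.linalg.solve(K, signal))  # Hotelling trace by matrix inversion

# Infinite-detector approximation: a stationary (circulant) covariance is
# diagonalized by the DFT, so SNR^2 ~ (1/n) sum_k |s_hat_k|^2 / N_k,
# with N_k the noise power spectrum of the circulant embedding.
N_k = np.fft.fft(cov(np.minimum(x, n - x))).real
s_k = np.fft.fft(signal)
snr2_approx = float(np.sum(np.abs(s_k) ** 2 / N_k) / n)

print(snr2_exact, snr2_approx)
```

For a signal well away from the detector edges the two numbers agree closely, which is the regime where the paper's approximation is accurate.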
Analytic approaches to atomic response properties
International Nuclear Information System (INIS)
Lamm, E.E.
1980-01-01
Many important response properties, e.g., multipole polarizabilities and sum rules, photodetachment cross sections, and closely-related long-range dispersion force coefficients, are insensitive to details of electronic structure. In this investigation, analytic asymptotic theories of atomic response properties are constructed that yield results as accurate as those obtained by more elaborate numerical methods. In the first chapter, a novel and simple method is used to determine the multipole sum rules S_l(-k), for positive and negative values of k, of the hydrogen atom and the hydrogen negative ion in the asymptotic approximation. In the second chapter, an analytically-tractable extended asymptotic model for the response properties of weakly-bound anions is proposed, and the multipole polarizability, multipole sum rules, and photodetachment cross section determined by the model are computed analytically. Dipole polarizabilities and photodetachment cross sections determined from the model for Li-, Na-, and K- are compared with the numerical results of Moores and Norcross. Agreement is typically within 15% if the pseudopotential is included. In the third chapter a comprehensive and unified treatment of atomic multipole oscillator strengths, dynamic multipole polarizabilities, and dispersion force constants in a variety of Coulomb-like approximations is presented. A theoretically and computationally superior modification of the original Bates-Damgaard (BD) procedure, referred to here as simply the Coulomb approximation (CA), is introduced. An analytic expression for the dynamic multipole polarizability is found which contains as special cases this quantity within the CA, the extended Coulomb approximation (ECA) of Adelman and Szabo, and the quantum defect orbital (QDO) method of Simons.
Consumer energy - conservation policy: an analytical approach
Energy Technology Data Exchange (ETDEWEB)
McDougall, G.H.G.; Ritchie, J.R.B.
1984-06-01
To capture the potential energy savings available in the consumer sector, an analytical approach to conservation policy is proposed. A policy framework is described, and the key constructs, including a payoff matrix analysis and a consumer impact analysis, are discussed. Implications derived from the considerable body of prior consumer research are provided to illustrate the effect on the design and implementation of future programs. The ends served by this analytical approach to conservation policy, economic stability and economic security, are goals well worth pursuing. 13 references, 2 tables.
Directory of Open Access Journals (Sweden)
M. Bishehniasar
2017-01-01
Full Text Available The demand of many scientific areas for the usage of fractional partial differential equations (FPDEs) to explain their real-world systems has been broadly identified. The solutions may portray dynamical behaviors of various particles such as chemicals and cells. The desire of obtaining approximate solutions to treat these equations aims to overcome the mathematical complexity of modeling the relevant phenomena in nature. This research proposes a promising approximate-analytical scheme that is an accurate technique for solving a variety of noninteger partial differential equations (PDEs). The proposed strategy is based on approximating the derivative of fractional order and reducing the problem to the corresponding partial differential equation (PDE). Afterwards, the approximating PDE is solved by using a separation-of-variables technique. The method can be simply applied to nonhomogeneous problems and is proficient at diminishing the computational cost as well as achieving an approximate-analytical solution that is in excellent concurrence with the exact solution of the original problem. In addition, and to demonstrate the efficiency of the method, it is compared with two finite difference methods, including a nonstandard finite difference (NSFD) method and the standard finite difference (SFD) technique, which are popular in the literature for solving engineering problems.
Directory of Open Access Journals (Sweden)
S. Das
2013-12-01
Full Text Available In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence control parameters, which account for the faster convergence of the solution. Effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of parameters, are calculated numerically and presented through graphs and tables for different particular cases.
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
We have recently addressed the problem of the determination of the nuclear surface energy for symmetric nuclei in the framework of the extended Thomas-Fermi (ETF) approximation using Skyrme functionals. We presently extend this formalism to the case of asymmetric nuclei and the question of the surface symmetry energy. We propose an approximate expression for the diffuseness and the surface energy. These quantities are analytically related to the parameters of the energy functional. In particular, the influence of the different equation of state parameters can be explicitly quantified. Detailed analyses of the different energy components (local/non-local, isoscalar/isovector, surface/curvature and higher order) are also performed. Our analytical solution of the ETF integral improves previous models and leads to a precision of better than 200 keV per nucleon in the determination of the nuclear binding energy for dripline nuclei.
Sinc-Approximations of Fractional Operators: A Computing Approach
Directory of Open Access Journals (Sweden)
Gerd Baumann
2015-06-01
Full Text Available We discuss a new approach to represent fractional operators by Sinc approximation using convolution integrals. A spin-off of the convolution representation is an effective inverse Laplace transform. Several examples demonstrate the application of the method to different practical problems.
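The building block of any Sinc method is the truncated cardinal (Whittaker) expansion. The sketch below, with arbitrarily chosen step size and truncation (not parameters from the paper), shows how accurately it reproduces a function that decays exponentially on the real line.

```python
import numpy as np

def sinc_approx(f, h, n, x):
    # Truncated cardinal (Whittaker) expansion with step h and 2n+1 terms:
    # f(x) ~ sum_k f(k h) sinc((x - k h)/h), where np.sinc(u) = sin(pi u)/(pi u).
    k = np.arange(-n, n + 1)
    return float(np.sum(f(k * h) * np.sinc((x - k * h) / h)))

# sech decays exponentially on the real line, the ideal setting for Sinc methods
f = lambda u: 1.0 / np.cosh(u)
x0 = 0.37
approx = sinc_approx(f, h=0.25, n=40, x=x0)
print(approx, f(x0))
```

The error decays exponentially in both the step size and the truncation length, which is what makes Sinc quadrature attractive for the convolution integrals that represent fractional operators.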
Approximate Approaches to the One-Dimensional Finite Potential Well
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.
2011-01-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass…
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of the multi-asset basket-spread option. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form.
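For reference, Kirk's (1995) two-asset spread-option formula, the starting point that the thesis generalizes, can be written down in a few lines. The sketch below prices a European spread call on forwards F1 and F2 under correlated lognormal dynamics; the numerical inputs in the example are arbitrary.

```python
import math
from statistics import NormalDist

def kirk_spread_call(f1, f2, k, sigma1, sigma2, rho, r, t):
    """Kirk (1995) approximation for a European call on the spread F1 - F2
    with strike K, under correlated lognormal forwards F1, F2."""
    ratio = f2 / (f2 + k)
    # Effective volatility of F1 / (F2 + K), treating F2 + K as lognormal
    sigma = math.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * ratio
                      + (sigma2 * ratio) ** 2)
    d1 = (math.log(f1 / (f2 + k)) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    nd = NormalDist().cdf
    return math.exp(-r * t) * (f1 * nd(d1) - (f2 + k) * nd(d2))

print(kirk_spread_call(110.0, 100.0, 5.0, 0.2, 0.25, 0.5, 0.05, 1.0))
```

The approximation treats F2 + K as approximately lognormal, which is what reduces the spread payoff to a Black-Scholes-type exchange option and yields a closed form for all hedging parameters.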
Analytical approximations for the amplitude and period of a relaxation oscillator
Directory of Open Access Journals (Sweden)
Golkhou Vahid
2009-01-01
Full Text Available Abstract. Background: Analysis and design of complex systems benefit from mathematically tractable models, which are often derived by approximating a nonlinear system with an effective equivalent linear system. Biological oscillators with coupled positive and negative feedback loops, termed hysteresis or relaxation oscillators, are an important class of nonlinear systems and have been the subject of comprehensive computational studies. Analytical approximations have identified criteria for sustained oscillations, but have not linked the observed period and phase to compact formulas involving underlying molecular parameters. Results: We present, to our knowledge, the first analytical expressions for the period and amplitude of a classic model for the animal circadian clock oscillator. These compact expressions are in good agreement with numerical solutions of corresponding continuous ODEs and with stochastic simulations executed at literature parameter values. The formulas are shown to be useful by permitting quick comparisons relative to a negative-feedback repressilator oscillator for noise (10× less sensitive to protein decay rates), efficiency (2× more efficient), and dynamic range (30 to 60 decibel increase). The dynamic range is enhanced at its lower end by a new concentration scale defined by the crossing point of the activator and repressor, rather than by a steady-state expression level. Conclusion: Analytical expressions for oscillator dynamics provide a physical understanding of the observations from numerical simulations and suggest additional properties not readily apparent or as yet unexplored. The methods described here may be applied to other nonlinear oscillator designs and biological circuits.
Monoenergetic approximation of a polyenergetic beam: a theoretical approach
International Nuclear Information System (INIS)
Robinson, D.M.; Scrimger, J.W.
1991-01-01
There exist numerous occasions in which it is desirable to approximate the polyenergetic beams employed in radiation therapy by a beam of photons of a single energy. In some instances, commonly used rules of thumb for the selection of an appropriate energy may be valid. A more accurate approximate energy, however, may be determined by an analysis which takes into account both the spectral qualities of the beam and the material through which it passes. The theoretical basis of this method of analysis is presented in this paper. Experimental agreement with theory for a range of materials and beam qualities is also presented and demonstrates the validity of the theoretical approach taken. (author)
Analytical approximate solutions of the time-domain diffusion equation in layered slabs.
Martelli, Fabrizio; Sassaroli, Angelo; Yamada, Yukio; Zaccanti, Giovanni
2002-01-01
Time-domain analytical solutions of the diffusion equation for photon migration through highly scattering two- and three-layered slabs have been obtained. The effect of the refractive-index mismatch with the external medium is taken into account, and approximate boundary conditions at the interface between the diffusive layers have been considered. A Monte Carlo code for photon migration through a layered slab has also been developed. Comparisons with the results of Monte Carlo simulations showed that the analytical solutions correctly describe the mean path length followed by photons inside each diffusive layer and the shape of the temporal profile of received photons, while discrepancies are observed for the continuous-wave reflectance or transmittance.
Analytic number theory, approximation theory, and special functions in honor of Hari M. Srivastava
Rassias, Michael
2014-01-01
This book, in honor of Hari M. Srivastava, discusses essential developments in mathematical research in a variety of problems. It contains thirty-five articles, written by eminent scientists from the international mathematical community, including both research and survey works. Subjects covered include analytic number theory, combinatorics, special sequences of numbers and polynomials, analytic inequalities and applications, approximation of functions and quadratures, orthogonality, and special and complex functions. The mathematical results and open problems discussed in this book are presented in a simple and self-contained manner. The book contains an overview of old and new results, methods, and theories toward the solution of longstanding problems in a wide scientific field, as well as new results in rapidly progressing areas of research. The book will be useful for researchers and graduate students in the fields of mathematics, physics, and other computational and applied sciences.
International Nuclear Information System (INIS)
Roteta, M.; Baro, J.; Fernandez-Varea, J.M.; Salvat, F.
1994-01-01
The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within approximately 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included.
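The integration scheme mentioned above, 20-point Gauss quadrature applied adaptively, is generic and easy to sketch. The code below is an independent Python illustration (not PHOTAC's FORTRAN), applied to a sharply peaked integrand of the kind cross-section integrals produce.

```python
import numpy as np

# 20-point Gauss-Legendre nodes and weights on [-1, 1]
NODES, WEIGHTS = np.polynomial.legendre.leggauss(20)

def gauss20(f, a, b):
    # Map the reference nodes to [a, b] and apply the quadrature rule
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * float(np.sum(WEIGHTS * f(mid + half * NODES)))

def adaptive_gauss(f, a, b, tol=1e-10):
    # Bisect the interval until the two half-interval estimates
    # agree with the whole-interval estimate to the requested accuracy.
    whole = gauss20(f, a, b)
    mid = 0.5 * (a + b)
    halves = gauss20(f, a, mid) + gauss20(f, mid, b)
    if abs(halves - whole) < tol:
        return halves
    return adaptive_gauss(f, a, mid, 0.5 * tol) + adaptive_gauss(f, mid, b, 0.5 * tol)

# Sharply peaked integrand; the exact value is atan(100)/0.01
val = adaptive_gauss(lambda u: 1.0 / (1e-4 + u**2), 0.0, 1.0)
print(val)
```

Subdivision concentrates automatically around the peak, which is why adaptive Gauss rules handle near-singular cross-section integrands efficiently.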
International Nuclear Information System (INIS)
Broekhoven, M.J.G.; Ruijtenbeek, M.G. van de
1975-01-01
The fracture-mechanics-based stress intensity factor (K-factor) concept has obtained widespread acceptance as a tool for quantitative analysis of both fatigue crack growth and unstable fracture. The present study discusses the applicability of various simple analytical approximations by comparing their results with experimental data. A semi-analytical procedure has been developed whose main characteristics are: the true stress distribution perpendicular to the crack plane for the uncracked structure is used as input data; an extended version of the Shah and Kobayashi solution for elliptical cracks, loaded on their surfaces by tractions described by fourth-order double symmetrical polynomials fit through the data of the previous step, is used to calculate full K-factor variations along the crack fronts; several corrections, among others for free surfaces and for a corner radius, are incorporated. The experiments involve careful monitoring of crack growth rates (da/dN) under uniaxial fatigue loading of precracked nozzle-on-plate models, using among other means a closed-circuit TV system. The resulting da/dN versus crack length (a) curves are converted into K versus a curves using da/dN versus ΔK curves for the same material (ASTM A 508 C12) obtained by standard procedures. Comparison of theoretical and experimental data yields the conclusions that: simple analytical approximations as sometimes recommended in the literature may largely overestimate or underestimate K-factors for nozzle corner cracks; a computer program based on the semi-analytical procedure yields results within seconds of CPU time once the input data have been generated. These results compare well with experimental and available finite element data for the range of crack depths of practical concern.
Tao, Wanghai; Wang, Quanjiu; Lin, Henry
2018-03-01
Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of parameter β (in the kinematic wave model) approximately equal to two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) are given in this study so that the model is easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
Lin, Yezhi; Liu, Yinping; Li, Zhibin
2012-01-01
The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials and the two-step Adomian decomposition method (TSADM) combined with the Padé technique, a new algorithm is proposed to construct accurate analytic approximations of nonlinear differential equations with initial conditions. Furthermore, a MAPLE package is developed, which is user-friendly and efficient. One only needs to input a system, initial conditions and several necessary parameters, and our package will automatically deliver analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the validity of the package. Our program provides a helpful and easy-to-use tool in science and engineering to deal with initial value problems. Program summary: Program title: NAPA. Catalogue identifier: AEJZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 4060. No. of bytes in distributed program, including test data, etc.: 113 498. Distribution format: tar.gz. Programming language: MAPLE R13. Computer: PC. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Solve nonlinear differential equations with initial conditions. Solution method: Adomian decomposition method and Padé technique. Running time: Seconds at most in routine uses of the program. Special tasks may take up to some minutes.
Energy Technology Data Exchange (ETDEWEB)
Roteta, M; Baro, J; Fernandez-Varea, J M; Salvat, F
1994-07-01
The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within approximately 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
International Nuclear Information System (INIS)
Roteta, M.; Baro, J.; Fernandez-Varea, J. M.; Salvat, F.
1994-01-01
The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within approximately 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method
International Nuclear Information System (INIS)
Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A
2009-01-01
A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the results obtained using this method with the exact ones reveals that this modified method is very effective and convenient.
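The exact period that such approximations are measured against has a compact closed form via the complete elliptic integral K. The sketch below evaluates it with the arithmetic-geometric mean (AGM) so that only the standard library is needed; T0 denotes the small-angle period.

```python
import math

def agm(a, b, tol=1e-15):
    # Arithmetic-geometric mean; converges quadratically.
    while abs(a - b) > tol * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return 0.5 * (a + b)

def pendulum_period_ratio(amplitude):
    # Exact pendulum period over the small-angle period:
    # T/T0 = (2/pi) K(m) with m = sin^2(A/2), and
    # K(m) = pi / (2 * AGM(1, sqrt(1 - m))); valid for amplitude < pi.
    m = math.sin(amplitude / 2.0) ** 2
    return 1.0 / agm(1.0, math.sqrt(1.0 - m))

for deg in (10, 90, 150):
    print(deg, round(pendulum_period_ratio(math.radians(deg)), 5))
```

Comparing an approximate period formula against this exact ratio reproduces relative-error curves of the kind quoted in the abstract.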
Approximate approaches to the one-dimensional finite potential well
International Nuclear Information System (INIS)
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A
2011-01-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2m_oV_0L²/ℏ² (or σ = β²σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy upscales with L (E ∼ 1/L^γ) and obtain the exponent γ. Exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students on a first course on elementary quantum mechanics or low-dimensional semiconductors.
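A compact way to check shallow- and deep-well approximations is to solve the exact transcendental condition numerically. The sketch below finds the even-parity ground state with the BenDaniel-Duke matching condition ((1/m) dψ/dx continuous) by bisection; units with ℏ = 1 and illustrative well parameters are assumed.

```python
import math

def ground_state_energy(v0, l=1.0, m_i=1.0, m_o=1.0, hbar=1.0):
    # Even-parity ground state of a finite square well of depth v0 and width l.
    # BenDaniel-Duke matching ((1/m) dpsi/dx continuous) gives the condition
    # (k/m_i) tan(k l / 2) = kappa / m_o.
    def f(e):
        k = math.sqrt(2.0 * m_i * e) / hbar             # wavenumber inside the well
        kappa = math.sqrt(2.0 * m_o * (v0 - e)) / hbar  # decay constant outside
        return (k / m_i) * math.tan(0.5 * k * l) - kappa / m_o
    # The ground state satisfies k*l/2 < pi/2; bisect inside that bracket.
    lo = 1e-12
    hi = min(v0, (math.pi * hbar / l) ** 2 / (2.0 * m_i)) - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

e0 = ground_state_energy(v0=50.0)
print(e0)  # lies below the infinite-well limit pi**2/2 in these units
```

Varying β = m_i/m_o in this root-finder is a quick way to reproduce how the mass discontinuity shifts the levels.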
Approximate analytical solution to the Boussinesq equation with a sloping water-land boundary
Tang, Yuehao; Jiang, Qinghui; Zhou, Chuangbing
2016-04-01
An approximate solution is presented to the 1-D Boussinesq equation (BEQ) characterizing transient groundwater flow in an unconfined aquifer subject to a constant water-level variation at the sloping water-land boundary. The flow equation is decomposed into a linearized BEQ and a head correction equation. The linearized BEQ is solved using a Laplace transform. By means of the frozen-coefficient technique and the Gauss function method, an approximate solution for the head correction equation is obtained, which is further simplified to a closed-form expression under the condition of local energy equilibrium. The solutions of the linearized and head correction equations are discussed in terms of physical concepts. In particular, for the head correction equation, the well-posedness of the approximate solution obtained by the frozen-coefficient method is verified by demonstrating its boundedness, which further yields upper and lower bounds on the error relative to the exact head correction via statistical analysis. The advantage of this approximate solution is its simplicity while preserving the inherent nonlinearity of the physical phenomenon. Comparisons between the analytical and numerical solutions of the BEQ validate that the approximation method achieves desirable precision, even in cases with strong nonlinearity. The proposed approximate solution is applied to various hydrological problems, in which the algebraic expressions that quantify the water flow processes are derived from its basic solutions. The results are useful for quantifying stream-aquifer exchange flow rates, aquifer response to sudden reservoir release, bank storage and depletion, and front position and propagation speed.
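For reference, the linearized BEQ alone (dropping the head correction and the sloping-boundary geometry) reduces to a diffusion equation whose step response at a vertical boundary is the classical complementary-error-function profile; the diffusivity and head values below are hypothetical:

```python
import math

def linearized_head(x, t, dh=1.0, D=0.5):
    """Head rise h(x, t) for the linearized Boussinesq (diffusion) equation
    with a sudden boundary change h(0, t) = dh at a vertical boundary:
        h(x, t) = dh * erfc(x / (2 sqrt(D t)))."""
    return dh * math.erfc(x / (2.0 * math.sqrt(D * t)))
```

The head equals dh at the boundary and decays monotonically into the aquifer; the paper's contribution is precisely the correction terms that this simple solution omits.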
An approximate and an analytical solution to the carousel-pendulum problem
Energy Technology Data Exchange (ETDEWEB)
Vial, Alexandre [Pole Physique, Mecanique, Materiaux et Nanotechnologies, Universite de technologie de Troyes, 12, rue Marie Curie BP-2060, F-10010 Troyes Cedex (France)], E-mail: alexandre.vial@utt.fr
2009-09-15
We show that an improved solution to the carousel-pendulum problem can easily be obtained through a first-order Taylor expansion, and its accuracy is assessed by comparison with an exact analytical solution which, being impractical to use, is advantageously replaced by a numerical one. It is shown that the accuracy is unexpectedly high, even when the ratio of the pendulum length to the carousel radius approaches unity. (letters and comments)
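The equilibrium condition behind the carousel-pendulum problem is tan θ = ω²(R + l sin θ)/g. A hedged sketch comparing the first-order (small-angle) estimate with a numerical fixed-point solution, for made-up parameter values:

```python
import math

g = 9.81     # m/s^2
R = 3.0      # carousel radius (m), hypothetical
l = 2.0      # pendulum length (m), hypothetical
omega = 1.0  # angular velocity (rad/s), chosen so omega^2 * l < g

# first-order Taylor expansion (tan theta ~ sin theta ~ theta):
theta_lin = omega**2 * R / (g - omega**2 * l)

# numerical solution by fixed-point iteration on
#   theta = atan(omega^2 (R + l sin theta) / g)
theta = 0.0
for _ in range(100):
    theta = math.atan(omega**2 * (R + l * math.sin(theta)) / g)
```

For these values the first-order estimate is within a few hundredths of a radian of the numerical answer, consistent with the paper's observation that the approximation is unexpectedly accurate.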
International Nuclear Information System (INIS)
Boisseau, Bruno; Forgacs, Peter; Giacomini, Hector
2007-01-01
A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable to both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to finding the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energies of the skyrmion and of the monopole are also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision. (fast track communication)
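The 'connecting parameter' idea can be contrasted with a plain shooting method. The sketch below is a standard shooting baseline (not the paper's algebraic scheme): it finds the unknown initial slope for the linear test problem y'' = y, y(0) = 0, y(1) = 1, whose exact connecting parameter is y'(0) = 1/sinh(1):

```python
import math

def rk4_shoot(slope, n=200):
    """Integrate y'' = y from x=0 to 1 with y(0)=0, y'(0)=slope (RK4)."""
    h = 1.0 / n
    y, v = 0.0, slope
    for _ in range(n):
        # system (y', v') = (v, y)
        k1y, k1v = v, y
        k2y, k2v = v + h/2*k1v, y + h/2*k1y
        k3y, k3v = v + h/2*k2v, y + h/2*k2y
        k4y, k4v = v + h*k3v, y + h*k3y
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y

# bisection on the initial slope so that y(1) = 1
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rk4_shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)
```

The paper's method replaces this repeated integration with zeros of algebraic equations, but the shooting result gives a useful independent check of any connecting parameter.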
DEFF Research Database (Denmark)
Pedersen, Thomas Quistgaard
In this paper we derive an approximate analytical solution to the optimal consumption and portfolio choice problem of an infinitely-lived investor with power utility defined over the difference between consumption and an external habit. The investor is assumed to have access to two tradable…-linearized surplus consumption ratio. The "difference habit model" implies that the relative risk aversion is time-varying, which is in line with recent evidence from the asset pricing literature. We show that accounting for habit affects both the myopic and intertemporal hedge components of optimal asset demand…, and introduces an additional component that works as a hedge against changes in the investor's habit level. In an empirical application, we calibrate the model to U.S. data and show that habit formation has significant effects on both the optimal consumption and portfolio choice compared to a standard CRRA…
Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations
Castrillon, Julio
2016-03-02
In this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem is remapped onto a corresponding PDE with a fixed deterministic domain. We show that the solution can be analytically extended to a well-defined region in C^N with respect to the random variables. A sparse grid stochastic collocation method is then used to compute the mean and variance of the QoI. Finally, convergence rates for the mean and variance of the QoI are derived and compared to those obtained in numerical experiments.
Energy Technology Data Exchange (ETDEWEB)
Zou, Li [Dalian Univ. of Technology, Dalian City (China). State Key Lab. of Structural Analysis for Industrial Equipment; Liang, Songxin; Li, Yawei [Dalian Univ. of Technology, Dalian City (China). School of Mathematical Sciences; Jeffrey, David J. [Univ. of Western Ontario, London (Canada). Dept. of Applied Mathematics
2017-06-01
Nonlinear boundary value problems arise frequently in physical and mechanical sciences. An effective analytic approach with two parameters is first proposed for solving nonlinear boundary value problems. It is demonstrated that solutions given by the two-parameter method are more accurate than solutions given by the Adomian decomposition method (ADM). It is further demonstrated that solutions given by the ADM can also be recovered from the solutions given by the two-parameter method. The effectiveness of this method is demonstrated by solving some nonlinear boundary value problems modeling beam-type nano-electromechanical systems.
International Nuclear Information System (INIS)
Bozkaya, Uğur; Sherrill, C. David
2016-01-01
An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, designated as the "gradient terms": computation of particle density matrices (PDMs) and the generalized Fock matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of the PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.
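The core of the density-fitting approximation is the factorization of four-index electron repulsion integrals into three-index quantities, (pq|rs) ≈ Σ_Q B_pq^Q B_rs^Q. A toy illustration with random numbers standing in for actual fitted integrals (the dimensions and values are invented):

```python
import random

random.seed(0)
n, naux = 3, 5  # orbital and auxiliary basis sizes (toy values)

# random symmetric 3-index factors, B[p][q][Q] = B[q][p][Q]
B = [[[0.0] * naux for _ in range(n)] for _ in range(n)]
for p in range(n):
    for q in range(p + 1):
        for Q in range(naux):
            B[p][q][Q] = B[q][p][Q] = random.uniform(-1.0, 1.0)

def eri(p, q, r, s):
    """Density-fitted (pq|rs) = sum_Q B_pq^Q B_rs^Q."""
    return sum(B[p][q][Q] * B[r][s][Q] for Q in range(naux))
```

The factorization automatically preserves the (pq|rs) = (rs|pq) permutation symmetry, and only O(n²·naux) numbers are stored instead of O(n⁴), which is the source of the storage and speed advantages the abstract describes.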
An analytic approach to cyber adversarial dynamics
Sweeney, Patrick; Cybenko, George
2012-06-01
To date, cyber security investment by both the government and commercial sectors has been largely driven by the myopic best response of players to the actions of their adversaries and their perception of the adversarial environment. However, current work in applying traditional game theory to cyber operations typically assumes that games exist with prescribed moves, strategies, and payoffs. This paper presents an analytic approach to characterizing the more realistic cyber adversarial metagame that we believe is being played. Examples show that understanding the dynamic metagame provides opportunities to exploit an adversary's anticipated attack strategy. A dynamic version of a graph-based attack-defend game is introduced, and a simulation shows how an optimal strategy can be selected for success in the dynamic environment.
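As a minimal illustration of selecting an optimal strategy in an attack-defend setting (a toy zero-sum matrix game, not the paper's graph-based metagame), the pure-strategy security levels can be computed directly; the payoff values are hypothetical:

```python
# rows: attacker strategies, columns: defender strategies
# entries: attacker payoff (hypothetical values)
payoff = [[3, 1],
          [2, 2]]

# the attacker can guarantee at least the maximin
maximin = max(min(row) for row in payoff)
# the defender concedes at most the minimax
cols = list(zip(*payoff))
minimax = min(max(col) for col in cols)

# attacker's pure security strategy
best_row = max(range(len(payoff)), key=lambda i: min(payoff[i]))
```

When maximin equals minimax the game has a pure-strategy saddle point; otherwise mixed strategies (e.g. via linear programming) are required, and dynamic games like the one in the paper add state transitions on top of this static core.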
Directory of Open Access Journals (Sweden)
M. I. Popov
2016-01-01
An approximate analytical solution is presented for the problem of nonstationary free convection of a Newtonian fluid in the conductive, laminar regime in a square cavity, subject to an instantaneous temperature change at one sidewall and zero heat flux at the top and bottom boundaries. The free convection equations in the Oberbeck-Boussinesq approximation are linearized by neglecting the convective terms. To reduce the number of hydrothermal parameters, the system is nondimensionalized by introducing scales for the dependent and independent variables. Transition from the classical variables to the vorticity-stream function formulation reduces the system to a nonstationary heat conduction equation and a nonstationary nonhomogeneous biharmonic equation, the first being independent of the second. The solution for the stream function is obtained by applying finite-limit Fourier sine transforms to the biharmonic equation, first in the variable x and then in the variable y. The stream function takes the form of a double Fourier sine series with coefficients in integral form. The series coefficients are integrals of unknown functions; assuming an explicit form for these integrals, the coefficients are calculated from the linear equation system obtained from the boundary conditions on the partial derivatives of the stream function. The dependence of the flow structure on the Prandtl number is investigated. Maps of streamlines and isolines of the velocity components are obtained, describing the development of the flow from its onset to the transition to a steady state. Plots of the velocity vector field at various times illustrate the flow dynamics. The validity of the assumed explicit form of the integral coefficients is confirmed by its physical consistency and by the agreement of the results with the numerical solution of the problem.
International Nuclear Information System (INIS)
Chen, C.S.; Yates, S.R.
1989-01-01
In dealing with problems related to land-based nuclear waste management, a number of analytical and approximate solutions were developed to quantify radionuclide transport through fractures contained in the porous formation. It has been reported that, by treating the radioactive decay constant as the appropriate first-order rate constant, these solutions can also be used to study injection problems of a similar nature subject to first-order chemical or biological reactions. The fracture is idealized by a pair of parallel, smooth plates separated by an aperture of constant thickness. Groundwater is assumed to be immobile in the underlying and overlying porous formations due to their low permeabilities. However, the injected radionuclides are able to move from the fracture into the porous matrix by molecular diffusion (matrix diffusion) due to possible concentration gradients across the interface between the fracture and the porous matrix. Calculation of the transient solutions is not straightforward, and the paper documents a self-contained Fortran program which computes the Stehfest inversion and the Airy functions, and gives the concentration distributions in the fracture as well as in the porous matrix for both transient and steady-state cases.
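The Stehfest inversion mentioned above can be sketched in a few lines; this is the standard Gaver-Stehfest algorithm for numerically inverting a Laplace transform (the paper's Fortran program is not reproduced here):

```python
import math

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) using the
    Gaver-Stehfest algorithm (N even; suited to smooth, non-oscillatory f)."""
    ln2 = math.log(2.0)
    fact = math.factorial
    total = 0.0
    for i in range(1, N + 1):
        V = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            V += (k ** (N // 2) * fact(2 * k)
                  / (fact(N // 2 - k) * fact(k) * fact(k - 1)
                     * fact(i - k) * fact(2 * k - i)))
        V *= (-1) ** (N // 2 + i)
        total += V * F(i * ln2 / t)
    return ln2 / t * total
```

For example, F(s) = 1/(s+1) inverts to f(t) = e^(-t) to several digits in double precision; larger N eventually degrades accuracy because the alternating weights grow rapidly.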
Directory of Open Access Journals (Sweden)
Yongliang Wang
2015-01-01
Tilting pad bearings offer unique dynamic stability, enabling successful deployment of high-speed rotating machinery. The model of dynamic stiffness, damping, and added-mass coefficients is often used for rotordynamic analyses, but this method does not suffice to describe the dynamic behaviour due to the nonlinear effects of the oil film force under large shaft vibration or vertical rotor conditions. The objective of this paper is to present a nonlinear oil film force model for finite-length tilting pad journal bearings. An approximate analytic oil film force model is established by analysing the dynamic characteristics of the oil film of a single-pad journal bearing using the variable separation method under the dynamic π oil film boundary condition. An oil film force model of a four-tilting-pad journal bearing is then established using the pad assembly technique and accounting for the pad tilting angle. The validity of the model is verified by analysing the distribution of oil film pressure and the locus of the journal centre for tilting pad journal bearings, and by comparing the model established in this paper with one established using the finite difference method.
Hermite-Padé approximation approach to hydromagnetic flows in convergent-divergent channels
International Nuclear Information System (INIS)
Makinde, O.D.
2005-10-01
The problem of two-dimensional, steady, nonlinear flow of an incompressible conducting viscous fluid in convergent-divergent channels under the influence of an externally applied homogeneous magnetic field is studied using a special type of Hermite-Padé approximation approach. This semi-numerical scheme offers some advantages over solutions obtained by traditional methods such as finite differences, spectral methods and shooting methods. It reveals the analytical structure of the solution function, and important properties of the overall flow structure, including the velocity field, flow reversal control and bifurcations, are discussed. (author)
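Ordinary Padé approximation, the scalar special case of the Hermite-Padé construction used above, can be illustrated by building a [1/1] approximant from Taylor coefficients; the target function below (exp) is an arbitrary choice for demonstration:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1 x)/(1 + b1 x) matching the
    Taylor series c0 + c1 x + c2 x^2 through second order."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Taylor coefficients of exp(x): 1, 1, 1/2
approx = pade_1_1(1.0, 1.0, 0.5)
```

Here approx(x) = (1 + x/2)/(1 - x/2); unlike the truncated Taylor polynomial, the rational form carries a pole, which is what lets Padé-type methods expose the analytical structure (singularities, bifurcations) of a solution function.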
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha
2013-02-01
This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The restriction to irredundant rules does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
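The uncertainty measure R(T) described above, the number of unordered row pairs with different decisions, can be computed directly from the decision column of the table; a minimal sketch:

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows of T with different decisions.

    Count all unordered pairs, then subtract the pairs whose decisions agree.
    """
    n = len(decisions)
    same = sum(m * (m - 1) // 2 for m in Counter(decisions).values())
    return n * (n - 1) // 2 - same
```

In the algorithm's terms, a subtable is left unpartitioned once uncertainty of its decision column is at most β.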
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling
Analytic approach to auroral electron transport and energy degradation
International Nuclear Information System (INIS)
Stamnes, K.
1980-01-01
The interaction of a beam of auroral electrons with the atmosphere is described by the linear transport equation, encompassing discrete energy loss, multiple scattering, and secondary electrons. A solution to the transport equation provides the electron intensity as a function of altitude, pitch angle (with respect to the geomagnetic field) and energy. A multi-stream (discrete ordinate) approximation to the transport equation is developed. An analytic solution is obtained in this approximation. The computational scheme obtained by combining the present transport code with the energy degradation method of Swartz (1979) conserves energy identically. The theory provides a framework within which angular distributions can be easily calculated and interpreted. Thus, a detailed study of the angular distributions of 'non-absorbed' electrons (i.e., electrons that have lost just a small fraction of their incident energy) reveals a systematic variation with incident angle and energy, and with penetration depth. The present approach also gives simple yet accurate solutions in low order multi-stream approximations. The accuracy of the four-stream approximation is generally within a few per cent, whereas two-stream results for backscattered mean intensities and fluxes are accurate to within 10-15%. (author)
International Nuclear Information System (INIS)
Kusba, J.; Sipp, B.
1985-01-01
We present a discussion of the range of validity of the usual approximate transfer rate expressions used in the description of the kinetics of diffusion-modulated excitation transfer, for a reactive interaction of exponential functional form. We simulate the features of energy transfer by a numerical inversion of the exact Laplace transform of the transfer rate. It is shown that for high diffusion coefficients, of the order of 10⁻⁵ cm² s⁻¹, the kinetics may be well reproduced, even at short times, by the asymptotic form of the transfer rate. For slow molecular displacements, the short-time static regime is brought to direct observation, but the transfer rate approaches its asymptotic value at a much later time.
An algebraic approach to the analytic bootstrap
Energy Technology Data Exchange (ETDEWEB)
Alday, Luis F. [Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Zhiboedov, Alexander [Center for the Fundamental Laws of Nature, Harvard University, Cambridge, MA 02138 (United States)
2017-04-27
We develop an algebraic approach to the analytic bootstrap in CFTs. By acting with the Casimir operator on the crossing equation we map the problem of doing large spin sums to any desired order to the problem of solving a set of recursion relations. We compute corrections to the anomalous dimension of large spin operators due to the exchange of a primary and its descendants in the crossed channel and show that this leads to a Borel-summable expansion. We analyse higher order corrections to the microscopic CFT data in the direct channel and its matching to infinite towers of operators in the crossed channel. We apply this method to the critical O(N) model. At large N we reproduce the first few terms in the large spin expansion of the known two-loop anomalous dimensions of higher spin currents in the traceless symmetric representation of O(N) and make further predictions. At small N we present the results for the truncated large spin expansion series of anomalous dimensions of higher spin currents.
Energy Technology Data Exchange (ETDEWEB)
Heng, Kevin; Mendonça, João M.; Lee, Jae-Min, E-mail: kevin.heng@csh.unibe.ch, E-mail: joao.mendonca@csh.unibe.ch, E-mail: lee@physik.uzh.ch [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012 Bern (Switzerland)
2014-11-01
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior), and solutions for the temperature-pressure profiles. Generally, the problem is mathematically underdetermined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat, and the properties of scattering in both the optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing, and incoming fluxes in the convective regime.
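For orientation, the classical gray-atmosphere limit that these generalized solutions contain is the Eddington temperature-optical-depth profile T⁴(τ) = (3/4) T_eff⁴ (τ + 2/3), in which T = T_eff exactly at τ = 2/3 (Milne's photosphere). A sketch of that baseline:

```python
def gray_temperature(tau, T_eff):
    """Eddington gray-atmosphere profile: T(tau)^4 = (3/4) T_eff^4 (tau + 2/3)."""
    return T_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

# in this limit the photosphere sits at tau = 2/3
```

The abstract's point is precisely that with stellar irradiation, internal heat, and non-isotropic scattering included, the photospheric optical depth deviates from this classical 2/3 value.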
Spatial Correlation Of Streamflows: An Analytical Approach
Betterle, A.; Schirmer, M.; Botter, G.
2016-12-01
The interwoven space and time variability of climate and landscape properties results in a complex and non-linear hydrological response of streamflow dynamics. Understanding how meteorological and morphological characteristics of catchments affect the similarity/dissimilarity of streamflow timeseries at their outlets represents a scientific challenge with applications in water resources management, ecological studies and regionalization approaches aimed at predicting streamflows in ungauged areas. In this study, we establish an analytical approach to estimate the spatial correlation of daily streamflows at two arbitrary locations within a given hydrologic district or river basin at seasonal and annual time scales. The method is based on a stochastic description of the coupled streamflow dynamics at the outlets of two catchments. The framework aims to express the correlation of daily streamflows at two locations along a river network as a function of a limited number of physical parameters characterizing the main underlying hydrological drivers, which include climate conditions, precipitation regime and catchment drainage rates. The proposed method portrays how heterogeneity of climate and landscape features affects the spatial variability of flow regimes along river systems. In particular, we show that the frequency and intensity of synchronous effective rainfall events in the relevant contributing catchments are the main drivers of the spatial correlation of daily discharge, whereas only pronounced differences in the drainage rates of the two basins bear a significant effect on the streamflow correlation. The topological arrangement of the two outlets also influences the underlying streamflow correlation, as we show that nested catchments tend to maximize the spatial correlation of flow regimes. The application of the method to a set of catchments in the South-Eastern US suggests the potential of the proposed tool for the characterization of spatial connections of flow regimes in the
International Nuclear Information System (INIS)
Rekab, S.; Zenine, N.
2006-01-01
We consider the three-dimensional non-relativistic eigenvalue problem in the case of a Coulomb potential plus linear and quadratic radial terms. In the framework of Rayleigh-Schrödinger perturbation theory, using a specific choice of the unperturbed Hamiltonian, we obtain approximate analytic expressions for the eigenvalues of orbital excitations. The implications and the range of validity of the obtained analytic expressions are discussed.
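A first-order Rayleigh-Schrödinger check (not the paper's derivation): for a hydrogen-like 1s state, ⟨r⟩ = 3a/2 and ⟨r²⟩ = 3a², so a perturbation b·r + c·r² shifts the ground-state energy by (3/2)ab + 3a²c at first order. The radial expectation values can be verified numerically:

```python
import math

def first_order_shift(a=1.0, b=1.0, c=1.0, r_max=40.0, n=40000):
    """First-order energy shift <psi| b r + c r^2 |psi> for the 1s state
    psi = (pi a^3)^(-1/2) exp(-r/a), by trapezoidal radial integration."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        # radial probability density 4 pi r^2 |psi|^2 = (4/a^3) r^2 exp(-2r/a)
        dens = 4.0 / a**3 * r**2 * math.exp(-2.0 * r / a)
        total += w * dens * (b * r + c * r**2)
    return total * h

shift = first_order_shift()            # numerical
exact = 1.5 * 1.0 * 1.0 + 3.0 * 1.0    # (3/2) a b + 3 a^2 c with a = b = c = 1
```

This checks only the leading perturbative order for the lowest state; the paper's expressions cover orbital excitations, which require the corresponding ⟨r⟩ and ⟨r²⟩ for higher hydrogenic states.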
An approximate approach to quantum mechanical study of biomacromolecules
Chen, Xihua
method/basis-set levels of the quantum chemical calculation on the MFCC-downhill simplex optimization are also discussed. Finally, the MFCC-downhill simplex method is tested, as a general multiatomic case study, on a molecular system of cyclo-AAGAGG·H 2O to optimize the binding structure of water molecule to the fixed cyclohexapeptide. The MFCC-downhill simplex optimization results in good agreement with the crystal structure. The MFCC-downhill simplex method should be applicable to optimize the structures of ligands that bind to biomacromolecules such as proteins and DNAs. In Chapter 4, we propose a new approximate method for efficient calculation of biomacromolecular electronic properties, using a Density Matrix (DM) scheme which is integrated with the MFCC approach. In this MFCC-DM method, a biomacro-molecule such as a protein is partitioned by an MFCC scheme into properly capped fragments and concaps whose density matrices are calculated by conventional ab initio methods. These sub-system density matrices are then assembled to construct the full system density matrix which is finally employed to calculate the electronic energy, dipole moment, electronic density, electrostatic potential, etc., of the protein using Hartree-Fock or Density Functional Theory methods. By this MFCC-DM method, the self-consistent field (SCF) procedure for solving the full Hamiltonian problem is circumvented. Two implementations of this approach, MFCC-SDM and MFCC-GDM, are discussed. Systematic numerical studies are carried out on a series of extended polyglycines CH3CO-(GLY) n-NHCH3 (n=3-25) and excellent results are obtained. In Chapter 5, we present an improvement of MFCC-DM method and introduce a pairwise interaction correction (PIC) with which the MFCC-DM method is applicable to study a real-world protein with short-range structural complexity such as hydrogen bonding and close contact. In this MFCC-DM-PIC method, a protein molecule is partitioned into properly capped fragments and
Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur
2018-03-01
Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it also allows for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced by a numerical solution using a statistical model. This error-correcting process models numerical errors that accumulate forward in time and the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
Approximated and User Steerable tSNE for Progressive Visual Analytics
Pezzotti, N.; Lelieveldt, B.P.F.; van der Maaten, L.J.P.; Hollt, T.; Eisemann, E.; Vilanova Bartroli, A.
2016-01-01
Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.
2011-01-01
We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity
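A generic time-adaptivity loop of the kind discussed, step-doubling error estimation with accept/reject logic, can be sketched on a scalar test ODE (this is a standard pattern, not the authors' estimator for the diffusive wave equations):

```python
import math

def integrate_adaptive(f, y0, t0, t1, tol=1e-6):
    """Explicit Euler with step-doubling: compare one step of size h with
    two steps of size h/2 and use their difference as a local error estimate."""
    t, y = t0, y0
    h = (t1 - t0) / 10.0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        y_full = y + h * f(t, y)                       # one step of size h
        y_half = y + 0.5 * h * f(t, y)                 # two steps of size h/2
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_two - y_full)
        if err < tol:
            t += h
            y = 2.0 * y_two - y_full  # local Richardson extrapolation
            h *= 1.5                  # grow the step while the error is small
        else:
            h *= 0.5                  # reject and retry with a smaller step
    return y

y_end = integrate_adaptive(lambda t, y: -y, 1.0, 0.0, 1.0)  # dy/dt = -y
```

The step size shrinks where the solution changes rapidly and grows where it is smooth, which is the computational economy the abstract refers to.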
Pavement Performance : Approaches Using Predictive Analytics
2018-03-23
Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...
International Nuclear Information System (INIS)
Lublinsky, M.
2004-01-01
A simple analytic expression for the non-singlet structure function f_NS is given. The expression is derived from the result of B. I. Ermolaev et al. (1996), obtained by low-x resummation of the quark ladder diagrams in the double logarithmic approximation of perturbative QCD. (orig.)
Controllability distributions and systems approximations: a geometric approach
Ruiz, A.C.; Nijmeijer, Henk
1994-01-01
Given a nonlinear system we determine a relation at an equilibrium between controllability distributions defined for a nonlinear system and a Taylor series approximation of it. The value of such a relation is appreciated if we recall that the solvability conditions as well as the solutions to some
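A concrete instance of relating a nonlinear system to its Taylor (first-order) approximation at an equilibrium: linearize a pendulum with torque input and check controllability of the resulting pair (A, B). The example system is an illustration of mine, not one from the paper:

```python
# nonlinear system: x1' = x2, x2' = -sin(x1) + u
# Jacobian linearization at the equilibrium (x1, x2, u) = (0, 0, 0),
# using d(-sin x1)/dx1 = -1 at x1 = 0:
A = [[0.0, 1.0],
     [-1.0, 0.0]]
B = [0.0, 1.0]

# controllability matrix C = [B, A B] for the 2-state linearization
AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]
C = [[B[0], AB[0]],
     [B[1], AB[1]]]
det_C = C[0][0] * C[1][1] - C[0][1] * C[1][0]
controllable = abs(det_C) > 1e-12  # full rank iff det != 0 for 2x2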
Controllability distributions and systems approximations: a geometric approach
Ruiz, A.C.; Nijmeijer, Henk
1992-01-01
Given a nonlinear system, a relation between controllability distributions defined for a nonlinear system and a Taylor series approximation of it is determined. Special attention is given to this relation at the equilibrium. It is known from nonlinear control theory that the solvability conditions
Learning analytics approach of EMMA project
Tammets, Kairit; Brouns, Francis
2014-01-01
The EMMA project provides a MOOC platform that aggregates and delivers massive open online courses (MOOCs) in multiple languages from a variety of European universities. Learning analytics play an important role in MOOCs in supporting the individual needs of the learner.
An Approximation Approach for Solving the Subpath Planning Problem
Safilian, Masoud; Tashakkori, S. Mehdi; Eghbali, Sepehr; Safilian, Aliakbar
2016-01-01
The subpath planning problem is a branch of the path planning problem with widespread applications in automated manufacturing processes as well as vehicle and robot navigation. The problem is to find the shortest path or tour that traverses a set of given subpaths. The current approaches for dealing with the subpath planning problem are all based on meta-heuristics. It is well known that meta-heuristic approaches have several deficiencies. To address them, we prop...
Earth's core convection: Boussinesq approximation or incompressible approach?
Czech Academy of Sciences Publication Activity Database
Anufriev, A. P.; Hejda, Pavel
2010-01-01
Roč. 104, č. 1 (2010), s. 65-83 ISSN 0309-1929 R&D Projects: GA AV ČR IAA300120704 Grant - others:INTAS(XE) 03-51-5807 Institutional research plan: CEZ:AV0Z30120515 Keywords : geodynamic models * core convection * Boussinesq approximation Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.831, year: 2010
Approximate analytical solutions in the analysis of elastic structures of complex geometry
Goloskokov, Dmitriy P.; Matrosov, Alexander V.
2018-05-01
A method of analytical decomposition for the analysis of plane structures of complex configuration is presented. For each part of the structure in the form of a rectangle, all the components of the stress-strain state are constructed by the superposition method. The method is based on two solutions derived in the form of trigonometric series with unknown coefficients using the method of initial functions. The coefficients are determined from the system of linear algebraic equations obtained by satisfying the boundary conditions and the conditions for joining the structure parts. The components of the stress-strain state of a bent plate with holes are calculated using the analytical decomposition method.
International Nuclear Information System (INIS)
Belendez, A.; Hernandez, A.; Belendez, T.; Neipp, C.; Marquez, A.
2008-01-01
He's homotopy perturbation method is used to calculate higher-order approximate periodic solutions of a nonlinear oscillator with discontinuity for which the elastic force term is proportional to sgn(x). We find that He's homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed. Only one iteration leads to high accuracy of the solutions, with a maximal relative error for the approximate period of less than 1.56% for all values of oscillation amplitude, while this relative error is 0.30% for the second iteration and as low as 0.057% when the third-order approximation is considered. Comparison of the results obtained using this method with those obtained by different harmonic balance methods reveals that He's homotopy perturbation method is very effective and convenient.
Analytic approximation to the largest eigenvalue distribution of a white Wishart matrix
CSIR Research Space (South Africa)
Vlok, JD
2012-08-14
Full Text Available The proposed approximation offers largely simplified computation and provides statistics such as the mean value and region of support of the largest eigenvalue distribution. Numeric results from the literature are compared with the approximation and with Monte Carlo simulation results...
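As a rough numerical illustration of the quantity being approximated (a sketch under assumptions, not the record's method): for a real white Wishart matrix W = X Xᵀ, with X a p×n standard Gaussian matrix, the mean of the largest eigenvalue can be estimated by Monte Carlo and compared with the leading-order analytic estimate given by the upper edge (√n + √p)² of the spectral support. The sample sizes and trial count below are arbitrary.

```python
import numpy as np

def largest_eig_mean(p, n, trials=200, seed=0):
    """Monte Carlo mean of the largest eigenvalue of W = X @ X.T,
    where X is a p x n matrix of i.i.d. standard Gaussians."""
    rng = np.random.default_rng(seed)
    vals = np.empty(trials)
    for t in range(trials):
        X = rng.standard_normal((p, n))
        vals[t] = np.linalg.eigvalsh(X @ X.T).max()
    return vals.mean()

p, n = 20, 100
mc_mean = largest_eig_mean(p, n)
edge = (np.sqrt(n) + np.sqrt(p)) ** 2   # upper edge of the spectral support
```

For moderate p and n the Monte Carlo mean sits within a few percent of the support edge; refined analytic approximations of the kind discussed above correct for the finite-size fluctuations around it.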
Energy Technology Data Exchange (ETDEWEB)
Barlow, Nathaniel S., E-mail: nsbsma@rit.edu [School of Mathematical Sciences, Rochester Institute of Technology, Rochester, New York 14623 (United States); Schultz, Andrew J., E-mail: ajs42@buffalo.edu; Kofke, David A., E-mail: kofke@buffalo.edu [Department of Chemical and Biological Engineering, University at Buffalo, State University of New York, Buffalo, New York 14260 (United States); Weinstein, Steven J., E-mail: sjweme@rit.edu [Department of Chemical Engineering, Rochester Institute of Technology, Rochester, New York 14623 (United States)
2015-08-21
The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.
International Nuclear Information System (INIS)
Kazarnovskij, M.V.; Matushko, G.K.; Matushko, V.L.; Par'ev, Eh.Ya.; Serezhnikov, S.V.
1981-01-01
The problem of propagation of the internuclear cascade initiated by nucleons of 0.1-1 GeV energy in accelerator shielding is solved approximately in analytical form. Analytical expressions for the spatial, angular and energy distribution of the flux density of nucleons with energy above 20 MeV, and for some functionals of it, are obtained. The results of the calculations obtained by the developed methods are compared with those obtained by the method of direct simulation. It is shown that at the atomic mass of shielding material...
Analytical approximations of diving-wave imaging in constant-gradient medium
Stovas, Alexey
2014-06-24
Full-waveform inversion (FWI) in practical applications is currently used to invert the direct arrivals (diving waves, no reflections) using relatively long offsets. This is driven mainly by the high nonlinearity introduced to the inversion problem when reflection data are included, which in some cases requires extremely low frequencies for convergence. However, analytical insights into diving waves have lagged behind this sudden interest. We use analytical formulas that describe the diving wave's behavior and traveltime in a constant-gradient medium to develop insights into the traveltime moveout of diving waves and the image (model) point dispersal (residual) when the wrong velocity is used. The explicit formulations that describe these phenomena reveal the high dependence of diving-wave imaging on the gradient and the initial velocity. The analytical image point residual equation can be further used to scan for the best-fit linear velocity model, which is now commonly used as an initial velocity model for FWI. We determined the accuracy and versatility of these analytical formulas through numerical tests.
Analytical fuzzy approach to biological data analysis
Directory of Open Access Journals (Sweden)
Weiping Zhang
2017-03-01
Full Text Available The assessment of the physiological state of an individual requires an objective evaluation of biological data while taking into account both measurement noise and uncertainties arising from individual factors. We suggest representing multi-dimensional medical data by means of an optimal fuzzy membership function. A carefully designed data model is introduced in a completely deterministic framework where uncertain variables are characterized by fuzzy membership functions. The study derives the analytical expressions of fuzzy membership functions on variables of the multivariate data model by maximizing the over-uncertainties-averaged log-membership values of data samples around an initial guess. The analytical solution lends itself to a practical modeling algorithm facilitating the data classification. Experiments performed on the heartbeat interval data of 20 subjects verified that the proposed method is a competitive alternative to typically used pattern recognition and machine learning algorithms.
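The idea of grading how "typical" a measurement is via a membership function can be sketched in a few lines. This is a deliberate simplification, not the paper's optimal derivation: a Gaussian membership centred on the sample statistics of one variable, with made-up heartbeat (RR-interval) values.

```python
import numpy as np

def gaussian_membership(x, data):
    """Toy fuzzy membership for one variable: a Gaussian centred on the
    sample mean (a simplification; the paper derives optimal memberships
    by maximizing averaged log-membership values)."""
    mu, sigma = data.mean(), data.std()
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical RR intervals in seconds (illustrative data, not from the study).
rri = np.array([0.78, 0.81, 0.80, 0.83, 0.79, 0.85, 0.77, 0.82])
m_typical = gaussian_membership(0.80, rri)   # value near the bulk of the data
m_outlier = gaussian_membership(1.20, rri)   # value far outside it
```

A typical value receives membership near 1 and an outlier near 0, which is the property a classifier built on such functions exploits.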
An approximate methods approach to probabilistic structural analysis
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
Bessel collocation approach for approximate solutions of Hantavirus infection model
Directory of Open Access Journals (Sweden)
Suayip Yuzbasi
2017-11-01
Full Text Available In this study, a collocation method is introduced to find approximate solutions of the Hantavirus infection model, which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations, and collocation points. It converts the Hantavirus infection model into a matrix equation in terms of the Bessel functions of the first kind, matrix operations, and collocation points. The matrix equation corresponds to a system of nonlinear equations with the unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications, and all numerical calculations have been done using a program written in Maple.
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan
2011-05-14
We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one dimensional results presented in this work feature time step sizes that vary by four orders of magnitude over the entire simulation.
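The mechanics of such an error-controlled time stepper can be sketched with a toy step-doubling controller for forward Euler (a generic illustration, not the paper's estimator): the local error is estimated by comparing one full step with two half steps, and the step size is rescaled to hold that estimate below a user-specified tolerance.

```python
import numpy as np

def adaptive_euler(f, y0, t_end, tol=1e-4, dt=1.0):
    """Advance y' = f(t, y) with forward Euler, estimating the local error
    by comparing one full step against two half steps and rescaling dt
    to meet a user-specified tolerance."""
    t, y, steps = 0.0, y0, []
    while t < t_end:
        dt = min(dt, t_end - t)
        full = y + dt * f(t, y)
        half = y + 0.5 * dt * f(t, y)
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(two_half - full)              # local error estimate
        if err <= tol or dt < 1e-12:            # accept the (more accurate) step
            t, y = t + dt, two_half
            steps.append(dt)
        dt *= 0.9 * np.sqrt(tol / max(err, 1e-16))  # Euler: local error ~ dt^2
    return y, steps

y_end, steps = adaptive_euler(lambda t, y: -y, 1.0, 5.0)
```

For this decaying test problem the controller automatically grows the step as the solution flattens, so the accepted step sizes span an order of magnitude or more, the same qualitative behavior the abstract reports at a much larger scale.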
Forecasting Hotspots-A Predictive Analytics Approach.
Maciejewski, R; Hafen, R; Rudolph, S; Larew, S G; Mitchell, M A; Cleveland, W S; Ebert, D S
2011-04-01
Current visual analytics systems provide users with the means to explore trends in their data. Linked views and interactive displays provide insight into correlations among people, events, and places in space and time. Analysts search for events of interest through statistical tools linked to visual displays, drill down into the data, and form hypotheses based upon the available information. However, current systems stop short of predicting events. In spatiotemporal data, analysts are searching for regions of space and time with unusually high incidences of events (hotspots). In the cases where hotspots are found, analysts would like to predict how these regions may grow in order to plan resource allocation and preventative measures. Furthermore, analysts would also like to predict where future hotspots may occur. To facilitate such forecasting, we have created a predictive visual analytics toolkit that provides analysts with linked spatiotemporal and statistical analytic views. Our system models spatiotemporal events through the combination of kernel density estimation for event distribution and seasonal trend decomposition by loess smoothing for temporal predictions. We provide analysts with estimates of error in our modeling, along with spatial and temporal alerts to indicate the occurrence of statistically significant hotspots. Spatial data are distributed based on a modeling of previous event locations, thereby maintaining a temporal coherence with past events. Such tools allow analysts to perform real-time hypothesis testing, plan intervention strategies, and allocate resources to correspond to perceived threats.
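The density-estimation half of such a toolkit can be sketched in a few lines (the seasonal-trend decomposition and alerting machinery are omitted; the data, grid, bandwidth, and quantile threshold below are made-up illustrations): past event locations are smoothed with a Gaussian kernel, and grid cells whose estimated intensity exceeds a high quantile are flagged as hotspots.

```python
import numpy as np

def kde_hotspots(events, grid, bandwidth=0.5, quantile=0.95):
    """Gaussian kernel density estimate of event intensity on a grid,
    flagging cells above a high quantile as hotspots."""
    d2 = ((grid[:, None, :] - events[None, :, :]) ** 2).sum(axis=-1)
    dens = np.exp(-d2 / (2.0 * bandwidth**2)).sum(axis=1)
    dens /= dens.sum()
    return dens > np.quantile(dens, quantile), dens

rng = np.random.default_rng(1)
events = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # a dense cluster at the origin
                    rng.uniform(-3, 3, (20, 2))])    # diffuse background events
xs = np.linspace(-3, 3, 13)
grid = np.array([(x, y) for x in xs for y in xs])
hot, dens = kde_hotspots(events, grid)
```

The flagged cells concentrate around the synthetic cluster; in the full system, per-cell temporal predictions would then be attached to each flagged region.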
International Nuclear Information System (INIS)
Wei Gaofeng; Dong Shihai
2008-01-01
In this Letter the approximately analytical bound state solutions of the Dirac equation with the Manning-Rosen potential for arbitrary spin-orbit coupling quantum number k are carried out by taking a properly approximate expansion for the spin-orbit coupling term. In the case of exact spin symmetry, the associated two-component spinor wave functions of the Dirac equation for arbitrary spin-orbit quantum number k are presented and the corresponding bound state energy equation is derived. We study briefly two special cases; the general s-wave problem and the equal scalar and vector Manning-Rosen potential
Approximate dynamic programming approaches for appointment scheduling with patient preferences.
Li, Xin; Wang, Jin; Fung, Richard Y K
2018-04-01
During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms.
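The core trade-off (accept a request now versus keep slots free for future, better-matched requests) can be illustrated with a tiny finite-horizon dynamic program. This is a hypothetical miniature, not the paper's MDP: 3 time slots, 4 sequential requests each uniformly preferring one slot, reward 1 for granting the preferred slot, 0.5 for any other free slot, 0 for rejection.

```python
# Toy finite-horizon DP for sequential booking with preferences.
SLOTS, T = 3, 4

def best_value(free, t, pref):
    """Optimal expected reward when request t prefers slot `pref` and
    `free` is the frozenset of still-open slots."""
    options = [(0.0, free)]                        # reject the request
    for s in free:
        reward = 1.0 if s == pref else 0.5
        options.append((reward, free - {s}))
    return max(r + future(nf, t + 1) for r, nf in options)

def future(free, t):
    """Expected value over the next request's (uniform) slot preference."""
    if t == T or not free:
        return 0.0
    return sum(best_value(free, t, p) for p in range(SLOTS)) / SLOTS

v = future(frozenset(range(SLOTS)), 0)   # value of an empty booking diary
```

Exact backward induction like this blows up combinatorially as slots, physicians, and horizon grow, which is precisely why the paper resorts to approximate dynamic programming.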
Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results
GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.
2013-03-01
While most Monte Carlo simulations assume only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest neighbor (NNN) interactions and, more generally, interactions out to the qth nearest neighbor alter the form of the terrace-width distribution and of pair correlation functions (i.e., the sum over nth-neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
An analytical approximation for the prediction of transients with temperature feedback
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Martinez, Aquilino S.
2010-01-01
In the present paper a new analytical solution for the point kinetics equation system with temperature feedback is presented. This solution is based on the expansion of the neutron density in terms of the generation time of prompt neutrons (Nahla, 2009) and presents the advantage of being explicit in time and having a simple functional form in comparison with other existing formulations in supercritical transients. (orig.)
Ene, Remus-Daniel; Marinca, Vasile; Marinca, Bogdan
2016-01-01
Analytic approximate solutions using the Optimal Homotopy Perturbation Method (OHPM) are given for steady boundary layer flow over a nonlinearly stretching wall in the presence of partial slip at the boundary. The governing equations are reduced to a nonlinear ordinary differential equation by means of similarity transformations. Some examples are considered and the effects of different parameters are shown. OHPM is a very efficient procedure, ensuring a very rapid convergence of the solutions after only two iterations.
A simple analytic approximation to the Rayleigh-Bénard stability threshold
Prosperetti, Andrea
2011-01-01
The Rayleigh-Bénard linear stability problem is solved by means of a Fourier series expansion. It is found that truncating the series to just the first term gives an excellent explicit approximation to the marginal stability relation between the Rayleigh number and the wave number of the
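The flavor of such explicit marginal-stability relations can be shown with the classical free-free boundary case, for which the relation Ra(a) = (π² + a²)³/a² is exact (this is the standard textbook analogue, not the paper's truncated-series result for other boundary conditions). A short numerical minimization recovers the known critical wave number a_c = π/√2 and critical Rayleigh number Ra_c = 27π⁴/4 ≈ 657.5.

```python
import numpy as np

# Marginal-stability relation for free-free boundaries: Ra(a) = (pi^2 + a^2)^3 / a^2.
a = np.linspace(0.5, 6.0, 100001)
Ra = (np.pi**2 + a**2) ** 3 / a**2
i = Ra.argmin()
a_c, Ra_c = a[i], Ra[i]   # critical wave number and critical Rayleigh number
```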
Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations
Castrillon, Julio; Nobile, Fabio; Tempone, Raul
2016-01-01
In this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem
Directory of Open Access Journals (Sweden)
G. H. Gudmundsson
2008-07-01
Full Text Available New analytical solutions describing the effects of small-amplitude perturbations in boundary data on flow in the shallow-ice-stream approximation are presented. These solutions are valid for a non-linear Weertman-type sliding law and for Newtonian ice rheology. Comparison is made with corresponding solutions of the shallow-ice-sheet approximation, and with solutions of the full Stokes equations. The shallow-ice-stream approximation is commonly used to describe large-scale ice stream flow over a weak bed, while the shallow-ice-sheet approximation forms the basis of most current large-scale ice sheet models. It is found that the shallow-ice-stream approximation overestimates the effects of bed topography perturbations on surface profile for wavelengths less than about 5 to 10 ice thicknesses, the exact number depending on values of surface slope and slip ratio. For high slip ratios, the shallow-ice-stream approximation gives a very simple description of the relationship between bed and surface topography, with the corresponding transfer amplitudes being close to unity for any given wavelength. The shallow-ice-stream estimates for the timescales that govern the transient response of ice streams to external perturbations are considerably more accurate than those based on the shallow-ice-sheet approximation. In particular, in contrast to the shallow-ice-sheet approximation, the shallow-ice-stream approximation correctly reproduces the short-wavelength limit of the kinematic phase speed given by solving a linearised version of the full Stokes system. In accordance with the full Stokes solutions, the shallow-ice-stream approximation predicts surface fields to react weakly to spatial variations in basal slipperiness with wavelengths less than about 10 to 20 ice thicknesses.
International Nuclear Information System (INIS)
Mery, P.
1977-01-01
The operator and matrix Padé approximations are defined. The fact that these approximants can be derived from the Schwinger variational principle is emphasized. In potential theory, using this variational aspect, it is shown that the matrix Padé approximation allows one to reproduce the exact solution of the Lippmann-Schwinger equation to any required accuracy, taking into account only the knowledge of the first two coefficients in the Born expansion. The deep analytic structure of this variational matrix Padé approximation (hyper Padé approximation) is discussed.
Directory of Open Access Journals (Sweden)
Mohammad Mehdi Rashidi
2008-01-01
Full Text Available The flow of a viscous incompressible fluid between two parallel plates due to the normal motion of the plates is investigated. The unsteady Navier-Stokes equations are reduced to a nonlinear fourth-order differential equation by using similarity solutions. The homotopy analysis method (HAM) is used to solve this nonlinear equation analytically. The convergence of the obtained series solution is carefully analyzed. The validity of our solutions is verified by the numerical results obtained by the fourth-order Runge-Kutta method.
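The reference solver named in the abstract, the classical fourth-order Runge-Kutta scheme, is a few lines of code. A minimal textbook sketch (applied here to the simple test problem y' = -y, not the paper's reduced fourth-order boundary-value problem):

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integrator for y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)   # weighted slope average
        t += h
    return y

y1 = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)   # approximates exp(-1)
```

With 100 steps the global error is below 1e-8, which is why RK4 serves as a trustworthy yardstick for series solutions such as HAM.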
Spark - a modern approach for distributed analytics
CERN. Geneva; Kothuri, Prasanth
2016-01-01
The Hadoop ecosystem is the leading open-source platform for distributed storage and processing of big data. It is a very popular system for implementing data warehouses and data lakes. Spark has also emerged as one of the leading engines for data analytics. The Hadoop platform is available at CERN as a central service provided by the IT department. By attending the session, a participant will acquire knowledge of the essential concepts needed to benefit from the parallel data processing offered by the Spark framework. The session is structured around practical examples and tutorials. Main topics: architecture overview (work distribution, the concepts of a worker and a driver); the computing concepts of transformations and actions; data processing APIs (RDD, DataFrame, and SparkSQL).
Directory of Open Access Journals (Sweden)
Hua Yang
2012-01-01
Full Text Available We are concerned with stochastic differential delay equations with Poisson jump and Markovian switching (SDDEsPJMSs). Like most stochastic differential equations, SDDEsPJMSs can rarely be solved explicitly. Therefore, numerical solutions have become an important issue in the study of SDDEsPJMSs. The key contribution of this paper is to investigate the strong convergence between the true solutions and the numerical solutions of SDDEsPJMSs when the drift and diffusion coefficients are Taylor approximations.
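The simplest numerical scheme in this family, Euler-Maruyama for a scalar delay SDE, can be sketched as follows (diffusion-only: the Poisson-jump and Markov-switching terms of the paper's SDDEsPJMSs are omitted, and the coefficients below are made up).

```python
import numpy as np

def em_delay(a, b, tau, x0, T, n, seed=None):
    """Euler-Maruyama for the scalar delay SDE
        dX(t) = a*X(t - tau) dt + b*X(t) dW(t),  with X(t) = x0 for t <= 0."""
    rng = np.random.default_rng(seed)
    h = T / n
    lag = int(round(tau / h))
    x = np.full(n + 1, x0)                  # x[k] approximates X(k*h)
    for k in range(n):
        x_lag = x[k - lag] if k >= lag else x0
        dW = rng.standard_normal() * np.sqrt(h) if b else 0.0
        x[k + 1] = x[k] + a * x_lag * h + b * x[k] * dW
    return x[-1]

# With b = 0 the scheme reduces to a delay ODE whose piecewise-polynomial
# solution is known: for a = -1, tau = 0.5, x0 = 1, the exact value x(1) = 0.125.
x_det = em_delay(-1.0, 0.0, 0.5, 1.0, 1.0, 2000)
x_sto = em_delay(-1.0, 0.2, 0.5, 1.0, 1.0, 2000, seed=3)
```

Checking the deterministic limit against a closed-form solution is a standard sanity test before studying strong convergence of the stochastic scheme.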
A discourse-analytical approach to intertextual advertisements: a ...
African Journals Online (AJOL)
A discourse-analytical approach to intertextual advertisements: a model to describe a dominant world-view. ... The intertextual messages in advertising discourse can be regarded as generally accepted shared ...
Optimal and Approximate Approaches for Deployment of Heterogeneous Sensing Devices
Directory of Open Access Journals (Sweden)
Rabie Ramadan
2007-04-01
Full Text Available A modeling framework for the problem of deploying a set of heterogeneous sensors in a field with time-varying differential surveillance requirements is presented. The problem is formulated as a mixed-integer mathematical program with the objective of maximizing the coverage of a given field. Two metaheuristics are used to solve this problem. The first heuristic adopts a genetic algorithm (GA) approach, while the second implements a simulated annealing (SA) algorithm. A set of experiments is used to illustrate the capabilities of the developed models and to compare their performance. The experiments investigate the effect of parameters related to the size of the sensor deployment problem, including the number of deployed sensors, the size of the monitored field, and the length of the monitoring horizon. They also examine several endogenous parameters related to the developed GA and SA algorithms.
Green functions of graphene: An analytic approach
Energy Technology Data Exchange (ETDEWEB)
Lawlor, James A., E-mail: jalawlor@tcd.ie [School of Physics, Trinity College Dublin, Dublin 2 (Ireland); Ferreira, Mauro S. [School of Physics, Trinity College Dublin, Dublin 2 (Ireland); CRANN, Trinity College Dublin, Dublin 2 (Ireland)
2015-04-15
In this article we derive the lattice Green Functions (GFs) of graphene using a Tight Binding Hamiltonian incorporating both first and second nearest neighbour hoppings and allowing for a non-orthogonal electron wavefunction overlap. It is shown how the resulting GFs can be simplified from a double to a single integral form to aid computation, and that when considering off-diagonal GFs in the high symmetry directions of the lattice this single integral can be approximated very accurately by an algebraic expression. By comparing our results to the conventional first nearest neighbour model commonly found in the literature, it is apparent that the extended model leads to a sizeable change in the electronic structure away from the linear regime. As such, this article serves as a blueprint for researchers who wish to examine quantities where these considerations are important.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
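The standard analytic baseline that finite-range models such as the one above modify is the generalized Wigner surmise, P(s) = a s^ρ exp(-b s²), with a and b fixed by normalization and unit mean terrace width: b = [Γ((ρ+2)/2)/Γ((ρ+1)/2)]² and a = 2 b^((ρ+1)/2)/Γ((ρ+1)/2). A short numerical check of both constraints:

```python
import math
import numpy as np

def wigner_twd(rho):
    """Return P(s) = a * s**rho * exp(-b * s**2), with a, b fixed by
    normalization and unit mean step spacing."""
    b = (math.gamma((rho + 2) / 2) / math.gamma((rho + 1) / 2)) ** 2
    a = 2.0 * b ** ((rho + 1) / 2) / math.gamma((rho + 1) / 2)
    return lambda s: a * s**rho * np.exp(-b * s**2)

P = wigner_twd(2.0)              # rho = 2: the classic "unitary" repulsion
s = np.linspace(0.0, 6.0, 200001)
ds = s[1] - s[0]
norm = (P(s) * ds).sum()         # should integrate to 1
mean = (s * P(s) * ds).sum()     # should have unit mean
```

For ρ = 2 this reproduces the familiar P(s) = (32/π²) s² exp(-4s²/π); mapping the interacting step system onto equivalent 1D particles, as in the abstract, effectively shifts ρ and reshapes the tails.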
International Nuclear Information System (INIS)
Gurler, O.; Yalcin, S.; Gultekin, A.; Kaynak, G.; Gundogdu, O.
2006-01-01
The energy distributions of beta particles which penetrated a certain matter thickness were studied experimentally and theoretically by using a surface barrier solid state detector. A valid theoretical expression based on average values between energy and distance traveled during the slowing down of the electron was obtained. Two analytical expressions were proposed; one for the energy distribution of monoenergetic electrons which penetrated a certain matter thickness, and one for the response function in the detector for monoenergetic electrons detected with its entire energy. Response functions of the detector for beta particles emitted from the ²⁰⁴Tl isotope which penetrated a certain matter thickness were obtained for two different aluminum thicknesses, and the results were discussed by comparing with experimental energy spectra.
International Nuclear Information System (INIS)
Smith, N.; Pritchard, D.E.
1981-01-01
We have recently demonstrated that the energy corrected sudden (ECS) scaling law of De Pristo et al., when combined with the power-law assumption for the basis rates k_{l→0} ∝ [l(l+1)]^(-g), can accurately fit a wide body of rotational energy transfer data. We develop a simple and accurate approximation to this fitting law, and in addition mathematically show the connection between it and our earlier proposed energy-based law, which has also been successful in describing both theoretical and experimental data on rotationally inelastic collisions.
Ball Bearing Stiffnesses- A New Approach Offering Analytical Expressions
Guay, Pascal; Frikha, Ahmed
2015-09-01
Space mechanisms use preloaded ball bearings in order to withstand the severe vibrations during launch. Launch strength analysis requires the calculation of the bearing stiffness, but this calculation is complex. Nowadays, there is no analytical expression that gives the stiffness of a bearing; stiffness is computed using an iterative algorithm such as Newton-Raphson to solve the nonlinear system of equations. This paper aims at offering a simplified analytical approach based on the assumption that the contact angle is constant. This approach gives analytical formulas for the stiffness of a preloaded ball bearing.
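The contrast between the iterative and analytical routes can be illustrated on a single Hertzian contact, F = K δ^(3/2) (an illustrative contact model with a made-up stiffness constant, not the paper's full bearing formulation): Newton-Raphson solves for the deflection, and the resulting numerical compliance is checked against the closed-form derivative dδ/dF = 1/(1.5 K √δ).

```python
K0 = 8.0e9   # made-up Hertzian contact constant, N / m**1.5

def hertz_deflection(F, K=K0, tol=1e-12):
    """Newton-Raphson solve of F = K * delta**1.5 for the contact deflection."""
    delta = 1.5 * (F / K) ** (2.0 / 3.0)        # deliberately offset initial guess
    for _ in range(50):
        r = K * delta**1.5 - F                  # load residual
        if abs(r) < tol * F:
            break
        delta -= r / (1.5 * K * delta**0.5)     # Newton update
    return delta

F = 500.0                                       # preload, newtons
d = hertz_deflection(F)
# Axial compliance: central finite difference vs the closed-form d(delta)/dF.
c_numeric = (hertz_deflection(1.001 * F) - hertz_deflection(0.999 * F)) / (0.002 * F)
c_analytic = 1.0 / (1.5 * K0 * d**0.5)
```

The agreement between the two compliances is the one-contact analogue of the paper's claim: under a constant contact angle, the bearing-level stiffness admits a closed form that replaces the Newton-Raphson loop.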
SU-F-T-144: Analytical Closed Form Approximation for Carbon Ion Bragg Curves in Water
Energy Technology Data Exchange (ETDEWEB)
Tuomanen, S; Moskvin, V; Farr, J [St. Jude Children’s Research Hospital, Memphis, TN (United States)
2016-06-15
Purpose: Semi-empirical modeling is a powerful computational method in radiation dosimetry. A set of approximations exists for the proton depth dose distribution (DDD) in water. However, the modeling is more complicated for carbon ions due to fragmentation. This study addresses this by providing and evaluating a new methodology for DDD modeling of carbon ions in water. Methods: The FLUKA Monte Carlo (MC) general-purpose transport code was used to simulate carbon DDDs for energies of 100-400 MeV in water, serving as the reference data for model benchmarking. Taking Thomas Bortfeld's closed-form equation approximating proton Bragg curves as a basis, we derived the critical constants for a beam of carbon ions by applying the radiation transport models of Lee et al. and Geiger to our simulated carbon curves. We hypothesized that including a new exponential (κ) residual distance parameter in Bortfeld's fluence reduction relation would improve DDD modeling for carbon ions. We introduce an additional term, added to Bortfeld's equation, to describe the fragmentation tail. This term accounts for the pre-peak dose from nuclear fragments (NF). In the post-peak region, the NF transport is treated as new beams, utilizing the Glauber model for interaction cross sections and the abrasion-ablation fragmentation model. Results: The carbon-beam-specific constants in the developed model were determined to be p = 1.75, β = 0.008 cm⁻¹, γ = 0.6, α = 0.0007 cm MeV, σ_mono = 0.08, and the new exponential parameter κ = 0.55. This produced a close match for the plateau part of the curve (maximum deviation 6.37%). Conclusion: The derived semi-empirical model provides an accurate approximation of the MC-simulated clinical carbon DDDs. This is the first direct semi-empirical simulation for the dosimetry of therapeutic carbon ions. The accurate modeling of the NF tail in the carbon DDD will provide key insight into distal-edge dose deposition.
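The Geiger-rule backbone of such models is the power-law range-energy relation R0 = α E^p. Using the constants quoted in the abstract (p = 1.75, α = 0.0007; energies taken as MeV per nucleon, an assumption on our part since the abstract's units are ambiguous), a minimal sketch of the relation and its inverse:

```python
ALPHA, P_EXP = 0.0007, 1.75    # constants quoted in the abstract

def c_ion_range_cm(E):
    """Range in water from the power-law (Geiger-rule) form R0 = alpha * E**p."""
    return ALPHA * E**P_EXP

def energy_for_range(R):
    """Inverse relation: beam energy needed to reach depth R."""
    return (R / ALPHA) ** (1.0 / P_EXP)

r400 = c_ion_range_cm(400.0)   # roughly 25 cm of water for a 400 MeV/u beam
```

This power law sets the depth scale of the Bragg curve; the fluence-reduction, κ, and fragmentation-tail terms of the full model then shape the dose around that range.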
Pedoinformatics Approach to Soil Text Analytics
Furey, J.; Seiter, J.; Davis, A.
2017-12-01
The several extant schemas for the classification of soils rely on differing criteria, but the major soil science taxonomies, including the United States Department of Agriculture (USDA) system and the internationally harmonized World Reference Base for Soil Resources system, are based principally on inferred pedogenic properties. These taxonomies largely result from compiled individual observations of soil morphologies within soil profiles, and the vast majority of this pedologic information is contained in qualitative text descriptions. We present text-mining analyses of hundreds of gigabytes of parsed text and other data in the digitally available USDA soil taxonomy documentation, the Soil Survey Geographic (SSURGO) database, and the National Cooperative Soil Survey (NCSS) soil characterization database. These analyses used IPython calls to Gensim modules for topic modelling, with latent semantic indexing completed down to the paragraphs of the lowest taxon level (soil series). Via a custom extension of the Natural Language Toolkit (NLTK), approximately one percent of the USDA soil series descriptions were used to train a classifier for the remainder of the documents, essentially by treating soil science words as comprising a novel language. While location-specific descriptors at the soil series level are amenable to geomatics methods, unsupervised clustering of the occurrence of other soil science words did not closely follow the usual hierarchy of soil taxa. We present preliminary phrasal analyses that may account for some of these effects.
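The classification idea, treating soil-description words as a small language and training a classifier on a fraction of the labelled series descriptions, can be illustrated with a minimal naive Bayes sketch. The snippets, labels, and the classifier choice below are invented for illustration only; the paper's actual pipeline uses Gensim and NLTK.

```python
from collections import Counter, defaultdict
import math

# Invented toy "soil language" training snippets (NOT real series descriptions).
TRAIN = [
    ("fine loamy mixed mesic typic hapludalfs", "alfisol"),
    ("clayey over loamy smectitic thermic vertic", "vertisol"),
    ("sandy skeletal mixed frigid spodic horizon", "spodosol"),
    ("fine loamy carbonatic mesic argillic horizon", "alfisol"),
]

def train(samples):
    """Count words per class and build the vocabulary."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Multinomial naive Bayes with Laplace smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, n in class_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
```

With only a handful of training descriptions per taxon, shared diagnostic vocabulary (e.g. "argillic", "spodic") already dominates the class decision, which is the behaviour the abstract exploits at much larger scale.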
Xu, Zhenli; Ma, Manman; Liu, Pei
2014-07-01
We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes with an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, to meet the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of the classical PNP theory. We find that, when the interface separation is comparable to the Bjerrum length, the results of the modified equations differ significantly from the classical PNP predictions, mostly because of the dielectric effect. It is also shown that when the ion self energy is of weak or moderate strength, the WKB approximation attains high accuracy compared with precise finite-difference results.
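A minimal illustration of a DH-type solve: the sketch below integrates the simplest screened-potential two-point problem u'' = κ²u with a tridiagonal (Thomas) solver and checks it against the exact sinh profile. The paper's generalized DH equation and selective sparse inversion are, of course, far more involved; this only shows the structure of the discrete problem.

```python
import math

def solve_dh_1d(kappa, L=1.0, n=200):
    """Finite-difference solution of u'' = kappa^2 u, u(0)=1, u(L)=0.

    Discretization: -u_{i-1} + (2 + (kappa*h)^2) u_i - u_{i+1} = 0,
    solved with the Thomas algorithm for the tridiagonal system.
    """
    h = L / n
    m = n - 1                          # interior unknowns
    a = [-1.0] * m                     # sub-diagonal
    b = [2.0 + (kappa * h) ** 2] * m   # main diagonal
    c = [-1.0] * m                     # super-diagonal
    d = [0.0] * m
    d[0] = 1.0                         # from the boundary value u(0) = 1
    # forward elimination
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # back substitution
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

def analytic(x, kappa, L=1.0):
    """Exact screened profile sinh(kappa (L - x)) / sinh(kappa L)."""
    return math.sinh(kappa * (L - x)) / math.sinh(kappa * L)
```

Interior node i of the returned list sits at x = (i + 1) L / n, so the midpoint of a 200-cell grid is index 99.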
Andrei, R.M.; Smith, C.S.; Fraanje, P.R.; Verhaegen, M.; Korkiakoski, V.A.; Keller, C.U.; Doelman, N.J.
2012-01-01
In this paper we present a new wavefront estimation technique that overcomes the main disadvantages of phase diversity (PD) algorithms, namely their large computational complexity and the fact that the solutions can get stuck in local minima. Our approach gives a good starting point for an
Franssens, G; De Maziére, M; Fonteyn, D
2000-08-20
A new derivation is presented for the analytical inversion of aerosol spectral extinction data to size distributions. It is based on the complex analytic extension of the anomalous diffraction approximation (ADA). We derive inverse formulas that are applicable to homogeneous nonabsorbing and absorbing spherical particles. Our method simplifies, generalizes, and unifies a number of results obtained previously in the literature. In particular, we clarify the connection between the ADA transform and the Fourier and Laplace transforms. Also, the effect of the particle refractive-index dispersion on the inversion is examined. It is shown that, when Lorentz's model is used for this dispersion, the continuous ADA inverse transform is mathematically well posed, whereas with a constant refractive index it is ill posed. Further, a condition is given, in terms of Lorentz parameters, for which the continuous inverse operator does not amplify the error.
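For nonabsorbing spheres, the ADA underlying this inversion gives van de Hulst's closed-form extinction efficiency as a function of the central phase shift ρ = 2x(m − 1), which is easy to evaluate:

```python
import math

def q_ext_ada(rho):
    """van de Hulst ADA extinction efficiency for a nonabsorbing sphere.

    rho = 2 x (m - 1) is the phase shift of a ray through the particle
    centre (x the size parameter, m the real refractive index).
    """
    if rho == 0.0:
        return 0.0
    return (2.0
            - (4.0 / rho) * math.sin(rho)
            + (4.0 / rho ** 2) * (1.0 - math.cos(rho)))
```

Two standard checks: for small ρ the series expansion gives Q ≈ ρ²/2, and the first interference maximum near ρ ≈ 4.09 slightly overshoots the geometric-optics limit Q → 2.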
Analytical approximations for the long-term decay behavior of spent fuel and high-level waste
International Nuclear Information System (INIS)
Malbrain, C.M.; Deutch, J.M.; Lester, R.K.
1982-01-01
Simple analytical approximations are presented that describe the radioactivity and radiogenic decay-heat behavior of high-level wastes (HLWs) from various nuclear fuel cycles during the first 100,000 years of waste life. The correlations are based on detailed computations of HLW properties carried out with the isotope generation and depletion code ORIGEN 2. The ambiguities encountered in using simple comparisons of the hazards posed by HLWs and naturally occurring mineral deposits to establish the longevity requirements for geologic waste disposal schemes are discussed.
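A correlation of this type is just a short sum of exponentials, cheap to evaluate at any waste age. The coefficients below are hypothetical placeholders chosen only to show the form; they are not the ORIGEN 2-based values of the paper.

```python
import math

# Hypothetical two-term correlation P(t) = sum_i a_i exp(-lambda_i t),
# with fractional amplitudes a_i and decay constants in 1/yr derived from
# illustrative half-lives of 30 yr and 1000 yr. Placeholder values only.
TERMS = [(0.8, math.log(2) / 30.0),
         (0.2, math.log(2) / 1000.0)]

def decay_heat(t_years):
    """Relative decay heat at time t (normalized to 1 at discharge)."""
    return sum(a * math.exp(-lam * t_years) for a, lam in TERMS)
```

The short-lived term dominates for decades, after which the long-lived term controls the slow tail, which is the qualitative behavior such correlations are fitted to reproduce over the 100,000-year horizon.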
Patel, Deepak
2011-01-01
There are many papers describing a loop heat pipe (LHP) as an overall system, but few detail the condenser section. The DeCoM (Deepak Condenser Model) method uses user-set initial parameters to simulate a condenser by calculating the interactions between the fluid and the wall. Equations are derived for two sections of the condenser: a two-phase section and a subcooled (liquid) section. All equations are based on conservation of energy, from which the fluid temperature and fluid quality values are solved. To solve for the heat transfer between fluid and wall in the two-phase section, the Lockhart-Martinelli correlation was implemented. For the liquid phase, the Reynolds number was used to determine whether the flow is turbulent or laminar, and the Nusselt number was used to solve for the film coefficient. A flow chart is presented to display the execution process of DeCoM across both sections. The benefit of DeCoM is that it can perform preliminary analysis without requiring a license or deep user knowledge of condensers.
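The liquid-section logic described above, a Reynolds number to select the flow regime and a Nusselt number to get the film coefficient, can be sketched as follows. The specific correlations (constant Nu = 3.66 for fully developed laminar flow, Dittus-Boelter for turbulent cooling) are common textbook choices and are assumptions of this sketch, not necessarily DeCoM's exact forms.

```python
def film_coefficient(re, pr, k, d):
    """Film coefficient h = Nu * k / d for the subcooled liquid section.

    re : Reynolds number of the liquid flow
    pr : Prandtl number
    k  : fluid thermal conductivity, W/(m K)
    d  : tube inner diameter, m

    Laminar (Re < 2300): Nu = 3.66 (fully developed, constant wall T).
    Turbulent: Dittus-Boelter Nu = 0.023 Re^0.8 Pr^0.3 (fluid being cooled).
    """
    nu = 3.66 if re < 2300 else 0.023 * re ** 0.8 * pr ** 0.3
    return nu * k / d
```

In a marching solver this function would be called once per axial element after the local Reynolds number is updated from the mass flow and local liquid properties.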
Directory of Open Access Journals (Sweden)
Norhasimah Mahiddin
2014-01-01
The modified decomposition method (MDM) and homotopy perturbation method (HPM) are applied to obtain the approximate solution of the nonlinear model of tumour invasion and metastasis. The study highlights the significant features of the employed methods and their ability to handle nonlinear partial differential equations. The methods need neither linearization nor weak-nonlinearity assumptions. Although the main difference between MDM and the Adomian decomposition method (ADM) is a slight variation in the definition of the initial condition, the modification eliminates massive computational work. The approximate analytical solution obtained by MDM logically contains the solution obtained by HPM. It shows that HPM does not involve the Adomian polynomials when dealing with nonlinear problems.
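The decomposition machinery common to ADM and MDM can be shown on a toy problem: for u' = u², u(0) = 1 (exact solution 1/(1 − t)), the Adomian polynomials of the nonlinearity u² are Aₙ = Σᵢ uᵢ uₙ₋ᵢ, and each new component is the integral of the previous polynomial. This is our illustration of the generic recursion, not the tumour model of the paper.

```python
# Adomian decomposition for u' = u^2, u(0) = 1 (exact solution 1/(1-t)).
# Components are stored as polynomial coefficient lists [c0, c1, ...].

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_int(p):
    """Antiderivative with zero integration constant."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adm_solution(n_terms):
    """Partial sum u_0 + ... + u_{n_terms-1} of the decomposition series."""
    comps = [[1.0]]                       # u_0 = u(0) = 1
    for n in range(n_terms - 1):
        a_n = [0.0]                       # Adomian polynomial A_n for u^2
        for i in range(n + 1):
            a_n = poly_add(a_n, poly_mul(comps[i], comps[n - i]))
        comps.append(poly_int(a_n))       # u_{n+1}(t) = int_0^t A_n ds
    total = [0.0]
    for c in comps:
        total = poly_add(total, c)
    return total

def poly_eval(p, t):
    return sum(c * t ** k for k, c in enumerate(p))
```

For this problem the recursion reproduces the geometric series uₙ = tⁿ, so the partial sums converge to 1/(1 − t) for |t| < 1.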
An analytical statistical approach to the 3D reconstruction problem
Energy Technology Data Exchange (ETDEWEB)
Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering
2011-07-01
The approach presented here is concerned with the reconstruction problem for 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of signals obtained in X-ray CT. Additionally, the image processing performed in our approach is embedded in an analytical methodology. This conception significantly improves the quality of the reconstructed images and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations proved that the reconstruction algorithm schematically described here outperforms conventional analytical methods in the quality of the obtained images. (orig.)
Gately, Iain; Benjamin, Jonathan
2018-04-01
As a discipline that has grown up in the eyes of the camera, maritime and underwater archaeology has struggled historically to distinguish itself from early misrepresentations of it as adventure-seeking, treasure hunting and underwater salvage as popularized in the 1950s and 1960s. Though many professional archaeologists have successfully moved forward from this history through broader theoretical engagement and the development of the discipline within anthropology, public perception of archaeology under water has not advanced in stride. Central to this issue is the portrayal of underwater archaeology within popular culture and the representational structures from the 1950s and 1960s persistently used to introduce the profession to the public, through the consumption of popular books and especially television. This article explores representations of maritime and underwater archaeology to examine how the discipline has been consumed by the public, both methodologically and theoretically, through media. In order to interrogate this, we first examine maritime and underwater archaeology as a combined sub-discipline of archaeology and consider how it has been defined historically and in contemporary professional practice. Finally, we consider how practitioners can take a proactive approach to portray their work and convey archaeological media to the public. In this respect, we aim to advance the theoretical discussion in a way so as to reduce further cases whereby archaeology is accidentally misappropriated or deliberately hijacked.
International Nuclear Information System (INIS)
Doroshenko, A.Yu.; Tarasko, M.Z.; Piksaikin, V.M.
2002-01-01
The energy spectrum of the delayed neutrons is the least well known of all input data required in the calculation of effective delayed neutron fractions. In addition to delayed neutron spectra based on aggregate spectrum measurements, there are two different approaches for deriving the delayed neutron energy spectra. Both are based on data for the delayed neutron spectra from individual precursors of delayed neutrons. In the present work these two data sets were compared with the help of an approximation by a gamma function. The choice of this approximation function instead of a Maxwellian or evaporation-type distribution is substantiated. (author)
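The gamma-function form referred to above is a two-parameter family χ(E) ∝ E^(a−1) e^(−E/b); a small sketch shows it normalizes to unit area with mean energy ab. The shape parameters used in the check are illustrative, not the fitted values of the paper.

```python
import math

def gamma_spectrum(e, a, b):
    """Gamma-function spectrum chi(E) = E^(a-1) exp(-E/b) / (b^a Gamma(a)).

    Normalized so the integral over E is 1; the mean energy is a*b.
    """
    return e ** (a - 1) * math.exp(-e / b) / (b ** a * math.gamma(a))

def trapz(f, lo, hi, n=20000):
    """Simple trapezoidal quadrature for the sanity checks below."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h
```

With illustrative parameters a = 2, b = 0.2 MeV, the numerical norm and mean recover 1 and 0.4 MeV respectively, confirming the closed-form moments.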
Approximate Analytical Solution
African Journals Online (AJOL)
Rotating machines like motors, turbines, compressors, etc. are generally subjected to periodic forces, and the system parameters remain more or less constant. ... parameters change and, consequently, so do the natural frequencies, owing to changing gyroscopic moments, centrifugal forces, bearing characteristics, etc.
International Nuclear Information System (INIS)
Yang Pei; Li Zhibin; Chen Yong
2010-01-01
In this paper, the short-wave model equations are investigated, which are associated with the Camassa-Holm (CH) and Degasperis-Procesi (DP) shallow-water wave equations. Firstly, by means of the transformation of the independent variables and the travelling wave transformation, the partial differential equation is reduced to an ordinary differential equation. Secondly, the equation is solved by homotopy analysis method. Lastly, by the transformations back to the original independent variables, the solution of the original partial differential equation is obtained. The two types of solutions of the short-wave models are obtained in parametric form, one is one-cusp soliton for the CH equation while the other one is one-loop soliton for the DP equation. The approximate analytic solutions expressed by a series of exponential functions agree well with the exact solutions. It demonstrates the validity and great potential of homotopy analysis method for complicated nonlinear solitary wave problems. (general)
A general approach for cache-oblivious range reporting and approximate range counting
DEFF Research Database (Denmark)
Afshani, Peyman; Hamilton, Chris; Zeh, Norbert
2010-01-01
We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range c...
International Nuclear Information System (INIS)
Darmani, G.; Setayeshi, S.; Ramezanpour, H.
2012-01-01
In this paper an efficient computational method based on extending the sensitivity approach (SA) is proposed to find an analytic exact solution of nonlinear differential difference equations. In this manner we avoid solving the nonlinear problem directly. By extension of the sensitivity approach for differential difference equations (DDEs), the original nonlinear problem is transformed into infinitely many linear differential difference equations, which are solved in a recursive manner. The exact solution is then determined as a series of infinitely many terms, and by truncating the series an approximate solution is obtained. Numerical examples are employed to show the effectiveness of the proposed approach. (general)
Multi-analytical Approaches Informing the Risk of Sepsis
Gwadry-Sridhar, Femida; Lewden, Benoit; Mequanint, Selam; Bauer, Michael
Sepsis is a significant cause of mortality and morbidity and is often associated with increased hospital resource utilization and prolonged intensive care unit (ICU) and hospital stay. The economic burden associated with sepsis is huge. With advances in medicine, there are now aggressive goal-oriented treatments that can be used to help these patients. If we were able to predict which patients may be at risk for sepsis, we could start treatment early and potentially reduce the risk of mortality and morbidity. Analytic methods currently used in clinical research to determine the risk of a patient developing sepsis may be further enhanced by using multi-modal analytic methods that together could provide greater precision. Researchers commonly use univariate and multivariate regressions to develop predictive models. We hypothesized that such models could be enhanced by using multiple analytic methods that together could provide greater insight. In this paper, we analyze data about patients with and without sepsis using a decision tree approach and a cluster analysis approach. A comparison with a regression approach shows strong similarity among the variables identified, though not an exact match. We compare the variables identified by the different approaches and draw conclusions about their respective predictive capabilities, while considering their clinical significance.
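The node-splitting step that drives a decision tree can be sketched in a few lines: for one candidate feature, scan the thresholds and keep the one minimizing the weighted Gini impurity of the two children. The toy values below are invented; they merely mimic a marker that separates septic from non-septic patients.

```python
def gini(labels):
    """Binary Gini impurity 2 p (1 - p) of a list of 0/1 labels."""
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(values, labels):
    """Exhaustive threshold search for one feature, as a tree does per node.

    Returns (threshold, weighted_child_impurity) for the best split.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    xs = [values[i] for i in order]
    ys = [labels[i] for i in order]
    best_t, best_g = None, float("inf")
    for k in range(1, len(xs)):
        if xs[k] == xs[k - 1]:
            continue                      # no threshold between equal values
        left, right = ys[:k], ys[k:]
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if g < best_g:
            best_t = 0.5 * (xs[k] + xs[k - 1])
            best_g = g
    return best_t, best_g
```

A full tree repeats this search over all features at every node; a regression model instead fits one global coefficient per variable, which is why the two approaches can rank overlapping but not identical variable sets.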
International Nuclear Information System (INIS)
Khan, S.H.; Ivanov, A.A.
1993-01-01
This paper describes an approximate method for calculating the static characteristics of linear step motors (LSMs) being developed for control rod drives (CRDs) in large nuclear reactors. The static characteristic of such an LSM, given by the variation of electromagnetic force with armature displacement, determines the motor performance in its standing and dynamic modes. The approximate method of calculating these characteristics is based on the permeance analysis method applied to the phase magnetic circuit of the LSM. This is a simple, fast and efficient analytical approach which gives satisfactory results for small stator currents and weak iron saturation, typical of the standing mode of operation of an LSM. The method is validated by comparing theoretical results with experimental ones. (Author)
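The permeance-based static force calculation can be sketched as follows: the phase inductance L(x) follows from the magnetic-circuit permeance, L = N²P(x), and the static force is F = ½ i² dL/dx. The single-air-gap permeance model and every number below are illustrative assumptions, not the actual magnetic circuit of the motor.

```python
def magnetic_force(i, x, dx=1e-6, n_turns=500):
    """Static force F = 0.5 * i^2 * dL/dx from a permeance model.

    Assumed illustrative circuit: one air gap whose length shrinks
    linearly with armature displacement x (metres); permeance
    P = mu0 * A / g, inductance L = N^2 * P. dL/dx is taken by a
    central finite difference.
    """
    mu0 = 4e-7 * 3.141592653589793
    area = 1e-4                          # pole face area, m^2 (assumed)
    gap = lambda x_: 2e-3 - x_           # air gap length, m (assumed)
    perm = lambda g: mu0 * area / g      # air-gap permeance
    inductance = lambda x_: n_turns ** 2 * perm(gap(x_))
    dl_dx = (inductance(x + dx) - inductance(x - dx)) / (2.0 * dx)
    return 0.5 * i ** 2 * dl_dx
```

Because the permeance grows as the gap closes, dL/dx is positive and the force pulls the armature toward the pole, rising steeply as the gap shrinks, which is the qualitative shape of the standing-mode static characteristic.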
Bridging analytical approaches for low-carbon transitions
Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.
2016-06-01
Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.
Analytical approach to the evaluation of nuclide transmutations
International Nuclear Information System (INIS)
Vukadin, Z.; Osmokrovic, P.
1995-01-01
Analytical approach to the evaluation of nuclide concentrations in a transmutation chain is presented. Nonsingular Bateman coefficients and depletion functions are used to overcome numerical difficulties that arise when applying the well-known Bateman solution of simple radioactive decay. The method enables evaluation of complete decay chains without elimination of short-lived radionuclides. It is efficient and accurate. Practical application of the method is demonstrated by computing the neptunium series inventory in used CANDU™ fuel. (author)
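For distinct decay constants the classic Bateman solution, whose numerical fragility for nearly equal constants is exactly what motivates the paper's nonsingular reformulation, reads as follows:

```python
import math

def bateman(n1_0, lambdas, t):
    """Atoms of the last chain member at time t (classic Bateman solution).

    n1_0    : initial atoms of the first member (others start empty)
    lambdas : decay constants lambda_1..lambda_n, all assumed distinct
    t       : elapsed time

    N_n(t) = N_1(0) * (prod_{i<n} lambda_i)
             * sum_j exp(-lambda_j t) / prod_{k != j} (lambda_k - lambda_j)

    Note the (lambda_k - lambda_j) denominators: they blow up for nearly
    equal constants, the singularity the nonsingular coefficients remove.
    """
    n = len(lambdas)
    prefactor = n1_0
    for lam in lambdas[:-1]:
        prefactor *= lam
    total = 0.0
    for j in range(n):
        denom = 1.0
        for k in range(n):
            if k != j:
                denom *= lambdas[k] - lambdas[j]
        total += math.exp(-lambdas[j] * t) / denom
    return prefactor * total
```

For a two-member chain this reduces to the familiar N₂(t) = N₁(0) λ₁/(λ₂ − λ₁) (e^(−λ₁t) − e^(−λ₂t)), which makes a convenient check.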
International Nuclear Information System (INIS)
Caprini, Chiara; Durrer, Ruth; Servant, Geraldine
2008-01-01
Gravitational wave production from bubble collisions was calculated in the early 1990s using numerical simulations. In this paper, we present an alternative analytic estimate, relying on a different treatment of stochasticity. In our approach, we provide a model for the bubble velocity power spectrum, suitable for both detonations and deflagrations. From this, we derive the anisotropic stress and analytically solve the gravitational wave equation. We provide analytical formulas for the peak frequency and the shape of the spectrum, which we compare with numerical estimates. In contrast to the previous analysis, we do not work in the envelope approximation. This paper focuses on a particular source of gravitational waves from phase transitions. In a companion article, we will add together the different sources of gravitational wave signals from phase transitions: bubble collisions, turbulence and magnetic fields, and discuss the prospects for probing the electroweak phase transition at LISA.
Analytic Model Predictive Control of Uncertain Nonlinear Systems: A Fuzzy Adaptive Approach
Directory of Open Access Journals (Sweden)
Xiuyan Peng
2015-01-01
A fuzzy adaptive analytic model predictive control method is proposed in this paper for a class of uncertain nonlinear systems. Specifically, invoking standard results on the Moore-Penrose matrix inverse, the mismatch that commonly exists between the input and output dimensions of such systems is first resolved. Then, drawing on the analytic model predictive control law combined with a fuzzy adaptive approach, the fuzzy adaptive predictive controller for the underlying systems is synthesized. To further reduce the impact of the fuzzy approximation error on the system and improve its robustness, a robust compensation term is introduced. It is shown that, by applying the fuzzy adaptive analytic model predictive controller, the rudder roll stabilization system is uniformly ultimately bounded in the H-infinity sense. Finally, simulation results demonstrate the effectiveness of the proposed method.
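The Moore-Penrose step mentioned above is standard: for a full-row-rank matrix, A⁺ = Aᵀ(AAᵀ)⁻¹ gives the minimum-norm solution of a system whose input and output dimensions differ. A minimal pure-Python sketch for the 2 × n case (our illustration of the tool, not the paper's controller):

```python
def pinv_full_row_rank(a):
    """Moore-Penrose pseudoinverse A+ = A^T (A A^T)^-1 for a full-row-rank
    2 x n matrix, the standard device for reconciling mismatched input and
    output dimensions."""
    at = list(zip(*a))                   # transpose, n rows of length 2
    # 2x2 Gram matrix G = A A^T
    g = [[sum(x * y for x, y in zip(r1, r2)) for r2 in a] for r1 in a]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[ g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det,  g[0][0] / det]]
    # A+ = A^T G^-1, an n x 2 matrix satisfying A A+ = I
    return [[sum(at[i][k] * ginv[k][j] for k in range(2)) for j in range(2)]
            for i in range(len(at))]
```

For wider systems one would use a numerical library's pseudoinverse instead; the explicit 2 × 2 inverse here just keeps the sketch self-contained.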
Elements of a function analytic approach to probability.
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Salama, Amgad
2013-09-01
In this work the problem of flow in a three-dimensional, axisymmetric, heterogeneous porous medium domain is investigated numerically. For this system it is natural to use a cylindrical coordinate system, which is useful for describing phenomena that have rotational symmetry about the longitudinal axis. This can happen in porous media, for example, in the vicinity of production/injection wells. The basic feature of this system is that the flux component (volume flow rate per unit area) in the radial direction changes because of the continuous change of area. Variables therefore change rapidly close to the axis of symmetry, which requires a denser mesh there. In this work, we generalize a methodology that allows a coarser mesh to be used while still yielding accurate results. The method is based on constructing a local analytical solution in each cell in the radial direction and moving the derivatives in the other directions to the source term. A new expression for the harmonic mean of the hydraulic conductivity in the radial direction is developed. This approach conforms to the analytical solution for unidirectional radial flow in homogeneous porous media. When the porous medium is heterogeneous or the boundary conditions are more complex, this approach requires only a coarser mesh to arrive at the mesh-independent solution, whereas traditional methods require a much denser mesh. Comparisons for different hydraulic conductivity scenarios and boundary conditions are also introduced. © 2013 Elsevier B.V.
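The series-resistance idea behind a radial "harmonic mean" conductivity follows from the logarithmic analytic solution for radial Darcy flow in each half-cell. The sketch below gives the standard form of this combination under steady radial flow; the paper's new expression may differ in detail.

```python
import math

def radial_harmonic_mean_k(k1, k2, r1, ri, r2):
    """Equivalent radial conductivity between radii r1 and r2.

    The interface at ri separates material of conductivity k1 (r1..ri)
    from k2 (ri..r2). Steady radial flow gives a logarithmic potential
    drop in each ring, so the rings combine like series resistances:

        k_eff = ln(r2/r1) / ( ln(ri/r1)/k1 + ln(r2/ri)/k2 )
    """
    return math.log(r2 / r1) / (math.log(ri / r1) / k1
                                + math.log(r2 / ri) / k2)
```

Two sanity checks: for a homogeneous medium (k1 = k2) the expression collapses to that common value regardless of the radii, and in general the result lies between the two conductivities, weighted by the logarithmic ring widths rather than the linear ones used in Cartesian grids.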
xQuake: A Modern Approach to Seismic Network Analytics
Johnson, C. E.; Aikin, K. E.
2017-12-01
While seismic networks have expanded over the past few decades, and social needs for accurate and timely information have increased dramatically, approaches to the operational needs of both global and regional seismic observatories have been slow to adopt new technologies. This presentation describes the xQuake system, which provides a fresh approach to seismic network analytics based on complexity theory and an adaptive architecture of streaming connected microservices, as diverse data (picks, beams, and other data) flow into a final, curated catalog of events. The foundation for xQuake is the xGraph (executable graph) framework, essentially a self-organizing graph database. An xGraph instance provides both the analytics and the data storage capabilities at the same time. Much of the analytics, such as synthetic annealing in the detection process and an evolutionary programming approach to event evolution, draws from the recent GLASS 3.0 seismic associator developed by and for the USGS National Earthquake Information Center (NEIC). In some respects xQuake is reminiscent of the Earthworm system, in that it comprises processes interacting through store-and-forward rings; not surprising, as the first author was the lead architect of the original Earthworm project when it was known as "Rings and Things". While Earthworm components can easily be integrated into the xGraph processing framework, the architecture and analytics are more current (e.g., using a Kafka broker for store-and-forward rings). The xQuake system is being released under an unrestricted open source license to encourage and enable seismic community support in further development of its capabilities.
Fractal approach to computer-analytical modelling of tree crown
International Nuclear Information System (INIS)
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to modeling tree crown development: experimental (i.e., regressive), theoretical (i.e., analytical), and simulation (i.e., computer) modeling. Common to these approaches is the assumption that a tree can be regarded as a fractal object, a collection of self-similar parts combining the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. The different stages of the above-mentioned approaches are described. Experimental data for spruce, a description of the computer modeling system, and a variant of the computer model are presented. (author). 9 refs, 4 figs
Leibov Roman
2017-01-01
This paper presents a bilinear approach to the problem of approximating systems of nonlinear differential equations. Sometimes linearization of the right-hand sides of nonlinear differential equations is extremely difficult or even impossible; piecewise-linear approximation can then be used. Bilinear differential equations improve on the behavior of piecewise-linear differential equations and reduce the errors at the borders between different linear differential equation systems ...
Merging Belief Propagation and the Mean Field Approximation: A Free Energy Approach
DEFF Research Database (Denmark)
Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro
2013-01-01
We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al. We show that the message passing fixed-point equations obtained with this combination correspond to stationary points of a constrained region-based free energy approximation. Moreover, we present a convergent implementation of these message passing fixed-point equations provided that the underlying factor graph fulfills certain technical conditions. In addition, we show how to include hard...
Bozkaya, Uğur
2018-03-15
Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 methods and their standard versions with the density-fitting approximation, denoted DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare the computational cost with the conventional MP3, MP2.5, OMP3, and OMP2.5. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computation of particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformation of PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that errors introduced by the DF approach are negligible. The mean absolute error for bond lengths of a molecular set, with the cc-pCVQZ basis set, is 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.
Big data analytics in immunology: a knowledge-based approach.
Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir
2014-01-01
With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.
Statistical Approaches to Assess Biosimilarity from Analytical Data.
Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette
2017-01-01
Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
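Statistical equivalence for a quantitative high-risk CQA is commonly established with two one-sided tests (TOST): equivalence is concluded only if both one-sided null hypotheses (difference beyond either margin) are rejected. A minimal sketch under a large-sample normal approximation, an assumption of this illustration, since with small numbers of originator lots a t-distribution would be appropriate:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_equivalence(diff, se, margin, alpha=0.05):
    """Two one-sided tests for equivalence of a mean CQA difference.

    diff   : observed mean difference (biosimilar - originator)
    se     : standard error of the difference
    margin : equivalence margin delta (symmetric, > 0)

    H0a: diff <= -margin  -> z = (diff + margin)/se, p = 1 - Phi(z)
    H0b: diff >= +margin  -> z = (diff - margin)/se, p = Phi(z)
    Equivalence is declared only if BOTH p-values fall below alpha.
    """
    p_lower = 1.0 - norm_cdf((diff + margin) / se)
    p_upper = norm_cdf((diff - margin) / se)
    return max(p_lower, p_upper) < alpha
```

This is why lot acquisition matters: the standard error shrinks with the number of originator lots, and an otherwise acceptable small difference can fail TOST purely for lack of statistical power.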
The behavior-analytic approach to emotional self-control
Directory of Open Access Journals (Sweden)
Jussara Rocha Batista
2012-12-01
Full Text Available Some psychological approaches distinguish behavioral self-control from emotional self-control, the latter being approached with reference to inner events controlled by the individual himself. This paper offers some directions for a behavior-analytic approach to what has been referred to as emotional self-control. According to Behavior Analysis, no new process is found in emotional self-control, but there are components additional to those found in behavioral self-control, which require appropriate treatment. The paper highlights some determinants of behavioral repertoires taken as instances of emotional self-control: the social context in which self-control is produced and maintained; the conflicts between consequences for the individual and for the group; and the degree of participation of the motor apparatus in the emission of emotional responses. Keywords: emotional self-control; emotional responses; inner world; behavior analysis.
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Giantsoudi, D; Grassberger, C; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States)
2015-06-15
Purpose: To estimate the clinical relevance of approximations made in analytical dose calculation methods (ADCs) used for treatment planning on tumor coverage and tumor control probability (TCP) in proton therapy. Methods: We compared dose distributions planned with ADC to delivered dose distributions (as determined by TOPAS Monte Carlo (MC) simulations). We investigated 10 patients per site for 5 treatment sites (head-and-neck, lung, breast, prostate, liver). We evaluated differences between the two dose distributions by analyzing dosimetric indices based on the dose-volume histograms, the γ-index, and the TCP. The normal tissue complication probability (NTCP) was estimated for the bladder and anterior rectum for the prostate patients. Results: We find that the target doses are overestimated by the ADC by 1–2% on average for all patients considered. All dosimetric indices (the mean dose, D95, D50 and D02, the dose values covering 95%, 50% and 2% of the target volume, respectively) are predicted within 5% of the delivered dose. A γ-index with a 3%/3 mm criterion had a passing rate for target volumes above 96% for all patients. The difference in TCP predicted by the two algorithms was up to 2%, 2.5%, 6%, 6.5%, and 11% for liver, breast, prostate, head-and-neck, and lung patients, respectively. Differences in NTCP for the anterior rectum and bladder for prostate patients were less than 3%. Conclusion: We show that ADCs provide adequate dose distributions for most patients; however, they can result in underdosage of the target by as much as 5%, and the TCP was found to be up to 11% lower than predicted. Advanced dose-calculation methods like MC simulations may be necessary in proton therapy to ensure target coverage for heterogeneous patient geometries, in clinical trials comparing proton therapy to conventional radiotherapy to avoid biases due to systematic discrepancies in calculated dose distributions, and if tighter range margins are considered. Fully funded by NIH grants.
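The γ-index comparison used in the study can be sketched in one dimension; the function below follows the standard formulation (minimum over evaluated points of the combined dose-difference/distance-to-agreement metric) with a global dose tolerance, applied to made-up dose profiles rather than the patient data of the abstract:

```python
def gamma_1d(ref, evl, dx, dose_tol, dist_tol):
    """1-D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dose diff/dose_tol)^2 + (distance/dist_tol)^2).
    A point passes when gamma <= 1."""
    out = []
    for i, d_ref in enumerate(ref):
        best = min(((d_ev - d_ref) / dose_tol) ** 2
                   + (((j - i) * dx) / dist_tol) ** 2
                   for j, d_ev in enumerate(evl))
        out.append(best ** 0.5)
    return out

# 3%/3 mm criteria on a hypothetical profile shifted by one 1-mm pixel
ref = [1.00, 1.01, 1.02, 1.03]
evl = [1.01, 1.02, 1.03, 1.04]
gammas = gamma_1d(ref, evl, dx=1.0, dose_tol=0.03, dist_tol=3.0)
passing = sum(g <= 1.0 for g in gammas) / len(gammas)
print(passing)  # 1.0: every point passes
```

The small spatial shift stays well within the 3 mm distance tolerance, so the passing rate is 100% even though point-wise dose differences exist.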
A Padé approximant approach to two kinds of transcendental equations with applications in physics
International Nuclear Information System (INIS)
Luo, Qiang; Wang, Zhidan; Han, Jiurong
2015-01-01
In this paper, we obtain the analytical solutions of two kinds of transcendental equations with numerous applications in college physics by means of the Lagrange inversion theorem. Afterwards we rewrite them in the form of a ratio of rational polynomials by a second-order Padé approximant from a practical and instructional perspective. Our method is illustrated in a pedagogical manner for the benefit of students at the undergraduate level. The approximate formulas introduced in the paper can be applied to abundant examples in physics textbooks, such as Fraunhofer single-slit diffraction, Wien’s displacement law, and the Schrödinger equation with single- or double-δ potential. These formulas, consequently, can reach considerable accuracies according to the numerical results; therefore, they promise to act as valuable ingredients in the standard teaching curriculum. (paper)
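One of the textbook equations the paper treats is Wien's displacement law, which reduces to the transcendental equation x = 5(1 − e^{−x}). A quick numerical cross-check of any closed-form rational approximation can be done with Newton's method; this iteration is a verification sketch, not the Lagrange-inversion/Padé construction of the paper:

```python
import math

def wien_root(x0=5.0, tol=1e-12, max_iter=50):
    # Newton iteration for f(x) = x - 5*(1 - exp(-x)) = 0,
    # the transcendental equation behind Wien's displacement law.
    x = x0
    for _ in range(max_iter):
        f = x - 5.0 * (1.0 - math.exp(-x))
        df = 1.0 - 5.0 * math.exp(-x)
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

print(wien_root())  # ≈ 4.965114, so lambda_max * T = hc / (4.965114 k_B)
```

Any approximate rational formula for this root can be benchmarked against the converged value to quantify its accuracy.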
Bronstein, Leo; Koeppl, Heinz
2018-01-01
Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.
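As a concrete instance of the moment-closure idea discussed above, consider dimerisation A + A → ∅ with production ∅ → A. The exact mean couples to the second factorial moment; a product-Poisson ansatz closes the hierarchy by setting ⟨n(n−1)⟩ = ⟨n⟩². A minimal sketch with hypothetical rate values, not taken from the article:

```python
def poisson_closure_mean(n0, c, k, t_end, h=1e-3):
    """Mean copy number for production (0 -> A, rate k) plus dimerisation
    (A + A -> 0, rate constant c), closed with the Poisson ansatz
    <n(n-1)> = m**2 (variance = mean). Explicit Euler steps of size h."""
    m = n0
    for _ in range(int(t_end / h)):
        m += h * (k - c * m * m)   # dm/dt = k - c*<n(n-1)> ~= k - c*m^2
    return m

# the steady state of the closed equation is sqrt(k/c)
print(poisson_closure_mean(n0=0.0, c=0.1, k=10.0, t_end=50.0))  # ≈ 10.0
```

The closure turns an open hierarchy of moment equations into a single solvable ODE; the failure modes the article analyzes arise when the assumed distribution family is a poor fit to the true process.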
Satellite Orbit Under Influence of a Drag - Analytical Approach
Martinović, M. M.; Šegan, S. D.
2017-12-01
The report studies some changes in the orbital elements of artificial Earth satellites under the influence of atmospheric drag. In order to make the results applicable to many future cases, an analytical interpretation of the orbital element perturbations is given via useful, but very long, expressions. The development is based on the TD88 air density model, recently upgraded with some additional terms. Some expressions and formulae were developed with the computer algebra system Mathematica and tested in some hypothetical cases. The results are in good agreement with the iterative (numerical) approach.
Cryogenic parallel, single phase flows: an analytical approach
Eichhorn, R.
2017-02-01
Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: in order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations, with sometimes strongly varying heat loads, which for simplicity are operated in parallel. This contribution describes an analytical approach based on optimization theories.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
Energy Technology Data Exchange (ETDEWEB)
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.; Bruckner-Lea, Cindy J.; Marks, James D.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
Analytical approaches for the characterization of nickel proteome.
Jiménez-Lamana, Javier; Szpunar, Joanna
2017-08-16
The use of nickel in modern industry and in consumer products poses health problems for human beings. Nickel allergy and nickel carcinogenicity are well-known health effects related to human exposure to nickel, either during the production of nickel-containing products or by direct contact with the final item. In this context, the study of nickel toxicity and carcinogenicity involves understanding their molecular mechanisms and hence characterizing the nickel-binding proteins in different biological samples. During the last 50 years, a broad range of analytical techniques, ranging from the first chromatographic columns to the latest-generation mass spectrometers, have been used to characterize the nickel proteome. The aim of this review is to present a critical view of the different analytical approaches that have been applied for the purification, isolation, detection, and identification of nickel-binding proteins. The different analytical techniques used are discussed from a critical point of view, highlighting their advantages and limitations.
ANALYTICAL APPROACHES TO THE STUDY OF EXPORT TRANSACTIONS
Directory of Open Access Journals (Sweden)
Ekaterina Viktorovna Medvedeva
2015-12-01
Full Text Available Analytical approaches to the study of export operations depend on the terms contained in individual foreign-trade contracts with foreign buyers, as well as on the form in which the Russian supplier of export goods enters a foreign market. By means of analytical procedures it is possible to foresee and predict adverse situations that could affect the financial position of an economic entity. An economic entity engaged in foreign economic activity must analyze not only its current activity but also its export operations. The article considers analytical approaches to the analysis of export operations, presents an example of analyzing export operations over time, and recommends formulas for evaluating exports in dynamics. For comparative analysis, export volume is estimated in comparable prices. For commodity groups comprising goods that are commensurable both quantitatively and qualitatively, an index of quantitative structure is calculated, along with a coefficient of delay in the delivery of goods relative to other periods. Such analysis makes it possible to identify trends in export deliveries over the analyzed period to support management decisions. Purpose: to define the methods and techniques applied in the analysis of export operations. Methodology: economic-mathematical and statistical methods of analysis were used. Results: the most informative parameters illustrating several aspects of the analysis of export operations are obtained. Practical implications: the results can usefully be applied by economic entities engaged in foreign economic activity, one element of which is export operations.
Towards a Set Theoretical Approach to Big Data Analytics
DEFF Research Database (Denmark)
Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi
2014-01-01
Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the application of this technique to the data analysis of big social data collected from the Facebook page of the fast fashion company, H&M.
The Navier-Stokes equations an elementary functional analytic approach
Sohr, Hermann
2001-01-01
The primary objective of this monograph is to develop an elementary and self-contained approach to the mathematical theory of a viscous, incompressible fluid in a domain of the Euclidean space, described by the equations of Navier-Stokes. Moreover, the theory is presented for completely general domains, in particular, for arbitrary unbounded, nonsmooth domains. Therefore, restriction was necessary to space dimensions two and three, which are also the most significant from a physical point of view. For mathematical generality, however, the linearized theory is expounded for general dimensions higher than one. Although the functional analytic approach developed here is, in principle, known to specialists, the present book fills a gap in the literature providing a systematic treatment of a subject that has been documented until now only in fragments. The book is mainly directed to students familiar with basic tools in Hilbert and Banach spaces. However, for the readers’ convenience, some fundamental properties...
Cognitive neuroscience robotics B: Analytic approaches to human understanding
Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi
2016-01-01
Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.
The Navier-Stokes equations an elementary functional analytic approach
Sohr, Hermann
2001-01-01
The primary objective of this monograph is to develop an elementary and self-contained approach to the mathematical theory of a viscous incompressible fluid in a domain Ω of the Euclidean space ℝⁿ, described by the equations of Navier-Stokes. The book is mainly directed to students familiar with basic functional analytic tools in Hilbert and Banach spaces. However, for readers' convenience, in the first two chapters we collect without proof some fundamental properties of Sobolev spaces, distributions, operators, etc. Another important objective is to formulate the theory for a completely general domain Ω. In particular, the theory applies to arbitrary unbounded, non-smooth domains. For this reason, in the nonlinear case, we have to restrict ourselves to space dimensions n = 2, 3 that are also most significant from the physical point of view. For mathematical generality, we will develop the linearized theory for all n ≥ 2. Although the functional-analytic approach developed here is, in principle, known ...
Managing knowledge business intelligence: A cognitive analytic approach
Surbakti, Herison; Ta'a, Azman
2017-10-01
The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) for achieving competitive advantage in the context of intellectual capital. The methodology includes a review of the literature and an analysis of interview data from managers in the corporate sector, together with models established by different authors. BI technologies are strongly associated with KM processes in attaining competitive advantage. KM is strongly shaped by human and social factors, which can be turned into the most valuable assets by an efficient system run on BI tactics and technologies. Predictive analytics, however, is rooted in the field of BI, and extracting tacit knowledge as a new source for BI analysis remains a major challenge. Advanced analytic methods that address the diversity of the data corpus, structured and unstructured, require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive and prescriptive outcomes. This is a big challenge nowadays, and this paper aims to elaborate these issues in detail as initial work.
On the functional integral approach in quantum statistics. 1. Some approximations
International Nuclear Information System (INIS)
Dai Xianxi.
1990-08-01
In this paper the susceptibility of a Kondo system in a fairly wide temperature region is calculated in the first harmonic approximation in a functional integral approach. The comparison with that of the renormalization group theory shows that in this region the two results agree quite well. The expansion of the partition function with infinite independent harmonics for the Anderson model is studied. Some symmetry relations are generalized. It is a challenging problem to develop a functional integral approach including diagram analysis, mixed mode effects and some exact relations in the Anderson system proved in the functional integral approach. These topics will be discussed in the next paper. (author). 22 refs, 1 fig
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-03-01
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in
Equivalent properties for perforated plates. An analytical approach
International Nuclear Information System (INIS)
Cepkauskas, M.M.; Yang Jianfeng
2005-01-01
Structures that contain perforated plates have been a subject of interest in the nuclear industry. Steam generators, condensers and reactor internals utilize plates containing holes which act as flow holes or separate structures from flow by using a 'tube bank' design. The equivalent plate method has been beneficial in analyzing perforated plates. Details are found in various papers in the bibliography. In addition, the ASME code addresses perforated plates in Appendix A-8000, but is limited to a triangular hole pattern. Early work in this field utilized test data and analytical approaches. This paper is an examination of an analytical approach for determining equivalent plate mechanical and thermal properties. First, a patch of the real plate is identified that captures the necessary physical behavior of the plate. The average strain of this patch is obtained by first applying a simplified one-dimensional mechanical load to the patch, determining stress as a function of position, converting the stress to strain and then integrating the strain over the patch length. This average strain is then equated to the average strain of an equivalent fictitious rectangular patch. This yields an equivalent Young's modulus and Poisson's ratio for the equivalent plate in all three orthogonal directions. The corresponding equivalent shear modulus in all three directions is then determined. An orthotropic material stress-strain matrix relationship is provided for the fictitious properties. By equating the real average strain with the fictitious average strain in matrix form, a stress multiplier is found to convert average fictitious stress to average real stress. This same type of process is repeated for heat conduction coefficients and coefficients of thermal expansion. Results are provided for both a square and triangular hole pattern. Reasonable results are obtained when comparing the effective Young's modulus and Poisson's ratio with ASME
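The strain-averaging step can be illustrated in one dimension: a bar whose perforated ligament carries load over a reduced area fraction, with the equivalent modulus obtained by equating the average strain of the real bar to that of a uniform fictitious bar. This is an illustrative reduction of the patch-averaging idea, not the paper's 2-D analysis, and all numbers are hypothetical:

```python
def equivalent_modulus_1d(segments):
    """Equivalent Young's modulus of a 1-D bar made of segments in series,
    each given as (length, modulus, load_carrying_area_fraction).
    Under unit applied stress, segment stress scales as 1/fraction;
    the average strain over the bar defines the equivalent modulus."""
    total_len = sum(L for L, E, f in segments)
    avg_strain = sum(L * (1.0 / f) / E for L, E, f in segments) / total_len
    return 1.0 / avg_strain

# a solid segment plus a perforated ligament carrying load on half its area
segs = [(8.0, 200e9, 1.0), (2.0, 200e9, 0.5)]
print(equivalent_modulus_1d(segs))  # below 200 GPa: holes soften the plate
```

The perforation raises the average strain for a given load, so the equivalent modulus drops below the base material's, which is the qualitative behavior the equivalent plate method captures in two dimensions.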
Analytical approach for confirming the achievement of LMFBR reliability goals
International Nuclear Information System (INIS)
Ingram, G.E.; Elerath, J.G.; Wood, A.P.
1981-01-01
The approach, recommended by GE-ARSD, for confirming the achievement of LMFBR reliability goals relies upon a comprehensive understanding of the physical and operational characteristics of the system and the environments to which the system will be subjected during its operational life. This kind of understanding is required for an approach based on system hardware testing or analyses, as recommended in this report. However, for a system as complex and expensive as the LMFBR, an approach which relies primarily on system hardware testing would be prohibitive both in cost and time to obtain the required system reliability test information. By using an analytical approach, results of tests (reliability and functional) at a low level within the specific system of interest, as well as results from other similar systems can be used to form the data base for confirming the achievement of the system reliability goals. This data, along with information relating to the design characteristics and operating environments of the specific system, will be used in the assessment of the system's reliability
International Nuclear Information System (INIS)
Pan Jun-Yang; Xie Yi
2015-01-01
With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer because of its limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model. (research papers)
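The dominant terms of such a relativistic time-transfer model can be sketched at first post-Newtonian order, where the fractional rate offset of the onboard clock against coordinate time is −(U/c² + v²/2c²). The values below are rough, illustrative numbers for a Mars orbiter, not the Yinghuo-1 mission parameters:

```python
# Fractional rate offset between an orbiter's proper time tau and
# coordinate time t, to first post-Newtonian order:
#   d(tau)/dt - 1 ~= -(U/c^2 + v^2/(2*c^2))
C = 299792458.0            # speed of light, m/s
GM_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2

r = 2.28e11                # heliocentric distance of Mars, m (illustrative)
v = 24.1e3                 # heliocentric orbital speed, m/s (illustrative)

rate = -(GM_SUN / r + 0.5 * v * v) / C**2
print(rate)   # about -1e-8, i.e. the clock loses roughly 0.8 ms per day
```

Offsets of this size accumulate to milliseconds per day, which is why the solar term dominates the relativistic corrections reported in the paper.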
Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds
Johnson, C. E.
2017-12-01
Modern seismic networks present a number of challenges, perhaps most notably those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.
An analytic approach to optimize tidal turbine fields
Pelz, P.; Metzler, M.
2013-12-01
Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is made to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
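In the limit of unconfined flow (no channel blockage), the same 0-dimensional momentum and energy balance for a disk actuator yields the classic Betz result, which bounds the achievable coefficient of performance. A sketch of that limiting case only; the blocked-channel result of the paper additionally depends on the turbine-to-channel width ratio:

```python
def power_coefficient(a):
    # Actuator-disk theory, unconfined flow: C_p = 4a(1-a)^2,
    # where a is the axial induction factor at the disk.
    return 4.0 * a * (1.0 - a) ** 2

# scan the induction factor; the optimum is a = 1/3, C_p = 16/27 (Betz limit)
best_a = max((i / 10000 for i in range(10001)), key=power_coefficient)
print(best_a, power_coefficient(best_a))  # ≈ 0.3333, ≈ 0.5926
```

Channel blockage raises the achievable C_p above the Betz value, which is the effect the bypass-flow analysis quantifies.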
A Visual Analytics Approach for Correlation, Classification, and Regression Analysis
Energy Technology Data Exchange (ETDEWEB)
Steed, Chad A [ORNL; SwanII, J. Edward [Mississippi State University (MSU); Fitzpatrick, Patrick J. [Mississippi State University (MSU); Jankun-Kelly, T.J. [Mississippi State University (MSU)
2012-02-01
New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive, parallel-coordinates-based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.
Schneider, André; Lin, Zhongbing; Sterckeman, Thibault; Nguyen, Christophe
2018-04-01
The dissociation of metal complexes in the soil solution can increase the availability of metals for root uptake. When it is accounted for in models of bioavailability of soil metals, the number of partial differential equations (PDEs) increases and the computation time to numerically solve these equations may be problematic when a large number of simulations are required, for example for sensitivity analyses or when considering root architecture. This work presents analytical solutions for the set of PDEs describing the bioavailability of soil metals including the kinetics of complexation for three scenarios where the metal complex in solution was fully inert, fully labile, or partially labile. The analytical solutions are only valid i) at steady state, when the PDEs become ordinary differential equations (the transient phase is not covered), ii) when diffusion is the major mechanism of transport and convection is negligible, and iii) when there is no between-root competition. The formulation of the analytical solutions is for cylindrical geometry, but the solutions rely on the spread of the depletion profile around the root, which was modelled assuming a planar geometry. The analytical solutions were evaluated by comparison with the corresponding PDEs for cadmium in the case of French agricultural soils. Provided that convection was much lower than diffusion (Péclet number < 0.02), the cumulative uptakes calculated from the analytical solutions were in very good agreement with those calculated from the PDEs, even in the case of a partially labile complex. The analytical solutions can be used instead of the PDEs to predict root uptake of metals. They were also used to build an indicator of the contribution of a complex to the uptake of the metal by roots, which can be helpful in predicting the effect of soluble organic matter on the bioavailability of soil metals. Copyright © 2017 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2001-01-01
We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set.
Disentangling WTP per QALY data: different analytical approaches, different answers.
Gyrd-Hansen, Dorte; Kjaer, Trine
2012-03-01
A large random sample of the Danish general population was asked to value health improvements by way of both the time trade-off elicitation technique and willingness-to-pay (WTP) using contingent valuation methods. The data demonstrate a high degree of heterogeneity across respondents in their relative valuations on the two scales. This has implications for data analysis. We show that the estimates of WTP per QALY are highly sensitive to the analytical strategy. For both open-ended and dichotomous choice data we demonstrate that choice of aggregated approach (ratios of means) or disaggregated approach (means of ratios) affects estimates markedly as does the interpretation of the constant term (which allows for disproportionality across the two scales) in the regression analyses. We propose that future research should focus on why some respondents are unwilling to trade on the time trade-off scale, on how to interpret the constant value in the regression analyses, and on how best to capture the heterogeneity in preference structures when applying mixed multinomial logit. Copyright © 2011 John Wiley & Sons, Ltd.
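The aggregated-versus-disaggregated distinction the authors highlight is easy to demonstrate: with heterogeneous respondents, the ratio of means and the mean of ratios generally differ. A toy illustration with hypothetical WTP and QALY values, not the Danish survey data:

```python
# Hypothetical responses from three individuals
wtp  = [100.0, 400.0, 250.0]   # willingness to pay for a health improvement
qaly = [0.05, 0.10, 0.25]      # QALY gain valued by the same respondents

# aggregated approach: ratio of means
ratio_of_means = (sum(wtp) / len(wtp)) / (sum(qaly) / len(qaly))
# disaggregated approach: mean of individual ratios
mean_of_ratios = sum(w / q for w, q in zip(wtp, qaly)) / len(wtp)

print(ratio_of_means)  # ≈ 1875
print(mean_of_ratios)  # ≈ 2333
```

The disaggregated estimate weights respondents with small QALY gains heavily, which is one mechanism behind the sensitivity to analytical strategy reported in the abstract.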
Energy Technology Data Exchange (ETDEWEB)
Petracca, S [Salerno Univ. (Italy)
1996-08-01
Debye potentials, the Lorentz reciprocity theorem, and (extended) Leontovich boundary conditions can be used to obtain simple and accurate analytic estimates of the longitudinal and transverse coupling impedances of (piecewise longitudinally uniform) multi-layered pipes with non-simple transverse geometry and/or (spatially inhomogeneous) boundary conditions. (author)
International Nuclear Information System (INIS)
Aboanber, A E; Nahla, A A
2002-01-01
A method based on the Padé approximations is applied to the solution of the point kinetics equations with a time-varying reactivity. The technique consists of treating explicitly the roots of the inhour formula. A significant improvement has been observed by treating explicitly the most dominant roots of the inhour equation, which usually would make the Padé approximation inaccurate. Also, the analytical inversion method, which permits a fast inversion of polynomials of the point kinetics matrix, is applied to the Padé approximations. Results are presented for several cases of Padé approximations using various options of the method with different types of reactivity. The formalism is applicable equally well to non-linear problems, where the reactivity depends on the neutron density through temperature feedback. It was evident that the presented method is particularly good for cases in which the reactivity can be represented by a series of steps, and it performed quite well for more general cases.
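The simplest member of this family, the (1,1) Padé approximant of the matrix exponential (equivalent to Crank-Nicolson), can be sketched for one delayed-neutron group and a step reactivity insertion. The parameter values below are illustrative only, and this bare scheme omits the explicit inhour-root treatment that the paper uses to improve accuracy:

```python
# One delayed-neutron group point kinetics with a step reactivity insertion,
# advanced with the (1,1) Pade approximant of exp(h*A): illustrative values.
BETA, LAM, GEN, RHO = 6.5e-3, 0.08, 1.0e-4, 3.0e-3

A = ((RHO - BETA) / GEN, LAM,
     BETA / GEN, -LAM)          # row-major 2x2 kinetics matrix

def step_pade11(n, c, h):
    a11, a12, a21, a22 = A
    # right-hand side: (I + h/2 A) y
    r1 = (1 + 0.5 * h * a11) * n + 0.5 * h * a12 * c
    r2 = 0.5 * h * a21 * n + (1 + 0.5 * h * a22) * c
    # solve (I - h/2 A) y' = r by Cramer's rule
    m11, m12 = 1 - 0.5 * h * a11, -0.5 * h * a12
    m21, m22 = -0.5 * h * a21, 1 - 0.5 * h * a22
    det = m11 * m22 - m12 * m21
    return (r1 * m22 - r2 * m12) / det, (m11 * r2 - m21 * r1) / det

def neutron_density(h, t_end):
    n, c = 1.0, BETA / (LAM * GEN)   # start from delayed-critical equilibrium
    for _ in range(int(round(t_end / h))):
        n, c = step_pade11(n, c, h)
    return n

print(neutron_density(0.01, 1.0))   # prompt jump, then slow growth; ~2
```

Because the (1,1) Padé scheme is A-stable, it handles the stiff prompt mode even at step sizes far larger than the prompt time constant, which is the practical appeal of Padé methods in reactor kinetics.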
Towards Big Earth Data Analytics: The EarthServer Approach
Baumann, Peter
2013-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data
Energy Technology Data Exchange (ETDEWEB)
Dondapati, Raja Sekhar, E-mail: drsekhar@ieee.org [School of Mechanical Engineering, Lovely Professional University, Phagwara, Punjab 144401 (India); Ravula, Jeswanth [School of Mechanical Engineering, Lovely Professional University, Phagwara, Punjab 144401 (India); Thadela, S. [Department of Mechanical Engineering, Andhra University, Visakhapatnam, Andhra Pradesh (India); Usurumarti, Preeti Rao [Department of Mechanical Engineering, P.V.K. Institute of Technology, Anantapur, Andhra Pradesh (India)
2015-12-15
Future power transmission applications demand higher efficiency due to the limited resources of energy. In order to meet such demand, a novel method of transmission is being developed using High Temperature Superconducting (HTS) cables. However, these HTS cables need to be cooled below the critical temperature of the superconductors used in constructing the cable to retain superconductivity. With the advent of new superconductors whose critical temperatures have reached up to 134 K (Hg based), a need arises to find a suitable coolant which can accommodate the heating loads on the superconductors. The present work proposes Supercritical Nitrogen (SCN) as a feasible coolant to achieve the required cooling. Further, the feasibility of the proposed coolant for use in futuristic HTS cables is investigated by studying the thermophysical properties such as density, viscosity, specific heat and thermal conductivity with respect to temperature (T{sub C} + 10 K) and pressure (P{sub C} + 10 bar). In addition, a few temperature-dependent analytical functions are developed for the thermophysical properties of SCN, which are useful in predicting thermohydraulic performance (pressure drop, pumping power and cooling capacity) using numerical or computational techniques. Also, the developed analytical functions are used to calculate the pumping power and the temperature difference between the inlet and outlet of an HTS cable. These results are compared with those of liquid nitrogen (LN2), and it is found that the circulating pumping power required to pump SCN is significantly smaller than that required to pump LN2. Further, since the temperature difference between the inlet and outlet is smaller than when LN2 is used, SCN can be preferred for cooling long-length Hg-based HTS cables. - Highlights: • Analytical functions are developed for thermophysical properties of Supercritical Nitrogen. • Error analysis shows extremely low errors in the developed analytical functions.
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduced it. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution of the Neutron Point Kinetics Equations applying the Polynomial Approach Method. The main idea is to expand the neutron density and delayed neutron precursors as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions, and analytical continuation is used to determine the solutions of the next intervals. A genuine error control is developed based on an analogy with the Remainder Theorem. For illustration, we also report simulations for different approximation orders (linear, quadratic and cubic). The results obtained by numerical simulations for the linear approximation are compared with results in the literature.
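The interval-by-interval power-series idea can be sketched on a scalar stand-in. Below, y' = a·y replaces the full point-kinetics system (whose series coefficients the paper derives analogously): each short interval gets a truncated Taylor expansion, and the endpoint value seeds the next interval — the analytic continuation step. All coefficients and tolerances here are illustrative:

```python
import math

def taylor_step(y0, a, h, order=3):
    # one power-series step for y' = a*y: coefficients obey c_{k+1} = a*c_k/(k+1)
    c = y0          # c_0
    y = y0
    for k in range(order):
        c = a * c / (k + 1)     # next Taylor coefficient
        y += c * h ** (k + 1)   # evaluate the truncated series at t = h
    return y

def integrate(y0, a, t_end, n_steps, order=3):
    # analytic continuation: the end of one interval is the initial
    # condition of the next
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = taylor_step(y, a, h, order)
    return y

approx = integrate(1.0, 0.5, 1.0, 10, order=3)
exact = math.exp(0.5)   # closed-form solution of y' = 0.5*y, y(0) = 1
```

Raising `order` (linear, quadratic, cubic, as in the paper) shrinks the per-interval truncation error, which is what the Remainder-Theorem error control bounds.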
Uncertainties in workplace external dosimetry - An analytical approach
International Nuclear Information System (INIS)
Ambrosi, P.
2006-01-01
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used, together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test, and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determining the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461, Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Neither this paper nor the report can eliminate the fact that determining the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
Analytic game-theoretic approach to ground-water extraction
Loáiciga, Hugo A.
2004-09-01
The roles of cooperation and non-cooperation in the sustainable exploitation of a jointly used groundwater resource have been quantified mathematically using an analytical game-theoretic formulation. Cooperative equilibrium arises when ground-water users respect water-level constraints and consider mutual impacts, which allows them to derive economic benefits from ground-water indefinitely, that is, to achieve sustainability. This work shows that cooperative equilibrium can be obtained from the solution of a quadratic programming problem. For cooperative equilibrium to hold, however, enforcement must be effective. Otherwise, according to the commonized costs-privatized profits paradox, there is a natural tendency towards non-cooperation and non-sustainable aquifer mining, of which overdraft is a typical symptom. Non-cooperative behavior arises when at least one ground-water user neglects the externalities of his adopted ground-water pumping strategy. In this instance, water-level constraints may be violated in a relatively short time and the economic benefits from ground-water extraction fall below those obtained with cooperative aquifer use. One example illustrates the game theoretic approach of this work.
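The cooperative/non-cooperative contrast described above can be illustrated with a deliberately tiny two-user pumping game. All payoff numbers below are hypothetical, and a grid search merely mimics the paper's quadratic-programming solution; the Nash players ignore the water-level cap, reproducing the overdraft tendency:

```python
def benefit(q_i, q_total, price=10.0, cost=1.0):
    # illustrative net benefit: revenue minus a pumping cost that grows with
    # total drawdown (the shared externality); all numbers hypothetical
    return price * q_i - cost * q_i * q_total

grid = [i * 0.1 for i in range(51)]  # candidate pumping rates 0..5

# cooperative equilibrium: maximize joint benefit subject to a water-level cap
cap = 6.0
coop = max(
    ((q1, q2) for q1 in grid for q2 in grid if q1 + q2 <= cap),
    key=lambda q: benefit(q[0], q[0] + q[1]) + benefit(q[1], q[0] + q[1]),
)
coop_total = benefit(coop[0], sum(coop)) + benefit(coop[1], sum(coop))

# non-cooperative play: best-response iteration, each user ignoring the cap
q1 = q2 = 0.0
for _ in range(50):
    q1 = max(grid, key=lambda q: benefit(q, q + q2))
    q2 = max(grid, key=lambda q: benefit(q, q1 + q))
nash_total = benefit(q1, q1 + q2) + benefit(q2, q1 + q2)
```

In this toy economy the cooperative solution pumps a total of about 5 units and earns the larger joint benefit, while the Nash outcome overdrafts past the cap (total about 6.6–6.8) and earns less — the commonized-costs/privatized-profits paradox in miniature.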
A Decision Analytic Approach to Exposure-Based Chemical ...
The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally-based usage characteristics over the chemical’s life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies. The National Exposure Research Laboratory′s (NERL′s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in suppor
Ram Pressure Stripping Made Easy: An Analytical Approach
Köppen, J.; Jáchym, P.; Taylor, R.; Palouš, J.
2018-06-01
The removal of gas by ram pressure stripping of galaxies is treated by a purely kinematic description. The solution has two asymptotic limits: if the duration of the ram pressure pulse exceeds the period of vertical oscillations perpendicular to the galactic plane, the commonly used quasi-static criterion of Gunn & Gott is obtained, which uses the maximum ram pressure that the galaxy has experienced along its orbit. For shorter pulses the outcome depends on the time-integrated ram pressure. This parameter pair fully describes the gas mass fraction that is stripped from a given galaxy. This approach closely reproduces results from SPH simulations. We show that typical galaxies follow a very tight relation in this parameter space corresponding to a pressure pulse length of about 300 Myr. Thus, the Gunn & Gott criterion provides a good description for galaxies in larger clusters. Applying the analytic description to a sample of 232 Virgo galaxies from the GoldMine database, we show that the ICM indeed provides the ram pressures needed to explain the deficiencies. We can also distinguish current and past strippers, including objects whose stripping state was unknown.
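The quasi-static Gunn & Gott limit mentioned above compares the peak ram pressure with the gravitational restoring force per unit area of the gas disc. A sketch with order-of-magnitude SI numbers (all values illustrative, not taken from the paper's Virgo sample):

```python
import math

G = 6.674e-11      # gravitational constant, SI
M_P = 1.673e-27    # proton mass, kg

def ram_pressure(n_icm, v):
    # Gunn & Gott criterion: p_ram = rho_ICM * v^2, with rho = n * m_p
    return n_icm * M_P * v * v

def restoring_pressure(sigma_star, sigma_gas):
    # gravitational restoring force per unit area: 2*pi*G*Sigma_star*Sigma_gas
    return 2 * math.pi * G * sigma_star * sigma_gas

# illustrative inputs: cluster-core ICM density n ~ 1e-3 cm^-3 = 1e3 m^-3,
# orbital speed ~1500 km/s, disc surface densities in kg/m^2
p_ram = ram_pressure(1e3, 1.5e6)
p_restore = restoring_pressure(0.2, 0.02)
stripped = p_ram > p_restore   # gas is removed where this inequality holds
```

With these cluster-core numbers the ram pressure exceeds the restoring pressure by a factor of a few, so the disc gas at this surface density would be stripped; at larger cluster radii `n_icm` and `v` drop and the inequality reverses.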
Linear response theory an analytic-algebraic approach
De Nittis, Giuseppe
2017-01-01
This book presents a modern and systematic approach to Linear Response Theory (LRT) by combining analytic and algebraic ideas. LRT is a tool to study systems that are driven out of equilibrium by external perturbations. In particular the reader is provided with a new and robust tool to implement LRT for a wide array of systems. The proposed formalism in fact applies to periodic and random systems in the discrete and continuum settings. After a short introduction describing the structure of the book, its aim and motivation, the basic elements of the theory are presented in chapter 2. The mathematical framework of the theory is outlined in chapters 3–5: the relevant von Neumann algebras, noncommutative $L^p$- and Sobolev spaces are introduced; their construction is then made explicit for common physical systems; the notion of isospectral perturbations and the associated dynamics are studied. Chapter 6 is dedicated to the main results, proofs of the Kubo and Kubo-Streda formulas. The book closes with a chapter about...
Energy Technology Data Exchange (ETDEWEB)
Kotelnikova, O.A.; Prudnikov, V.N. [Physical Faculty, Lomonosov State University, Department of Magnetism, Moscow (Russian Federation)]; Rudoy, Yu.G., E-mail: rudikar@mail.ru [Peoples' Friendship University of Russia, Department of Theoretical Physics, Moscow (Russian Federation)]
2015-06-01
The aim of this paper is to generalize the microscopic approach to the description of the magnetocaloric effect (MCE) started by Kokorina and Medvedev (E.E. Kokorina, M.V. Medvedev, Physica B 416 (2013) 29.) by applying it to an anisotropic ferromagnet of the “easy axis” type in two settings: with the external magnetic field parallel or perpendicular to the axis of easy magnetization. In the latter case a field-induced (or spin-reorientation) phase transition appears, which occurs at the critical value of the external magnetic field. This value is proportional to the exchange anisotropy constant at low temperatures, but with the rise of temperature it may be renormalized (as a rule, proportionally to the magnetization). We use the explicit form of the Hamiltonian of the anisotropic ferromagnet and apply the widely used random phase approximation (RPA) (known also as the Tyablikov approximation in the Green function method), which is more accurate than the well-known molecular field approximation (MFA). It is shown that in the first case the magnitude of the MCE is raised, whereas in the second case the MCE disappears due to compensation of the critical field renormalized with the magnetization.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron
Gaul, Konstantin; Berger, Robert
2017-07-01
A quasi-relativistic two-component approach for efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to selected heavy-element polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
An intrinsic robust rank-one-approximation approach for currency portfolio optimization
Directory of Open Access Journals (Sweden)
Hongxuan Huang
2018-03-01
Full Text Available A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, and which possesses the 3Vs (volume, variety, and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank-one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem is formulated as the basic model, in which there are two types of variables describing currency amounts in portfolios and the amount of each currency exchanged into another, respectively. Secondly, the rank-one approximation problem and its variants are also formulated to approximate a foreign exchange rate matrix, whose performance is measured by the Frobenius norm or the 2-norm of a residual matrix. The intrinsic robustness of the rank-one approximation is proved, together with summarizing properties of the basic ROA problem and designing a modified power method to search for the virtual exchange rates hidden in a foreign exchange rate matrix. Thirdly, a technique for decision-variable reduction is presented to attack the currency portfolio optimization. The reduced formulation is referred to as the ROA model, which keeps only variables describing currency amounts in portfolios. The optimal solution to the ROA model also induces a feasible solution to the basic model of the currency portfolio problem by integrating forex operations from the ROA model with practical forex rates. Finally, numerical examples are presented to verify the feasibility and efficiency of the intrinsic robust rank-one approximation approach. They also indicate that there exists an objective measure for evaluating and optimizing currency portfolios over time, which is related to the virtual standard currency and independent of any real currency selected specially for measurement.
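The power-method search for the "virtual exchange rates" can be sketched with plain power iteration (not the paper's modified variant). A consistent rate matrix R with R_ij = p_i/p_j is exactly rank one, so the dominant singular triple reconstructs it with near-zero Frobenius residual; the three currency values below are hypothetical:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(x):
    return sum(v * v for v in x) ** 0.5

def rank_one_approx(A, iters=100):
    # alternating power iteration for the dominant singular triple (s, u, v),
    # so that A is approximated by s * u * v^T
    n = len(A[0])
    v = [1.0] * n
    u = [0.0] * len(A)
    for _ in range(iters):
        u = matvec(A, v)
        nu = norm(u)
        u = [x / nu for x in u]
        v = matvec(transpose(A), u)
        nv = norm(v)
        v = [x / nv for x in v]
    s = norm(matvec(A, v))
    return s, u, v

# a consistent 3-currency rate matrix R_ij = p_i / p_j is exactly rank one;
# p holds hypothetical currency values ("virtual standard currency" units)
p = [1.0, 0.85, 110.0]
R = [[pi / pj for pj in p] for pi in p]
s, u, v = rank_one_approx(R)
# Frobenius norm of the residual R - s*u*v^T, ~0 for a consistent matrix
resid = sum((R[i][j] - s * u[i] * v[j]) ** 2
            for i in range(3) for j in range(3)) ** 0.5
```

Real forex matrices carry spreads and noise, so the residual measures how far the market is from a single set of virtual rates — the quantity the paper's robustness results control.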
Analytical and unitary approach in mesons electromagnetic form factor applications
International Nuclear Information System (INIS)
Liptaj, A.
2010-07-01
related to a very different type of experiment, a direct lifetime measurement, that was predominantly used to obtain the Γ(π⁰→γγ) value (unlike the case of our evaluation or of the PDG values for Γ(η→γγ) and Γ(η′→γγ)). We look forward to analyzing this issue and contributing to its solution. We finally study the behavior of the elastic pion EM form factor in the space-like domain. In this case we aimed to minimize the model dependence and based our approach only on the analytic properties of the form factor and the precise data in the time-like region. Our motivation was the data in the space-like region that, we believe, cannot be fully trusted. Further, we wanted to compare our prediction to other QCD-inspired models. We have shown that the prediction we obtain has only a small model dependence. By making a prediction in the time-like region we have also shown that our approach is self-consistent: the prediction describes well the data points that were initially used to obtain it. Eventually we observed that our prediction is close to the most recent result obtained in the framework of the AdS/CFT theory. The obtained results allow us to conclude that the unitary and analytic model, and the approach as such, are correct tools to study meson form factors, and we have shown that they have great potential to yield important results in several domains of particle physics. (author)
Learning Analytics for Online Discussions: Embedded and Extracted Approaches
Wise, Alyssa Friend; Zhao, Yuting; Hausknecht, Simone Nicole
2014-01-01
This paper describes an application of learning analytics that builds on an existing research program investigating how students contribute and attend to the messages of others in asynchronous online discussions. We first overview the E-Listening research program and then explain how this work was translated into analytics that students and…
A Progressive Approach to Teaching Analytics in the Marketing Curriculum
Liu, Yiyuan; Levin, Michael A.
2018-01-01
With the emerging use of analytics tools and methodologies in marketing, marketing educators have provided students training and experiences beyond the soft skills associated with understanding consumer behavior. Previous studies have only discussed how to apply analytics in course designs, tools, and related practices. However, there is a lack of…
Directory of Open Access Journals (Sweden)
Alsaedi Ahmed
2009-01-01
Full Text Available A generalized quasilinearization technique is developed to obtain a sequence of approximate solutions converging monotonically and quadratically to the unique solution of a boundary value problem involving a Duffing-type nonlinear integro-differential equation with integral boundary conditions. The order of convergence for the sequence of iterates is also established. It is found that the work presented in this paper not only produces new results but also yields several old results in certain limits.
International Nuclear Information System (INIS)
Delgado-Aparicio, L.; Hill, K.; Bitter, M.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.
2010-01-01
A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
Analytical Approach to Polarization Mode Dispersion in Linearly Spun Fiber with Birefringence
Directory of Open Access Journals (Sweden)
Vinod K. Mishra
2016-01-01
Full Text Available The behavior of Polarization Mode Dispersion (PMD in spun optical fiber is a topic of great interest in optical networking. Earlier work in this area has focused more on approximate or numerical solutions. In this paper we present analytical results for PMD in spun fibers with triangular spin profile function. It is found that in some parameter ranges the analytical results differ from the approximations.
Geovisual Analytics Approach to Exploring Public Political Discourse on Twitter
Directory of Open Access Journals (Sweden)
Jonathan K. Nelson
2015-03-01
Full Text Available We introduce spatial patterns of Tweets visualization (SPoTvis), a web-based geovisual analytics tool for exploring messages on Twitter (or “tweets”) collected about political discourse, and illustrate the potential of the approach with a case study focused on a set of linked political events in the United States. In October 2013, the U.S. Congressional debate over the allocation of funds to the Patient Protection and Affordable Care Act (commonly known as the ACA or “Obamacare”) culminated in a 16-day government shutdown. Meanwhile, the online health insurance marketplace related to the ACA was making a public debut hampered by performance and functionality problems. Messages on Twitter during this time period included sharply divided opinions about these events, with many people angry about the shutdown and others supporting the delay of the ACA implementation. SPoTvis supports the analysis of these events using an interactive map connected dynamically to a term polarity plot; through the SPoTvis interface, users can compare the dominant subthemes of tweets in any two states or congressional districts. Demographic attributes and political information on the display, coupled with functionality to show (dis)similar features, enrich users’ understandings of the units being compared. Relationships among places, politics and discourse on Twitter are quantified using statistical analyses and explored visually using SPoTvis. A two-part user study evaluates SPoTvis’ ability to enable insight discovery, as well as the tool’s design, functionality and applicability to other contexts.
Learning Analytics to Inform Teaching and Learning Approaches
Gray, Geraldine; McGuinness, Colm; Owende, Philip
2016-01-01
Learning analytics is an evolving discipline with capability for educational data analysis that enables better understanding of learning processes. This paper reports on learning analytics research at the Institute of Technology Blanchardstown, Ireland, that indicated measurable factors can identify first-year students at risk of failing based on data available prior to commencement of the first year of study. The study was conducted over three years, 2010 to 2012, on a student population from a range ...
Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.
Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole
2015-05-12
Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.
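The Δ-learning idea — predict an expensive target as a cheap baseline plus a learned correction — can be shown in a deliberately tiny one-dimensional sketch. The two functions and the single-feature least-squares fit below are stand-ins of my own; the paper itself uses quantum-chemistry baselines and kernel models on molecular descriptors:

```python
import math

def expensive(x):
    # stand-in for a high-level reference method (hypothetical)
    return math.sin(x) + 0.1 * x * x

def cheap(x):
    # stand-in for an approximate legacy method (hypothetical)
    return math.sin(x)

# train a correction Delta(x) ~ a*x^2 on a few expensive reference points;
# closed-form one-parameter least squares: a = sum(x^2 * Delta) / sum(x^4)
train = [-2.0, -1.0, 0.5, 1.5, 2.5]
num = sum(x * x * (expensive(x) - cheap(x)) for x in train)
den = sum(x ** 4 for x in train)
a = num / den

def delta_ml(x):
    # Delta-ML prediction: cheap baseline plus learned correction
    return cheap(x) + a * x * x

err = abs(delta_ml(3.0) - expensive(3.0))  # out-of-sample point
```

Here the baseline captures the hard nonlinear part (the sine) and the learned correction only has to model the smooth residual, which is why the fit transfers to points outside the training set — the same division of labor that lets the paper's models trained on 1–10% of the 134k molecules predict the rest.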
Multi-analytical approach for profiling some essential medical drugs
International Nuclear Information System (INIS)
Abubakar, M.
2015-07-01
Counterfeit and substandard pharmaceutical drugs are particularly rampant in developing countries due to inadequate analytical facilities and lack of regulatory oversight. The production of counterfeit or substandard drugs is broadly problematic. Underestimating it therefore leads to morbidity, mortality, drug resistance, introduction of toxic substances into the body and loss of confidence in health care systems. Medical drugs that are often counterfeited range from antimalarial drugs to antiretroviral drugs, with antibiotics being counterfeited the most. This research work, therefore, aims at contributing towards the establishment of measures/processes for distinguishing between fake and genuine amoxicillin drugs. This was achieved by the identification and quantification of the Active Pharmaceutical Ingredient (API) and the excipients in the drug formulation. The major analytical techniques employed for this research work were Instrumental Neutron Activation Analysis (INAA), X-ray Powder Diffraction (XRD), High Performance Liquid Chromatography (HPLC) and the in vitro Dissolution Test. The amoxicillin samples analyzed were the foreign generic amoxicillin purchased from Ernest Chemists pharmacy at East Legon, Accra, the National Health Insurance Scheme (NHIS) amoxicillin purchased at Fair Mile pharmacy at West Legon, Accra, and the Suspected Fake amoxicillin purchased at Okaishi market. For the establishment of a fingerprint for identification of substandard amoxicillin, INAA was used to qualitatively determine the short-lived radionuclides (excipients), which then facilitated the correct identification of the API and the excipient phases in each of the amoxicillin groups. The phases identified were Amoxicillin Trihydrate as the API, with Magnesium Stearate (hydrated) and Magnesium Stearate (anhydrous) as the excipients. For quality control purposes, the High Performance Liquid Chromatography approach and the in vitro Dissolution test were conducted on each of the groups of
International Nuclear Information System (INIS)
Okita, Taishi; Takagi, Toshiyuki
2010-01-01
We analytically derive the solutions for electromagnetic fields of electric current dipole moment, which is placed in the exterior of the spherical homogeneous conductor, and is pointed along the radial direction. The dipole moment is driven in the low frequency f = 1 kHz and high frequency f = 1 GHz regimes. The electrical properties of the conductor are appropriately chosen in each frequency. Electromagnetic fields are rigorously formulated at an arbitrary point in a spherical geometry, in which the magnetic vector potential is straightforwardly given by the Biot-Savart formula, and the scalar potential is expanded with the Legendre polynomials, taking into account the appropriate boundary conditions at the spherical surface of the conductor. The induced electric fields are numerically calculated along the several paths in the low and high frequency excitation. The self-consistent solutions obtained in this work will be of much importance in a wide region of electromagnetic induction problems. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
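The scalar-potential expansion above relies on Legendre polynomials; these are conveniently evaluated with the standard Bonnet recurrence. This is a generic numerical sketch, independent of the paper's specific boundary-value problem:

```python
def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
    # seeded with P_0 = 1 and P_1 = x
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# spot checks against the closed forms P_2 = (3x^2-1)/2, P_3 = (5x^3-3x)/2
print(legendre(2, 0.5))  # -0.125
print(legendre(3, 0.5))  # -0.4375
```

The recurrence is numerically stable for |x| ≤ 1, which covers the cos θ arguments that arise when matching boundary conditions on a spherical surface.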
International Nuclear Information System (INIS)
Denner, A.; Dittmaier, S.; Roth, M.; Wackeroth, D.
2000-01-01
We calculate the complete O(α) electroweak radiative corrections to e⁺e⁻ → WW → 4f in the electroweak Standard Model in the double-pole approximation. We give analytical results for the non-factorizable virtual corrections and express the factorizable virtual corrections in terms of the known corrections to on-shell W-pair production and W decay. The calculation of the bremsstrahlung corrections, i.e., the processes e⁺e⁻ → 4fγ in lowest order, is based on the full matrix elements. The matching of soft and collinear singularities between virtual and real corrections is done alternatively in two different ways, namely by using a subtraction method and by applying phase-space slicing. The O(α) corrections as well as higher-order initial-state photon radiation are implemented in the Monte Carlo generator RACOONWW. Numerical results of this program are presented for the W-pair-production cross section, angular and W-invariant-mass distributions at LEP2. We also discuss the intrinsic theoretical uncertainty of our approach.
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian networks, with possibility theory, in the form of fuzzy logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research developed the theoretical basis and benefits of the hybrid approach; what is lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis comparing the accuracy and performance of hybrid network theory with pure Bayesian and fuzzy systems and with an inexact Bayesian system approximated using particle filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed to quantify the benefit of hybrid inference over other fusion tools.
Analytic operator approach to fermionic lattice field theories
International Nuclear Information System (INIS)
Duncan, A.
1985-01-01
An analytic Lanczos algorithm previously used to extract the spectrum of bosonic lattice field theories in the continuum region is extended to theories with fermions. The method is illustrated in detail for the (1+1)-dimensional Gross-Neveu model. All parameters in the model (coupling, lattice size N, number of fermion flavors N_F, etc.) appear explicitly in analytic formulas for matrix elements of the Hamiltonian. The method is applied to the calculation of the collective field vacuum expectation value and the mass gap, and excellent agreement is obtained with explicit results available from the large-N_F solution of the model. (orig.)
Dariescu, Marina-Aura; Dariescu, Ciprian
2012-10-01
Working with a magnetic field that is periodic along Oz and decays in time, we deal with the Dirac-type equation characterizing fermions evolving in a magnetar's crust. For ultra-relativistic particles, one can employ a perturbative approach to compute the conserved current density components. If the magnetic field is frozen and the magnetar is treated as a stationary object, the fermion's wave function is expressed in terms of confluent Heun functions. Finally, we extend some previous investigations of the linearly independent fermionic mode solutions to Mathieu's equation, and we discuss the energy spectrum and the Mathieu characteristic exponent.
Energy Technology Data Exchange (ETDEWEB)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward-model simulation of the data in which systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m, w_0, \alpha$ and $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best \, fit} = -0.036\pm0.109$ (a $\sim11$% 1$\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055\pm0.068$ (a $\sim7$% 1$\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062\pm0.132$ (a $\sim14$% 1$\sigma$ uncertainty) using the Tripp metric. Overall we find a $17$% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
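For readers unfamiliar with ABC, the core loop is simple: draw parameters from the prior, forward-simulate data, and keep draws whose simulated data lie close to the observations under a chosen metric. A toy rejection-ABC sketch (a Gaussian stand-in for the SN Ia forward model; all names, priors, and tolerances here are illustrative, not superABC's):

```python
import random
import statistics

def forward_model(params, n=200, seed=None):
    """Toy forward simulation: Gaussian data with unknown mean.
    Stands in for the far more elaborate SN Ia light-curve simulation."""
    mu, sigma = params
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

def distance(sim, obs):
    """Toy metric comparing simulated and observed summaries
    (the Tripp / Light Curve metrics play this role in superABC)."""
    return (abs(statistics.mean(sim) - statistics.mean(obs))
            + abs(statistics.stdev(sim) - statistics.stdev(obs)))

def abc_rejection(obs, prior_draw, eps, n_draws=2000):
    """Keep parameter draws whose simulated data land within eps of obs."""
    accepted = []
    for i in range(n_draws):
        theta = prior_draw()
        if distance(forward_model(theta, n=len(obs), seed=i), obs) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(0)
obs = forward_model((1.0, 0.5), seed=123)   # "observed" data, true mu = 1.0
posterior = abc_rejection(obs, lambda: (rng.uniform(-2, 2), 0.5), eps=0.1)
post_mean = statistics.mean(t[0] for t in posterior)
```

The accepted draws approximate the posterior; shrinking `eps` trades acceptance rate for accuracy.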
‘Positioning’ in the conversation analytic approach
DEFF Research Database (Denmark)
Day, Dennis; Kjærbeck, Susanne
2013-01-01
of ‘positioning’ is used at all levels of analysis in the former, there appears to be no such analytical concept in EM/CA. The aim of this article is to inquire if EM/CA tools for the analysis of identities and relations in talk might be considered interesting from the perspective of positioning theory. To do so...
Governance Analytical Framework: an Approach to Health Systems ...
International Development Research Centre (IDRC) Digital Library (Canada)
Researchers will develop and test a methodology - Governance Analytical Framework - for analyzing and assessing the influence of governance pattern on health ... IDRC and the São Paulo Research Foundation (FAPESP) signed a scientific and technological cooperation agreement to support joint research projects in ...
Lyophilization: a useful approach to the automation of analytical processes?
de Castro, M. D. Luque; Izquierdo, A.
1990-01-01
An overview of the state-of-the-art in the use of lyophilization for the pretreatment of samples and standards prior to their storage and/or preconcentration is presented. The different analytical applications of this process are dealt with according to the type of material (reagent, standard, samples) and matrix involved.
An analytical approach to managing complex process problems
Energy Technology Data Exchange (ETDEWEB)
Ramstad, Kari; Andersen, Espen; Rohde, Hans Christian; Tydal, Trine
2006-03-15
The oil companies continuously invest time and money to ensure optimum regularity on their production facilities. High regularity increases profitability, reduces the workload on the offshore organisation and, most importantly, reduces discharge to air and sea. A number of mechanisms and tools are available for achieving high regularity. Most of these are related to maintenance, system integrity, well operations and process conditions. However, all of these tools are effective only if quick and proper analysis of fluids and deposits is carried out. In fact, analytical backup is a powerful tool for maintaining optimised oil production, and should as such be given high priority. The present Operator (Hydro Oil and Energy) and the Chemical Supplier (MI Production Chemicals) have developed a cooperation to ensure that analytical backup is provided efficiently to the offshore installations. The Operator's Research and Development (R and D) departments and the Chemical Supplier have complementary specialties in both personnel and equipment, and this is utilized to give the best possible service when required by production technologists or operations. In order for the Operator's Research departments, Health, Safety and Environment (HSE) departments and Operations to approve analytical work performed by the Chemical Supplier, a number of analytical tests are carried out following procedures agreed by both companies. In the present paper, three field case examples of analytical cooperation for managing process problems are presented: 1) deposition in a complex platform processing system; 2) contaminated production chemicals; 3) improved monitoring of scale inhibitor, suspended solids and ions. In each case the Research Centre, Operations and the Chemical Supplier have worked closely together to achieve fast solutions and Best Practice. (author) (tk)
Beam steering in superconducting quarter-wave resonators: An analytical approach
Directory of Open Access Journals (Sweden)
Alberto Facco
2011-07-01
Beam steering in superconducting quarter-wave resonators (QWRs), which is mainly caused by magnetic fields, was pointed out in 2001 in an early work [A. Facco and V. Zviagintsev, in Proceedings of the Particle Accelerator Conference, Chicago, IL, 2001 (IEEE, New York, 2001), p. 1095], where an analytical formula describing it was proposed and the influence of cavity geometry was discussed. Since then, the importance of this effect has been recognized and effective correction techniques have been found [P. N. Ostroumov and K. W. Shepard, Phys. Rev. ST Accel. Beams 4, 110101 (2001)]. This phenomenon was further studied in the following years, mainly with numerical methods. In this paper we go back to the original approach and, using well established approximations, derive a simple analytical expression for QWR steering which includes correction methods and reproduces the data starting from a few calculable geometrical constants that characterize every cavity. This expression, of the type of the Panofsky equation, can be a useful tool in the design of superconducting quarter-wave resonators and in the definition of their limits of application with different beams.
A novel approach for choosing summary statistics in approximate Bayesian computation.
Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas
2012-11-01
The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10^-4 and 3.5 × 10^-3 per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.
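The idea of selecting summary statistics by L2-boosting can be sketched as follows: regress the parameter on candidate statistics from simulated (parameter, statistics) pairs using componentwise L2-boosting, and read off which statistics the booster keeps selecting. This is an illustrative toy, not the authors' implementation:

```python
import random

def l2_boost_selection_counts(X, y, n_rounds=200, nu=0.1):
    """Componentwise L2-boosting: each round fits one feature to the
    current residuals and takes a small step nu toward that fit.
    Selection counts indicate which candidate statistics are informative."""
    n, p = len(X), len(X[0])
    # center response and features so no intercept is needed
    ybar = sum(y) / n
    cols = []
    for j in range(p):
        cj = [row[j] for row in X]
        mj = sum(cj) / n
        cols.append([v - mj for v in cj])
    resid = [v - ybar for v in y]
    counts = [0] * p
    for _ in range(n_rounds):
        best = None  # (feature index, coefficient, squared error)
        for j in range(p):
            sxx = sum(v * v for v in cols[j])
            if sxx == 0.0:
                continue
            b = sum(v * r for v, r in zip(cols[j], resid)) / sxx
            err = sum((r - b * v) ** 2 for v, r in zip(cols[j], resid))
            if best is None or err < best[2]:
                best = (j, b, err)
        j, b, _ = best
        counts[j] += 1
        resid = [r - nu * b * v for v, r in zip(cols[j], resid)]
    return counts

# Simulated ABC table: parameter theta, candidate summaries
# s1 (informative: theta plus small noise) and s2 (pure noise).
rng = random.Random(1)
thetas = [rng.uniform(0.0, 1.0) for _ in range(300)]
X = [[t + rng.gauss(0.0, 0.05), rng.gauss(0.0, 1.0)] for t in thetas]
counts = l2_boost_selection_counts(X, thetas)
```

With this setup the informative statistic should be selected far more often than the noise statistic, which is the signal one would use to keep it in the ABC distance.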
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant sources of uncertainty in estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.
Heavy element stable isotope ratios. Analytical approaches and applications
International Nuclear Information System (INIS)
Tanimizu, Masaharu; Sohrin, Yoshiki; Hirata, Takafumi
2013-01-01
Continuous developments in inorganic mass spectrometry techniques, including the combination of an inductively coupled plasma ion source with a magnetic sector-based mass spectrometer equipped with a multiple-collector array, have revolutionized the precision of isotope ratio measurements, and applications of inorganic mass spectrometry to biochemistry, geochemistry, and marine chemistry are beginning to appear on the horizon. A series of pioneering studies has revealed that natural stable isotope fractionation of many elements heavier than S (e.g., Fe, Cu, Zn, Sr, Ce, Nd, Mo, Cd, W, Tl, and U) is common on Earth, and it is now widely recognized that most physicochemical reactions and biochemical processes induce mass-dependent isotope fractionation. The variations in isotope ratios of the heavy elements can provide new insights into past and present biochemical and geochemical processes. To achieve this, the analytical community is actively solving problems such as spectral interference, mass discrimination drift, chemical separation and purification, and reduction of the contamination of analytes. This article describes data calibration and standardization protocols that allow interlaboratory comparison and maintain the traceability of data, together with the basic principles of isotope fractionation in nature and high-selectivity, high-yield chemical separation and purification techniques for stable isotope studies.
A conceptual approach to approximate tree root architecture in infinite slope models
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties, such as species and age, and extrinsic factors, like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids can approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimensions of the latter define the shape of a taproot-system or a shallow-root-system respectively; ii) elliptic
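The geometric-solid approximation can be made concrete with the volumes of the two solids mentioned, a cylinder and a half-ellipsoid; the dimensions below are hypothetical, chosen only to illustrate the taproot vs. shallow-root distinction:

```python
import math

def cylinder_volume(r, h):
    """Rooted soil volume for a cylindrical root envelope (radius r, depth h)."""
    return math.pi * r ** 2 * h

def half_ellipsoid_volume(a, b, c):
    """Half of an ellipsoid with horizontal semi-axes a, b and depth c,
    e.g. for a heart-shaped root system."""
    return (2.0 / 3.0) * math.pi * a * b * c

# Hypothetical envelopes (metres): the same cylinder formula yields a
# taproot system when h > r and a shallow-root system when r > h.
taproot = cylinder_volume(r=1.5, h=3.0)   # deep, narrow
shallow = cylinder_volume(r=3.0, h=0.8)   # wide, flat
heart = half_ellipsoid_volume(3.0, 3.0, 2.0)
```

In a slope-stability raster model, each cell would then be flagged as rooted or unrooted depending on whether it falls inside such a solid.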
ANALYTIC CAUSATIVES IN JAVANESE: A LEXICAL- FUNCTIONAL APPROACH
Directory of Open Access Journals (Sweden)
Agus Subiyanto
2014-01-01
Analytic causatives are the type of causatives formed by separate predicates expressing the cause and the effect; that is, the causing notion is realized by a word separate from the word denoting the caused activity. This paper aims to discuss the forms and syntactic structure of analytic causatives in Javanese. To discuss the syntactic structure, the theory of lexical functional grammar (LFG) is employed. The data used in this study are from the 'ngoko' level of Javanese of the Surakarta dialect. By using a negation marker and modals as the syntactic operators to test mono- or bi-clausality of analytic causatives, the writer found that analytic causatives in Javanese form biclausal constructions. These constructions have an X-COMP structure, in that the SUBJ of the second verb is controlled by the OBJ of the causative verb (Ngawe 'make'). In terms of the constituent structure, analytic causatives have two kinds of structures, which are V-cause OBJ X-COMP and V-cause X-COMP OBJ. Kausatif analitik adalah tipe kausatif yang dibentuk oleh dua predikat atau dua kata terpisah untuk mengungkapkan makna sebab dan akibat, yakni makna sebab direalisasikan oleh kata yang berbeda dengan kata yang menyatakan makna akibat. Tulisan ini membahas bentuk dan struktur sintaksis kausatif analitik dalam bahasa Jawa. Untuk menjelaskan struktur sintaksis digunakan teori Tata Bahasa Leksikal Fungsional. Data yang digunakan dalam penelitian ini adalah bahasa Jawa dialek Surakarta ragam ngoko. Dengan menggunakan alat uji pemarkah negasi dan penggunaaan modalitas, penulis menemukan bahwa kausatif analitik dalam bahasa Jawa membentuk struktur biklausa. Konstruksi ini memiliki struktur X
Thorsland, Martin N.; Novak, Joseph D.
1974-01-01
Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…
Analytical approach to the investigation of Rayleigh-Taylor structures of the equatorial F region
International Nuclear Information System (INIS)
Komarov, V.N.; Sazonov, S.V.
1991-01-01
On the basis of the approximation of strong vertical extension, the nonlinear dynamics of Rayleigh-Taylor structures in the equatorial F region is studied analytically. The successive approximation method proposed herein is valid for structures having longitudinal symmetry. Using this method, we describe a mushroom-shaped bubble with a shock-wave profile in its head part. The nonlinearity leads to bubble formation in a blow-up regime, at the same time limiting the growth of positive disturbances
An analytical approach to the assessment of transuranics transmutation
International Nuclear Information System (INIS)
Piera, M.; Sanz, J.; Perlado, M.; Minguez, E.; Martinez-Val, J.M.
1999-01-01
An analytical study of the burnup of Pu isotopes in different transmutator prototypes is presented in this paper. Each prototype is identified by a set of averaged cross sections, i.e., characterized by its neutron spectrum. Three types of systems have been considered: a fast-spectrum reactor, which can be associated with molten-lead systems; a fully thermalized reactor; and an epithermal reactor with a strong contribution from resonance reactions. The study focuses on the burnup of Pu-239, Pu-240 and Pu-241 because they account (directly or indirectly) for the highest contribution to long-term radiotoxicity, as already pointed out. Pu-239 also raises significant concerns about long-term proliferation risks. Therefore, elimination of these nuclei is the most important priority in the framework of reducing the long-term nuclear waste risk. (author)
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal Jacobian when the stretching factor was increased, respectively. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
Fymat, A. L.; Smith, C. B.
1979-01-01
It is shown that the inverse analytical solutions, provided separately by Fymat and Box-McKellar, for reconstructing particle size distributions from remote spectral transmission measurements under the anomalous diffraction approximation can be derived using a cosine and a sine transform, respectively. Sufficient conditions of validity of the two formulas are established. Their comparison shows that the former solution is preferable to the latter in that it requires less a priori information (knowledge of the particle number density is not needed) and has wider applicability. For gamma-type distributions, and either a real or a complex refractive index, explicit expressions are provided for retrieving the distribution parameters; such expressions are, interestingly, proportional to the geometric area of the polydispersion.
Approximate Receding Horizon Approach for Markov Decision Processes: Average Reward Case
National Research Council Canada - National Science Library
Chang, Hyeong S; Marcus, Steven I
2002-01-01
...) with countable state space, finite action space, and bounded rewards that uses an approximate solution of a fixed finite-horizon sub-MDP of a given infinite-horizon MDP to create a stationary policy...
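The idea described above - solve a fixed finite-horizon sub-MDP and reuse its greedy first-stage action as a stationary policy - can be sketched as follows. This is a discounted toy version for illustration; the paper treats the average-reward case, and all states, actions, and numbers below are made up:

```python
def receding_horizon_policy(states, actions, P, R, horizon, gamma=0.95):
    """Backward induction over a finite horizon, then act greedily.
    P[s][a] is a dict {next_state: probability}; R[s][a] is the reward."""
    V = {s: 0.0 for s in states}
    for _ in range(horizon):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2]
                                          for s2, p in P[s][a].items())
                    for a in actions)
             for s in states}
    # stationary policy: the greedy first-stage action, reused forever
    return {s: max(actions,
                   key=lambda a, s=s: R[s][a] + gamma * sum(
                       p * V[s2] for s2, p in P[s][a].items()))
            for s in states}

# Two-state toy MDP: "go" moves from the unrewarding to the rewarding state.
states, actions = ["low", "high"], ["stay", "go"]
P = {"low":  {"stay": {"low": 1.0},  "go": {"high": 0.9, "low": 0.1}},
     "high": {"stay": {"high": 1.0}, "go": {"low": 1.0}}}
R = {"low":  {"stay": 0.0, "go": -0.1},
     "high": {"stay": 1.0, "go": 0.0}}
policy = receding_horizon_policy(states, actions, P, R, horizon=10)
```

Here the finite-horizon lookahead is enough to make the policy pay the small cost of "go" in the low state in order to reach the rewarding high state.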
DEFF Research Database (Denmark)
Popov, Vladislav; Lavrinenko, Andrei; Novitsky, Andrey
2016-01-01
that the zeroth-, first-, and second-order approximations of the operator effective medium theory correspond to electric dipoles, chirality, and magnetic dipoles plus electric quadrupoles, respectively. We discover that the spatially dispersive bianisotropic effective medium obtained in the second...
Energy Technology Data Exchange (ETDEWEB)
Xing, Zhi-zhong [Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China); School of Physical Sciences, University of Chinese Academy of Sciences,Beijing 100049 (China); Center for High Energy Physics, Peking University,Beijing 100080 (China); Zhu, Jing-yu [Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)
2016-07-04
Given an accelerator-based neutrino experiment with beam energy E ≲ 1 GeV, we expand the probabilities of ν_μ → ν_e and ν̄_μ → ν̄_e oscillations in matter in terms of two small quantities Δ_21/Δ_31 and A/Δ_31, where Δ_21 ≡ m_2² − m_1² and Δ_31 ≡ m_3² − m_1² are the neutrino mass-squared differences, and A measures the strength of terrestrial matter effects. Our analytical approximations are numerically more accurate than those made by Freund in this energy region, and thus they are particularly applicable for the study of leptonic CP violation in the low-energy MOMENT, ESSνSM and T2K oscillation experiments. As a by-product, the new analytical approximations help us to easily understand why the matter-corrected Jarlskog parameter J̃ peaks at the resonance energy E_* ≃ 0.14 GeV (or 0.12 GeV) for the normal (or inverted) neutrino mass hierarchy, and how the three Dirac unitarity triangles are deformed due to the terrestrial matter contamination. We also affirm that a medium-baseline neutrino oscillation experiment with the beam energy E lying in the E_* ≲ E ≲ 2E_* range is capable of exploring leptonic CP violation with little matter-induced suppression.
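For orientation, the Freund-type expansion that the paper improves upon has the following schematic leading-order structure, with α ≡ Δ_21/Δ_31, Â ≡ A/Δ_31 and Δ ≡ Δ_31 L/(4E). Sign conventions for δ and Â vary between references; this is a standard textbook form, not the paper's refined result:

```latex
P(\nu_\mu \to \nu_e) \;\approx\;
  \sin^2\theta_{23}\,\sin^2 2\theta_{13}\,
    \frac{\sin^2\!\big[(1-\hat{A})\Delta\big]}{(1-\hat{A})^2}
\;+\; \alpha^2 \cos^2\theta_{23}\,\sin^2 2\theta_{12}\,
    \frac{\sin^2(\hat{A}\Delta)}{\hat{A}^2}
\;+\; \alpha\,\sin 2\theta_{12}\,\sin 2\theta_{13}\,\sin 2\theta_{23}\,
    \cos(\Delta - \delta)\,
    \frac{\sin(\hat{A}\Delta)}{\hat{A}}\,
    \frac{\sin\!\big[(1-\hat{A})\Delta\big]}{1-\hat{A}}
```

For antineutrinos one replaces δ → −δ and Â → −Â, which is what makes such expansions useful for isolating leptonic CP violation.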
Analytical approaches for arsenic determination in air: A critical review
Energy Technology Data Exchange (ETDEWEB)
Sánchez-Rodas, Daniel, E-mail: rodas@uhu.es [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain); Department of Chemistry and Materials Science, University of Huelva, 21071 Huelva (Spain); Sánchez de la Campa, Ana M. [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain); Department of Mining, Mechanic and Energetic Engineering, ETSI, University of Huelva, 21071 Huelva (Spain); Alsioufi, Louay [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain)
2015-10-22
This review describes the different steps involved in the determination of arsenic in air, considering both the particulate matter (PM) and the gaseous phase. The review focuses on sampling, sample preparation and instrumental analytical techniques for both total arsenic determination and speciation analysis. The origin, concentration and legislation concerning arsenic in ambient air are also considered. The review describes the procedures for sample collection of total suspended particles (TSP) or particles with a certain diameter expressed in microns (e.g. PM10 and PM2.5), and the collection of the gaseous phase containing gaseous arsenic species. Sample digestion of the collecting media for PM is described, indicating proposed and established procedures that use acids or mixtures of acids aided by different heating procedures. The detection techniques are summarized and compared (ICP-MS, ICP-OES and ET-AAS), as well as those techniques capable of direct analysis of the solid sample (PIXE, INAA and XRF). Studies on speciation in PM are also discussed, considering the initial works that employed a cold trap in combination with atomic spectroscopy detectors, and the more recent studies based on chromatography (GC or HPLC) combined with atomic or mass detectors (AFS, ICP-MS and MS). Future trends and challenges in the determination of As in air are also addressed. - Highlights: • Review of arsenic in air. • Sampling, sample treatment and analysis of arsenic in the particulate matter and gaseous phases. • Total arsenic determination and arsenic speciation analysis.
A tiered analytical approach for investigating poor quality emergency contraceptives.
Directory of Open Access Journals (Sweden)
María Eugenia Monge
Reproductive health has been deleteriously affected by poor quality medicines. Emergency contraceptive pills (ECPs) are an important birth control method that women can use after unprotected coitus to reduce the risk of pregnancy. In response to the detection of poor quality ECPs commercially available in the Peruvian market, we developed a tiered multi-platform analytical strategy. In a survey to assess ECP medicine quality in Peru, 7 out of 25 different batches showed inadequate release of levonorgestrel by dissolution testing or improper amounts of active ingredient. One batch was found to contain a wrong active ingredient, with no detectable levonorgestrel. By combining ultrahigh performance liquid chromatography-ion mobility spectrometry-mass spectrometry (UHPLC-IMS-MS) and direct analysis in real time MS (DART-MS), the unknown compound was identified as the antibiotic sulfamethoxazole. Quantitation by UHPLC-triple quadrupole tandem MS (QqQ-MS/MS) indicated that the wrong ingredient was present in the ECP sample at levels which could have significant physiological effects. Further chemical characterization of the poor quality ECP samples included the identification of the excipients by 2D diffusion-ordered nuclear magnetic resonance spectroscopy (DOSY 1H NMR), indicating the presence of lactose and magnesium stearate.
GANViz: A Visual Analytics Approach to Understand the Adversarial Game.
Wang, Junpeng; Gou, Liang; Yang, Hao; Shen, Han-Wei
2018-06-01
Generative models hold promise for learning data representations in an unsupervised fashion with deep learning. Generative Adversarial Nets (GAN) is one of the most popular frameworks in this arena. Despite the promising results from different types of GANs, in-depth understanding of the adversarial training process of the models remains a challenge for domain experts. The complexity and the potentially long training process of the models make them hard to evaluate, interpret, and optimize. In this work, guided by practical needs from domain experts, we design and develop a visual analytics system, GANViz, aiming to help experts understand the adversarial process of GANs in depth. Specifically, GANViz evaluates the model performance of the two subnetworks of GANs, provides evidence and interpretations of the models' performance, and empowers comparative analysis with that evidence. Through case studies with two real-world datasets, we demonstrate that GANViz can provide useful insight to help domain experts understand, interpret, evaluate, and potentially improve GAN models.
Analytical and statistical approaches in the characterization of synthetic polymers
Dimzon, I.K.
2015-01-01
Polymers vary in terms of the monomer/s used; the number, distribution and type of linkage of monomers per molecule; and the side chains and end groups attached. Given this diversity, traditional single-technique approaches to characterization often give limited and inadequate information about a
Radio drama adaptations: an approach towards an analytical methodology
Huwiler, E.
2010-01-01
This article establishes a methodology with which radio drama pieces can be analysed. It thereby integrates all the features the art form has to offer: voices, music and noises, but also technical features like cutting and mixing that contribute to the narrative being told. This approach emphasizes the
Analytical methods for prefiltering of close approaches between ...
African Journals Online (AJOL)
2010-02-10
Feb 10, 2010 ... find out the close approach for all objects with simulations. ... the operational satellite and other orbiting objects. ... Recently, space scientists all over the Globe are giving much ... avoidances (Alarcon-Rodriguez et al., 2004, Gronchi, 2005 and Choi et al., 2009) for the stability of future Low Earth Orbit (LEO).
International Nuclear Information System (INIS)
Wan, X; Xu, G H; Tao, T F; Zhang, Q; Tse, P W
2016-01-01
Most previous studies on nonlinear Lamb waves use mode pairs that satisfy strict phase velocity matching and non-zero power flux criteria. However, this approach has several limitations. First, strict phase velocity matching does not hold over the whole frequency bandwidth; second, the excited center frequency is not always exactly equal to the true phase-velocity-matching frequency; third, suitable mode pairs are isolated and quite limited in number; fourth, exciting a single desired primary mode is extremely difficult in practice, and the received signal is difficult to process and interpret. Little attention has been paid to overcoming these shortcomings. In this paper, nonlinear S0-mode Lamb waves in the low-frequency range, which satisfy approximate phase velocity matching, are proposed to overcome these limitations. In analytical studies, the secondary amplitudes as a function of propagation distance and fundamental frequency, the maximum cumulative propagation distance (MCPD) as a function of fundamental frequency, and the maximum linear cumulative propagation distance (MLCPD) obtained by linear regression analysis are investigated. Based on the analytical results, approximate phase velocity matching is quantitatively characterized as a relative phase velocity deviation of less than 1%. Numerical studies are also conducted using tone bursts as the excitation signal. The influences of center frequency and frequency bandwidth on the secondary amplitudes and MCPD are investigated. The S1–S2 mode pair with the fundamental frequency at 1.8 MHz and the primary S0 mode at center frequencies of 100 and 200 kHz are used, respectively, to calculate the ratios of the nonlinear parameter of Al 6061-T6 to that of Al 7075-T651. The close agreement of the computed ratios with the actual value verifies the effectiveness of nonlinear S0-mode Lamb waves satisfying approximate phase velocity matching for characterizing material nonlinearity. Moreover, the ratios derived
Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas
2014-01-01
Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
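The aEIF model summarized above consists of two coupled ODEs plus a spike-and-reset rule; a minimal forward-Euler sketch, with illustrative parameter values (not ones fitted to any recording), is:

```python
import math

def simulate_aeif(I=300.0, T=200.0, dt=0.01):
    """Forward-Euler integration of the aEIF neuron; parameters are illustrative."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0  # pF, nS, mV, mV, mV
    a, tau_w, b = 2.0, 30.0, 60.0                       # nS, ms, pA
    Vr, Vpeak = -58.0, 0.0                              # reset and cutoff, mV
    V, w, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        # C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        # tau_w dw/dt = a(V-EL) - w
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:              # spike: reset V and increment adaptation current
            V, w = Vr, w + b
            spikes.append(step * dt)
    return spikes

rate = len(simulate_aeif()) * 1000.0 / 200.0  # firing rate in Hz over a 200 ms run
```

The closed-form expressions derived in the paper approximate exactly this kind of simulated steady-state rate once Gaussian synaptic noise is added to the input current.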
Analytic equation of state for FCC C60 solid based on analytic mean-field potential approach
International Nuclear Information System (INIS)
Sun Jiuxun
2006-01-01
The analytic mean-field potential approach (AMFP) was applied to the FCC C60 solid. For the intermolecular forces the Girifalco potential has been utilized. The analytic expressions for the Helmholtz free energy, internal energy and equation of state have been derived. The numerical results for the thermodynamic quantities are compared with molecular dynamics (MD) simulations and the unsymmetrized self-consistent field approach (CUSF) in the literature. It is shown that our AMFP results are in good agreement with the MD data at both low and high temperatures. The results of CUSF are in accordance with the AMFP at low temperature, but at high temperature the difference becomes prominent. In particular, the AMFP predicts that the FCC C60 solid is stable up to 2202 K, the spinodal temperature, in good agreement with 2320 K from the MD simulation, whereas the CUSF gives just 1916 K, a temperature evidently lower than the MD value. The AMFP qualifies as a useful approach that can reasonably account for anharmonic effects at high temperature.
Spacecraft formation control using analytical finite-duration approaches
Ben Larbi, Mohamed Khalil; Stoll, Enrico
2018-03-01
This paper derives a control concept for formation flight (FF) applications assuming circular reference orbits. The paper focuses on a general impulsive control concept for FF, which is then extended to the more realistic case of non-impulsive thrust maneuvers. The control concept uses a description of the FF in relative orbital elements (ROE) instead of the classical Cartesian description, since the ROE provide direct insight into key aspects of the relative motion and are particularly suitable for relative orbit control purposes and collision avoidance analysis. Although Gauss' variational equations were first derived to offer a mathematical tool for processing orbit perturbations, they are suitable for several different applications. If the perturbation acceleration is due to a control thrust, Gauss' variational equations show the effect of such a control thrust on the Keplerian orbital elements. Integrating Gauss' variational equations yields a direct relation between velocity increments in the local vertical local horizontal frame and the subsequent change of Keplerian orbital elements. For proximity operations, these equations can be generalized from describing the motion of a single spacecraft to describing the relative motion of two spacecraft. This is shown for impulsive and finite-duration maneuvers. Based on that, an analytical tool to estimate the error induced by impulsive maneuver planning is presented. The resulting control schemes are simple and effective and thus also suitable for on-board implementation. Simulations show that the proposed concept improves the timing of the thrust maneuver executions and thus reduces the residual error of the formation control.
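For a near-circular orbit, integrating Gauss' variational equations over an impulsive tangential burn gives the standard first-order relation Δa = 2Δv_t/n, with n the mean motion; a short sketch (the altitude and burn size below are illustrative, not from the paper):

```python
import math

MU_EARTH = 3.986004418e14  # gravitational parameter of Earth, m^3/s^2

def delta_sma_tangential(a, dv_t):
    """Impulsive Gauss variational equation for a near-circular orbit:
    da = 2*dv_t/n, with n = sqrt(mu/a^3) the mean motion."""
    n = math.sqrt(MU_EARTH / a**3)  # mean motion, rad/s
    return 2.0 * dv_t / n

# example: a 0.1 m/s along-track burn in a 700 km circular LEO
a0 = 6378.137e3 + 700e3             # semi-major axis, m
da = delta_sma_tangential(a0, 0.1)  # resulting change in semi-major axis, m
```

Applying the same integration to the remaining elements yields a velocity-increment-to-element map of the kind the paper generalizes to relative orbital elements.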
Radiative heat transfer in honeycomb structures-New simple analytical and numerical approaches
International Nuclear Information System (INIS)
Baillis, D; Coquard, R; Randrianalisoa, J
2012-01-01
Porous honeycomb structures are attractive because they combine high thermal insulation, low density and sufficient mechanical resistance. However, their thermal properties remain relatively unexplored. The aim of this study is to model the combined heat transfer, and especially the radiative heat transfer, through this type of anisotropic porous material. The equivalent radiative properties of the material are determined using ray-tracing procedures inside the honeycomb porous structure. From the computational ray-tracing results, simple new analytical relations have been deduced. These analytical relations give the radiative properties, namely the extinction, absorption and scattering coefficients and the phase function, as functions of the cell dimensions and the optical properties of the cell walls. The radiative properties of the honeycomb material strongly depend on the direction of propagation. From the computed radiative properties, we have estimated the radiative heat flux passing through slabs of honeycomb core materials subjected to a 1-D temperature difference between a hot and a cold plate. We have compared numerical results obtained with the Discrete Ordinate Method with analytical results obtained from the Rosseland-Deissler approximation. This approximation is usually used for isotropic materials; we have extended it to anisotropic honeycomb materials by proposing a mean of the Rosseland extinction coefficient over incident directions. The results tend to show that the extended Rosseland-Deissler approximation can be used as a first approximation: deviations in the radiative conductivity between the Rosseland-Deissler approximation and the Discrete Ordinate Method are lower than 6.7% for all the cases studied.
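The Rosseland (diffusion) limit used above reduces radiative transfer to an equivalent conductivity; a minimal sketch of that standard formula (the extinction coefficient below is illustrative, not a honeycomb value from the study):

```python
SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def rosseland_conductivity(T, beta_r, n_index=1.0):
    """Radiative conductivity in the optically thick (Rosseland) limit:
    k_r = 16 * n^2 * sigma * T^3 / (3 * beta_R)."""
    return 16.0 * n_index**2 * SIGMA_SB * T**3 / (3.0 * beta_r)

# T = 300 K, assumed Rosseland extinction coefficient beta_R = 500 1/m
k_r = rosseland_conductivity(T=300.0, beta_r=500.0)  # W m^-1 K^-1
```

For an anisotropic honeycomb, the extension described in the abstract amounts to replacing beta_r with a mean over incident directions.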
A new analytical approach for monitoring microplastics in marine sediments
International Nuclear Information System (INIS)
Nuelle, Marie-Theres; Dekiff, Jens H.; Remy, Dominique; Fries, Elke
2014-01-01
A two-step method was developed to extract microplastics from sediments. First, 1 kg sediments was pre-extracted using the air-induced overflow (AIO) method, based on fluidisation in a sodium chloride (NaCl) solution. The original sediment mass was reduced by up to 80%. As a consequence, it was possible to reduce the volume of sodium iodide (NaI) solution used for the subsequent flotation step. Recoveries of the whole procedure for polyethylene, polypropylene (PP), polyvinyl chloride (PVC), polyethylene terephthalate (PET), polystyrene and polyurethane with sizes of approximately 1 mm were between 91 and 99%. After being stored for one week in a 35% H2O2 solution, 92% of selected biogenic material had dissolved completely or had lost its colour, whereas the tested polymers were resistant. Microplastics were extracted from three sediment samples collected from the North Sea island Norderney. Using pyrolysis gas chromatography/mass spectrometry, these microplastics were identified as PP, PVC and PET. -- A two-step extraction technique enabled microplastics to be extracted from sediments at an affordable price by decreasing the sediment mass in the first step
A Volterra series approach to the approximation of stochastic nonlinear dynamics
Wouw, van de N.; Nijmeijer, H.; Campen, van D.H.
2002-01-01
A response approximation method for stochastically excited, nonlinear, dynamic systems is presented. Herein, the output of the nonlinear system is approximated by a finite-order Volterra series. The original nonlinear system is replaced by a bilinear system in order to determine the kernels of this
Trombetti, Tomaso
This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. The power spectral density of the system input and the cross power spectral
Co-evolving prisoner's dilemma: Performance indicators and analytic approaches
Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.
2017-02-01
Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), the non-cooperative strategy has an edge and dominates in a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), the cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge, and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to include more spatial correlation into a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
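The two performance indicators above differ only in a normalization, yet that normalization changes which strategy wins; a toy sketch (the payoff values are illustrative, not those used in the paper):

```python
def payoff(s_self, s_opp, R=1.0, S=0.0, T=1.3, P=0.1):
    """Pairwise prisoner's-dilemma payoff to s_self ('C' or 'D')."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(s_self, s_opp)]

def indicator(strategy, neighbors, per_opponent):
    """Case A: payoff per opponent (average); Case B: total payoff over all opponents."""
    total = sum(payoff(strategy, s) for s in neighbors)
    return total / len(neighbors) if per_opponent else total

# a cooperator with five cooperating neighbors vs. a defector with one:
coop_A = indicator('C', ['C'] * 5, per_opponent=True)    # 1.0 -> loses in Case A
defe_A = indicator('D', ['C'], per_opponent=True)        # 1.3
coop_B = indicator('C', ['C'] * 5, per_opponent=False)   # 5.0 -> wins in Case B
defe_B = indicator('D', ['C'], per_opponent=False)       # 1.3
```

Under Case A the defector's per-opponent payoff beats the cooperator's; under Case B the cooperator's many links reverse the ordering, which mirrors why the two indicators produce such different cooperative levels.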
An analytics approach to designing patient centered medical homes.
Ajorlou, Saeede; Shams, Issac; Yang, Kai
2015-03-01
Recently the patient centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such balance between clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for receiving the appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, by using a hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the prediction power of the model.
Analytical and computational approaches to define the Aspergillus niger secretome
Energy Technology Data Exchange (ETDEWEB)
Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.
2009-03-01
We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.
Analytic and probabilistic approaches to dynamics in negative curvature
Peigné, Marc; Sambusetti, Andrea
2014-01-01
The work of E. Hopf and G.A. Hedlund, in the 1930s, on transitivity and ergodicity of the geodesic flow for hyperbolic surfaces, marked the beginning of the investigation of the statistical properties and stochastic behavior of the flow. The first central limit theorem for the geodesic flow was proved in the 1960s by Y. Sinai for compact hyperbolic manifolds. Since then, strong relationships have been found between the fields of ergodic theory, analysis, and geometry. Different approaches and new tools have been developed to study the geodesic flow, including measure theory, thermodynamic formalism, transfer operators, Laplace operators, and Brownian motion. All these different points of view have led to a deep understanding of more general dynamical systems, in particular the so-called Anosov systems, with applications to geometric problems such as counting, equirepartition, mixing, and recurrence properties of the orbits. This book comprises two independent texts that provide a self-contained introduction t...
Nuclear emergency response planning based on participatory decision analytic approaches
International Nuclear Information System (INIS)
Sinkko, K.
2004-10-01
This work was undertaken in order to develop methods and techniques for evaluating protective action strategies systematically and comprehensively in the case of a nuclear or radiation emergency. This was done in a way that the concerns and issues of all key players related to decisions on protective actions could be aggregated into decision-making transparently and in an equal manner. An approach called the facilitated workshop, based on the theory of Decision Analysis, was tailored and tested in the planning of actions to be taken. The work builds on case studies in which it was assumed that a hypothetical accident in a nuclear power plant had led to a release of considerable amounts of radionuclides and that therefore different types of protective actions should be considered. Altogether six workshops were organised in which all key players were represented, i.e., the authorities, expert organisations, industry and agricultural producers. The participants were those responsible for preparing advice or presenting matters to those responsible for the formal decision-making. Many preparatory meetings were held with various experts to prepare information for the workshops. It was considered essential that the set-up strictly follow the decision-making process to which the key players are accustomed. Key players, or stakeholders, comprise responsible administrators and organisations, politicians, representatives of the citizens affected, and other persons who will and are likely to take part in decision-making in nuclear emergencies. The realistic nature and the disciplined process of a facilitated workshop, together with the commitment to decision-making, yielded insight into many radiation protection issues. The objectives and attributes considered in a decision on protective actions were discussed on many occasions and were defined for the different accident scenarios; in the workshops, intervention levels were derived according to justification and optimisation
Analytic Approach to Resolving Parking Problems in Downtown Zagreb
Directory of Open Access Journals (Sweden)
Adolf Malić
2005-01-01
Full Text Available Parking is one of the major problems in Zagreb, and in this respect Zagreb does not differ from other similar or bigger European cities. The problem the city is facing is presented in the paper. It is complex and can be solved gradually, using operative and planning measures, by applying assessments of the influential parameters, on the basis of which the appropriate parking-garage spaces would be selected. Besides, all the knowledge gained from the experiences of similar European cities should be used in resolving the stationary traffic problem. Introduction of fast public urban transport would provide passengers with improved services (particularly regarding the travelling time), introducing a modern traffic system that would reduce the travelling time to below 30 minutes for the farthest relations. Further improvement in reducing parking problems in the downtown as well as in the Zagreb broader area would not be possible without implementing this approach.
[Academic review of global health approaches: an analytical framework].
Franco-Giraldo, Alvaro
2015-09-01
In order to identify perspectives on global health, this essay analyzes different trends from academia that have enriched global health and international health. A database was constructed with information from the world's leading global health centers. The search covered authors on global diplomacy and global health and was performed in PubMed, LILACS, and Google Scholar with the key words "global health" and "international health". Research and training centers in different countries have taken various academic approaches to global health; various interests and ideological orientations have emerged in relation to the global health concept. Based on the mosaic of global health centers and their positions, the review concludes that the new concept reflects the construction of a paradigm of renewal in international health and global health, the pre-paradigmatic stage of which has still not reached a final version.
Unemployment and Causes of Hospital Admission Considering Different Analytical Approaches
DEFF Research Database (Denmark)
Berg-Beckhoff, Gabriele; Gulis, Gabriel; Kronborg Bak, Carsten
2016-01-01
and circulatory disease. Register-based data was analysed for the period of 2006 to 2009. In the cross-sectional analysis, a multiple logistic regression model was conducted based on the year 2006 and cohort information from the same year onward up to 2009 was available for a cox regression model. Social welfare...... compensated unemployment and both types of disease specific hospital admission was associated statistically significant in the cross-sectional analysis. With regard to circulatory disease, the cohort approach suggests that social welfare compensated unemployment might lead to hospital admission due...... to the disease. Given the significant results in the cross-sectional analysis for hospital admission due to cancer, the unfound cohort effect might indicate a reverse causation suggesting that the disease caused joblessness and finally, social welfare compensated unemployment and not vice versa. Comparing...
An intrinsic robust rank-one-approximation approach for currencyportfolio optimization
Hongxuan Huang; Zhengjun Zhang
2018-01-01
A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank-one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem ...
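For the generic (non-robust) case, the best rank-one approximation of a matrix in the Frobenius norm is given by the leading singular triplet (the Eckart-Young theorem); the sketch below shows that baseline, not the paper's intrinsic robust ROA:

```python
import numpy as np

def rank_one_approx(A):
    """Best rank-1 approximation of A in the Frobenius/spectral norm:
    keep only the leading singular triplet of the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

A = np.array([[3.0, 0.0], [0.0, 1.0]])
A1 = rank_one_approx(A)  # -> [[3, 0], [0, 0]]: only the dominant direction survives
```

A robust variant would replace the Frobenius objective with one less sensitive to heavy-tailed rate fluctuations, which is the direction the paper pursues.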
DEFF Research Database (Denmark)
Shuai, Hang; Ai, Xiaomeng; Wen, Jinyu
2017-01-01
This paper proposes a hybrid approximate dynamic programming (ADP) approach for the multiple time-period optimal power flow in integrated gas and power systems. ADP successively solves Bellman's equation to make decisions according to the current state of the system. So, the updated near future...
Directory of Open Access Journals (Sweden)
Ishak Altun
2016-01-01
Full Text Available We provide sufficient conditions for the existence of a unique common fixed point for a pair of mappings T,S:X→X, where X is a nonempty set endowed with a certain metric. Moreover, a numerical algorithm is presented in order to approximate such a solution. Our approach is different from the methods usually used in the literature.
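Algorithms of the kind referred to above typically iterate the mapping from an initial guess; a generic Picard-iteration sketch for a single contraction (not the specific algorithm of the paper):

```python
import math

def picard_iterate(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = T(x_k); converges to the unique fixed point
    when T is a contraction on a complete metric space (Banach)."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos is a contraction near its fixed point x* ~ 0.739085 (the Dottie number)
x_star = picard_iterate(math.cos, 1.0)
```

For a pair of mappings T, S, such schemes usually alternate the two maps, with the contraction-type condition guaranteeing that both sequences converge to the common fixed point.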
Analytic nuclear scattering theories
International Nuclear Information System (INIS)
Di Marzio, F.; University of Melbourne, Parkville, VIC
1999-01-01
A wide range of nuclear reactions are examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large-basis models of nuclear structure to be performed
International Nuclear Information System (INIS)
Badano, Aldo; Freed, Melanie; Fang Yuan
2011-01-01
Purpose: The authors describe the modifications to a previously developed analytical model of indirect CsI:Tl-based detector response required for studying oblique x-ray incidence effects in direct semiconductor-based detectors. This first-order approximation analysis allows the authors to describe the associated degradation in resolution in direct detectors and compare the predictions to the published data for indirect detectors. Methods: The proposed model is based on a physics-based analytical description developed by Freed et al. ["A fast, angle-dependent, analytical model of CsI detector response for optimization of 3D x-ray breast imaging systems," Med. Phys. 37(6), 2593-2605 (2010)] that describes detector response functions for indirect detectors and obliquely incident x rays. The model, modified in this work to address direct detector response, describes the dependence of the response on x-ray energy, thickness of the transducer layer, and the depth-dependent blur and collection efficiency. Results: The authors report the detector response functions for the indirect and direct detector models for typical thicknesses utilized in clinical systems for full-field digital mammography (150 μm for indirect CsI:Tl and 200 μm for a-Se direct detectors). The results suggest that the oblique incidence effect in a semiconductor detector differs from that in indirect detectors in two ways: the direct detector model produces a sharper overall PRF at normal x-ray incidence than the indirect detector model, and a larger relative increase in blur along the x-ray incidence direction than that found in indirect detectors with respect to the response at normal incidence angles. Conclusions: Compared to the effect seen in indirect detectors, the direct detector model exhibits a sharper response at normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction with respect to the blur in the
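The directional blur increase under oblique incidence has a simple geometric first-order component: an x ray crossing a transducer layer of thickness T at angle theta spreads its depth-dependent signal over roughly T*tan(theta) along the incidence direction. A sketch of just this geometric term (not the full depth-dependent PRF model of the paper):

```python
import math

def oblique_blur_extent(thickness_um, theta_deg):
    """Geometric spread of the interaction track along the incidence
    direction for a layer of given thickness: b = T * tan(theta)."""
    return thickness_um * math.tan(math.radians(theta_deg))

# 200-um a-Se layer at an assumed 20 degrees of obliquity
b = oblique_blur_extent(200.0, 20.0)  # micrometers of directional spread
```

The full model convolves this geometric track with depth-dependent blur and collection efficiency, which is why the thicker layer degrades more at large angles.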
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. The test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to provide a comprehensive mechanical framework for the sequence of processes involved in the test. Consequently, this work aims to provide a quantitative tool for an exhaustive interpretation of the PST, focusing on the important parameters that influence the test outcomes. First, starting from a pure
An approximate dynamic programming approach to resource management in multi-cloud scenarios
Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo
2017-03-01
The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
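The record above describes an MDP-based allocation scheme solved online by reinforcement learning. As a hedged sketch only (the function name, the toy admission-control dynamics, and all parameter values below are assumptions, not the paper's algorithm), tabular Q-learning for a single-resource broker might look like:

```python
import random

def q_learning_admission(capacity=3, price=1.0, episodes=2000,
                         alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy admission-control MDP:
    state = number of busy units, action 1 = accept a request (earns
    `price` if capacity remains), and each busy unit frees up with
    probability 0.5 per step. Illustrative, not the paper's model."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(capacity + 1)]  # q[state][action]
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection (0 = reject, 1 = accept)
        a = rng.choice([0, 1]) if rng.random() < eps else int(q[s][1] >= q[s][0])
        r = price if (a == 1 and s < capacity) else 0.0
        s2 = min(s + a, capacity)
        s2 -= sum(rng.random() < 0.5 for _ in range(s2))  # random departures
        # standard Q-learning update
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
    return q
```

Running `q_learning_admission()` yields a Q-table whose greedy policy accepts requests while capacity remains, which is the online-approximation idea the abstract refers to.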
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Approximate entropy: a new evaluation approach of mental workload under multitask conditions
Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying
2014-04-01
There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under a multitask condition, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion of cognitive workload and has good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
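The ApEn statistic used in the record above has a compact definition. The sketch below is a Pincus-style implementation (the function name and the default choice of m = 2 with tolerance r = 0.2·SD are common conventions, assumed here rather than taken from the paper's EEG pipeline):

```python
import math

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series, with the
    tolerance r set to a fraction of the series' standard deviation."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd

    def phi(m):
        # all length-m templates of the series
        templates = [x[i:i + m] for i in range(n - m + 1)]
        logs = []
        for a in templates:
            # fraction of templates within Chebyshev distance r (self-match counted)
            c = sum(1 for b in templates
                    if max(abs(u - v) for u, v in zip(a, b)) <= r)
            logs.append(math.log(c / len(templates)))
        return sum(logs) / len(templates)

    return phi(m) - phi(m + 1)
```

A perfectly regular series gives ApEn near zero, while an irregular one gives a clearly larger value, which is exactly the monotone behaviour with task difficulty that the study exploits.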
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
An analytical approach to the CMB polarization in a spatially closed background
Niazy, Pedram; Abbassi, Amir H.
2018-03-01
The scalar-mode polarization of the cosmic microwave background is derived in a spatially closed universe from the Boltzmann equation using the line-of-sight integral method. The EE and TE multipole coefficients have been extracted analytically under some tolerable approximations, such as treating the evolution of perturbations hydrodynamically and assuming a sudden transition from opacity to transparency at the time of last scattering. As the major advantage of analytic expressions, CEE,ℓS and CTE,ℓ explicitly show the dependencies on baryon density ΩB, matter density ΩM, curvature ΩK, primordial spectral index ns, primordial power spectrum amplitude As, optical depth τreion, recombination width σt and recombination time tL. Using a realistic set of cosmological parameters taken from a fit to Planck data, the closed-universe EE and TE power spectra in the scalar mode are compared with numerical results from the CAMB code and with the latest observational data. The analytic results agree with the numerical ones on large and moderate scales. The peak positions are in good agreement with the numerical results on these scales, while the peak heights agree to within 20%, owing to the approximations made in these derivations. Several interesting properties of CMB polarization are also revealed by the analytic spectra.
1989-10-31
[Scanned report documentation page; text largely illegible. Recoverable fragment:] ...AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method of researching the light distribution properties on the output end of a hollow n-sided polygonal light pipe and a light source with a Gaussian distribution is developed. The mirror transformation matrices and a special algorithm of removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is demonstrated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and light source with a Gaussian distribution. The analytical approach will be useful to design and choose the hollow n-sided polygonal light pipe, especially for high-power laser beam homogenization techniques.
A Multi-Level Middle-Out Cross-Zooming Approach for Large Graph Analytics
Energy Technology Data Exchange (ETDEWEB)
Wong, Pak C.; Mackey, Patrick S.; Cook, Kristin A.; Rohrer, Randall M.; Foote, Harlan P.; Whiting, Mark A.
2009-10-11
This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is developed in collaboration with researchers and users, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools.
Reinholz, Daniel L.; Shah, Niral
2018-01-01
Equity in mathematics classroom discourse is a pressing concern, but analyzing issues of equity using observational tools remains a challenge. In this article, we propose equity analytics as a quantitative approach to analyzing aspects of equity and inequity in classrooms. We introduce a classroom observation tool that focuses on relatively…
Behavioural effects of advanced cruise control use : a meta-analytic approach.
Dragutinovic, N.; Brookhuis, K.A.; Hagenzieker, M.P.; Marchau, V.A.W.J.
2006-01-01
In this study, a meta-analytic approach was used to analyse effects of Advanced Cruise Control (ACC) on driving behaviour reported in seven driving simulator studies. The effects of ACC on three consistent outcome measures, namely, driving speed, headway and driver workload have been analysed. The
Colloca, M.; Blanchard, R.; Hellmich, C.; Ito, K.; Rietbergen, van B.
2014-01-01
Bone is a dynamic and hierarchical porous material whose spatial and temporal mechanical properties can vary considerably due to differences in its microstructure and due to remodeling. Hence, a multiscale analytical approach, which combines bone structural information at multiple scales to the
DuBois, Frank L.
1999-01-01
Describes use of the analytic hierarchy process (AHP) as a teaching tool to illustrate the complexities of decision making in an international environment. The AHP approach uses managerial input to develop pairwise comparisons of relevant decision criteria to efficiently generate an appropriate solution. (DB)
Methodological Demonstration of a Text Analytics Approach to Country Logistics System Assessments
DEFF Research Database (Denmark)
Kinra, Aseem; Mukkamala, Raghava Rao; Vatrapu, Ravi
2017-01-01
The purpose of this study is to develop and demonstrate a semi-automated text analytics approach for the identification and categorization of information that can be used for country logistics assessments. In this paper, we develop the methodology on a set of documents for 21 countries using...... and the text analyst. Implications are discussed and future work is outlined....
Knight, David B.; Brozina, Cory; Novoselich, Brian
2016-01-01
This paper investigates how first-year engineering undergraduates and their instructors describe the potential for learning analytics approaches to contribute to student success. Results of qualitative data collection in a first-year engineering course indicated that both students and instructors emphasized a preference for learning analytics…
Schildcrout, Jonathan S.; Basford, Melissa A.; Pulley, Jill M.; Masys, Daniel R.; Roden, Dan M.; Wang, Deede; Chute, Christopher G.; Kullo, Iftikhar J.; Carrell, David; Peissig, Peggy; Kho, Abel; Denny, Joshua C.
2010-01-01
We describe a two-stage analytical approach for characterizing morbidity profile dissimilarity among patient cohorts using electronic medical records. We capture morbidities using the International Statistical Classification of Diseases and Related Health Problems (ICD-9) codes. In the first stage of the approach separate logistic regression analyses for ICD-9 sections (e.g., “hypertensive disease” or “appendicitis”) are conducted, and the odds ratios that describe adjusted differences in pre...
Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania
2015-04-15
In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption-kinetics rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling microliter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive, and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude, and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Lorenzana, J.; Grynberg, M.D.; Yu, L.; Yonemitsu, K.; Bishop, A.R.
1992-11-01
The ground-state energy and static and dynamic correlation functions are investigated in the inhomogeneous Hartree-Fock (HF) plus random phase approximation (RPA) approach applied to a one-dimensional spinless fermion model showing self-trapped doping states at the mean-field level. Results are compared with homogeneous HF and exact diagonalization. RPA fluctuations added to the generally inhomogeneous HF ground state allow the computation of dynamical correlation functions that compare well with exact diagonalization results. The RPA correction to the ground-state energy agrees well with the exact results in the strong- and weak-coupling limits. We also compare it with a related quasi-boson approach. The instability towards self-trapped behaviour is signaled by an RPA mode with frequency approaching zero. (author). 21 refs, 10 figs
Analytical approach to linear fractional partial differential equations arising in fluid mechanics
International Nuclear Information System (INIS)
Momani, Shaher; Odibat, Zaid
2006-01-01
In this Letter, we implement relatively new analytical techniques, the variational iteration method and the Adomian decomposition method, for solving linear fractional partial differential equations arising in fluid mechanics. The fractional derivatives are described in the Caputo sense. The two methods in applied mathematics can be used as alternative methods for obtaining analytic and approximate solutions for different types of fractional differential equations. In these methods, the solution takes the form of a convergent series with easily computable components. The corresponding solutions of the integer order equations are found to follow as special cases of those of fractional order equations. Some numerical examples are presented to illustrate the efficiency and reliability of the two methods
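The Adomian decomposition method mentioned above builds the solution as a series of recursively integrated components. A minimal sketch for the integer-order test problem y' = −y, y(0) = y0 (the function names and the polynomial representation are illustrative assumptions; the Letter itself treats fractional derivatives in the Caputo sense):

```python
def adomian_decay(y0=1.0, terms=10):
    """Adomian decomposition for y' = -y, y(0) = y0: components obey
    u_{n+1}(t) = -integral_0^t u_n(s) ds, each stored as a list of
    polynomial coefficients; the summed series reproduces y0*exp(-t)."""
    comps = [[y0]]                       # u0 = y0, a constant polynomial
    for _ in range(terms - 1):
        prev = comps[-1]
        # integrate term-by-term: s^k -> t^(k+1)/(k+1), with the minus sign
        comps.append([0.0] + [-c / (k + 1) for k, c in enumerate(prev)])
    poly = [0.0] * terms                 # sum all components into one polynomial
    for comp in comps:
        for k, c in enumerate(comp):
            poly[k] += c
    return poly

def eval_poly(poly, t):
    return sum(c * t ** k for k, c in enumerate(poly))
```

This illustrates the "convergent series with easily computable components" structure: each component is obtained from the previous one by a single integration.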
Amin, Talha
2013-01-01
In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
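The uncertainty measure described above (rows of T minus rows carrying T's most common decision) is easy to state in code. This sketch assumes a decision table represented as rows whose last entry is the decision; the representation and function name are illustrative, not the paper's notation:

```python
from collections import Counter

def uncertainty(table):
    """Number of rows minus the number of rows labelled with the most
    common decision; 0 means every row shares one decision."""
    counts = Counter(row[-1] for row in table)
    return len(table) - max(counts.values())
```

A γ-decision rule then only needs to localize rows in subtables whose uncertainty is at most γ.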
A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems
Matos , Carlos; Ortigueira , Manuel ,
2012-01-01
Part 10: Signal Processing; International audience; In this paper a new approach to rational discrete-time approximations to continuous fractional-order systems of the form 1/(sα+p) is proposed. We will show that such fractional-order LTI system can be decomposed into sub-systems. One has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...
Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.
2017-12-01
The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features. We investigate the accuracy of the analytical runup formulae under variation of fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
International Nuclear Information System (INIS)
Yonemitsu, K.; Bishop, A.R.
1992-01-01
As a convenient qualitative approach to strongly correlated electronic systems, an inhomogeneous Hartree-Fock plus random-phase approximation is applied to response functions for the two-dimensional multiband Hubbard model for cuprate superconductors. A comparison of the results with those obtained by exact diagonalization by Wagner, Hanke, and Scalapino [Phys. Rev. B 43, 10 517 (1991)] shows that overall structures in optical and magnetic particle-hole excitation spectra are well reproduced by this method. This approach is computationally simple, retains conceptual clarity, and can be calibrated by comparison with exact results on small systems. Most importantly, it is easily extended to larger systems and straightforward to incorporate additional terms in the Hamiltonian, such as electron-phonon interactions, which may play a crucial role in high-temperature superconductivity
International Nuclear Information System (INIS)
Klüser, Lars; Di Biagio, Claudia; Kleiber, Paul D.; Formenti, Paola; Grassian, Vicki H.
2016-01-01
Optical properties (extinction efficiency, single scattering albedo, asymmetry parameter and scattering phase function) of five different desert dust minerals have been calculated with an asymptotic approximation approach (AAA) for non-spherical particles. The AAA method combines Rayleigh-limit approximations with an asymptotic geometric optics solution in a simple and straightforward formulation. The simulated extinction spectra have been compared with classical Lorenz–Mie calculations as well as with laboratory measurements of dust extinction. This comparison has been done for single minerals and with bulk dust samples collected from desert environments. It is shown that the non-spherical asymptotic approximation improves the spectral extinction pattern, including position of the extinction peaks, compared to the Lorenz–Mie calculations for spherical particles. Squared correlation coefficients from the asymptotic approach range from 0.84 to 0.96 for the mineral components whereas the corresponding numbers for Lorenz–Mie simulations range from 0.54 to 0.85. Moreover the blue shift typically found in Lorenz–Mie results is not present in the AAA simulations. The comparison of spectra simulated with the AAA for different shape assumptions suggests that the differences mainly stem from the assumption of the particle shape and not from the formulation of the method itself. It has been shown that the choice of particle shape strongly impacts the quality of the simulations. Additionally, the comparison of simulated extinction spectra with bulk dust measurements indicates that within airborne dust the composition may be inhomogeneous over the range of dust particle sizes, making the calculation of reliable radiative properties of desert dust even more complex. - Highlights: • A fast and simple method for estimating optical properties of dust. • Can be used with non-spherical particles of arbitrary size distributions. • Comparison with Mie simulations and
Energy Technology Data Exchange (ETDEWEB)
Kandemir, B S; Keskin, M [Department of Physics, Faculty of Sciences, Ankara University, 06100 Tandogan, Ankara (Turkey)
2008-08-13
In this paper, exact analytical expressions for the entire phonon spectra in single-walled carbon nanotubes with zigzag geometry are presented by using a new approach, originally developed by Kandemir and Altanhan. This approach is based on the concept of construction of a classical lattice Hamiltonian of single-walled carbon nanotubes, wherein the nearest and next nearest neighbor and bond bending interactions are all included, then its quantization and finally diagonalization of the resulting second quantized Hamiltonian. Furthermore, within this context, explicit analytical expressions for the relevant electron-phonon interaction coefficients are also investigated for single-walled carbon nanotubes having this geometry, by the phonon modulation of the hopping interaction.
International Nuclear Information System (INIS)
1979-01-01
Analytical procedures were refined for the Structural Assessment Approach for assessing the Material Control and Accounting systems at facilities that contain special nuclear material. Requirements were established for an efficient, feasible algorithm to be used in evaluating system performance measures that involve the probability of detection. Algorithm requirements to calculate the probability of detection for a given type of adversary and the target set are described
Montesinos Ferrer, Marti
2016-01-01
The airport landside system is complex, with multiple interrelations. Currently, each facility is managed locally, without a systemic view. This study analyzes the impact of different resource management policies on overall system performance (in the embarking direction). The results are derived from an analytical approach, based on queueing theory, which allows investigating different time-varying resource allocation policies at each processing facility and their impact on system dynamics.
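Since the record above relies on queueing theory for processing facilities, a standard building block for such analyses is the Erlang C formula for an M/M/c station. This is a generic textbook formula, not the study's actual model; the function names and parameter choices are illustrative:

```python
import math

def erlang_c(lam, mu, c):
    """Probability an arrival must wait in an M/M/c queue (Erlang C).
    lam: arrival rate, mu: per-server service rate, c: servers."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # server utilisation, must be < 1
    assert rho < 1, "queue is unstable"
    summ = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / (math.factorial(c) * (1 - rho))
    return top / (summ + top)

def mean_wait(lam, mu, c):
    """Mean time in queue, W_q = P(wait) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)
```

For a single server the waiting probability reduces to the utilisation ρ = λ/μ, which gives a quick sanity check; evaluating such formulas with time-varying λ per facility is one simple way to explore the allocation policies the abstract mentions.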
Factor-Analytic and Individualized Approaches to Constructing Brief Measures of ADHD Behaviors
Volpe, Robert J.; Gadow, Kenneth D.; Blom-Hoffman, Jessica; Feinberg, Adam B.
2009-01-01
Two studies were performed to examine a factor-analytic and an individualized approach to creating short progress-monitoring measures from the longer "ADHD-Symptom Checklist-4" (ADHD-SC4). In Study 1, teacher ratings on items of the ADHD:Inattentive (IA) and ADHD:Hyperactive-Impulsive (HI) scales of the ADHD-SC4 were factor analyzed in a normative…
Rebenda, Josef; Šmarda, Zdeněk
2017-07-01
In the paper, we propose a correct and efficient semi-analytical approach to solving initial value problems for systems of functional differential equations with delay. The idea is to combine the method of steps and the differential transformation method (DTM). In the latter, formulas for proportional arguments and nonlinear terms are used. An example of using this technique for a system with constant and proportional delays is presented.
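The DTM formula for proportional arguments can be illustrated on the scalar pantograph equation y'(t) = a·y(t) + b·y(q·t): the transform of y' is (k+1)Y(k+1) and that of y(qt) is q^k·Y(k), so the Taylor coefficients follow from a one-line recurrence. This is a hedged sketch (the coefficients a, b, q and function names are assumptions, not the paper's worked example):

```python
def dtm_pantograph(y0=1.0, q=0.5, a=-1.0, b=0.5, terms=15):
    """Differential transform coefficients Y(k) for the pantograph
    equation y'(t) = a*y(t) + b*y(q*t), y(0) = y0, via the DTM rules
    (k+1)Y(k+1) = a*Y(k) + b*q**k*Y(k)."""
    Y = [y0]
    for k in range(terms - 1):
        Y.append((a * Y[k] + b * (q ** k) * Y[k]) / (k + 1))
    return Y

def eval_series(Y, t):
    # evaluate the truncated Taylor series sum_k Y(k) * t**k
    return sum(c * t ** k for k, c in enumerate(Y))
```

The truncated series satisfies the delay equation to within the truncation error, which is how such semi-analytical solutions are usually validated on each step interval.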
A general analytical approach to the one-group, one-dimensional transport equation
International Nuclear Information System (INIS)
Barichello, L.B.; Vilhena, M.T.
1993-01-01
The main feature of the presented approach to solving the neutron transport equation consists in the application of the Laplace transform to the discrete ordinates equations, which yields a linear system of order N to be solved (LTSN method). In this paper this system is solved analytically and the inversion is performed using the Heaviside expansion technique. The general formulation achieved by this procedure is then applied to homogeneous and heterogeneous one-group slab-geometry problems. (orig.) [de]
Semi-analytical approach to modelling the dynamic behaviour of soil excited by embedded foundations
DEFF Research Database (Denmark)
Bucinskas, Paulius; Andersen, Lars Vabbersgaard
2017-01-01
The underlying soil has a significant effect on the dynamic behaviour of structures. The paper proposes a semi-analytical approach based on a Green’s function solution in frequency–wavenumber domain. The procedure allows calculating the dynamic stiffness for points on the soil surface as well...... are analysed. It is determined how simplification of the numerical model affects the overall dynamic behaviour. © 2017 The Authors. Published by Elsevier Ltd....
Directory of Open Access Journals (Sweden)
Larissa B. Del Piero
2016-06-01
Full Text Available Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated the different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize their diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and non-linear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are also made for evaluating gender and biological markers of development beyond chronological age.
Map Archive Mining: Visual-Analytical Approaches to Explore Large Historical Map Collections
Directory of Open Access Journals (Sweden)
Johannes H. Uhl
2018-04-01
Full Text Available Historical maps are unique sources of retrospective geographical information. Recently, several map archives containing map series covering large spatial and temporal extents have been systematically scanned and made available to the public. The geographical information contained in such data archives makes it possible to extend geospatial analysis retrospectively beyond the era of digital cartography. However, given the large data volumes of such archives (e.g., more than 200,000 map sheets in the United States Geological Survey topographic map archive) and the low graphical quality of older, manually-produced map sheets, the process to extract geographical information from these map archives needs to be automated to the highest degree possible. To understand the potential challenges (e.g., salient map characteristics and data quality variations) in automating large-scale information extraction tasks for map archives, it is useful to efficiently assess spatio-temporal coverage, approximate map content, and spatial accuracy of georeferenced map sheets at different map scales. Such preliminary analytical steps are often neglected or ignored in the map processing literature but represent critical phases that lay the foundation for any subsequent computational processes including recognition. Exemplified for the United States Geological Survey topographic map and the Sanborn fire insurance map archives, we demonstrate how such preliminary analyses can be systematically conducted using traditional analytical and cartographic techniques, as well as visual-analytical data mining tools originating from machine learning and data science.
Design of laser-generated shockwave experiments. An approach using analytic models
International Nuclear Information System (INIS)
Lee, Y.T.; Trainor, R.J.
1980-01-01
Two of the target-physics phenomena which must be understood before a clean experiment can be confidently performed are preheating due to suprathermal electrons and shock decay due to a shock-rarefaction interaction. Simple analytic models are described for these two processes, and the predictions of these models are compared with those of the LASNEX fluid physics code. We have approached this work not with the view of surpassing or even approaching the reliability of the code calculations, but rather with the aim of providing simple models which may be used for quick parameter-sensitivity evaluations, while providing physical insight into the problems.
Directory of Open Access Journals (Sweden)
H. K. Hetman
2011-01-01
Full Text Available A number of functions for approximating the universal magnetic curve and its derivatives, their accuracy and conformity to the requirements put forward by the authors have been studied.
Directory of Open Access Journals (Sweden)
Rainer Diaz-Bone
2006-05-01
Full Text Available Abstract: The German discourse researcher Siegfried JÄGER from Duisburg is the first to have published a German-language book about the methodology of discourse analysis after FOUCAULT. JÄGER integrates in his work the discourse analytic work of Jürgen LINK as well as the interdisciplinary discussion carried on in the discourse analytic journal "kultuRRevolution" (Journal for Applied Discourse Analysis). JÄGER and his co-workers have been associated with the Duisburger Institute for Language Research and Social Research (DISS, see http://www.diss-duisburg.de/) for 20 years, developing discourse theory and the methodology of discourse analysis. The interview was conducted via e-mail. It depicts the discourse analytic approach of JÄGER and his co-workers following the works of FOUCAULT and LINK. The interview reconstructs JÄGER's vita and his academic career. Further topics of the interview are the agenda of JÄGER's discourse studies, methodological considerations, the (problematic) relationship between FOUCAULDian discourse analysis and (discourse) linguistics, styles and organization of research, and questions concerning applied discourse analytic research as a form of critical intervention. URN: urn:nbn:de:0114-fqs0603219
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth
Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.
2018-04-01
A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. Its effectiveness is illustrated by an example, which shows near-optimal results obtained much faster than with a conventional simulation-based optimization model. The efficacy of the proposed hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
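The equidistant-knot spline fit and the repeating-spline average described above can be sketched as follows. This is a toy illustration, not the authors' implementation: the synthetic "temperature" series, the knot spacing, the number of shifted grids, and the use of SciPy's `LSQUnivariateSpline` are all assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def repeating_spline(x, y, knot_spacing, n_shifts=8):
    """Average several equidistant-knot cubic splines whose knot grids are
    shifted against each other, damping knot-locked artificial oscillations."""
    fits = []
    for s in range(n_shifts):
        offset = knot_spacing * s / n_shifts
        # interior knots on an equidistant, shifted grid
        t = np.arange(x[0] + knot_spacing + offset, x[-1] - knot_spacing, knot_spacing)
        fits.append(LSQUnivariateSpline(x, y, t, k=3)(x))
    return np.mean(fits, axis=0)

# synthetic series: slow background + fast 'gravity wave' + noise (assumed data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 1001)
background = 0.02 * (x - 50.0) ** 2
y = background + np.sin(2 * np.pi * x / 3.0) + 0.1 * rng.standard_normal(x.size)

smooth = repeating_spline(x, y, knot_spacing=10.0)
residuals = y - smooth   # the superimposed fluctuations
```

The knot spacing plays the role of the distance between spline sampling points: fluctuations shorter than it end up in the residuals, slower variations in the background estimate.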
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems such as the reward-based recalibration of goal-directed actions. To this end, still relatively large and continuous state-action spaces need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
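A minimal sketch of the core ingredient — reinforcement learning with overlapping Gaussian receptive fields as a linear function approximator — on a 1-D stand-in for the reaching task. Everything here is an assumption for illustration: a 1-D state space, two discrete actions instead of the paper's continuous actions, a reward-on-approach shaping term plus a touch bonus, and arbitrary learning constants.

```python
import numpy as np

rng = np.random.default_rng(1)

# overlapping Gaussian receptive fields covering a 1-D state space [0, 1]
centers = np.linspace(0.0, 1.0, 21)
width = 0.05

def features(s):
    phi = np.exp(-0.5 * ((s - centers) / width) ** 2)
    return phi / phi.sum()                      # normalised activations

actions = np.array([-0.05, 0.05])               # two discrete moves (simplification)
W = np.zeros((len(actions), len(centers)))      # linear Q-weights per action
target, alpha, gamma, eps = 0.8, 0.2, 0.95, 0.1

for episode in range(300):
    s = 0.1
    for step in range(100):
        phi = features(s)
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(W @ phi))
        s2 = float(np.clip(s + actions[a], 0.0, 1.0))
        # reward-on-approach shaping: pay for reducing distance to the target
        r = abs(s - target) - abs(s2 - target)
        done = abs(s2 - target) < 0.03
        if done:
            r += 1.0                            # reward-on-touching bonus
        td = r + (0.0 if done else gamma * float(np.max(W @ features(s2)))) - float(W[a] @ phi)
        W[a] += alpha * td * phi                # gradient step on the approximator
        s = s2
        if done:
            break

# greedy rollout with the learned weights
s, best = 0.1, 1.0
for _ in range(200):
    s = float(np.clip(s + actions[int(np.argmax(W @ features(s)))], 0.0, 1.0))
    best = min(best, abs(s - target))
```

The normalised, overlapping receptive fields are what let a small weight table generalise across a continuous state variable, mirroring (in 1-D) the 4D kernel tiling described in the abstract.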
Data analytics approach to create waste generation profiles for waste management and collection.
Niska, Harri; Serkkola, Ari
2018-04-30
Extensive monitoring data on waste generation is increasingly collected in order to implement cost-efficient and sustainable waste management operations. In addition, geospatial data from different registries of the society are opening for free usage. Novel data analytics approaches can be built on the top of the data to produce more detailed, and in-time waste generation information for the basis of waste management and collection. In this paper, a data-based approach based on the self-organizing map (SOM) and the k-means algorithm is developed for creating a set of waste generation type profiles. The approach is demonstrated using the extensive container-level waste weighting data collected in the metropolitan area of Helsinki, Finland. The results obtained highlight the potential of advanced data analytic approaches in producing more detailed waste generation information e.g. for the basis of tailored feedback services for waste producers and the planning and optimization of waste collection and recycling. Copyright © 2018 Elsevier Ltd. All rights reserved.
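The profiling stage of such an approach can be sketched with a plain k-means pass over container-level features. The SOM stage is omitted, and the feature choice (mean weekly weight, fill variability) and the two synthetic generator groups are assumptions for the example — this is not the Helsinki data set.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: random initial centroids, then alternate assignment
    and centroid update until the iteration budget is spent."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# hypothetical container-level features: [mean weekly weight (kg), fill variability]
rng = np.random.default_rng(42)
households = rng.normal([30.0, 5.0], [5.0, 1.0], size=(100, 2))
restaurants = rng.normal([120.0, 25.0], [10.0, 4.0], size=(100, 2))
X = np.vstack([households, restaurants])

labels, profiles = kmeans(X, k=2)   # each centroid is a waste generation profile
```

Each centroid then serves as a waste generation type profile; new containers can be assigned to the nearest profile for tailored feedback or collection planning.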
New vistas in refractive laser beam shaping with an analytic design approach
Duerr, Fabian; Thienpont, Hugo
2014-05-01
Many commercial, medical and scientific applications of the laser have been developed since its invention. Some of these applications require a specific beam irradiance distribution to ensure optimal performance. Often, it is possible to apply geometrical methods to design laser beam shapers. This common design approach is based on the ray mapping between the input plane and the output beam. Geometric ray mapping designs with two plano-aspheric lenses have been thoroughly studied in the past. Even though analytic expressions for various ray mapping functions do exist, the surface profiles of the lenses are still calculated numerically. In this work, we present an alternative novel design approach that allows direct calculation of the rotationally symmetric lens profiles described by analytic functions. Starting from the example of a basic beam expander, a set of functional differential equations is derived from Fermat's principle. This formalism allows calculating the exact lens profiles described by Taylor series coefficients up to very high orders. To demonstrate the versatility of this new approach, two further cases are solved: a Gaussian to flat-top irradiance beam shaping system, and a beam shaping system that generates a more complex dark-hollow Gaussian (donut-like) irradiance profile with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of all calculated solutions and indicate the potential of this design approach for refractive beam shaping applications.
Takata, Masashi; Takagi, Kenichiro; Nagase, Takashi; Kobayashi, Takashi; Naito, Hiroyoshi
2016-04-01
An analytical expression for impedance spectra in the case of double injection (both electrons and holes are injected into an organic semiconductor thin film) has been derived from the basic transport equations (the current density equation, the continuity equation, and Poisson's equation). Capacitance-frequency characteristics calculated from the analytical expression have been examined at different recombination constants and different values of mobility balance, defined by the ratio of electron mobility to hole mobility. Negative capacitance appears when the recombination constant is lower than the Langevin recombination constant and when the value of the mobility balance approaches unity. These results are consistent with the numerical results obtained by a device simulator (Atlas, Silvaco).
A semi-analytical refrigeration cycle modelling approach for a heat pump hot water heater
Panaras, G.; Mathioulakis, E.; Belessiotis, V.
2018-04-01
The use of heat pump systems in applications like the production of hot water or space heating makes the modelling of the underlying processes important, both for the evaluation of the performance of existing systems and for design purposes. The proposed semi-analytical model offers the opportunity to estimate the performance of a heat pump system producing hot water without using detailed geometrical or any performance data. This is important, as for many commercial systems the type and characteristics of the involved subcomponents can hardly be determined, thus not allowing the implementation of more analytical approaches or the exploitation of the manufacturers' catalogue performance data. The analysis addresses the issues related to the development of the models of the subcomponents involved in the studied system. Issues not discussed thoroughly in the existing literature, such as the refrigerant mass inventory when an accumulator is present, are examined in detail.
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.
2015-03-01
Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.
Analytical Features: A Knowledge-Based Approach to Audio Feature Generation
Directory of Open Access Journals (Sweden)
Pachet François
2009-01-01
Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.
Pintér, Balázs; Erdélyi, R.
2018-01-01
Solar fundamental (f) acoustic mode oscillations are investigated analytically in a magnetohydrodynamic (MHD) model. The model consists of three layers in planar geometry, representing the solar interior, the magnetic atmosphere, and a transitional layer sandwiched between them. Since we focus on the fundamental mode here, we assume the plasma is incompressible. A horizontal, canopy-like, magnetic field is introduced to the atmosphere, in which degenerated slow MHD waves can exist. The global (f-mode) oscillations can couple to local atmospheric Alfvén waves, resulting, e.g., in a frequency shift of the oscillations. The dispersion relation of the global oscillation mode is derived, and is solved analytically for the thin-transitional layer approximation and for the weak-field approximation. Analytical formulae are also provided for the frequency shifts due to the presence of a thin transitional layer and a weak atmospheric magnetic field. The analytical results generally indicate that, compared to the fundamental value (ω = √(gk)), the mode frequency is reduced by the presence of an atmosphere by a few per cent. A thin transitional layer reduces the eigen-frequencies further by about an additional hundred microhertz. Finally, a weak atmospheric magnetic field can slightly, by a few per cent, increase the frequency of the eigen-mode. Stronger magnetic fields, however, can increase the f-mode frequency by even up to ten per cent, which cannot be seen in observed data. The presence of a magnetic atmosphere in the three-layer model also introduces non-permitted propagation windows in the frequency spectrum; here, f-mode oscillations cannot exist with certain values of the harmonic degree. The eigen-frequencies can be sensitive to the background physical parameters, such as an atmospheric density scale-height or the rate of the plasma density drop at the photosphere. Such information, if ever observed with high-resolution instrumentation and inverted, could help to
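For scale, the fundamental relation quoted above, ω = √(gk), can be evaluated directly. With k = ℓ/R⊙ for spherical-harmonic degree ℓ, standard solar parameter values reproduce the observed f-mode ridge near 3 mHz at ℓ ≈ 1000; the few-per-cent atmospheric and magnetic shifts discussed in the abstract would modify this baseline. The numeric inputs below are standard reference values, not taken from the paper.

```python
import math

# f-mode baseline frequency from omega = sqrt(g * k), k = l / R_sun
g = 274.0          # solar surface gravity, m s^-2
R_sun = 6.96e8     # solar radius, m
l = 1000           # spherical-harmonic degree (illustrative choice)

k = l / R_sun                       # horizontal wavenumber, m^-1
omega = math.sqrt(g * k)            # angular frequency, rad/s
nu_mHz = omega / (2 * math.pi) * 1e3
```

A hundred-microhertz transitional-layer reduction, as quoted in the abstract, is then a roughly 3% effect on this ~3 mHz baseline.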
Integrated assessment of the global warming problem: A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1994-12-01
The multi-disciplinary character of the global warming problem calls for an integrated assessment approach for ordering and combining the various physical, ecological, economic, and sociological results. The Netherlands initiated their own National Research Program on Global Air Pollution and Climate Change (NRP). The first phase (NRP-1) identified the integration theme as one of five central research themes. The second phase (NRP-2) shows a growing concern for integrated assessment issues. The current two-year research project 'Characterizing the risks: a comparative analysis of the risks of global warming and of relevant policy options', which started in September 1993, comes under the integrated assessment part of the Dutch NRP. The first part of the interim report describes the search for an integrated assessment methodology. It starts with emphasizing the need for integrated assessment at a relatively high level of aggregation and from a policy point of view. The conclusion will be that a decision-analytical approach might fit the purpose of a policy-oriented integrated modeling of the global warming problem. The discussion proceeds with an account of decision analysis and its explicit incorporation and analysis of uncertainty. Then influence diagrams, a relatively recent development in decision analysis, are introduced as a useful decision-analytical approach for integrated assessment. Finally, a software environment for creating and analyzing complex influence diagram models is discussed. The second part of the interim report provides a first, provisional integrated modeling of the global warming problem, emphasizing the illustration of the decision-analytical approach. Major problem elements are identified and an initial problem structure is developed. The problem structure is described in terms of hierarchical influence diagrams. At some places the qualitative structure is filled with quantitative data.
Carter, James L.; Resh, Vincent H.
2013-01-01
Biomonitoring programs based on benthic macroinvertebrates are well-established worldwide. Their value, however, depends on the appropriateness of the analytical techniques used. All United States State benthic macroinvertebrate biomonitoring programs were surveyed regarding the purposes of their programs, quality-assurance and quality-control procedures used, habitat and water-chemistry data collected, treatment of macroinvertebrate data prior to analysis, statistical methods used, and data-storage considerations. State regulatory mandates (59 percent of programs), biotic index development (17 percent), and Federal requirements (15 percent) were the most frequently reported purposes of State programs, with the specific tasks of satisfying the requirements for 305b/303d reports (89 percent), establishment and monitoring of total maximum daily loads, and developing biocriteria being the purposes most often mentioned. Most States establish reference sites (81 percent), but classify them using State-specific methods. The most often used technique for determining the appropriateness of a reference site was Best Professional Judgment (86 percent of these States). Macroinvertebrate samples are almost always collected by using a D-frame net, and duplicate samples are collected from approximately 10 percent of sites for quality-assurance and quality-control purposes. Most programs have macroinvertebrate samples processed by contractors (53 percent) and have identifications confirmed by a second taxonomist (85 percent). All States collect habitat data, with most using the Rapid Bioassessment Protocol visual-assessment approach, which requires ~1 h/site. Dissolved oxygen, pH, and conductivity are measured in more than 90 percent of programs. Wide variation exists in which taxa are excluded from analyses and the level of taxonomic resolution used. Species traits, such as functional feeding groups, are commonly used (96 percent), as are tolerance values for organic pollution
International Nuclear Information System (INIS)
Chen, Jiaqi; Wang, Hao; Zhu, Hongzhou
2017-01-01
Highlights: • Derive an analytical approach to predict temperature fields of multi-layered asphalt pavement based on Green’s function. • Analyze the effects of thermal modifications on heat output from pavement to near-surface environment. • Evaluate pavement solutions for reducing urban heat island (UHI) effect. - Abstract: This paper aims to present an analytical approach to predict temperature fields in asphalt pavement and evaluate the effects of thermal modification on near-surface environment for urban heat island (UHI) effect. The analytical solution of temperature fields in the multi-layered pavement structure was derived with the Green’s function method, using climatic factors including solar radiation, wind velocity, and air temperature as input parameters. The temperature solutions were validated with an outdoor field experiment. By using the proposed analytical solution, temperature fields in the pavement with different pavement surface albedo, thermal conductivity, and layer combinations were analyzed. Heat output from pavement surface to the near-surface environment was studied as an indicator of pavement contribution to UHI effect. The analysis results show that increasing pavement surface albedo could decrease pavement temperature at various depths, and increase heat output intensity in the daytime but decrease heat output intensity in the nighttime. Using reflective pavement to mitigate UHI may be effective for an open street but become ineffective for the street surrounded by high buildings. On the other hand, high-conductivity pavement could alleviate the UHI effect in the daytime for both the open street and the street surrounded by high buildings. Among different combinations of thermal-modified asphalt mixtures, the layer combination of high-conductivity surface course and base course could reduce the maximum heat output intensity and alleviate the UHI effect most.
Directory of Open Access Journals (Sweden)
Zanzi Luigi
2010-01-01
Full Text Available The two-step approach is a fast algorithm for 3D migration originally introduced to process zero-offset seismic data. Its application to monostatic GPR (Ground Penetrating Radar) data is straightforward. A direct extension of the algorithm for the application to bistatic radar data is possible provided that the TX-RX azimuth is constant. As for the zero-offset case, the two-step operator is exactly equivalent to the one-step 3D operator for a constant velocity medium and is an approximation of the one-step 3D operator for a medium where the velocity varies vertically. Two methods are explored for handling a heterogeneous medium; both are suitable for the application of the two-step approach, and they are compared in terms of accuracy of the final 3D operator. The aperture of the two-step operator is discussed, and a solution is proposed to optimize its shape. The analysis is of interest for any NDT application where the medium is expected to be heterogeneous, or where the antenna is not in direct contact with the medium (e.g., NDT of artworks, humanitarian demining, radar with air-launched antennas).
International Nuclear Information System (INIS)
Kocifaj, Miroslav
2016-01-01
The study of diffuse light of a night sky is undergoing a renaissance due to the development of inexpensive high performance computers which can significantly reduce the time needed for accurate numerical simulations. Apart from targeted field campaigns, numerical modeling appears to be one of the most attractive and powerful approaches for predicting the diffuse light of a night sky. However, computer-aided simulation of night-sky radiances over any territory and under arbitrary conditions is a complex problem that is difficult to solve. This study addresses three concepts for modeling the artificial light propagation through a turbid stratified atmosphere. Specifically, these are two-stream approximation, iterative approach to Radiative Transfer Equation (RTE) and Method of Successive Orders of Scattering (MSOS). The principles of the methods, their strengths and weaknesses are reviewed with respect to their implications for night-light modeling in different environments. - Highlights: • Three methods for modeling nightsky radiance are reviewed. • The two-stream approximation allows for rapid calculation of radiative fluxes. • The above approach is convenient for modeling large uniformly emitting areas. • SOS is applicable to heterogeneous deployment of well-separated cities or towns. • MSOS is generally CPU less-intensive than traditional 3D RTE.
Flow modeling in a porous cylinder with regressing walls using semi analytical approach
Directory of Open Access Journals (Sweden)
M Azimi
2016-10-01
Full Text Available In this paper, the mathematical modeling of the flow in a porous cylinder with a focus on applications to solid rocket motors is presented. As usual, the cylindrical propellant grain of a solid rocket motor is modeled as a long tube with one end closed at the headwall, while the other remains open. The cylindrical wall is assumed to be permeable so as to simulate the propellant burning and normal gas injection. At first, the problem description and formulation are considered. The Navier-Stokes equations for the viscous flow in a porous cylinder with regressing walls are reduced to a nonlinear ODE by using a similarity transformation in time and space. The Differential Transformation Method (DTM), an approximate analytical method, has been successfully applied. Finally, the results are presented for various cases.
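The flavour of the Differential Transformation Method can be shown on a toy problem, not the Navier-Stokes reduction of the paper: for y′ = y with y(0) = 1, the transform Y[k] = y⁽ᵏ⁾(0)/k! turns the ODE into the recurrence (k+1)·Y[k+1] = Y[k], and the inverse transform is a truncated Taylor series. The same transform-recur-invert pattern is what DTM applies to the nonlinear similarity ODE in the abstract.

```python
import math

# DTM on the toy ODE y' = y, y(0) = 1.
# Transform: Y[k] = y^(k)(0)/k!  =>  the ODE becomes (k+1) Y[k+1] = Y[k].
N = 15
Y = [0.0] * (N + 1)
Y[0] = 1.0                       # transformed initial condition y(0) = 1
for k in range(N):
    Y[k + 1] = Y[k] / (k + 1)    # DTM recurrence from y' = y

def y_approx(t):
    # inverse transform: truncated Taylor series sum_k Y[k] t^k
    return sum(Y[k] * t ** k for k in range(N + 1))
```

With 15 transform terms the truncation error at t = 1 against the exact solution eᵗ is far below 1e-10, illustrating why DTM can serve as an approximate analytical solution.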
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
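The rejection-ABC core of such an approach can be sketched on a deliberately simple problem: estimating the mean of a Gaussian from a single summary statistic. The prior range, tolerance, and sample sizes below are arbitrary choices for the illustration; PopSizeABC itself uses the folded allele frequency spectrum and linkage-disequilibrium summaries, not a sample mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# 'observed' data generated under an unknown parameter theta* = 3.0
observed = rng.normal(3.0, 1.0, size=200)
s_obs = observed.mean()                 # summary statistic of the observed data

# rejection ABC: draw from the prior, simulate, keep draws whose simulated
# summary lands within a tolerance of the observed summary
prior_draws = rng.uniform(0.0, 10.0, size=20000)
accepted = []
for theta in prior_draws:
    s_sim = rng.normal(theta, 1.0, size=200).mean()
    if abs(s_sim - s_obs) < 0.05:       # tolerance epsilon
        accepted.append(theta)

posterior = np.array(accepted)          # approximate posterior sample for theta
```

The accepted draws approximate the posterior without ever evaluating a likelihood — the "likelihood-free" property that makes ABC attractive when only a simulator of the demographic model is available.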
Schempf, Ashley H; Kaufman, Jay S
2012-10-01
A common epidemiologic objective is to evaluate the contribution of residential context to individual-level disparities by race or socioeconomic position. We reviewed analytic strategies to account for the total (observed and unobserved factors) contribution of environmental context to health inequalities, including conventional fixed effects (FE) and hybrid FE implemented within a random effects (RE) or a marginal model. To illustrate results and limitations of the various analytic approaches of accounting for the total contextual component of health disparities, we used data on births nested within neighborhoods as an applied example of evaluating neighborhood confounding of racial disparities in gestational age at birth, including both a continuous and a binary outcome. Ordinary and RE models provided disparity estimates that can be substantially biased in the presence of neighborhood confounding. Both FE and hybrid FE models can account for cluster level confounding and provide disparity estimates unconfounded by neighborhood, with the latter having greater flexibility in allowing estimation of neighborhood-level effects and intercept/slope variability when implemented in a RE specification. Given the range of models that can be implemented in a hybrid approach and the frequent goal of accounting for contextual confounding, this approach should be used more often. Published by Elsevier Inc.
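The hybrid fixed-effects idea can be sketched with a Mundlak-style within-between decomposition: splitting each covariate into its cluster mean and the deviation from that mean lets the deviation coefficient use only within-neighborhood variation, so it is unconfounded by cluster-level factors. The simulation below is illustrative; all coefficients and names are invented, and the RE machinery of a full hybrid model is replaced by plain least squares.

```python
# Within-between (hybrid FE) decomposition on simulated clustered data.
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_per = 50, 40
cluster = np.repeat(np.arange(n_clusters), n_per)
mu = rng.normal(size=n_clusters)                  # cluster-level trait
x = mu[cluster] + rng.normal(size=cluster.size)   # individual exposure
u = 3.0 * mu                                      # unobserved cluster confounder
y = 2.0 * x + u[cluster] + rng.normal(scale=0.5, size=cluster.size)

xbar = np.bincount(cluster, weights=x) / n_per    # cluster means of x
x_within = x - xbar[cluster]                      # within-cluster deviations
X = np.column_stack([np.ones_like(x), x_within, xbar[cluster]])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y,
                        rcond=None)[0]
print("within-cluster slope:", beta[1])   # ~2, unconfounded by neighborhood
print("naive pooled slope:", naive[1])    # biased upward by the confounder
```

The between coefficient on the cluster mean absorbs the contextual confounding, which is exactly why the hybrid specification also permits estimating neighborhood-level effects.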
Directory of Open Access Journals (Sweden)
Christopher J Millow
Full Text Available Persistent organic pollutants (POPs) are typically monitored via targeted mass spectrometry, which potentially identifies only a fraction of the contaminants actually present in environmental samples. With new anthropogenic compounds continuously introduced to the environment, novel and proactive approaches that provide a comprehensive alternative to targeted methods are needed in order to more completely characterize the diversity of known and unknown compounds likely to cause adverse effects. Nontargeted mass spectrometry attempts to extensively screen for compounds, providing a feasible approach for identifying contaminants that warrant future monitoring. We employed a nontargeted analytical method using comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC×GC/TOF-MS) to characterize halogenated organic compounds (HOCs) in California Black skimmer (Rynchops niger) eggs. Our study identified 111 HOCs; 84 of these compounds were regularly detected via targeted approaches, while 27 were classified as typically unmonitored or unknown. Typically unmonitored compounds of note in bird eggs included tris(4-chlorophenyl)methane (TCPM), tris(4-chlorophenyl)methanol (TCPMOH), triclosan, permethrin, and heptachloro-1'-methyl-1,2'-bipyrrole (MBP), as well as four halogenated unknown compounds that could not be identified through database searching or the literature. The presence of these compounds in Black skimmer eggs suggests they are persistent, bioaccumulative, potentially biomagnifying, and maternally transferring. Our results highlight the utility and importance of employing nontargeted analytical tools to assess true contaminant burdens in organisms, as well as demonstrate the value of using environmental sentinels to proactively identify novel contaminants.
The Flipped MOOC: Using Gamification and Learning Analytics in MOOC Design—A Conceptual Approach
Directory of Open Access Journals (Sweden)
Roland Klemke
2018-02-01
Full Text Available Recently, research has highlighted the potential of Massive Open Online Courses (MOOCs) for education, as well as their well-known drawbacks. Several studies state that the main limitations of MOOCs are the low completion and high dropout rates of participants. MOOCs also suffer from a lack of participant engagement and personalization, and although several formats and types of MOOCs are reported in the literature, the majority of them contain a considerable amount of content presented mainly in video format. This is in contrast to the results reported in other educational settings, where engagement and active participation are identified as success factors. We present the results of a study that involved educational experts and learning scientists, giving new and interesting insights towards the conceptualization of a new design approach, the flipped MOOC, which applies the flipped classroom approach to MOOC design and makes use of gamification and learning analytics. We found important indications, applicable to the concept of a flipped MOOC, that MOOCs can be turned from mainly content-oriented delivery machines into personalized, interactive, and engaging learning environments. Our findings support the idea that MOOCs can be enriched by the orchestration of a flipped classroom approach in combination with the support of gamification and learning analytics.
Directory of Open Access Journals (Sweden)
Euro Beinat
2012-11-01
Full Text Available In this paper we present a visual analytics approach for deriving spatio-temporal patterns of collective human mobility from a vast mobile network traffic data set. More than 88 million movements between pairs of radio cells—so-called handovers—served as a proxy for more than two months of mobility within four urban test areas in Northern Italy. In contrast to previous work, our approach relies entirely on visualization and mapping techniques, implemented in several software applications. We purposefully avoid statistical or probabilistic modeling and, nonetheless, reveal characteristic and exceptional mobility patterns. The results show, for example, surprising similarities and symmetries amongst the total mobility and people flows between the test areas. Moreover, the exceptional patterns detected can be associated with real-world events such as soccer matches. We conclude that the visual analytics approach presented can shed new light on large-scale collective urban mobility behavior and thus helps to better understand the “pulse” of dynamic urban systems.
Bernard, Nicola K; Kashy, Deborah A; Levendosky, Alytia A; Bogat, G Anne; Lonstein, Joseph S
2017-03-01
Attunement between mothers and infants in their hypothalamic-pituitary-adrenal (HPA) axis responsiveness to acute stressors is thought to benefit the child's emerging physiological and behavioral self-regulation, as well as their socioemotional development. However, there is no universally accepted definition of attunement in the literature, which appears to have resulted in inconsistent statistical analyses for determining its presence or absence, and contributed to discrepant results. We used a series of data analytic approaches, some previously used in the attunement literature and others not, to evaluate the attunement between 182 women and their 1-year-old infants in their HPA axis responsivity to acute stress. Cortisol was measured in saliva samples taken from mothers and infants before and twice after a naturalistic laboratory stressor (infant arm restraint). The results of the data analytic approaches were mixed, with some analyses suggesting attunement while others did not. The strengths and weaknesses of each statistical approach are discussed, and an analysis using a cross-lagged model that considered both time and interactions between mother and infant appeared the most appropriate. Greater consensus in the field about the conceptualization and analysis of physiological attunement would be valuable in order to advance our understanding of this phenomenon. © 2016 Wiley Periodicals, Inc.
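A two-wave cross-lagged analysis of the kind the authors favor can be sketched in a few lines: each partner's later cortisol is regressed on both partners' earlier cortisol, so the "lagged partner" coefficients capture the mother-to-infant and infant-to-mother paths. All coefficients, noise levels, and the sample size below are invented for illustration; a real analysis would use structural equation modeling on the observed saliva samples.

```python
# Bare-bones cross-lagged sketch on simulated mother/infant cortisol.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
mom_t1 = rng.normal(size=n)
inf_t1 = rng.normal(size=n)
# true paths: stabilities 0.6 / 0.5, cross-lags 0.3 (mom->inf), 0.1 (inf->mom)
inf_t2 = 0.5 * inf_t1 + 0.3 * mom_t1 + rng.normal(scale=0.3, size=n)
mom_t2 = 0.6 * mom_t1 + 0.1 * inf_t1 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), mom_t1, inf_t1])
b_inf = np.linalg.lstsq(X, inf_t2, rcond=None)[0]  # [const, mom->inf, stability]
b_mom = np.linalg.lstsq(X, mom_t2, rcond=None)[0]  # [const, stability, inf->mom]
print("mother -> infant cross-lag:", b_inf[1])
print("infant -> mother cross-lag:", b_mom[2])
```

The asymmetry of the two recovered cross-lag coefficients is what distinguishes directional attunement from a mere correlation of cortisol levels.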
Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach
Directory of Open Access Journals (Sweden)
Junjun Yin
2016-10-01
Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Because of privacy concerns that restrict access to data sources such as mobile phone call records, the dataset was collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering large volumes of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multi-level spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.
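The two mobility summaries named in the abstract are easy to state precisely. A sketch, using plain 2-D coordinates (a real analysis would first project longitude/latitude):

```python
# Displacement and radius-of-gyration summaries for a single trajectory.
import math

def displacements(traj):
    """Consecutive step lengths of a trajectory [(x, y), ...]."""
    return [math.dist(a, b) for a, b in zip(traj, traj[1:])]

def radius_of_gyration(traj):
    """RMS distance of the visited points from their centre of mass."""
    n = len(traj)
    cx = sum(p[0] for p in traj) / n
    cy = sum(p[1] for p in traj) / n
    return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                         for p in traj) / n)

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(displacements(square))        # [1.0, 1.0, 1.0]
    print(radius_of_gyration(square))   # sqrt(0.5), about 0.7071
```

Histograms of these two quantities over all users are what reveal the multi-modal, distance-decaying patterns described in the text.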
Al-Ababneh, Nedal
2014-07-01
We propose an accurate analytical model to calculate the optical crosstalk of a first-order free-space optical interconnect system that uses microlenses with circular apertures. The proposed model is derived by evaluating the resulting finite integral in terms of an infinite series of Bessel functions. Compared to the model that uses complex Gaussian functions to expand the aperture function, the proposed model is shown to be superior in estimating the crosstalk and provides more accurate results. Moreover, it is shown that the proposed model gives results close to those of the numerical model with superior computational efficiency.
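The paper's integral is specific to its optical geometry, but the style of evaluation it uses, truncating an infinite Bessel series until the terms are negligible, can be illustrated self-containedly with the Bessel function J0 computed from its own power series, J0(x) = Σ_m (-1)^m (x/2)^(2m) / (m!)^2. This is only a generic demonstration, not the paper's crosstalk formula.

```python
# J0 via its power series with adaptive truncation.
def bessel_j0(x, tol=1e-15):
    term = 1.0            # m = 0 term
    total = term
    m = 0
    while abs(term) > tol:
        m += 1
        term *= -(x / 2.0) ** 2 / m ** 2   # ratio of consecutive terms
        total += term
    return total

if __name__ == "__main__":
    print(bessel_j0(1.0))   # reference value: 0.7651976865579666
```

Updating each term from the previous one (rather than recomputing factorials) is the standard trick that keeps such series evaluations both fast and numerically stable for moderate arguments.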
An analysis of beam parameters on proton-acoustic waves through an analytic approach.
Kipergil, Esra Aytac; Erkol, Hakan; Kaya, Serhat; Gulsen, Gultekin; Unlu, Mehmet Burcin
2017-06-21
It has been reported that acoustic waves are generated when a high-energy pulsed proton beam is deposited in a small volume within tissue. One possible application of proton-induced acoustics is to obtain real-time feedback for intra-treatment adjustments by monitoring such acoustic waves. A high spatial resolution in ultrasound imaging may reduce proton range uncertainty. Thus, it is crucial to understand the dependence of the acoustic waves on the proton beam characteristics. In this manuscript, an analytic solution for the proton-induced acoustic wave is first presented to reveal the dependence of the signal on the beam parameters; it is then combined with an analytic approximation of the Bragg curve. The influence of beam energy, pulse duration and beam diameter variation on the acoustic waveform is investigated. Further analysis is performed regarding the Fourier decomposition of the proton-acoustic signals. Our results show that a smaller spill time of the proton beam increases the amplitude of the acoustic wave for a constant number of protons, which is hence beneficial for dose monitoring. An increase in the energy of each individual proton in the beam leads to spatial broadening of the Bragg curve, which also yields acoustic waves of greater amplitude. The pulse duration and the beam width of the proton beam do not affect the central frequency of the acoustic wave, but they change the amplitude of the spectral components.
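The spectral claim can be checked numerically on a caricature of the signal: model the underlying pressure waveform as fixed by the deposition geometry, and a longer spill as a rectangular smoothing window applied to it. The waveform shape and all time scales below are invented for illustration, not taken from the paper's analytic solution.

```python
# Longer spill -> smaller spectral amplitude, nearly unchanged peak frequency.
import numpy as np

fs = 50e6                                  # sampling rate, Hz
t = np.arange(-40e-6, 40e-6, 1 / fs)
sigma = 1e-6                               # acoustic transit scale, s

def spectrum_peak(tau):
    """Peak spectral amplitude and frequency for spill duration tau."""
    base = -t / sigma ** 2 * np.exp(-t ** 2 / (2 * sigma ** 2))  # N-shaped pulse
    width = max(1, int(tau * fs))
    window = np.ones(width) / width        # rectangular spill of length tau
    sig = np.convolve(base, window, mode="same")
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = int(np.argmax(spec))
    return spec[k], freqs[k]

a_short, f_short = spectrum_peak(0.1e-6)
a_long, f_long = spectrum_peak(1.0e-6)
print(a_long < a_short)                       # longer spill -> smaller amplitude
print(abs(f_long - f_short) / f_short < 0.2)  # central frequency barely moves
```

Because the spill window only multiplies the spectrum by a slowly varying sinc factor, it rescales the components without relocating the dominant frequency, which is the behavior the abstract reports.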
Collector modulation in high-voltage bipolar transistor in the saturation mode: Analytical approach
Dmitriev, A. P.; Gert, A. V.; Levinshtein, M. E.; Yuferev, V. S.
2018-04-01
A simple analytical model is developed, capable of replacing the numerical solution of a system of nonlinear partial differential equations by solving a simple algebraic equation when analyzing the collector resistance modulation of a bipolar transistor in the saturation mode. In this approach, the leakage of the base current into the emitter and the recombination of non-equilibrium carriers in the base are taken into account. The data obtained are in good agreement with the results of numerical calculations and make it possible to describe both the motion of the front of the minority carriers and the steady state distribution of minority carriers across the collector in the saturation mode.
Analytic network process (ANP) approach for product mix planning in railway industry
Directory of Open Access Journals (Sweden)
Hadi Pazoki Toroudi
2016-08-01
Full Text Available Given the competitive environment in the global market in recent years, organizations need to plan for increased profitability and optimize their performance. Planning for an appropriate product mix plays an essential role in the success of most production units. This paper applies the analytic network process (ANP) approach to product mix planning for a part supplier in Iran. The proposed method uses four criteria: cost of production, sales figures, supply of raw materials and quality of products. In addition, the study proposes different sets of products as alternatives for production planning. The preliminary results indicate that the proposed approach could increase productivity significantly.
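At the heart of ANP (as of AHP) is extracting priority weights as the principal eigenvector of a pairwise-comparison matrix; in ANP these vectors are then assembled into a supermatrix. A single cluster is sketched below; the comparison values encode hypothetical judgements over the four criteria named in the abstract, not the paper's data.

```python
# Priority weights from a pairwise-comparison matrix by power iteration.
def principal_eigenvector(A, iters=200):
    """Normalised principal eigenvector of a positive matrix."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# hypothetical judgements over (production cost, sales, raw-material supply,
# quality); A[i][j] states how much more important criterion i is than j
A = [[1,   2,   3,   2],
     [1/2, 1,   2,   1],
     [1/3, 1/2, 1,   1/2],
     [1/2, 1,   2,   1]]

weights = principal_eigenvector(A)
print([round(w, 3) for w in weights])   # cost ranks highest here
```

For a perfectly consistent matrix (a_ij = w_i / w_j) the iteration recovers the underlying weights exactly, which is a handy sanity check on any implementation.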
Europe needs to take clear, analytical approach in considering future of nuclear energy
Energy Technology Data Exchange (ETDEWEB)
Shepherd, John [nuclear 24, Redditch (United Kingdom)
2016-11-15
Europe's political leaders have been accused of failing to offer a clear and comprehensive approach to the future of nuclear power in Europe. The criticism came in an opinion adopted recently by the European Economic and Social Committee (EESC). According to the EESC, the European Commission should propose ''a clear analytical process and methodology which can offer a consistent, voluntary framework for national decision-making about the role - if any - of nuclear power in the energy mix''.
Analytical approach for predicting three-dimensional tire-pavement contact load
CSIR Research Space (South Africa)
Hernandez, JA
2014-12-01
Full Text Available [Only fragmentary text was recovered from the source PDF.] Analytical Approach for Predicting Three-Dimensional Tire–Pavement Contact Load. Jaime A. Hernandez, Angeli Gamez, Imad L. Al-Qadi, and Morris De Beer. Transportation Research Record. Recovered fragment: "... by measuring the applied forces in each perpendicular direction (15)."
Analytic mappings: a new approach in particle production by accelerated observers
International Nuclear Information System (INIS)
Sanchez, N.
1982-01-01
This is a summary of the author's recent results on the physical consequences of analytic mappings in space-time. Classically, the mapping defines an accelerated frame. At the quantum level, it gives rise to particle production. Statistically, the real singularities of the mapping have associated temperatures. This concerns a new approach to Q.F.T. as formulated in accelerated frames. It is considered a first step towards understanding the deep connection that could exist between the structure (geometry and topology) of space-time and thermodynamics, mainly motivated by the work of Hawking since 1975. (Auth.)
A combined analytic-numeric approach for some boundary-value problems
Directory of Open Access Journals (Sweden)
Mustafa Turkyilmazoglu
2016-02-01
Full Text Available A combined analytic-numeric approach is undertaken in the present work for the solution of boundary-value problems in finite or semi-infinite domains. The equations to be treated arise specifically from the boundary layer analysis of some two- and three-dimensional flows in fluid mechanics. The purpose is to find quick but accurate enough solutions. Taylor expansions at either boundary are computed, which are next matched to the other asymptotic or exact boundary conditions. The technique is applied to the well-known Blasius as well as Karman flows. Solutions obtained in terms of series compare favorably with the existing ones in the literature.
Europe needs to take clear, analytical approach in considering future of nuclear energy
International Nuclear Information System (INIS)
Shepherd, John
2016-01-01
Europe's political leaders have been accused of failing to offer a clear and comprehensive approach to the future of nuclear power in Europe. The criticism came in an opinion adopted recently by the European Economic and Social Committee (EESC). According to the EESC, the European Commission should propose ''a clear analytical process and methodology which can offer a consistent, voluntary framework for national decision-making about the role - if any - of nuclear power in the energy mix''.
The analytical approach to optimization of active region structure of quantum dot laser
International Nuclear Information System (INIS)
Korenev, V V; Savelyev, A V; Zhukov, A E; Omelchenko, A V; Maximov, M V
2014-01-01
Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal length, dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.
The analytical approach to optimization of active region structure of quantum dot laser
Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.
2014-10-01
Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal length, dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.
Analytical approach to the multi-state lasing phenomenon in quantum dot lasers
Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.
2013-03-01
We introduce an analytical approach to describe the multi-state lasing phenomenon in quantum dot lasers. We show that the key parameter is the hole-to-electron capture rate ratio. If it is lower than a certain critical value, the complete quenching of ground-state lasing takes place at high injection levels. At higher values of the ratio, the model predicts saturation of the ground-state power. This explains the diversity of experimental results and their contradiction to the conventional rate equation model. Recently found enhancement of ground-state lasing in p-doped samples and temperature dependence of the ground-state power are also discussed.
AEROSTATIC AND AERODYNAMIC MODULES OF A HYBRID BUOYANT AIRCRAFT: AN ANALYTICAL APPROACH
Directory of Open Access Journals (Sweden)
Anwar Ul Haque
2015-05-01
Full Text Available An analytical approach is essential for the estimation of the requirements of aerodynamic and aerostatic lift for a hybrid buoyant aircraft. Such aircraft have two different modules to balance the weight of the aircraft: an aerostatic module and an aerodynamic module. Both of these modules are to be treated separately for the estimation of the mass budget of the propulsion systems and the required power. In the present work, existing relationships for aircraft and airships are reviewed for further application to these modules. Limitations of such relationships are also discussed, and it is perceived that this will provide a starting point for a better understanding of the design anatomy of such aircraft.
Danaeifar, Mohammad; Granpayeh, Nosrat
2018-03-01
An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.
Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M
2014-01-01
The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social representations. We then apply it to an exploration of international trade networks in terms of the complex interactions between spatial and social relationships. This exploration using the GeoSocialApp helps us develop a two-part hypothesis: international trade network clusters with structural equivalence are strongly 'balkanized' (fragmented) according to the geography of trading partners, and the geographical distance weighted by population within each network cluster has a positive relationship with the development level of countries. In addition to demonstrating the potential of visual analytics to provide insight concerning complex geo-social relationships at a global scale, the research also addresses the challenge of validating insights derived through interactive geovisual analytics. We develop two indicators to quantify the observed patterns, and then use a Monte-Carlo approach to support the hypothesis developed above.
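The validation idea, compute an indicator on the observed cluster assignment and compare it against a Monte-Carlo reference distribution obtained by permuting the assignment, can be shown in miniature. The points and cluster labels below are synthetic stand-ins for the trade-network clusters discussed in the text.

```python
# Permutation (Monte-Carlo) test of a geographic-clustering indicator.
import math
import random

def mean_within_cluster_distance(points, labels):
    total, count = 0.0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if labels[i] == labels[j]:
                total += math.dist(points[i], points[j])
                count += 1
    return total / count

rng = random.Random(7)
# two tight, well-separated geographic blobs
points = ([(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(30)] +
          [(rng.gauss(20, 1), rng.gauss(20, 1)) for _ in range(30)])
labels = [0] * 30 + [1] * 30

observed = mean_within_cluster_distance(points, labels)
n_perm, hits = 199, 0
for _ in range(n_perm):
    shuffled = labels[:]
    rng.shuffle(shuffled)
    if mean_within_cluster_distance(points, shuffled) <= observed:
        hits += 1
p_value = (hits + 1) / (n_perm + 1)
print(p_value)   # small p: the observed "balkanization" is not a chance pattern
```

The same recipe applies to any indicator derived interactively: the permutation null quantifies how surprising the visually discovered pattern is.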
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Walpurgis, Katja; Geyer, Hans; Schänzer, Wilhelm
2016-01-01
The aim of improving anti-doping efforts is predicated on several different pillars, including, amongst others, optimized analytical methods. These commonly result from exploiting most recent developments in analytical instrumentation as well as research data on elite athletes' physiology in general, and pharmacology, metabolism, elimination, and downstream effects of prohibited substances and methods of doping, in particular. The need for frequent and adequate adaptations of sports drug testing procedures has been incessant, largely due to the uninterrupted emergence of new chemical entities but also due to the apparent use of established or even obsolete drugs for reasons other than therapeutic means, such as assumed beneficial effects on endurance, strength, and regeneration capacities. Continuing the series of annual banned-substance reviews, literature concerning human sports drug testing published between October 2014 and September 2015 is summarized and reviewed in reference to the content of the 2015 Prohibited List as issued by the World Anti-Doping Agency (WADA), with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2016 John Wiley & Sons, Ltd.
An analytical approach for a nodal scheme of two-dimensional neutron transport problems
International Nuclear Information System (INIS)
Barichello, L.B.; Cabrera, L.C.; Prolo Filho, J.F.
2011-01-01
Research highlights: → Nodal equations for a two-dimensional neutron transport problem. → Analytical Discrete Ordinates Method. → Numerical results compared with the literature. - Abstract: In this work, a solution for a two-dimensional neutron transport problem, in cartesian geometry, is proposed on the basis of nodal schemes. In this context, one-dimensional equations are generated by an integration process of the multidimensional problem. Here, the integration is performed for the whole domain such that no iterative procedure between nodes is needed. The ADO method is used to develop an analytical discrete ordinates solution for the one-dimensional integrated equations, such that the final solutions are analytical in terms of the spatial variables. The ADO approach, along with a level symmetric quadrature scheme, leads to a significant order reduction of the associated eigenvalue problems. Relations between the averaged fluxes and the unknown fluxes at the boundary are introduced as the auxiliary equations usually needed in nodal schemes. Numerical results are presented and compared with test problems.
Directory of Open Access Journals (Sweden)
Marc A. Rosen
2012-08-01
Full Text Available The temperature response in the soil surrounding multiple boreholes is evaluated analytically and numerically. The assumption of constant heat flux along the borehole wall is examined by coupling the problem to the heat transfer problem inside the borehole and presenting a model with variable heat flux along the borehole length. In the analytical approach, a line source of heat with a finite length is used to model the conduction of heat in the soil surrounding the boreholes. In the numerical method, a finite volume method on a three-dimensional meshed domain is used. In order to determine the heat flux boundary condition, the analytical quasi-three-dimensional solution to the heat transfer problem of the U-tube configuration inside the borehole is used. This solution takes into account the variation in heating strength along the borehole length due to the temperature variation of the fluid running in the U-tube. Thus, critical depths at which thermal interaction occurs can be determined. Finally, in order to examine the validity of the numerical method, a comparison is made with the results of the line source method.
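The finite line-source model referenced above has a standard form: a line of constant strength q' buried from the surface to depth H, with a mirror-image line above the ground surface enforcing the surface boundary condition. A sketch with midpoint quadrature follows; the parameter values are typical for ground heat exchangers but purely illustrative.

```python
# Finite line-source (FLS) temperature response in the soil.
import math

def fls_temperature_rise(r, z, t, q=50.0, k=2.0, alpha=1e-6, H=100.0,
                         n_seg=2000):
    """Temperature rise (K) at radius r, depth z, time t.

    Integrand: erfc(d1 / (2 sqrt(alpha t))) / d1
             - erfc(d2 / (2 sqrt(alpha t))) / d2,
    with d1, d2 the distances to the buried line and its mirror image.
    q is the heat rate per unit length (W/m), k the soil conductivity
    (W/m.K), alpha the soil thermal diffusivity (m^2/s).
    """
    s = 2.0 * math.sqrt(alpha * t)
    dh = H / n_seg
    total = 0.0
    for i in range(n_seg):
        h = (i + 0.5) * dh
        d1 = math.hypot(r, z - h)   # real line source
        d2 = math.hypot(r, z + h)   # image above the ground surface
        total += (math.erfc(d1 / s) / d1 - math.erfc(d2 / s) / d2) * dh
    return q / (4 * math.pi * k) * total

if __name__ == "__main__":
    t = 10 * 365 * 24 * 3600.0                   # ten years of operation
    print(fls_temperature_rise(0.06, 50.0, t))   # at the borehole wall
    print(fls_temperature_rise(5.0, 50.0, t))    # at a neighbouring borehole
```

Evaluating the response at neighbouring-borehole distances, as in the second call, is precisely how critical interaction depths and times are identified.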
Durante, Caterina; Baschieri, Carlo; Bertacchini, Lucia; Bertelli, Davide; Cocchi, Marina; Marchetti, Andrea; Manzini, Daniela; Papotti, Giulia; Sighinolfi, Simona
2015-04-15
Geographical origin and authenticity of food are topics of interest for both consumers and producers. Among the different indicators used for traceability studies, (87)Sr/(86)Sr isotopic ratio has provided excellent results. In this study, two analytical approaches for wine sample pre-treatment, microwave and low temperature mineralisation, were investigated to develop accurate and precise analytical method for (87)Sr/(86)Sr determination. The two procedures led to comparable results (paired t-test, with t
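The comparison of the two pre-treatment procedures rests on a paired t-test, which is simple enough to state in full. The isotope-ratio pairs below are invented values for the same wines digested by the two hypothetical procedures, purely to show the computation; they are not the study's data.

```python
# Self-contained paired t-test.
import math

def paired_t(x, y):
    """Return (t statistic, degrees of freedom) for paired samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

microwave = [0.70894, 0.70871, 0.70912, 0.70885, 0.70903, 0.70890]
low_temp  = [0.70896, 0.70869, 0.70913, 0.70883, 0.70905, 0.70889]
t_stat, dof = paired_t(microwave, low_temp)
print(t_stat, dof)   # |t| well below ~2.57 (t_0.975, df = 5): no difference
```

A |t| below the critical value means the paired differences are consistent with zero, i.e. the two mineralisation routes give comparable 87Sr/86Sr results, which is the conclusion the abstract reports.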
Directory of Open Access Journals (Sweden)
Gonzalo Abad
2018-05-01
Full Text Available This paper presents an analytical model oriented to the study of harmonic mitigation aspects in AC grids. As is well known, the presence of undesired harmonics in AC grids can be palliated in several manners. In this paper, however, a power electronic-based active impedance at selective frequencies (ACISEF) is used, due to its already proven flexibility and adaptability to the changing characteristics of AC grids. Hence, the proposed analytical model approach is specially conceived to globally consider both the model of the AC grid itself, with its electric equivalent impedances, and the power electronic-based ACISEF, including its control loops. In addition, the proposed analytical model presents practical and useful properties: it is simple to understand and simple to use, it has low computational cost and simple adaptability to different scenarios of AC grids, and it provides an accurate enough representation of reality. The benefits of using the proposed analytical model are shown in this paper through some examples of its usefulness, including an analysis of stability and the identification of sources of instability for a robust design, an analysis of effectiveness in harmonic mitigation, an analysis to assist in the choice of the most suitable active impedance under a given state of the AC grid, and an analysis of the interaction between different compensators. To conclude, experimental validation of a 2.15 kA ACISEF in a real 33 kV AC grid is provided, in which real users (household and industry loads) and crucial elements such as wind parks and HVDC systems are interconnected nearby.
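The core of such a model can be caricatured per harmonic: the grid seen from the coupling point is a Thevenin impedance, and the ACISEF acts as a low shunt impedance only at the selected harmonic. All impedance values below are invented, and the converter control loops of the real model are omitted; this is only a frequency-domain toy.

```python
# Toy per-harmonic model: selective active shunt impedance on an R-L grid.
import math

F0 = 50.0                                   # fundamental frequency, Hz

def grid_impedance(h, R=0.5, L=1e-3):
    """Series R-L grid Thevenin impedance at harmonic order h."""
    return complex(R, 2 * math.pi * F0 * h * L)

def acisef_impedance(h, target_h=5, z_active=0.05 + 0j):
    """Selective active impedance: low only at the targeted harmonic."""
    return z_active if h == target_h else complex(1e6)   # ~open elsewhere

def harmonic_voltage(h, i_harm, with_acisef):
    """Bus voltage magnitude for an injected harmonic current i_harm (A)."""
    zg = grid_impedance(h)
    if with_acisef:
        za = acisef_impedance(h)
        z = zg * za / (zg + za)             # shunt combination
    else:
        z = zg
    return abs(i_harm * z)

v_before = harmonic_voltage(5, 10.0, with_acisef=False)
v_after = harmonic_voltage(5, 10.0, with_acisef=True)
print(v_before, v_after)    # the 5th-harmonic voltage drops sharply
```

Sweeping h with and without the shunt is the toy analogue of the model's harmonic-mitigation effectiveness analysis.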
Rao, D. V.; Takeda, T.; Itai, Y.; Akatsuka, T.; Seltzer, S. M.; Hubbell, J. H.; Cesareo, R.; Brunetti, A.; Gigante, G. E.
Atomic Rayleigh scattering cross-sections for low, medium and high Z atoms are measured in vacuum using an X-ray tube with a secondary target as an excitation source instead of radioisotopes. Monoenergetic Kα radiation emitted from the secondary target and monoenergetic radiation produced using two secondary targets with filters coupled to an X-ray tube are compared. The Kα radiation from the second target of the system is used to excite the sample. The background has been reduced considerably and the monochromacy is improved. Elastic scattering of Kα X-ray line energies of the secondary target by the sample is recorded with HPGe and Si(Li) detectors. A new approach is developed to estimate the solid angle approximation and geometrical efficiency for a system with an experimental arrangement using an X-ray tube and secondary target. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work. The efficiency is larger because the X-ray fluorescent source acts as a converter. Experimental results based on this system are compared with theoretical estimates and good agreement is observed between them.
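A Monte-Carlo cross-check of solid-angle estimation, reduced to the simplest geometry with a known closed form: a circular aperture of radius a at on-axis distance d from a point source subtends Ω = 2π(1 − d/√(d² + a²)). The geometry values are illustrative, not the paper's collimator dimensions.

```python
# Monte-Carlo estimate of the solid angle of an on-axis circular aperture.
import math
import random

def solid_angle_mc(a, d, n, rng):
    """Estimate the solid angle (sr) by sampling hemisphere directions."""
    hits = 0
    for _ in range(n):
        # uniform direction over the forward hemisphere (z > 0):
        # cos(theta) uniform in [0, 1), phi uniform in [0, 2 pi)
        z = rng.random()
        phi = 2 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - z * z)
        x, y = sin_t * math.cos(phi), sin_t * math.sin(phi)
        # does the ray hit the disc lying in the plane z = d?
        scale = d / z if z > 0 else float("inf")
        if (x * scale) ** 2 + (y * scale) ** 2 <= a * a:
            hits += 1
    return 2 * math.pi * hits / n         # the hemisphere covers 2 pi sr

if __name__ == "__main__":
    a, d = 1.0, 3.0
    exact = 2 * math.pi * (1 - d / math.sqrt(d * d + a * a))
    print(solid_angle_mc(a, d, 200_000, random.Random(3)), exact)
```

The same sampling scheme extends to off-axis and collimated geometries where no closed form exists, which is where such estimates earn their keep.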
Johnson, Sara B; Little, Todd D; Masyn, Katherine; Mehta, Paras D; Ghazarian, Sharon R
2017-06-01
Characterizing the determinants of child health and development over time, and identifying the mechanisms by which these determinants operate, is a research priority. The growth of precision medicine has increased awareness and refinement of conceptual frameworks, data management systems, and analytic methods for multilevel data. This article reviews key methodological challenges in cohort studies designed to investigate multilevel influences on child health and strategies to address them. We review and summarize methodological challenges that could undermine prospective studies of the multilevel determinants of child health and ways to address them, borrowing approaches from the social and behavioral sciences. Nested data, variation in intervals of data collection and assessment, missing data, construct measurement across development and reporters, and unobserved population heterogeneity pose challenges in prospective multilevel cohort studies with children. We discuss innovations in missing data, innovations in person-oriented analyses, and innovations in multilevel modeling to address these challenges. Study design and analytic approaches that facilitate the integration across multiple levels, and that account for changes in people and the multiple, dynamic, nested systems in which they participate over time, are crucial to fully realize the promise of precision medicine for children and adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.
Analytical strategic environmental assessment (ANSEA) developing a new approach to SEA
International Nuclear Information System (INIS)
Dalkmann, Holger; Herrera, Rodrigo Jiliberto; Bongardt, Daniel
2004-01-01
The objective of analytical strategic environmental assessment (ANSEA) is to provide a decision-centred approach to the SEA process. The ANSEA project evolved from the realisation that, in many cases, SEA, as currently practised, is not able to ensure an appropriate integration of environmental values. The focus of SEA is on predicting impacts, but the tool takes no account of the decision-making processes it is trying to influence. At strategic decision-making levels, in turn, it is often difficult to predict impacts with the necessary exactitude. The decision-making sciences could teach some valuable lessons here. Instead of focusing on the quantitative prediction of environmental consequences, the ANSEA approach concentrates on the integration of environmental objectives into decision-making processes. Thus, the ANSEA approach provides a framework for analysing and assessing the decision-making processes of policies, plans and programmes (PPP). To enhance environmental integration into the decision-making process, decision windows (DW) can be identified. The approach is designed to be objective and transparent to ensure that environmental considerations are taken into account, or--from an ex-post perspective--to allow an evaluation of how far environmental considerations have been integrated into the decision-making process under assessment. The paper describes the concepts and the framework of the ANSEA approach and discusses its relation to SEA and the EC Directive.
DEFF Research Database (Denmark)
Poulsen, Stefan Othmar; Poulsen, Henning Friis
2014-01-01
The properties of compound refractive lenses (CRLs) composed of biconcave parabolic lenses for focusing and imaging synchrotron X-rays have been investigated theoretically by ray transfer matrix analysis and Gaussian beam propagation. We present approximate analytical expressions that allow fast estimation...
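The ray-transfer-matrix treatment can be illustrated with a thin-lens sketch: a stack of N biconcave parabolic lenses with apex radius of curvature R and refractive index decrement δ has focal length f ≈ R/(2Nδ). All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative CRL parameters (assumed, not from the paper):
N = 50          # number of biconcave parabolic lenses
R = 50e-6       # apex radius of curvature of each parabola, m
delta = 2.2e-6  # refractive index decrement of the lens material

f = R / (2 * N * delta)   # thin-lens focal length of the stacked CRL

def thin_lens(f):
    # ABCD matrix of a thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray parallel to the axis at height y0 crosses the axis one focal
# length downstream of the (thin) lens stack.
y0 = 100e-6
ray = thin_lens(f) @ np.array([y0, 0.0])   # ray state is [height, angle]
focus = -ray[0] / ray[1]                   # propagation distance to zero height
print(f"f = {f:.4f} m, ray crosses axis at {focus:.4f} m")
```

Drift matrices [[1, L], [0, 1]] can be interleaved to account for the finite lens spacing, which is where the thin-lens estimate starts to deviate from the full matrix analysis.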
Maternal and infant activity: Analytic approaches for the study of circadian rhythm.
Thomas, Karen A; Burr, Robert L; Spieker, Susan
2015-11-01
The study of infant and mother circadian rhythm entails choice of instruments appropriate for use in the home environment as well as selection of an analytic approach that characterizes circadian rhythm. While actigraphy monitoring suits the needs of home study, only a limited number of studies have examined mother and infant rhythm derived from actigraphy. Among this existing research, a variety of analyses have been employed to characterize 24-h rhythm, reducing the ability to evaluate and synthesize findings. Few studies have examined the correspondence of mother and infant circadian parameters for the most frequently cited approaches: cosinor, non-parametric circadian rhythm analysis (NPCRA), and autocorrelation function (ACF). The purpose of this research was to examine analytic approaches in the study of mother and infant circadian activity rhythm. Forty-three healthy mother and infant pairs were studied in the home environment over a 72-h period at infant age 4, 8, and 12 weeks. Activity was recorded continuously using actigraphy monitors and mothers completed a diary. Parameters of circadian rhythm were generated from cosinor analysis, NPCRA, and ACF. The correlation among measures of rhythm center (cosinor mesor, NPCRA mid level), strength or fit of the 24-h period (cosinor magnitude and R², NPCRA amplitude and relative amplitude (RA)), phase (cosinor acrophase, NPCRA M10 and L5 midpoint), and rhythm stability and variability (NPCRA interdaily stability (IS) and intradaily variability (IV), ACF) was assessed, and additionally the effect size (η²) for change over time was evaluated. Results suggest that cosinor analysis, NPCRA, and autocorrelation provide several comparable parameters of infant and maternal circadian rhythm center, fit, and phase. IS and IV were strongly correlated with the 24-h cycle fit. The circadian parameters analyzed offer separate insight into rhythm and differing effect size for the detection of change over time. Findings inform selection of analysis and
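Of the approaches compared above, the single-component cosinor is the simplest to reproduce: it reduces to ordinary least squares on cosine and sine regressors, from which mesor, amplitude and acrophase follow. A sketch on synthetic actigraphy-like data (all values invented):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Single-component cosinor: y ~ mesor + amp*cos(2*pi*(t - acrophase)/period).
    Returns (mesor, amplitude, acrophase_hours)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    m, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amp = np.hypot(b1, b2)                    # amplitude from the two coefficients
    acro = (np.arctan2(b2, b1) / w) % period  # acrophase as time of peak, in hours
    return m, amp, acro

# Synthetic 72-h "activity" record sampled every minute (illustrative numbers)
t = np.arange(0, 72, 1 / 60)
rng = np.random.default_rng(0)
y = 50 + 20 * np.cos(2 * np.pi * (t - 14.0) / 24.0) + rng.normal(0, 5, t.size)

mesor, amp, acro = cosinor_fit(t, y)
print(f"mesor={mesor:.1f}, amplitude={amp:.1f}, acrophase={acro:.2f} h")
```

NPCRA measures (IS, IV, M10, L5) are rank/average based rather than model based, which is why the paper can compare them against these fitted parameters.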
A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.
Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin
2017-02-01
Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback. For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies. In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.
Approach of decision making based on the analytic hierarchy process for urban landscape management.
Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan
2013-03-01
This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.
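The two building blocks named above, row-geometric-mean priorities and weighted-geometric-mean aggregation across decision makers, can be sketched as follows (the comparison matrices and decision-maker weights are hypothetical; the consensus convergence model that produces those weights is not reproduced here):

```python
import numpy as np

def priorities(M):
    """AHP priority vector via the normalized row geometric mean."""
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return g / g.sum()

def group_matrix(mats, dm_weights):
    """Weighted-geometric-mean aggregation of decision makers' pairwise
    comparison matrices (preserves reciprocity M[i,j]*M[j,i] = 1)."""
    w = np.asarray(dm_weights, float)
    w = w / w.sum()
    return np.exp(sum(wi * np.log(Mi) for wi, Mi in zip(w, mats)))

# Two hypothetical decision makers comparing 3 management plans
M1 = np.array([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
M2 = np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 2.0], [0.2, 0.5, 1.0]])

G = group_matrix([M1, M2], [0.6, 0.4])   # DM weights, e.g. from the consensus model
print("group priorities:", priorities(G))
```

For the perfectly consistent M1 the priority vector is exactly (4/7, 2/7, 1/7); in practice a consistency ratio check would precede aggregation.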
International Nuclear Information System (INIS)
Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.
2009-01-01
Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)
International Nuclear Information System (INIS)
Kulakovskij, M.Ya.; Savitskij, V.I.
1981-01-01
The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 free-path lengths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results are almost independent of the number of groups. As the distance increases, the multigroup diffusion calculations considerably overestimate the slowing-down density. The conclusion is drawn that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other
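For reference, the age-theory slowing-down density for a point fast source of strength S in an infinite moderator takes the standard textbook form (the general result, not a formula quoted from the paper):

```latex
q(r,\tau) \;=\; \frac{S\, e^{-r^{2}/4\tau}}{\left(4\pi\tau\right)^{3/2}},
\qquad
\tau(E) \;=\; \int_{E}^{E_{0}} \frac{D(E')}{\xi\,\Sigma_{s}(E')}\,\frac{dE'}{E'}
```

where τ is the Fermi age. The Gaussian kernel decays faster than the true transport solution at large r, which is consistent with the underestimation of fluxes far from the source reported above.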
International Nuclear Information System (INIS)
Shukla, Anant Kant; Ramamohan, T R; Srinivas, S
2014-01-01
In this paper we propose a technique to obtain limit cycles and quasi-periodic solutions of forced nonlinear oscillators. We apply this technique to the forced Van der Pol oscillator and the forced Van der Pol Duffing oscillator and obtain, for the first time, their limit-cycle (periodic) and quasi-periodic solutions analytically. We introduce a modification of the homotopy analysis method to obtain these solutions. We minimize the square residual error to obtain accurate approximations to these solutions. The obtained analytical solutions are convergent and agree well with numerical solutions even at large times. Time trajectories of the solution, its first derivative and phase plots are presented to confirm the validity of the proposed approach. We also provide rough criteria for the determination of parameter regimes which lead to limit-cycle or quasi-periodic behaviour.
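The paper's modified homotopy analysis method is not reproduced here, but the flavour of residual-based approximation can be shown with a first-order Galerkin (harmonic balance) sketch for the unforced Van der Pol oscillator: project the residual of the trial solution x = a cos(ωt) onto the fundamental harmonics and drive the projections to zero, which recovers the classical amplitude a ≈ 2 and frequency ω ≈ 1 for small μ:

```python
import numpy as np

mu = 0.1                                        # damping parameter (assumed small)
theta = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)

def galerkin_residual(p):
    """Fundamental-harmonic projections of the Van der Pol residual
    x'' - mu*(1 - x^2)*x' + x for the trial solution x = a*cos(omega*t)."""
    a, omega = p
    x = a * np.cos(theta)
    xdot = -a * omega * np.sin(theta)
    xddot = -a * omega**2 * np.cos(theta)
    r = xddot - mu * (1.0 - x**2) * xdot + x
    return np.array([np.mean(r * np.cos(theta)), np.mean(r * np.sin(theta))])

def newton2(f, x0, tol=1e-12, itmax=50):
    """Plain 2-variable Newton iteration with a finite-difference Jacobian."""
    x = np.array(x0, float)
    for _ in range(itmax):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        J = np.empty((2, 2))
        for j in range(2):
            xh = x.copy()
            xh[j] += 1e-7
            J[:, j] = (f(xh) - fx) / 1e-7
        x = x - np.linalg.solve(J, fx)
    return x

a, omega = newton2(galerkin_residual, [1.5, 1.1])
print(f"limit cycle: amplitude ~ {a:.4f}, frequency ~ {omega:.4f}")
```

Unlike an unconstrained minimization of the square residual (which is attracted to the trivial solution a = 0), the Galerkin conditions isolate the nontrivial root; the authors' method additionally handles the forced, quasi-periodic cases.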
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-21
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol⁻¹. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
International Nuclear Information System (INIS)
Artozoul, Julien; Lescalier, Christophe; Dudzinski, Daniel
2015-01-01
Metal cutting is a highly complex thermo-mechanical process. The knowledge of temperature in the chip forming zone is essential to understand it. Conventional experimental methods such as thermocouples only provide global information, which is incompatible with the high stress and temperature gradients met in the chip forming zone. Field measurements are essential to understand this localized thermo-mechanical problem. An experimental protocol has been developed using advanced infrared imaging in order to measure the temperature distribution in both the tool and the chip during an orthogonal or oblique cutting operation. It also provides information on the chip formation process, such as geometrical characteristics (tool-chip contact length, chip thickness, primary shear angle) and thermo-mechanical quantities (heat flux dissipated in the deformation zone, local interface heat partition ratio). A study is carried out on the effects of cutting conditions, i.e. cutting speed, feed and depth of cut, on the temperature distribution along the contact zone for an elementary operation. An analytical thermal model has been developed to process the experimental data and access further information, i.e. local stress or heat flux distributions. - Highlights: • A thermal analytical model is proposed for the orthogonal cutting process. • IR thermography is used during cutting tests. • Combined experimental and modeling approaches are applied. • Heat flux and stress distribution at the tool-chip interface are determined. • The decomposition into sticking and sliding zones is defined.
Quasi-Steady Evolution of Hillslopes in Layered Landscapes: An Analytic Approach
Glade, R. C.; Anderson, R. S.
2018-01-01
Landscapes developed in layered sedimentary or igneous rocks are common on Earth, as well as on other planets. Features such as hogbacks, exposed dikes, escarpments, and mesas exhibit resistant rock layers adjoining more erodible rock in tilted, vertical, or horizontal orientations. Hillslopes developed in the erodible rock are typically characterized by steep, linear-to-concave slopes or "ramps" mantled with material derived from the resistant layers, often in the form of large blocks. Previous work on hogbacks has shown that feedbacks between weathering and transport of the blocks and underlying soft rock can create relief over time and lead to the development of concave-up slope profiles in the absence of rilling processes. Here we employ an analytic approach, informed by numerical modeling and field data, to describe the quasi-steady state behavior of such rocky hillslopes for the full spectrum of resistant layer dip angles. We begin with a simple geometric analysis that relates structural dip to erosion rates. We then explore the mechanisms by which our numerical model of hogback evolution self-organizes to meet these geometric expectations, including adjustment of soil depth, erosion rates, and block velocities along the ramp. Analytical solutions relate easily measurable field quantities such as ramp length, slope, block size, and resistant layer dip angle to local incision rate, block velocity, and block weathering rate. These equations provide a framework for exploring the evolution of layered landscapes and pinpoint the processes for which we require a more thorough understanding to predict their evolution over time.
Traceability of 'Limone di Siracusa PGI' by a multidisciplinary analytical and chemometric approach.
Amenta, M; Fabroni, S; Costa, C; Rapisarda, P
2016-11-15
Food traceability is increasingly relevant with respect to safety, quality and typicality issues. Lemon fruits grown in a typical lemon-growing area of southern Italy (Siracusa) have been awarded the PGI (Protected Geographical Indication) recognition as 'Limone di Siracusa'. Due to its peculiarity, consumers have an increasing interest in this product. The detection of potential fraud could be improved by tools linking the composition of this production to its typical features. This study used a wide range of analytical techniques, including conventional techniques as well as spectral (NIR spectra), multi-elemental (Fe, Zn, Mn, Cu, Li, Sr) and isotopic (¹³C/¹²C, ¹⁸O/¹⁶O) marker investigations, combined with multivariate statistical analysis, such as PLS-DA (Partial Least Squares Discriminant Analysis) and LDA (Linear Discriminant Analysis), to implement a traceability system to verify the authenticity of 'Limone di Siracusa' production. The results demonstrated a very good geographical discrimination rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mobility spectrum analytical approach for intrinsic band picture of Ba(FeAs)2
Huynh, K. K.; Tanabe, Y.; Urata, T.; Heguri, S.; Tanigaki, K.; Kida, T.; Hagiwara, M.
2014-09-01
Unconventional high temperature superconductivity as well as three-dimensional bulk Dirac cone quantum states arising from the unique d-orbital topology have comprised an intriguing research area in physics. Here we apply a special analytical approach using a mobility spectrum, in which the carrier number is conveniently described as a function of mobility without any hypothesis on either the types or the numbers of carriers, to the interpretation of the longitudinal and transverse electric transport of high quality single crystal Ba(FeAs)2 in a wide range of magnetic fields. We show that the majority carriers are accommodated in large parabolic hole and electron pockets with very different topology as well as remarkably different mobility spectra, whereas the minority carriers reside in Dirac quantum states with the largest mobility as high as 70,000 cm² (V s)⁻¹. The deduced mobility spectra are discussed and compared to the reported sophisticated first-principles band calculations.
Thermal and electrical energy management in a PEMFC stack - An analytical approach
Energy Technology Data Exchange (ETDEWEB)
Pandiyan, S.; Jayakumar, K.; Rajalakshmi, N.; Dhathathreyan, K.S. [Centre for Fuel Cell Technology, ARC International (ARCI), 120, Mambakkam Main Road, Medavakkam, Chennai 601 302 (India)
2008-02-15
An analytical method has been developed to differentiate the electrical and thermal resistance of the PEM fuel cell assembly under fuel cell operating conditions. The usefulness of this method lies in the determination of the electrical resistance from the polarization curve and the thermal resistance from the mass balance. This method also paves the way for the evaluation of cogeneration from a PEMFC power plant. Based on this approach, the increase in current and resistance due to a unit change in temperature at a particular current density has been evaluated. It was observed that the internal resistance of the cell depends on the electrode fabrication process, which also plays a major role in the thermal management of the fuel cell stack. (author)
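As a sketch of the electrical half of such an analysis (all numbers invented, not from the paper): the area-specific internal resistance can be read off as the negative slope of the linear, ohmic region of the polarization curve, and the thermoneutral voltage then gives the heat available for cogeneration:

```python
import numpy as np

# Synthetic ohmic-region polarization data (illustrative values):
i = np.linspace(0.2, 1.0, 9)      # current density, A/cm^2
E_oc, R_true = 0.85, 0.25         # apparent OCV (V) and ASR (ohm*cm^2), assumed
V = E_oc - R_true * i             # cell voltage in the linear region

slope, intercept = np.polyfit(i, V, 1)
R_int = -slope                    # recovered area-specific internal resistance
print(f"R_int = {R_int:.3f} ohm*cm^2, extrapolated OCV = {intercept:.3f} V")

# Heat rejected at each operating point, from the thermoneutral voltage (HHV):
E_tn = 1.48                       # V, thermoneutral voltage for H2/O2
P_heat = (E_tn - V) * i           # W/cm^2 available for cogeneration
```

Real polarization curves add activation and mass-transport losses at the two ends, so the fit must be restricted to the linear region, as assumed here.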
Analysis of the extracts of Isatis tinctoria by new analytical approaches of HPLC, MS and NMR.
Zhou, Jue; Qu, Fan
2011-01-01
The methods of extraction, separation and analysis of alkaloids and indole glucosinolates (GLs) of Isatis tinctoria were reviewed. Different analytical approaches such as High-Pressure Liquid Chromatography (HPLC), Liquid Chromatography with Electrospray Ionization Mass Spectrometry (LC/ESI/MS), Electrospray Ionization Time-Of-Flight Mass Spectrometry (ESI-TOF-MS), and Nuclear Magnetic Resonance (NMR) were used to validate the identity of these constituents. These methods provide rapid separation, identification and quantitative measurement of alkaloids and GLs of Isatis tinctoria. By connecting different detectors to HPLC, such as PDA, ELSD, and ESI- and APCI-MS in positive and negative ion modes, complicated compounds can be detected with at least two independent detection modes. The molecular formula can be derived in a second step from ESI-TOF-MS data. But for some constituents, UV and MS cannot provide sufficient structure identification. After peak purification by semi-preparative HPLC, NMR can be used as a complementary method.
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independence of genome sequences from experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
Thermal fatigue crack growth in mixing tees nuclear piping - An analytical approach
International Nuclear Information System (INIS)
Radu, V.
2009-01-01
The assessment of fatigue crack growth due to cyclic thermal loads arising from turbulent mixing presents significant challenges, principally due to the difficulty of establishing the actual loading spectrum. So-called sinusoidal methods represent a simplified approach in which the entire spectrum is replaced by a sine-wave variation of the temperature at the inner pipe surface. The need for multiple calculations in this process has led to the development of analytical solutions for thermal stresses in a pipe subject to sinusoidal thermal loading, described in previous work performed at JRC IE Petten, The Netherlands, during the author's secondment as a national expert. Based on these stress distribution solutions, the paper presents a methodology for the assessment of the thermal fatigue crack growth life of mixing tees in nuclear piping. (author)
Electromagnetic imaging of multiple-scattering small objects: non-iterative analytical approach
International Nuclear Information System (INIS)
Chen, X; Zhong, Y
2008-01-01
The multiple signal classification (MUSIC) imaging method and the least squares method are applied to solve the electromagnetic inverse scattering problem of determining the locations and polarization tensors of a collection of small objects embedded in a known background medium. Based on the analysis of induced electric and magnetic dipoles, the proposed MUSIC method is able to deal with some special scenarios, due to the shapes and materials of the objects, to which the standard MUSIC method does not apply. After the locations of the objects are obtained, the nonlinear inverse problem of determining the polarization tensors of the objects, accounting for multiple scattering between objects, is solved by a non-iterative analytical approach based on the least squares method.
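The authors' MUSIC variant for small scatterers with induced electric and magnetic dipoles is not reproduced here, but the underlying subspace idea can be shown with the classic narrowband direction-of-arrival version: steering vectors at the true source locations are orthogonal to the noise subspace of the data covariance, so the pseudospectrum peaks there. All parameters below are illustrative:

```python
import numpy as np

M, K = 8, 2                      # sensors in the ULA, number of sources (assumed)
true_deg = [-20.0, 30.0]         # true directions of arrival (assumed)
m = np.arange(M)

def steering(theta_deg):
    """Half-wavelength ULA steering vectors, one column per angle."""
    return np.exp(1j * np.pi * np.outer(m, np.sin(np.deg2rad(theta_deg))))

A = steering(true_deg)
R = A @ A.conj().T + 0.01 * np.eye(M)   # ideal covariance: unit-power sources + noise

w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :M - K]                       # noise subspace (smallest M-K eigenvalues)

grid = np.arange(-90.0, 90.0, 0.1)
# MUSIC pseudospectrum: reciprocal of the projection onto the noise subspace
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

peak_idx = [i for i in range(1, len(grid) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(sorted(peak_idx, key=lambda i: P[i])[-K:])
est = [grid[i] for i in top]
print("estimated DOAs (deg):", est)
```

The paper's setting replaces steering vectors with dipole-field Green's function columns and follows the localization step with a least-squares solve for the polarization tensors.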
Advances on a Decision Analytic Approach to Exposure-Based Chemical Prioritization.
Wood, Matthew D; Plourde, Kenton; Larkin, Sabrina; Egeghy, Peter P; Williams, Antony J; Zemba, Valerie; Linkov, Igor; Vallero, Daniel A
2018-05-11
The volume and variety of manufactured chemicals is increasing, although little is known about the risks associated with the frequency and extent of human exposure to most chemicals. The EPA and the recent signing of the Lautenberg Act have both signaled the need for high-throughput methods to characterize and screen chemicals based on exposure potential, such that more comprehensive toxicity research can be informed. Prior work of Mitchell et al. using multicriteria decision analysis tools to prioritize chemicals for further research is enhanced here, resulting in a high-level chemical prioritization tool for risk-based screening. Reliable exposure information is a key gap in currently available engineering analytics to support predictive environmental and health risk assessments. An elicitation with 32 experts informed relative prioritization of risks from chemical properties and human use factors, and the values for each chemical associated with each metric were approximated with data from EPA's CP_CAT database. Three different versions of the model were evaluated using distinct weight profiles, resulting in three different ranked chemical prioritizations with only a small degree of variation across weight profiles. Future work will aim to include greater input from human factors experts and better define qualitative metrics. © 2018 Society for Risk Analysis.
Dagrau, Franck; Coulouvrat, François; Marchiano, Régis; Héron, Nicolas
2008-06-01
Dassault Aviation as a civil aircraft manufacturer is studying the feasibility of a supersonic business jet with the target of an "acceptable" sonic boom at the ground level, in particular in the case of focusing. A sonic boom computational process has been performed that takes into account meteorological effects and aircraft manoeuvres. Turn manoeuvres and aircraft acceleration create zones of convergence of rays (caustics) which are the place of sound amplification. Therefore two elements have to be evaluated: firstly the geometrical position of the caustics, and secondly the noise level in the neighbourhood of the caustics. The modelling of the sonic boom propagation is based essentially on the assumptions of geometrical acoustics. Ray tracing is obtained according to Fermat's principle as paths that minimise the propagation time between the source (the aircraft) and the receiver. Wave amplitude and time waveform result from the solution of the inviscid Burgers' equation written along each individual ray. The "age variable" measuring the cumulative nonlinear effects is linked to the ray tube area. Caustics are located as the place where the ray tube area vanishes. Since geometrical acoustics does not take into account diffraction effects, it breaks down in the neighbourhood of caustics where it would predict unphysical infinite pressure amplitude. The aim of this study is to describe an original method for computing the focused noise level. The approach involves three main steps that can be summarised as follows. The propagation equation is solved by a forward marching procedure split into three successive steps: linear propagation in a homogeneous medium, linear perturbation due to the weak heterogeneity of the medium, and non-linear effects. The first step is solved using an "exact" angular spectrum algorithm. Parabolic approximation is applied only for the weak perturbation due to the heterogeneities. Finally, non-linear effects are handled by solving the
A novel fast and accurate pseudo-analytical simulation approach for MOAO
Gendron, É.; Charara, Ali; Abdelfattah, Ahmad; Gratadour, D.; Keyes, David E.; Ltaief, Hatem; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-01-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
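The core linear-algebra step described above, computing the MMSE reconstructor and the covariance of the tomographic residual, can be sketched at toy scale (random covariances stand in for the real turbulence and WFS statistics; the real system inverts matrices of order 40 000):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint Gaussian statistics for phase x (n modes) and WFS data s (m slopes).
n, m = 20, 30
J = rng.normal(size=(n + m, n + m))
C = J @ J.T + 1e-3 * np.eye(n + m)        # SPD joint covariance (illustrative)
Cxx, Cxs = C[:n, :n], C[:n, n:]
Css = C[n:, n:]

# MMSE (minimum mean square error) tomographic reconstructor: x_hat = R @ s
R = Cxs @ np.linalg.inv(Css)

# Covariance of the tomographic residual x - R @ s, from which a PSF can be built
Cerr = Cxx - Cxs @ np.linalg.solve(Css, Cxs.T)
print("residual error variance (trace):", np.trace(Cerr))
```

At realistic sizes this inversion is exactly the bottleneck the authors offload to GPU-accelerated linear algebra; numpy is used here only to show the algebra.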
2013-12-10
Hsinchun Chen, Roger H. L. Chiang, and Veda C. Storey, "Business Intelligence and Analytics: From Big Data to Big Impact," MIS Quarterly 36, no. 4 (December 2012).
An analytical approach to separate climate and human contributions to basin streamflow variability
Li, Changbin; Wang, Liuming; Wanrui, Wang; Qi, Jiaguo; Linshan, Yang; Zhang, Yuan; Lei, Wu; Cui, Xia; Wang, Peng
2018-04-01
Climate variability and anthropogenic regulations are two interwoven factors in the ecohydrologic system across large basins. Understanding the roles that these two factors play under various hydrologic conditions is of great significance for basin hydrology and sustainable water utilization. In this study, we present an analytical approach, based on coupling the water balance method with the Budyko hypothesis, to derive effectiveness coefficients (ECs) of climate change, as a way to disentangle the contributions of climate change and human activities to the variability of river discharges under different hydro-transitional situations. The climate-dominated streamflow change (ΔQc) estimated by the EC approach was compared with estimates deduced from the elasticity method and the sensitivity index. The results suggest that the EC approach is valid and applicable for hydrologic study at the large basin scale. Analyses of various scenarios revealed that the contributions of climate change and human activities to river discharge variation differed among the regions of the study area. Over the past several decades, climate change dominated hydro-transitions from dry to wet, while human activities played the key role in the reduction of streamflow during wet-to-dry periods. The remarkable decline of discharge upstream was mainly due to human interventions, although climate contributed more to the runoff increase during dry periods in the semi-arid downstream. The induced effectiveness on streamflow changes indicated a contribution ratio of 49% for climate and 51% for human activities at the basin scale from 1956 to 2015. This simple approach, based on mathematical derivation, together with the case example of temporal segmentation and spatial zoning, can help explain the variation of river discharge in more detail at a large basin scale against the background of climate change and human regulations.
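The elasticity method used above as a cross-check can be illustrated with a minimal sketch. All numbers and the assumed precipitation elasticity below are hypothetical; the EC approach itself rests on the coupled water-balance/Budyko derivation, which is not reproduced here.

```python
# Illustrative climate-vs-human attribution of streamflow change using the
# climate-elasticity method (one of the comparison methods in the abstract).
# All numbers below are hypothetical.

Q1, Q2 = 120.0, 95.0      # mean annual runoff, two periods (mm)
P1, P2 = 480.0, 460.0     # precipitation (mm)
E01, E02 = 900.0, 930.0   # potential evapotranspiration (mm)

eps_P = 2.0               # precipitation elasticity of runoff (assumed)
eps_E0 = 1.0 - eps_P      # complementary PET elasticity (Budyko framework)

dQ_obs = Q2 - Q1
dQ_clim = (eps_P * (P2 - P1) / P1 + eps_E0 * (E02 - E01) / E01) * Q1
dQ_human = dQ_obs - dQ_clim

share_clim = abs(dQ_clim) / (abs(dQ_clim) + abs(dQ_human))
print(f"climate: {share_clim:.0%}, human: {1 - share_clim:.0%}")
```

The residual term `dQ_human` absorbs everything not explained by the climate elasticities, which is why careful period segmentation matters.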
Steinmetz, Philipp; Kellner, Michael; Hötzer, Johannes; Nestler, Britta
2018-02-01
For the analytical description of the relationship between undercoolings, lamellar spacings and growth velocities during the directional solidification of ternary eutectics in 2D and 3D, different extensions based on the theory of Jackson and Hunt are reported in the literature. Besides analytical approaches, the phase-field method has been established to study the spatially complex microstructure evolution during the solidification of eutectic alloys. The understanding of the fundamental mechanisms controlling the morphology development in multiphase, multicomponent systems is of high interest. For this purpose, a comparison is made between the analytical extensions and three-dimensional phase-field simulations of directional solidification in an ideal ternary eutectic system. Based on the observed accordance in two-dimensional validation cases, the experimentally reported, inherently three-dimensional chain-like pattern is investigated in extensive simulation studies. The results are quantitatively compared with the analytical results reported in the literature, and with a newly derived approach which uses equal undercoolings. Good accordance of the undercooling-spacing characteristics between the simulations and the analytical Jackson-Hunt approaches is found. The results show that the applied phase-field model, which is based on the grand potential approach, is able to describe the analytically predicted relationship between the undercooling and the lamellar arrangements during the directional solidification of a ternary eutectic system in 3D.
Optimal starting conditions for the rendezvous maneuver: Analytical and computational approach
Ciarcia, Marco
The three-dimensional rendezvous between two spacecraft is considered: a target spacecraft on a circular orbit around the Earth and a chaser spacecraft initially on some elliptical orbit yet to be determined. The chaser spacecraft has variable mass, limited thrust, and its trajectory is governed by three controls, one determining the thrust magnitude and two determining the thrust direction. We seek the time history of the controls in such a way that the propellant mass required to execute the rendezvous maneuver is minimized. Two cases are considered: (i) time-to-rendezvous free and (ii) time-to-rendezvous given, respectively equivalent to (i) free angular travel and (ii) fixed angular travel for the target spacecraft. The above problem has been studied by several authors under the assumption that the initial separation coordinates and the initial separation velocities are given, hence known initial conditions for the chaser spacecraft. In this paper, it is assumed that both the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given so as to prevent the occurrence of trivial solutions. Two approaches are employed: optimal control formulation (Part A) and mathematical programming formulation (Part B). In Part A, analyses are performed with the multiple-subarc sequential gradient-restoration algorithm for optimal control problems. They show that the fuel-optimal trajectory is zero-bang, namely it is characterized by two subarcs: a long coasting zero-thrust subarc followed by a short powered max-thrust braking subarc. While the thrust direction of the powered subarc is continuously variable for the optimal trajectory, its replacement with a constant (yet optimized) thrust direction produces a very efficient guidance trajectory. Indeed, for all values of the initial distance, the fuel required by the guidance trajectory is within less than one percent of the fuel required
Field-driven chiral bubble dynamics analysed by a semi-analytical approach
Vandermeulen, J.; Leliaert, J.; Dupré, L.; Van Waeyenberge, B.
2017-12-01
Nowadays, field-driven chiral bubble dynamics in the presence of the Dzyaloshinskii-Moriya interaction are a topic of thorough investigation. In this paper, a semi-analytical approach is used to derive equations of motion that express the bubble wall (BW) velocity and the change in in-plane magnetization angle as a function of the micromagnetic parameters of the involved interactions, thereby taking into account the two-dimensional nature of the bubble wall. It is demonstrated that the equations of motion enable an accurate description of the expanding and shrinking convex bubble dynamics, and an expression for the transition field between shrinkage and expansion is derived. In addition, these equations of motion show that the BW velocity depends not only on the driving force but also on the BW curvature. The absolute BW velocity increases for both a shrinking and an expanding bubble, but for different reasons: for expanding bubbles, it is due to the increasing importance of the driving force, while for shrinking bubbles, it is due to the increasing importance of contributions related to the BW curvature. Finally, using this approach we show how the recently proposed magnetic bubblecade memory can operate in the flow regime in the presence of a tilted sinusoidal magnetic field and at greatly reduced bubble sizes compared to the original device prototype.
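The curvature dependence of the wall velocity can be illustrated with a deliberately schematic model. The form dR/dt = mu*(H - H_t - c/R) and every parameter value below are assumptions made for illustration only; they are not the paper's derived equations of motion, but they reproduce the qualitative behaviour the abstract describes.

```python
# Schematic bubble-wall dynamics: a curvature term c/R opposes the drive
# field H, so bubbles expand above a transition field and shrink below it.
# mu, H_t and c are hypothetical parameters.

mu, H_t, c = 1.0, 0.0, 1.0   # mobility, offset field, curvature coefficient

def simulate(H, R0, dt=1e-3, steps=2000):
    """Euler-integrate the schematic wall equation; return velocity history."""
    R, v = R0, []
    for _ in range(steps):
        dRdt = mu * (H - H_t - c / R)
        v.append(dRdt)
        R += dRdt * dt
        if R <= 1e-3:                 # bubble has collapsed
            break
    return v

v_expand = simulate(H=2.0, R0=1.0)   # H above the transition field c/R0
v_shrink = simulate(H=0.5, R0=1.0)   # H below the transition field

# |v| grows in both regimes, for the two different reasons in the abstract:
# the drive dominates when expanding, the curvature term when shrinking.
assert abs(v_expand[-1]) > abs(v_expand[0])
assert abs(v_shrink[-1]) > abs(v_shrink[0])
```

Setting dR/dt = 0 gives the transition field H = H_t + c/R, mirroring the curvature-dependent transition expression derived in the paper.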
A Visual Analytics Approach for Station-Based Air Quality Data.
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-12-24
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
Schildcrout, Jonathan S; Basford, Melissa A; Pulley, Jill M; Masys, Daniel R; Roden, Dan M; Wang, Deede; Chute, Christopher G; Kullo, Iftikhar J; Carrell, David; Peissig, Peggy; Kho, Abel; Denny, Joshua C
2010-12-01
We describe a two-stage analytical approach for characterizing morbidity profile dissimilarity among patient cohorts using electronic medical records. We capture morbidities using International Statistical Classification of Diseases and Related Health Problems (ICD-9) codes. In the first stage of the approach, separate logistic regression analyses for ICD-9 sections (e.g., "hypertensive disease" or "appendicitis") are conducted, and the odds ratios that describe adjusted differences in prevalence between two cohorts are displayed graphically. In the second stage, the results from the ICD-9 section analyses are combined into a general morbidity dissimilarity index (MDI). For illustration, we examine nine cohorts of patients representing six phenotypes (or controls) derived from five institutions, each a participant in the electronic MEdical REcords and GEnomics (eMERGE) network. The phenotypes studied include type II diabetes and type II diabetes controls, peripheral arterial disease and peripheral arterial disease controls, normal cardiac conduction as measured by electrocardiography, and senile cataracts. Copyright © 2010 Elsevier Inc. All rights reserved.
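The two-stage idea can be sketched as follows. This illustration substitutes unadjusted 2×2 odds ratios and an assumed aggregation (mean absolute log odds ratio) for the paper's covariate-adjusted logistic regressions and its actual MDI definition; all counts are hypothetical.

```python
import math

# Hypothetical per-cohort case counts for three ICD-9 sections.
n_a, n_b = 1000, 1200          # cohort sizes
cases = {                      # (cases in cohort A, cases in cohort B)
    "hypertensive disease": (250, 180),
    "appendicitis": (12, 15),
    "diabetes mellitus": (90, 210),
}

def odds_ratio(a_cases, b_cases):
    """Unadjusted cohort-A vs cohort-B odds ratio for one ICD-9 section."""
    oa = a_cases / (n_a - a_cases)
    ob = b_cases / (n_b - b_cases)
    return oa / ob

# Stage 1: one odds ratio per ICD-9 section
log_ors = {k: math.log(odds_ratio(a, b)) for k, (a, b) in cases.items()}

# Stage 2: combine into a single morbidity dissimilarity index (MDI);
# the mean absolute log-odds-ratio used here is an assumed aggregation.
mdi = sum(abs(v) for v in log_ors.values()) / len(log_ors)
print(f"MDI = {mdi:.3f}")
```

An MDI of zero would mean identical section-level prevalence odds in both cohorts; larger values indicate more dissimilar morbidity profiles.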
An Analytical Approach for Fast Recovery of the LSI Properties in Magnetic Particle Imaging
Directory of Open Access Journals (Sweden)
Hamed Jabbari Asl
2016-01-01
Linearity and shift invariance (LSI) characteristics of magnetic particle imaging (MPI) are important properties for quantitative medical diagnosis applications. The MPI image equations have been theoretically shown to exhibit LSI; however, in practice, the necessary filtering action removes the first-harmonic information, which destroys the LSI characteristics. In the x-space reconstruction method, this lost information reduces to a constant. Available recovery algorithms, which are based on signal matching across multiple partial fields of view (pFOVs), require substantial processing time and a priori information at the start of imaging. In this paper, a fast analytical recovery algorithm is proposed to restore the LSI properties of x-space MPI images representable as an image of discrete concentrations of magnetic material. The method utilizes the one-dimensional (1D) x-space imaging kernel and properties of the image and lost-image equations. The approach does not require overlapping of pFOVs, and its complexity depends only on a small system of linear equations; therefore, it can reduce the processing time. Moreover, the algorithm needs only a priori information that can be obtained in a single imaging process. Considering different particle distributions, several simulations are conducted, and results of 1D and 2D imaging demonstrate the effectiveness of the proposed approach.
A Visual Analytics Approach for Station-Based Air Quality Data
Directory of Open Access Journals (Sweden)
Yi Du
2016-12-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
General analytical approach for sound transmission loss analysis through a thick metamaterial plate
International Nuclear Information System (INIS)
Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M.
2014-01-01
We report theoretically and numerically on the sound transmission loss performance through a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first one can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that high sound transmission loss up to 72 dB at 2 kHz is reached with a thick metamaterial plate while only 23 dB can be obtained for a simple homogeneous plate with the same thickness. Such plate-type acoustic metamaterial can be a very effective solution for high performance sound insulation and structural vibration shielding in the very low-frequency range
General analytical approach for sound transmission loss analysis through a thick metamaterial plate
Energy Technology Data Exchange (ETDEWEB)
Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M., E-mail: Badreddine.Assouar@univ-lorraine.fr [CNRS, Institut Jean Lamour, Vandoeuvre-lès-Nancy F-54506 (France); Institut Jean Lamour, University of Lorraine, Boulevard des Aiguillettes, BP: 70239, 54506 Vandoeuvre-lès-Nancy (France)
2014-11-21
We report theoretically and numerically on the sound transmission loss performance through a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first one can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that high sound transmission loss up to 72 dB at 2 kHz is reached with a thick metamaterial plate while only 23 dB can be obtained for a simple homogeneous plate with the same thickness. Such plate-type acoustic metamaterial can be a very effective solution for high performance sound insulation and structural vibration shielding in the very low-frequency range.
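For context on the 23 dB homogeneous-plate figure quoted above, the classic normal-incidence mass law gives a quick baseline. The metamaterial results come from the plane-wave-expansion model, not from this formula, and the plate parameters below are assumptions chosen only for illustration.

```python
import math

# Normal-incidence mass law for a homogeneous plate (context only; the
# metamaterial TL in the abstract is computed with plane-wave expansion).
# Plate material and thickness are hypothetical.

rho0, c0 = 1.21, 343.0          # air density (kg/m^3) and sound speed (m/s)
rho_plate, h = 1190.0, 0.002    # an acrylic-like plate, 2 mm thick (assumed)
m = rho_plate * h               # surface mass density (kg/m^2)

def mass_law_tl(f):
    """Normal-incidence transmission loss in dB at frequency f (Hz)."""
    x = math.pi * f * m / (rho0 * c0)
    return 10.0 * math.log10(1.0 + x * x)

print(f"TL at 2 kHz: {mass_law_tl(2000.0):.1f} dB")
```

The mass law rises by about 6 dB per octave, so large low-frequency gains like the reported 72 dB require resonant mechanisms rather than added mass alone.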
Contaminant ingress into multizone buildings: An analytical state-space approach
Parker, Simon
2013-08-13
The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that, in the absence of removal mechanisms, the exposure in every interior location is limited by the exterior exposure. Estimates are also developed for the short-term maximum concentration and exposure in a multizone building in response to a step change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures of the standard deviation of concentration and exposure are provided for a range of residential multizone buildings. Ratios of the maximum short-term concentrations and exposures to single-zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
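A minimal sketch of the state-space formulation for the two-zone case, with hypothetical volumes and air-flow rates, shows the bounded-exposure property described in the abstract.

```python
import numpy as np

# Two-zone building (outer zone 1, inner zone 2) responding to a step
# change in the exterior concentration c_ext. The state-space form
# dx/dt = A x + b * c_ext follows the multizone mass balance; all flow
# rates and volumes are hypothetical.

V1, V2 = 200.0, 50.0      # zone volumes (m^3)
q0, q12 = 100.0, 40.0     # exterior<->zone1 and zone1<->zone2 flows (m^3/h)

A = np.array([[-(q0 + q12) / V1, q12 / V1],
              [q12 / V2, -q12 / V2]])
b = np.array([q0 / V1, 0.0])

c_ext = 1.0               # step change in exterior concentration
x = np.zeros(2)           # zone concentrations
dt, T = 0.001, 10.0       # time step and horizon (hours)
exposure = np.zeros(2)    # time-integrated concentration per zone

for _ in range(int(T / dt)):
    x = x + dt * (A @ x + b * c_ext)   # forward-Euler integration
    exposure += x * dt

# With no removal mechanisms, interior concentrations approach c_ext from
# below, so interior exposure never exceeds the exterior exposure c_ext*T.
assert np.all(x <= c_ext + 1e-9)
assert np.all(exposure <= c_ext * T)
```

The inner zone lags the outer zone, which is exactly why its short-term maximum concentration and exposure fall below single-zone estimates.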
An analytical approach to the positive reactivity void coefficient of TRIGA Mark-II reactor
International Nuclear Information System (INIS)
Edgue, Erdinc; Yarman, Tolga
1988-01-01
Previous calculations of the reactivity void coefficient of the I.T.U. TRIGA Mark-II reactor were done by the second author et al. The theoretical predictions were afterwards checked experimentally in this reactor. In this work an analytical approach is developed to evaluate rather quickly the reactivity void coefficient of I.T.U. TRIGA Mark-II versus the size of the void inserted into the reactor. It is thus assumed that the reactor is a cylindrical, bare nuclear system. Next, a belt of water of volume 2πrΔrH is introduced axially at a distance r from the center line of the system; Δr here is the thickness of the belt, and H is the height of the reactor. The void is described by decreasing the water density in the belt region. A two-group diffusion theory is adopted to determine the criticality of our configuration. The space dependency of the group fluxes is thereby assumed to be J₀(2.405 r/R) cos(π z/H), the same as that associated with the original bare reactor uniformly loaded prior to the change. A perturbation-type approach then furnishes the effect of introducing a void in the belt region. The reactivity void coefficient can, rather surprisingly, indeed be positive. To our knowledge, this fact had not been established by the supplier. The agreement of our predictions with the experimental results is good. (author)
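The perturbation step can be written generically as a first-order reactivity estimate over the belt region. This is the standard form, not the authors' exact two-group expressions:

```latex
% First-order perturbation estimate of the reactivity change caused by a
% localized cross-section change \delta\Sigma in the belt region V_b,
% with F the fission production operator (generic form):
\delta\rho \;\approx\;
\frac{\displaystyle\int_{V_b} \phi^{\dagger}\,\delta\Sigma\,\phi \,\mathrm{d}V}
     {\displaystyle\int_{V} \phi^{\dagger} F \phi \,\mathrm{d}V},
\qquad
\phi(r,z) \;=\; J_0\!\left(\frac{2.405\,r}{R}\right)\cos\!\left(\frac{\pi z}{H}\right)
```

Because the flux weighting varies with r, the sign of the net effect depends on where the belt sits, which is how a positive void coefficient can emerge at some radii.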
A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.
Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong
2017-01-01
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral-domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
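The spectral-domain preprocessing step can be sketched as a windowed FFT over one inertial-sensor frame. The sampling rate, window length and synthetic signal below are assumptions, not values from the paper.

```python
import numpy as np

# Spectral-domain preprocessing of one inertial-sensor window, as a cheap
# front-end before a deep model. Sampling rate and window size are assumed.

fs = 50                      # Hz, a typical wearable IMU rate
t = np.arange(256) / fs      # one window of 256 samples
# Synthetic accelerometer trace: a 2 Hz gait-like component plus noise.
rng = np.random.default_rng(1)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

# Hann-windowed magnitude spectrum (what would be fed to the classifier)
spectrum = np.abs(np.fft.rfft(acc * np.hanning(acc.size)))
freqs = np.fft.rfftfreq(acc.size, d=1.0 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant frequency: {dominant:.2f} Hz")
```

Working on the half-spectrum (129 bins here instead of 256 time samples) is one way such preprocessing reduces the on-node compute budget.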
International Nuclear Information System (INIS)
Staśkiewicz, B.; Okrasiński, W.
2012-01-01
We propose a simple analytical form of the vapor–liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and vapor pressure have been determined using the Van der Waals and Dieterici equations of state. In the described method, Bernoulli differential equations, critical-exponent theory and a form of Maxwell's criterion are used. This approach has not previously been used to determine the analytical form of the phase curves as done in this Letter. Lennard-Jones fluids have been considered for the analysis. A comparison with experimental data is made, and the accuracy of the method is described. -- Highlights: ► We propose a new analytical way to determine the VLE curve. ► A simple, mathematically straightforward form of the phase curves is presented. ► A comparison with experimental data is discussed. ► The accuracy of the method has been confirmed.
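For reference, the standard Van der Waals critical constants and the mean-field near-critical shape of the coexistence curve are quoted below for context; the Letter's Bernoulli-equation construction of the full phase curves is more detailed than this.

```latex
% Van der Waals critical constants and the mean-field coexistence-curve
% shape near T_c (standard textbook results, quoted for context):
p_c = \frac{a}{27 b^2},\qquad V_c = 3b,\qquad T_c = \frac{8a}{27 R b},
\qquad
\frac{\rho_{l} - \rho_{g}}{\rho_c} \;\sim\; B\left(1 - \frac{T}{T_c}\right)^{\beta},
\quad \beta_{\mathrm{vdW}} = \tfrac{1}{2}
```

The mean-field exponent β = 1/2 is what any closed-form curve built from the Van der Waals or Dieterici equation inherits near the critical point.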
Maglaveras, Nicos; Kilintzis, Vassilis; Koutkias, Vassilis; Chouvarda, Ioanna
2016-01-01
Integrated care and connected health are two fast-evolving concepts that have the potential to leverage personalised health. On one side, the restructuring of care models and the implementation of new systems and integrated care programs providing coaching and advanced intervention possibilities enable medical decision support and personalized healthcare services. On the other side, the connected health ecosystem builds the means to follow and support citizens via personal health systems in their everyday activities and, thus, gives rise to an unprecedented wealth of data. These approaches are leading to a deluge of complex data, as well as to new types of interactions with and among users of the healthcare ecosystem. The main challenges refer to the data layer, the information layer, and the output of information processing and analytics. In all the above-mentioned layers, the primary concern is quality, both in data and in information, which increases the need for filtering mechanisms. Especially in the data layer, the big-biodata management and analytics ecosystem is evolving; telemonitoring is a step forward for leveraging data quality, with numerous challenges still left to address, partly due to the large number of micro- and nano-sensors and technologies available today, as well as the heterogeneity in the users' backgrounds and data sources. This leads to new R&D pathways concerning biomedical information processing and management, as well as to the design of new intelligent decision support systems (DSS) and interventions for patients. In this paper, we illustrate these issues through exemplar research targeting chronic patients, illustrating the current status and trends in PHS within the integrated care and connected health world.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Exploration of Simple Analytical Approaches for Rapid Detection of Pathogenic Bacteria
Energy Technology Data Exchange (ETDEWEB)
Rahman, Salma [Iowa State Univ., Ames, IA (United States)
2005-01-01
Many of the current methods for pathogenic bacterial detection require long sample-preparation and analysis times, as well as complex instrumentation. This dissertation explores simple analytical approaches (e.g., flow cytometry and diffuse reflectance spectroscopy) that may be applied towards the ideal requirements of a microbial detection system, through method and instrumentation development, and by the creation and characterization of immunosensing platforms. This dissertation is organized into six sections. In the general Introduction section a literature review on several of the key aspects of this work is presented. First, different approaches for detection of pathogenic bacteria are reviewed, with a comparison of the relative strengths and weaknesses of each approach. A general overview of diffuse reflectance spectroscopy is then presented. Next, the structure and function of self-assembled monolayers (SAMs) formed from organosulfur molecules at gold, and micrometer and sub-micrometer patterning of biomolecules using SAMs, are discussed. This section is followed by four research chapters, presented as separate manuscripts. Chapter 1 describes the efforts and challenges towards the creation of immunosensing platforms that exploit the flexibility and structural stability of SAMs of thiols at gold. A 1H,1H,2H,2H-perfluorodecyl-1-thiol SAM (PFDT) and dithio-bis(succinimidyl propionate)-(DSP)-derived SAMs were used to construct the platform. Chapter 2 describes the characterization of the PFDT- and DSP-derived SAMs, and the architectures formed when they are coupled to antibodies as well as target bacteria. These studies used infrared reflection spectroscopy (IRS), X-ray photoelectron spectroscopy (XPS), and the electrochemical quartz crystal microbalance (EQCM). Chapter 3 presents a new, sensitive, and portable diffuse-reflection-based technique for the rapid identification and quantification of pathogenic bacteria. Chapter 4 reports research efforts in the
Mitigating Sports Injury Risks Using Internet of Things and Analytics Approaches.
Wilkerson, Gary B; Gupta, Ashish; Colston, Marisa A
2018-03-12
Sport injuries restrict participation, impose a substantial economic burden, and can have persisting adverse effects on health-related quality of life. The effective use of the Internet of Things (IoT), when combined with analytics approaches, can improve player safety through identification of injury risk factors that can be addressed by targeted risk-reduction training activities. Use of IoT devices can facilitate highly efficient quantification of relevant functional capabilities prior to sport participation, which could substantially advance the prevailing sport injury management paradigm. This study introduces a framework for using sensor-derived IoT data to supplement other data for objective estimation of each individual college football player's level of injury risk, an approach to injury prevention that has not been previously reported. A cohort of 45 NCAA Division I-FCS college players provided data in the form of self-ratings of the persisting effects of previous injuries and a single-leg postural stability test. Instantaneous change in body mass acceleration (jerk) during the test was quantified by a smartphone accelerometer, with data wirelessly transmitted to a secure cloud server. Injuries sustained from the beginning of practice sessions until the end of the 13-game season were documented, along with the number of games played by each athlete. Results demonstrate a strong prediction model. Our approach may have strong relevance to the estimation of injury risk for other physically demanding activities. Clearly, there is great potential for improvement of injury prevention initiatives through identification of individual athletes who possess elevated injury risk, and through targeted interventions. © 2018 Society for Risk Analysis.
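The jerk quantification can be sketched directly from discrete accelerometer samples. The sampling rate, the synthetic trace, and the RMS summary feature below are illustrative assumptions, not details taken from the study.

```python
import numpy as np

# Jerk (time derivative of acceleration) from discrete smartphone
# accelerometer samples, the quantity used to score the single-leg
# postural-stability test. Sampling rate is an assumption.

fs = 100.0                                  # Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic 3-axis acceleration during a balance task (illustrative only).
acc = np.stack([np.sin(2 * np.pi * 1.0 * t),
                0.5 * np.sin(2 * np.pi * 3.0 * t),
                9.81 + 0.05 * np.cos(2 * np.pi * 0.5 * t)], axis=1)

jerk = np.diff(acc, axis=0) * fs            # finite-difference derivative
jerk_mag = np.linalg.norm(jerk, axis=1)     # instantaneous jerk magnitude

# A simple summary feature: root-mean-square jerk over the trial.
rms_jerk = float(np.sqrt(np.mean(jerk_mag ** 2)))
print(f"RMS jerk: {rms_jerk:.2f} m/s^3")
```

A per-trial scalar like this is small enough to transmit wirelessly to a cloud server, which matches the IoT pipeline the abstract describes.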
Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Gunay, Nur Sibel; Wang, Jing; Sun, Elaine Y; Pradines, Joël R; Farutin, Victor; Shriver, Zachary; Kaundinya, Ganesh V; Capila, Ishan
2017-02-01
Heparan sulfate (HS), a glycosaminoglycan present on the surface of cells, has been postulated to have important roles in driving both normal and pathological physiologies. The chemical structure and sulfation pattern (domain structure) of HS is believed to determine its biological function, to vary across tissue types, and to be modified in the context of disease. Characterization of HS requires isolation and purification of cell surface HS as a complex mixture. This process may introduce additional chemical modification of the native residues. In this study, we describe an approach towards thorough characterization of bovine kidney heparan sulfate (BKHS) that utilizes a variety of orthogonal analytical techniques (e.g. NMR, IP-RP-HPLC, LC-MS). These techniques are applied to characterize this mixture at various levels including composition, fragment level, and overall chain properties. The combination of these techniques in many instances provides orthogonal views into the fine structure of HS, and in other instances provides overlapping/confirmatory information from different perspectives. Specifically, this approach enables quantitative determination of natural and modified saccharide residues in the HS chains, and identifies unusual structures. Analysis of partially digested HS chains allows for a better understanding of the domain structures within this mixture, and yields specific insights into the non-reducing-end and reducing-end structures of the chains. This approach outlines a useful framework that can be applied to elucidate HS structure and thereby provides a means to advance understanding of its biological role and potential involvement in disease progression. In addition, the techniques described here can be applied to characterization of heparin from different sources.
International Nuclear Information System (INIS)
Suarez Antola, R.
2005-01-01
It was recently proposed to apply an extension of Lyapunov's first method to the non-linear regime, known as non-linear modal analysis (NMA), to the study of space-time problems in nuclear reactor kinetics, nuclear power plant dynamics, and nuclear power plant instrumentation and control [1]. The present communication shows how to apply NMA to the study of xenon spatial oscillations in large nuclear reactors. The set of non-linear modal equations derived by J. Lewins [2] for neutron flux, xenon concentration and iodine concentration is discussed, and a modified version of these equations is taken as a starting point. Using the methods of singular perturbation theory, a slow manifold is constructed in the space of mode amplitudes. This allows the reduction of the original high-dimensional dynamics to a low-dimensional one. It is shown how the amplitudes of the first mode for the neutron flux, temperature, xenon concentration and iodine concentration fields can have stable steady-state values while the corresponding amplitudes of the second mode oscillate in a stable limit cycle. The extrapolated dimensions of the reactor's core are used as bifurcation parameters. Approximate analytical formulae are obtained for the critical values of these parameters (below which the onset of oscillations is produced), for the period, and for the amplitudes of the above-mentioned oscillations. These results are applied to the discussion of neutron flux and temperature excursions in critical locations of the reactor's core. The results of NMA can be validated against the results obtained by applying suitable computer codes, using homogenization theory [3] to link the complex heterogeneous model of the codes with the simplified mathematical model used for NMA
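The delayed iodine-to-xenon chain that drives these oscillations can be illustrated with the standard point-model balance equations, which are not the authors' modal equations (those couple several spatial modes); the parameter values are typical textbook figures.

```python
# Standard point-model iodine/xenon balance equations. The delayed
# iodine -> xenon production chain is what makes slow spatial power
# oscillations possible in large cores.

lam_I, lam_X = 2.87e-5, 2.09e-5    # decay constants (1/s)
gam_I, gam_X = 0.064, 0.002        # fission yields
sig_X = 2.7e-18                    # Xe-135 absorption cross-section (cm^2)
Sigma_f = 0.05                     # macroscopic fission cross-section (1/cm)

def step(I, X, phi, dt):
    """One Euler step of the iodine/xenon balance at flux phi."""
    dI = gam_I * Sigma_f * phi - lam_I * I
    dX = gam_X * Sigma_f * phi + lam_I * I - lam_X * X - sig_X * phi * X
    return I + dI * dt, X + dX * dt

# Start at equilibrium for flux phi0, then step the flux up: xenon first
# dips (burn-off outpaces delayed iodine decay) before rising again --
# the lag behind the flux that drives the oscillations in the abstract.
phi0 = 3e13
I = gam_I * Sigma_f * phi0 / lam_I
X = (gam_I + gam_X) * Sigma_f * phi0 / (lam_X + sig_X * phi0)

xs = []
for _ in range(int(6 * 3600)):          # six hours, 1 s steps
    I, X = step(I, X, 2 * phi0, 1.0)
    xs.append(X)

assert min(xs) < xs[0]                   # initial xenon burn-off dip
```

In a large core this lag acts out of phase between spatial regions, which is how the second-mode amplitude can settle into a limit cycle while the first mode stays steady.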
Oseev, Aleksandr; Lucklum, Ralf; Zubtsov, Mikhail; Schmidt, Marc-Peter; Mukhin, Nikolay V; Hirsch, Soeren
2017-09-23
The current work demonstrates a novel surface acoustic wave (SAW) based phononic crystal sensor approach that allows the integration of a velocimetry-based sensor concept into single chip integrated solutions, such as Lab-on-a-Chip devices. The introduced sensor platform merges advantages of ultrasonic velocimetry analytic systems and a microacoustic sensor approach. It is based on the analysis of structural resonances in a periodic composite arrangement of microfluidic channels confined within a liquid analyte. Completed theoretical and experimental investigations show the ability to utilize periodic structure localized modes for the detection of volumetric properties of liquids and prove the efficacy of the proposed sensor concept.
International Nuclear Information System (INIS)
Morini, Filippo; Deleuze, Michael S.; Watanabe, Noboru; Takahashi, Masahiko
2015-01-01
The influence of thermally induced nuclear dynamics (molecular vibrations) in the initial electronic ground state on the valence orbital momentum profiles of furan has been theoretically investigated using two different approaches. The first of these approaches employs the principles of Born-Oppenheimer molecular dynamics, whereas the so-called harmonic analytical quantum mechanical approach resorts to an analytical decomposition of contributions arising from quantized harmonic vibrational eigenstates. In spite of their intrinsic differences, the two approaches enable consistent insights into the electron momentum distributions inferred from new measurements employing electron momentum spectroscopy and an electron impact energy of 1.2 keV. Both approaches point out in particular an appreciable influence of a few specific molecular vibrations of A1 symmetry on the 9a1 momentum profile, which can be unravelled from considerations on the symmetry characteristics of orbitals and their energy spacing.
Directory of Open Access Journals (Sweden)
V. F. Chekhun
2013-09-01
New data were obtained on the approximation of the experimental cytogenetic dose-effect dependence by a spline regression model, which improves the biological dosimetry of human radiological exposure. The improvement is achieved by reducing the error in the determination of absorbed dose compared with the traditional linear and linear-quadratic models, and makes it possible to predict the behaviour of the dose curves in the plateau region.
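As a toy illustration of the calibration problem described above, the sketch below fits the traditional linear-quadratic model Y(D) = c + alpha*D + beta*D^2 to hypothetical dicentric yields by least squares and inverts the fitted curve to estimate an absorbed dose. All numbers are invented, and the spline-regression refinement proposed in the abstract is not reproduced here.

```python
import numpy as np

# Hypothetical dicentric yields per cell at several acute doses (Gy).
doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
yields_ = np.array([0.001, 0.03, 0.09, 0.30, 0.62, 1.05])

# Classical linear-quadratic calibration: Y(D) = c + alpha*D + beta*D^2.
beta, alpha, c = np.polyfit(doses, yields_, 2)   # highest power first

def absorbed_dose(y):
    """Invert the calibration curve: solve beta*D^2 + alpha*D + (c - y) = 0."""
    disc = alpha ** 2 - 4.0 * beta * (c - y)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)
```

Inverting the fitted curve in this way propagates any misfit of the quadratic form directly into the dose estimate, which is exactly the error source the spline model is meant to reduce.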
Spatial Analytic Hierarchy Process Model for Flood Forecasting: An Integrated Approach
International Nuclear Information System (INIS)
Matori, Abd Nasir; Yusof, Khamaruzaman Wan; Hashim, Mustafa Ahmad; Lawal, Dano Umar; Balogun, Abdul-Lateef
2014-01-01
Various flood-influencing factors such as rainfall, geology, slope gradient, land use, soil type, drainage density and temperature are generally considered for flood hazard assessment. However, lack of appropriate handling/integration of data from different sources is a challenge that can make any spatial forecasting difficult and inaccurate. Availability of accurate flood maps and a thorough understanding of the subsurface conditions can adequately enhance flood disaster management. This study presents an approach that attempts to overcome this drawback by combining a Geographic Information System (GIS) with an Analytic Hierarchy Process (AHP) model as a spatial forecasting tool. In achieving the set objectives, spatial forecasting of flood-susceptible zones in the study area was made. A total of five criteria/factors believed to influence flood generation in the study area were selected. Priority weights were assigned to each criterion/factor based on Saaty's nine-point scale of preference, and the weights were further normalized through the AHP. The model was integrated into a GIS in order to produce a flood forecasting map.
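The AHP weighting step mentioned above has a standard numerical form: judgements on Saaty's 1-9 scale fill a pairwise comparison matrix, the normalized principal eigenvector gives the priority weights, and a consistency ratio screens the judgements. A minimal sketch with an invented comparison matrix (the study's actual judgements are not reproduced):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparison matrix for five flood
# factors (e.g. rainfall, slope, land use, soil, drainage density).
A = np.array([
    [1.0, 3.0, 5.0, 7.0, 9.0],
    [1/3, 1.0, 3.0, 5.0, 7.0],
    [1/5, 1/3, 1.0, 3.0, 5.0],
    [1/7, 1/5, 1/3, 1.0, 3.0],
    [1/9, 1/7, 1/5, 1/3, 1.0],
])

# Priority weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 1.12            # Saaty's random index for n = 5
CR = CI / RI         # judgements are usually accepted when CR < 0.1
```

The resulting weights can then be applied to the rasterized criterion layers in the GIS to score each cell.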
Oud, Bart; van Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-03-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independence of genome sequences from experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
A Hybrid Approach for Reliability Analysis Based on Analytic Hierarchy Process and Bayesian Network
International Nuclear Information System (INIS)
Zubair, Muhammad
2014-01-01
By using the analytic hierarchy process (AHP) and Bayesian networks (BN), the present research examines the technical and non-technical causes of nuclear accidents. The study revealed that technical faults were one major cause of these accidents. From another point of view, it becomes clear that aspects of human behavior such as dishonesty, insufficient training, and selfishness also play a key role in causing these accidents. In this study, a hybrid approach for reliability analysis based on AHP and BN to increase nuclear power plant (NPP) safety has been developed. Using AHP, the best alternatives for improving safety, design, and operation, and for allocating budget across all technical and non-technical factors related to nuclear safety, have been investigated. We use a special structure of BN based on the AHP method. The graph of the BN and the probabilities associated with its nodes are designed to translate the knowledge of experts on the selection of the best alternative. The results show that improvement in regulatory authorities will decrease failure probabilities and increase safety and reliability in the industrial area.
An analytic approach to probability tables for the unresolved resonance region
Brown, David; Kawano, Toshihiko
2017-09-01
The Unresolved Resonance Region (URR) connects the fast neutron region with the Resolved Resonance Region (RRR). The URR is problematic since resonances are not resolvable experimentally, yet the fluctuations in the neutron cross sections play a discernible and technologically important role: the URR in a typical nucleus is in the 100 keV - 2 MeV window where the typical fission spectrum peaks. The URR also represents the transition between R-matrix theory, used to describe isolated resonances, and Hauser-Feshbach theory, which accurately describes the average cross sections. In practice, only average or systematic features of the resonances in the URR are known and are tabulated in evaluations in a nuclear data library such as ENDF/B-VII.1. Codes such as AMPX and NJOY can compute the probability distribution of the cross section in the URR under some assumptions using Monte Carlo realizations of sets of resonances. These probability distributions are stored in the so-called PURR tables. In our work, we begin to develop a scheme for computing the covariance of the cross section probability distribution analytically. Our approach offers the possibility of defining the limits of applicability of Hauser-Feshbach theory and suggests a way to calculate PURR tables directly from systematics for nuclei whose RRR is unknown, provided one makes appropriate assumptions about the shape of the cross section probability distribution.
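The Monte Carlo procedure that the analytic scheme aims to replace can be caricatured in a few lines: draw resonance ladders with Wigner-distributed spacings and Porter-Thomas widths, evaluate a schematic cross section at one energy, and bin the samples into a probability table. This is a deliberately toy model (arbitrary units, symmetric single-level terms, invented parameters), not a physical evaluation like PURR's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cross_section(e=25.0, n_res=50, d_avg=1.0, g_avg=0.05):
    # Wigner spacing distribution is a Rayleigh law with mean d_avg.
    spacings = rng.rayleigh(scale=d_avg * np.sqrt(2.0 / np.pi), size=n_res)
    e_res = np.cumsum(spacings)
    # Porter-Thomas widths: chi-squared with one degree of freedom.
    widths = g_avg * rng.chisquare(df=1, size=n_res)
    # Sum of symmetric single-level terms (arbitrary units).
    return float(np.sum(widths / ((e - e_res) ** 2 + (g_avg / 2.0) ** 2)))

# Probability table: empirical distribution of the sampled cross section.
xs = np.array([sample_cross_section() for _ in range(2000)])
counts, edges = np.histogram(xs, bins=20)
prob_table = counts / counts.sum()
```

An analytic approach would instead characterize this distribution (its moments and covariance) directly from the average resonance parameters, without the sampling loop.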
Decision support for energy conservation promotion: an analytic hierarchy process approach
International Nuclear Information System (INIS)
Kablan, M.M.
2004-01-01
An effective energy conservation program in any country should encourage the different enterprises, utilities and individuals to employ energy-efficient processes, technologies, equipment, and materials. Governments use different mechanisms or policy instruments such as pricing policy (PP), regulation and legislation (RL), training and education, fiscal and financial incentives (FFI), and R&D to promote energy conservation. Effective implementation of energy conservation policies requires prioritization of the different available policy instruments. This paper presents an analytic hierarchy process (AHP) based modeling framework for the prioritization of energy conservation policy instruments. The use of AHP to support management in the prioritization of policy instruments for promoting energy conservation is illustrated using the case study of Jordan. The research provides a comprehensive framework for performing the prioritization in a scientific and systematic manner. The four most promising policy instruments for promoting energy conservation in Jordan are RL (37.4%), followed by FFI (22.2%), PP (18.0%), and training, education and qualification (14.5%). One major advantage of the AHP approach is that it breaks a large problem into smaller problems, which enables the decision-maker (DM) to concentrate better and make sounder decisions. In addition, AHP employs a consistency test that can screen out inconsistent judgements. The presented methodology might be beneficial to DMs in other countries.
Directory of Open Access Journals (Sweden)
Neill Korobov
2018-05-01
A discourse analytic approach was used to examine how twenty young adult romantic couples (ages 19-26) employed criticisms and insinuations of infidelity in their natural unstructured interactions to indirectly and creatively pursue closeness. The research is motivated by an expanding body of work showing that ostensibly contentious interactional moments among young adult intimates may not be adversarial, but rather may be methods that promote a playful repartee leading to affiliation. I demonstrate how criticisms are often highly gendered and typically formulated and responded to in tongue-in-cheek, non-serious ways that involve the creative use of various forms of irony, laughter, rekeyings, abrupt non-sequiturs, and topic shifts that mitigate the potential for the criticisms to become adversarial. Similarly, the insinuations of infidelity were often designed by the couples to attend to interactional breaches. They functioned as a brief but effective way for one partner to signal that they had been dismissed or neglected in the preceding discursive turns. My central finding is that young adult romantic couples maintain closeness amidst potential conflict in their natural everyday conversational interactions.
An analytic approach to 2D electronic PE spectra of molecular systems
International Nuclear Information System (INIS)
Szoecs, V.
2011-01-01
Highlights: RWA approach to the electronic photon echo; a straightforward calculation of 2D electronic spectrograms in finite molecular systems; importance of population-time dynamics in relation to inter-site coherent coupling. Abstract: The three-pulse photon echo (3P-PE) spectra of finite molecular systems with simplified line-broadening models are presented. The Fourier picture of a heterodyne-detected three-pulse rephasing PE signal in the δ-pulse limit of the external field is derived in analytic form. The method includes contributions of one- and two-excitonic states and allows direct calculation of the Fourier PE spectrogram from the corresponding Hamiltonian, which permits peak classification from the 3P-PE spectra dynamics. As an illustration, the proposed treatment is applied to simple systems, e.g. a 2-site two-level system (TLS) and an n-site TLS model of a photosynthetic unit. The importance of the relation between the Fourier picture of 3P-PE dynamics (corresponding to nonzero population time T) and coherent inter-state coupling is emphasized.
An analytic approach to probability tables for the unresolved resonance region
Directory of Open Access Journals (Sweden)
Brown David
2017-01-01
The Unresolved Resonance Region (URR) connects the fast neutron region with the Resolved Resonance Region (RRR). The URR is problematic since resonances are not resolvable experimentally, yet the fluctuations in the neutron cross sections play a discernible and technologically important role: the URR in a typical nucleus is in the 100 keV – 2 MeV window where the typical fission spectrum peaks. The URR also represents the transition between R-matrix theory, used to describe isolated resonances, and Hauser-Feshbach theory, which accurately describes the average cross sections. In practice, only average or systematic features of the resonances in the URR are known and are tabulated in evaluations in a nuclear data library such as ENDF/B-VII.1. Codes such as AMPX and NJOY can compute the probability distribution of the cross section in the URR under some assumptions using Monte Carlo realizations of sets of resonances. These probability distributions are stored in the so-called PURR tables. In our work, we begin to develop a scheme for computing the covariance of the cross section probability distribution analytically. Our approach offers the possibility of defining the limits of applicability of Hauser-Feshbach theory and suggests a way to calculate PURR tables directly from systematics for nuclei whose RRR is unknown, provided one makes appropriate assumptions about the shape of the cross section probability distribution.
International Nuclear Information System (INIS)
Donadille, L.; Derreumaux, S.; Mantione, J.; Robbes, I.; Trompier, F.; Amgarou, K.; Asselineau, B.; Martin, A.
2008-01-01
X-rays produced by high-energy (larger than 6 MeV) medical electron linear accelerators create secondary neutron radiation fields, mainly by photonuclear reactions inside the materials of the accelerator head, the patient and the walls of the therapy room. Numerous papers have been devoted to the study of neutron production in medical linear accelerators and the resulting decay of activation products. However, data on the doses delivered to workers under treatment conditions are scarce. In France, there are more than 350 external radiotherapy facilities representing almost all types of techniques and designs. IRSN carried out a measurement campaign in order to investigate the variation of the occupational dose according to the different situations encountered. Six installations were investigated, associated with the main manufacturers (Varian, Elekta, General Electric, Siemens), for several nominal energies, conventional and IMRT techniques, and bunker designs. Measurements were carried out separately for neutron and photon radiation fields, and for radiation associated with the decay of the activation products, by means of radiometers, tissue-equivalent proportional counters and spectrometers (neutron and photon spectrometry). They were performed at the positions occupied by the workers, i.e. outside the bunker during treatments and inside between treatments. Measurements were compared to published data. In addition, semi-empirical analytical approaches recommended by international protocols were used to estimate doses inside and outside the bunkers. The results obtained by both approaches were compared and analysed. The annual occupational effective dose was estimated at about 1 mSv, including more than 50% associated with the decay of activation products and less than 10% due to direct exposure to leakage neutrons produced during treatments. (author)
Directory of Open Access Journals (Sweden)
Orhan Dengiz
2018-01-01
Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge of the best combination of factors that suit production of yields has contributed to low production. The aim of this study was to determine the most suitable areas for agricultural uses. For that reason, in order to determine the land suitability classes of the study area, a multi-criteria approach was used with the linear combination technique and the analytical hierarchy process, taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and the land mapping units were taken from a digital detailed soil map at 1:5,000 scale. In addition, a GIS program was used to produce the land suitability map of the study area. This study was carried out at the Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha; 6890 ha of it has been used for irrigated agriculture, dry farming agriculture and pasture, while 169 ha has been used for non-agricultural activities such as settlement, roads, water bodies etc. The average annual temperature and precipitation of the study area are 16.1 °C and 1039.5 mm, respectively. After determination of the land suitability distribution classes for the study area, it was found that 15.0% of the study area is highly (S1) and moderately (S2) suitable, while 85% of the study area is marginally suitable or unsuitable, coded as S3 and N. Some relations were also determined by comparing the results of the linear combination technique with other hierarchical approaches such as the Land Use Capability Classification and Suitability Class for Agricultural Use methods.
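The linear combination technique referred to above reduces, for each mapping unit, to a weighted sum of rescaled criterion scores followed by classification into suitability classes. A minimal sketch with invented weights, scores, and class thresholds (the study's actual values are not reproduced):

```python
import numpy as np

# Hypothetical AHP-derived weights for five criteria (e.g. slope,
# texture, depth, erosion, organic matter), summing to 1.
weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])

# Per-parcel criterion scores rescaled to 0-1 (invented).
parcels = np.array([
    [0.9, 0.8, 0.7, 0.9, 0.6],   # parcel 1
    [0.4, 0.5, 0.3, 0.2, 0.4],   # parcel 2
    [0.1, 0.2, 0.3, 0.1, 0.2],   # parcel 3
])

# Weighted linear combination: suitability index per mapping unit.
index = parcels @ weights

# Classify into FAO-style suitability classes by fixed thresholds.
classes = np.select(
    [index >= 0.75, index >= 0.50, index >= 0.25],
    ["S1", "S2", "S3"],
    default="N",
)
```

In a GIS workflow the same arithmetic is applied cell by cell to the rasterized criterion layers, producing the suitability map directly.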
Kymes, Steven M; Plotzke, Michael R; Li, Jim Z; Nichol, Michael B; Wu, Joanne; Fain, Joel
2010-07-01
Glaucoma accounts for more than 11% of all cases of blindness in the United States, but there have been few studies of its economic impact. We examine the incremental cost of primary open-angle glaucoma, considering both visual and nonvisual medical costs over a lifetime of glaucoma, using a decision analytic approach taking the payor's perspective with microsimulation estimation. We constructed a Markov model to replicate health events over the remaining lifetime of someone newly diagnosed with glaucoma. Costs of this group were compared with those estimated for a control group without glaucoma. The cost of management of glaucoma (including medications) before the onset of visual impairment was not considered. The model was populated with probability data estimated from Medicare claims data (1999 through 2005). Cost of nonocular medications and nursing home use was estimated from California Medicare claims, and all other costs were estimated from Medicare claims data. We found modest differences in the incidence of comorbid conditions and health service use between people with glaucoma and the control group. Over their expected lifetime, the cost of care for people with primary open-angle glaucoma was higher than that of people without primary open-angle glaucoma by $1688, or approximately $137 per year. Among Medicare beneficiaries, a glaucoma diagnosis was not found to be associated with a significant risk of comorbidities before the development of visual impairment. Further study is necessary to consider the impact of glaucoma on quality of life, as well as aspects of physical and visual function not captured in this claims-based analysis. 2010 Elsevier Inc. All rights reserved.
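The structure of such a lifetime incremental-cost comparison can be sketched as a small Markov cohort model: two arms with different transition matrices, annual state costs accumulated over the remaining lifetime, and the difference taken at the end. All states, probabilities, and costs below are invented for illustration, not the paper's Medicare-derived estimates.

```python
import numpy as np

# Toy three-state model: well, visually impaired, dead (absorbing).
P_glaucoma = np.array([[0.93, 0.04, 0.03],
                       [0.00, 0.94, 0.06],
                       [0.00, 0.00, 1.00]])
P_control  = np.array([[0.96, 0.01, 0.03],
                       [0.00, 0.94, 0.06],
                       [0.00, 0.00, 1.00]])
annual_cost = np.array([500.0, 4000.0, 0.0])   # $/year per state

def lifetime_cost(P, years=40):
    """Expected undiscounted cost for a cohort starting in 'well'."""
    dist = np.array([1.0, 0.0, 0.0])
    total = 0.0
    for _ in range(years):
        total += dist @ annual_cost
        dist = dist @ P          # advance the cohort one cycle
    return total

incremental = lifetime_cost(P_glaucoma) - lifetime_cost(P_control)
```

A microsimulation version walks individual patients through the same matrices with random draws instead of propagating the cohort distribution, which lets individual histories (e.g. comorbidity onset) drive costs.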
International Nuclear Information System (INIS)
Beelman, R.J.
1999-01-01
A symptom approach to the analytical validation of symptom-based EOPs includes: (1) Identification of critical safety functions to the maintenance of fission product barrier integrity; (2) Identification of the symptoms which manifest an impending challenge to critical safety function maintenance; (3) Development of a symptomatic methodology to delineate bounding plant transient response modes; (4) Specification of bounding scenarios; (5) Development of a systematic calculational approach consistent with the objectives of the methodology; (6) Performance of thermal-hydraulic computer code calculations implementing the analytical methodology; (7) Interpretation of the analytical results on the basis of information available to the operator; (8) Application of the results to the validation of the proposed operator actions; (9) Production of a technical basis document justifying the proposed operator actions. (author)
A Semi-Analytical Approach for the Response of Nonlinear Conservative Systems
DEFF Research Database (Denmark)
Kimiaeifar, Amin; Barari, Amin; Fooladi, M
2011-01-01
This work applies the parameter-expanding method (PEM) as a powerful analytical technique in order to obtain the exact solution of nonlinear problems in classical dynamics. The Lagrange method is employed to derive the governing equations. The nonlinear governing equations are solved analytically by ...
Tromp, P.C.; Kuijpers, E.; Bekker, C.; Godderis, L.; Lan, Q.; Jedynska, A.D.; Vermeulen, R.; Pronk, A.
2017-01-01
To date there is no consensus about the most appropriate analytical method for measuring carbon nanotubes (CNTs), hampering the assessment and limiting the comparison of data. The goal of this study is to develop an approach for the assessment of the level and nature of inhalable multi-wall CNTs
Vahdat, M.; Oneto, L.; Anguita, D.; Funk, M.; Rauterberg, M.; Conole, G.; Klobucar, T.; Rensing, C.; Konert, J.; Lavoue, E.
2015-01-01
This paper presents a Learning Analytics approach for understanding the learning behavior of students while interacting with Technology Enhanced Learning tools. In this work we show that it is possible to gain insight into the learning processes of students from their interaction data. We base our
Causanilles Llanes, A.
2018-01-01
The research presented in this thesis supports the hypothesis that wastewater-based epidemiology (WBE) approach can be used as an alternative and non-intrusive technique that provides information about a population’s health and lifestyle habits. The focus is in the essential role of analytical
International Nuclear Information System (INIS)
Gawand, Hemangi Laxman; Bhattacharjee, A. K.; Roy, Kallol
2017-01-01
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications
Energy Technology Data Exchange (ETDEWEB)
Gawand, Hemangi Laxman [Homi Bhabha National Institute, Computer Section, BARC, Mumbai (India); Bhattacharjee, A. K. [Reactor Control Division, BARC, Mumbai (India); Roy, Kallol [BHAVINI, Kalpakkam (India)
2017-04-15
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications.
Directory of Open Access Journals (Sweden)
Hemangi Laxman Gawand
2017-04-01
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications.
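The least-squares log-monitoring idea mentioned in the abstract can be sketched as: fit a model of normal behaviour on a known-good window of a logged process variable, then flag samples whose residual exceeds a threshold as suspect. The "tank level" signal, the injected step, and the threshold rule below are all invented for illustration, not the paper's four-tank model.

```python
import numpy as np

# Synthetic logged tank level: slow linear fill during normal operation.
t = np.linspace(0.0, 10.0, 200)
level = 5.0 + 0.8 * t

# Tampered log: a step change injected partway through the record,
# as a control-aware attack might produce.
logged = level.copy()
logged[120:] += 3.0

# Least-squares linear fit (slope, intercept) on the first 100 samples,
# which are assumed attack-free.
A = np.vstack([t, np.ones_like(t)]).T
coef, *_ = np.linalg.lstsq(A[:100], logged[:100], rcond=None)
residual = logged - A @ coef

# Threshold scaled from the clean-window residuals (small slack added
# so the threshold is nonzero even on noise-free data).
threshold = 5.0 * (np.abs(residual[:100]).max() + 1e-9)
alarms = np.abs(residual) > threshold
```

On real logs the model order, training window, and threshold would have to be chosen from the plant's dynamics and noise level rather than fixed as here.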
Abidin, Zainal; Handayani, Wahyu; Fattah, Mochammad
2016-01-01
Masamo, a new variety of catfish cultivated by the farmer group "Sumber Lancar" in Blimbing, Malang, currently sees high demand due to the increasing number of consumers who eat fish to meet the body's need for protein. The increasing demand for Masamo catfish has been followed by production and marketing efforts. This study aims to determine whether the marketing is efficient. Therefore, this study uses an analytical approach in order to identify the institutions and channels of Masamo catfish marketing perf...
DEFF Research Database (Denmark)
Mewes, Julie Sascia; Elliot, Michelle L.; Lee, Kim
2017-01-01
In this paper, three qualitative researchers with professional backgrounds in social anthropology, occupational therapy, and occupational science present their methodological and theoretical standpoints and the resultant analytical approaches to a single set of ethnographic data – an event occurring… Such an approach reveals similarities, differences, and complexity that may arise when attempting to locate occupation as the central unit of analysis. The conclusion suggests that cutting through the layers of occupation necessarily provides multiple ontologies.
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field." (Bull. LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Phillips, Jordan J; Peralta, Juan E
2011-11-14
We introduce a method for evaluating magnetic exchange couplings based on the constrained density functional theory (C-DFT) approach of Rudra, Wu, and Van Voorhis [J. Chem. Phys. 124, 024103 (2006)]. Our method shares the same physical principles as C-DFT but makes use of the fact that the electronic energy changes quadratically and bilinearly with respect to the constraints in the range of interest. This allows us to use coupled perturbed Kohn-Sham spin density functional theory to determine approximately the corrections to the energy of the different spin configurations and construct a priori the relevant energy-landscapes obtained by constrained spin density functional theory. We assess this methodology in a set of binuclear transition-metal complexes and show that it reproduces very closely the results of C-DFT. This demonstrates a proof-of-concept for this method as a potential tool for studying a number of other molecular phenomena. Additionally, routes to improving upon the limitations of this method are discussed. © 2011 American Institute of Physics
A multiplex degenerate PCR analytical approach targeting to eight genes for screening GMOs.
Guo, Jinchao; Chen, Lili; Liu, Xin; Gao, Ying; Zhang, Dabing; Yang, Litao
2012-06-01
Currently, detection methods with lower cost and higher throughput are the major trend in screening genetically modified (GM) food or feed before specific identification. In this study, we developed a quadruplex degenerate PCR screening approach covering more than 90 approved GMO events. The assay consists of four PCR systems targeting nine DNA sequences from eight trait genes widely introduced into GMOs, such as CP4-EPSPS derived from Agrobacterium sp. strain CP4, the phosphinothricin acetyltransferase genes derived from Streptomyces hygroscopicus (bar) and Streptomyces viridochromogenes (pat), and Cry1Ab, Cry1Ac, Cry1A(b/c), mCry3A, and Cry3Bb1 derived from Bacillus thuringiensis. The quadruplex degenerate PCR assay offers high specificity and sensitivity, with an absolute limit of detection (LOD) of approximately 80 target copies. Furthermore, the applicability of the quadruplex PCR assay was confirmed by screening several artificially prepared samples as well as samples from the Grain Inspection, Packers and Stockyards Administration (GIPSA) proficiency program. Copyright © 2011 Elsevier Ltd. All rights reserved.
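The screening logic above relies on degenerate primers, which use IUPAC ambiguity codes so that a single primer matches several sequence variants of a trait gene. A minimal sketch of that matching rule (the primer and target sequences below are invented, not the assay's actual primers):

```python
# IUPAC nucleotide ambiguity codes: each symbol maps to the set of
# bases it is allowed to pair with.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def matches(primer, site):
    """True if every base of the site is allowed by the degenerate primer."""
    return len(primer) == len(site) and all(
        base in IUPAC[code] for code, base in zip(primer, site)
    )

def find_binding_sites(primer, sequence):
    """Return start indices of all exact degenerate matches in sequence."""
    n = len(primer)
    return [i for i in range(len(sequence) - n + 1)
            if matches(primer, sequence[i:i + n])]
```

For example, the hypothetical degenerate primer "GARTTY" (R = A/G, Y = C/T) matches both "GAATTC" and "GAGTTT", which is how one primer pair can cover multiple gene variants.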
A Two-Step Approach for Analytical Optimal Hedging with Two Triggers
Directory of Open Access Journals (Sweden)
Tiesong Hu
2016-02-01
Full Text Available Hedging is widely used to mitigate severe water shortages in the operation of reservoirs during droughts. Rationing is usually instituted with one hedging policy, which is based on only one trigger, i.e., the initial storage level or the current water availability. It may perform poorly in balancing the benefits of a release during the current period versus those of carryover storage during future droughts. This study proposes a novel hedging rule to improve the efficiency of a reservoir operated to supply water, in which, based on two triggers, hedging is initiated with three different hedging sub-rules through a two-step approach. In the first step, the sub-rule is triggered based on the relationship between the initial reservoir storage level and the level of the target rule curve or the firm rule curve at the end of the current period. This step is mainly concerned with whether or not to increase the water level in the current period. Hedging is then triggered under the sub-rule based on current water availability in the second step, in which the trigger implicitly considers both the initial and ending reservoir storage levels of the current period. Moreover, the amount of hedging is analytically derived based on the Karush–Kuhn–Tucker (KKT) conditions. In addition, the hedging parameters are optimized using the improved particle swarm optimization (IPSO) algorithm coupled with a rule-based simulation. A single water-supply reservoir located in Hubei Province in central China is selected as a case study. The operation results show that the proposed rule is reasonable and significantly improves reservoir operation performance for both long-term and critical periods relative to other operation policies, such as the standard operating policy (SOP) and the most commonly used hedging rules.
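The two-step, two-trigger logic can be sketched as a simple decision function. All levels, the demand, and the hedging factor below are hypothetical placeholders, not the paper's calibrated rules or its KKT-derived release amounts:

```python
def hedging_release(initial_storage, water_available, target_level, firm_level,
                    demand, hedging_factor=0.7):
    """Illustrative two-trigger hedging sketch (all parameters hypothetical).

    Step 1: compare initial storage with the target/firm rule-curve levels
    to select a sub-rule. Step 2: within a sub-rule, trigger rationing from
    current water availability."""
    if initial_storage >= target_level:
        return demand                          # no shortage expected: full release
    if initial_storage >= firm_level:
        # moderate zone: ration only if availability cannot cover demand
        return demand if water_available >= demand else hedging_factor * demand
    # severe zone: always ration to protect carryover storage
    return hedging_factor * min(demand, water_available)

print(hedging_release(100, 80, 90, 50, 60))  # 60: storage above target curve
print(hedging_release(40, 30, 90, 50, 60))   # 21.0: severe zone, rationed
```

The real rule additionally derives the rationing amount analytically from the KKT optimality conditions rather than using a fixed factor.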
Maurage, Pierre; Timary, Philippe de; D'Hondt, Fabien
2017-08-01
Emotional and interpersonal impairments have been widely reported in alcohol-dependence, and their role in its development and maintenance is well established. However, earlier studies have exclusively focused on group comparisons between healthy controls and alcohol-dependent individuals, considering them as a homogeneous population. The variability of socio-emotional profiles in this disorder thus remains totally unexplored. The present study used a cluster analytic approach to explore the heterogeneity of affective and social disorders in alcohol-dependent individuals. 296 recently-detoxified alcohol-dependent patients were first compared with 246 matched healthy controls regarding self-reported emotional (i.e. alexithymia) and social (i.e. interpersonal problems) difficulties. Then, a cluster analysis was performed, focusing on the alcohol-dependent sample, to explore the presence of differential patterns of socio-emotional deficits and their links with demographic, psychopathological and alcohol-related variables. The group comparison between alcohol-dependent individuals and controls clearly confirmed that emotional and interpersonal difficulties constitute a key factor in alcohol-dependence. However, the cluster analysis identified five subgroups of alcohol-dependent individuals, presenting distinct combinations of alexithymia and interpersonal problems ranging from a total absence of reported impairment to generalized socio-emotional difficulties. Alcohol-dependent individuals should no longer be considered as constituting a unitary group regarding their affective and interpersonal difficulties, but rather as a population encompassing a wide variety of socio-emotional profiles. Future experimental studies on emotional and social variables should thus go beyond mere group comparisons to explore this heterogeneity, and prevention programs proposing an individualized evaluation and rehabilitation of these deficits should be promoted. Copyright © 2017
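The clustering step can be sketched with a basic k-means on synthetic two-dimensional scores. Everything here is illustrative: the data are made up, k = 2 for clarity, whereas the study derived five clusters from richer clinical measures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic (hypothetical) patient scores: [alexithymia, interpersonal problems]
low = rng.normal([40.0, 40.0], 5.0, size=(50, 2))    # little reported impairment
high = rng.normal([70.0, 70.0], 5.0, size=(50, 2))   # generalized difficulties
X = np.vstack([low, high])

def kmeans(X, init_idx, iters=50):
    """Basic Lloyd's k-means with a fixed, deterministic initialization."""
    centers = X[list(init_idx)].copy()
    for _ in range(iters):
        # assign each patient to the nearest cluster centre
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

labels, centers = kmeans(X, (0, 50))  # one seed point from each synthetic group
```

With well-separated profiles the two synthetic subgroups are recovered cleanly; real clinical data require validated cluster-number criteria.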
Borghesi, Fabrizio; Migani, Francesca; Andreotti, Alessandro; Baccetti, Nicola; Bianchi, Nicola; Birke, Manfred; Dinelli, Enrico
2016-02-15
Assessing trace metal pollution using feathers has long attracted the attention of ecotoxicologists as a cost-effective and non-invasive biomonitoring method. In order to interpret the concentrations in feathers considering the external contamination due to lithic residue particles, we adopted a novel geochemical approach. We analysed 58 element concentrations in feathers of wild Eurasian Greater Flamingo Phoenicopterus roseus fledglings from 4 colonies in Western Europe (Spain, France, Sardinia, and North-eastern Italy) and one group of adults from a zoo. In addition, 53 elements were assessed in soil collected close to the nesting islets. This enabled us to compare a wide selection of metals among the colonies, highlighting environmental anomalies and tackling possible causes of misinterpretation of feather results. Most trace elements in feathers (Al, Ce, Co, Cs, Fe, Ga, Li, Mn, Nb, Pb, Rb, Ti, V, Zr, and REEs) were of external origin. Some elements could be constitutive (Cu, Zn) or significantly bioaccumulated (Hg, Se) in flamingos. For As, Cr, and to a lesser extent Pb, it seems that bioaccumulation could potentially be revealed by highly exposed birds, provided feathers are well cleaned. This comprehensive study provides a new dataset and confirms that Hg has accumulated in feathers at all sites to some extent, with particular concern for the Sardinian colony, which should be studied further including Cr. The Spanish colony appears critical for As pollution and should be urgently investigated in depth. Feathers collected from North-eastern Italy were the hardest to clean, but our methods allowed biological interpretation of Cr and Pb. Our study highlights the importance of external contamination when analysing trace elements in feathers and advances methodological recommendations in order to reduce the presence of residual particles carrying elements of external origin. Geochemical data, when available, can represent a valuable tool for a correct
Directory of Open Access Journals (Sweden)
Richard L Fidler
Full Text Available Heart rate (HR) alarms are prevalent in the ICU, and these parameters are configurable. Not much is known about nursing behavior associated with tailoring HR alarm parameters to individual patients to reduce clinical alarm fatigue. To understand the relationship between HR alarms and adjustments made to reduce unnecessary HR alarms. Retrospective, quantitative analysis of an adjudicated database using analytical approaches to understand behaviors surrounding HR alarm parameter adjustments. Patients were sampled from five adult ICUs (77 beds) over one month at a quaternary care university medical center. A total of 337 of 461 ICU patients had HR alarms, with 53.7% male, mean age 60.3 years, and 39% non-Caucasian. Default HR alarm parameters were 50 and 130 beats per minute (bpm). The occurrence of each alarm, vital signs, and physiologic waveforms was stored in a relational database (SQL Server). There were 23,624 HR alarms for analysis, with 65.4% exceeding the upper heart rate limit. Only 51% of patients with HR alarms had parameters adjusted, with a median upper limit change of +5 bpm and a -1 bpm lower limit change. The median time to first HR parameter adjustment was 17.9 hours, without reduction in alarm occurrence (p = 0.57). HR alarms are prevalent in the ICU, and half of HR alarm settings remain at default. There is a long delay between HR alarms and parameter changes, with insufficient changes to decrease HR alarms. Increasing frequency of HR alarms shortens the time to first adjustment. Best practice guidelines for HR alarm limits are needed to reduce alarm fatigue and improve monitoring precision.
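With alarm and adjustment events in a relational store, the delay metric reduces to a per-patient time difference. A toy sketch with made-up timestamps (hours since admission; these are not the study's data, and patient identifiers are invented):

```python
from statistics import median

# Hypothetical event times in hours since ICU admission (illustrative only)
first_hr_alarm   = {"pt1": 2.0, "pt2": 1.0, "pt3": 4.0, "pt4": 3.0}
first_adjustment = {"pt1": 12.0, "pt2": 18.9, "pt3": 34.0}  # pt4: never adjusted

# Delay to first parameter change, restricted to patients who were adjusted
delays = [first_adjustment[p] - first_hr_alarm[p]
          for p in first_hr_alarm if p in first_adjustment]
print(sorted(delays), median(delays))
```

Patients whose parameters were never adjusted (here "pt4") drop out of the delay statistic but still count toward the "remained at default" proportion.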
Shigayeva, Altynay; Coker, Richard J
2015-04-01
There is renewed concern over the sustainability of disease control programmes, and re-emergence of policy recommendations to integrate programmes with general health systems. However, the conceptualization of this issue has received remarkably little critical attention. Additionally, the study of programmatic sustainability presents methodological challenges. In this article, we propose a conceptual framework to support analyses of the sustainability of communicable disease programmes. Through this work, we also aim to clarify the link between notions of integration and sustainability. As part of the development of the conceptual framework, we conducted a systematic review of the peer-reviewed literature on concepts, definitions, analytical approaches and empirical studies of sustainability in health systems. Identified conceptual proposals for the analysis of sustainability in health systems lack an explicit conceptualization of what a health system is. Drawing upon theoretical concepts originating in the sustainability sciences and our review here, we conceptualize a communicable disease programme as a component of a health system, which is viewed as a complex adaptive system. We propose five programmatic characteristics that may explain a potential for sustainability: leadership, capacity, interactions (notions of integration), flexibility/adaptability and performance. Though integration of elements of a programme with other system components is important, its role in sustainability is context specific and difficult to predict. The proposed framework might serve as a basis for further empirical evaluations in understanding the complex interplay between programmes and broader health systems in the development of sustainable responses to communicable diseases. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.
Kanthaswamy, S
2015-10-01
This review highlights the importance of domestic animal genetic evidence sources, genetic testing, markers and analytical approaches, as well as the challenges this field is facing in view of the de facto 'gold standard' of human DNA identification. Because of the genetic similarity between humans and domestic animals, genetic analysis of domestic animal hair, saliva, urine, blood and other biological material has generated vital investigative leads that have been admitted into a variety of court proceedings, including criminal and civil litigation. Information on validated short tandem repeat, single nucleotide polymorphism and mitochondrial DNA markers and public access to genetic databases for forensic DNA analysis is becoming readily available. Although the fundamental aspects of animal forensic genetic testing may be reliable and acceptable, animal forensic testing still lacks the standardized testing protocols that human genetic profiling requires, probably because of the absence of monetary support from government agencies and the difficulty in promoting cooperation among competing laboratories. Moreover, there is a lack of consensus about how best to present the results and expert opinion to comply with court standards and bear judicial scrutiny. This has been the single most persistent challenge ever since the earliest use of domestic animal forensic genetic testing in a criminal case in the mid-1990s. Crime laboratory accreditation ensures that genetic test results have the courts' confidence. Because accreditation requires significant commitments of effort, time and resources, the vast majority of animal forensic genetic laboratories are not accredited, nor are their analysts certified forensic examiners. The relevance of domestic animal forensic genetics in the criminal justice system is undeniable. However, further improvements are needed in a wide range of supporting resources, including standardized quality assurance and control protocols for sample
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured data point cloud, obtained, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-Splines as basis functions. State-of-the-art approaches to determining B-Splines yield results that are seriously affected by the occurrence of data gaps and outliers. Optimal and robust B-Spline fitting depends, however, on the optimal selection of the knot vector. Hence, our approach combines Monte-Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-Spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
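The least-squares B-Spline fit that such knot-optimization and M-estimation schemes build on can be sketched with a Cox-de Boor basis over a fixed clamped knot vector. The data, knot positions, and noise level below are illustrative, not the paper's rail-track measurements:

```python
import numpy as np

def basis(i, p, knots, u):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at u."""
    if p == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # close the final interval so u == knots[-1] is covered
        return 1.0 if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, knots, u)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
               * basis(i + 1, p - 1, knots, u)
    return val

deg = 3
interior = [0.25, 0.5, 0.75]                  # illustrative knot choice
knots = [0.0] * (deg + 1) + interior + [1.0] * (deg + 1)
n_basis = len(knots) - deg - 1                # 7 cubic basis functions

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)  # noisy samples

A = np.array([[basis(i, deg, knots, u) for i in range(n_basis)] for u in x])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
fit = A @ coeffs
```

The paper's contribution sits on top of this building block: choosing `interior` from the data's location and curvature via Monte-Carlo search, and replacing the plain least-squares step with robust M-estimation.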
Modeling of Coaxial Slot Waveguides Using Analytical and Numerical Approaches: Revisited
Directory of Open Access Journals (Sweden)
Kok Yeow You
2012-01-01
Full Text Available We review analytical and numerical methods for coaxial slot waveguides. The theories, background, and physical principles related to frequency-domain electromagnetic equations for coaxial waveguides are reassessed. Comparisons of the accuracies of various types of admittance and impedance equations and numerical simulations are made, and the fringing field at the aperture sensor, which is represented by a lumped capacitance circuit, is evaluated. The accuracy and limitations of the analytical equations are explained in detail, and the reasons for the replacement of analytical methods by numerical methods are outlined.
International Nuclear Information System (INIS)
Borah, Neelakshi N. K.; Choudhury, D. K.
2014-01-01
A next-to-leading-order QCD calculation of the nonsinglet spin structure function g1^NS(x,t) at small x is presented using two analytical methods: Lagrange's method and the method of characteristics. The compatibility of these analytical approaches is tested by comparing the analytical solutions with the available polarized global fits.
International Nuclear Information System (INIS)
Bubert, H.; Garten, R.; Klockenkaemper, R.; Puderbach, H.
1983-01-01
Corrosion protective coatings on galvanized steel sheets have been studied by a combination of SEM, EDX, AES, ISS and SIMS. Analytical statements concerning such rough, poly-crystalline and contaminated surfaces of technical samples are quite difficult to obtain. The use of a surface-analytical multi-method approach overcomes the intrinsic limitations of each individual method applied, thus resulting in a consistent picture of those technical surfaces. Such results can be used to examine technical faults and to optimize the technical process. (Author)
Harne, R. L.; Zhang, Chunlin; Li, Bing; Wang, K. W.
2016-07-01
Impulsive energies are abundant throughout the natural and built environments, for instance as stimulated by wind gusts, footsteps, or vehicle-road interactions. In the interest of maximizing the sustainability of society's technological developments, one idea is to capture these high-amplitude and abrupt energies and convert them into usable electrical power, such as for sensors that otherwise rely on less sustainable power supplies. In this spirit, the considerable sensitivity to impulse-type events previously uncovered for bistable oscillators has motivated recent experimental and numerical studies on the power generation performance of bistable vibration energy harvesters. To provide an effective and efficient predictive tool and design guide, this research develops a new analytical approach to estimate the electroelastic response and power generation of a bistable energy harvester when excited by an impulse. Comparison with values determined by direct simulation of the governing equations shows that the analytically predicted net converted energies are very accurate for a wide range of impulse strengths. Extensive experimental investigations are undertaken to validate the analytical approach, and it is seen that the predicted estimates of the impulsive energy conversion are in excellent agreement with the measurements, and the detailed structural dynamics are correctly reproduced. As a result, the analytical approach represents a significant leap forward in the understanding of how to effectively leverage bistable structures as energy harvesting devices and introduces new means to elucidate the transient and far-from-equilibrium dynamics of nonlinear systems more generally.
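The qualitative impulse sensitivity of a bistable oscillator can be reproduced with a minimal nondimensional model, x'' + c x' - x + x^3 = 0, where an impulse becomes an initial velocity. This is a generic sketch, not the paper's electroelastic harvester equations:

```python
import numpy as np

def simulate(v0, c=0.1, dt=0.01, T=50.0):
    """RK4 integration of x'' + c*x' - x + x**3 = 0 from the well at x = -1.

    v0 models an impulse delivered to the mass (initial velocity)."""
    def f(state):
        x, v = state
        return np.array([v, x - x**3 - c * v])
    s = np.array([-1.0, v0])
    xs = [s[0]]
    for _ in range(int(T / dt)):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(s[0])
    return np.array(xs)

# The potential barrier from x = -1 to the hilltop x = 0 is 1/4, so an impulse
# with v0**2 / 2 < 1/4 stays in its well while a stronger one can cross over.
weak, strong = simulate(0.1), simulate(2.0)
print(weak.max() < 0.0, strong.max() > 0.0)  # True True
```

The cross-well response of the strong impulse is exactly the high-energy regime bistable harvesters are designed to exploit.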
Combined analytical and numerical approaches in Dynamic Stability analyses of engineering systems
Náprstek, Jiří
2015-03-01
Dynamic Stability is a widely studied area that has attracted many researchers from various disciplines. Although Dynamic Stability is usually associated with mechanics, theoretical physics or other natural and technical disciplines, it is also relevant to social, economic, and philosophical areas of our lives. Therefore, it is useful to occasionally highlight the general aspects of this amazing area, to present some relevant examples and to evaluate its position among the various branches of Rational Mechanics. From this perspective, the aim of this study is to present a brief review concerning the Dynamic Stability problem, its basic definitions and principles, important phenomena, research motivations and applications in engineering. The relationships with relevant systems that are prone to stability loss (encountered in other areas such as physics, other natural sciences and engineering) are also noted. The theoretical background, which is applicable to many disciplines, is presented. In this paper, the most frequently used Dynamic Stability analysis methods are presented in relation to individual dynamic systems that are widely discussed in various engineering branches. In particular, the Lyapunov function and exponent procedures, Routh-Hurwitz, Liénard, and other theorems are outlined together with demonstrations. The possibilities for analytical and numerical procedures are mentioned together with possible feedback from experimental research and testing. The strengths and shortcomings of these approaches are evaluated together with examples of their effective complementing of each other. The systems that are widely encountered in engineering are presented in the form of mathematical models. The analyses of their Dynamic Stability and post-critical behaviour are also presented. The stability limits, bifurcation points, quasi-periodic response processes and chaotic regimes are discussed. The limit cycle existence and stability are examined together with their
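Of the criteria surveyed above, the Routh-Hurwitz test is the easiest to mechanize. A plain sketch for polynomials with a positive leading coefficient; degenerate zero-pivot rows (which require the auxiliary-polynomial or epsilon tricks) are deliberately not handled:

```python
def routh_stable(coeffs):
    """Routh-Hurwitz criterion: True iff every root of the polynomial
    (real coefficients, highest degree first, positive leading coefficient)
    lies in the open left half-plane. Zero pivots are not special-cased."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    rows = [list(coeffs[0::2]) + [0.0] * width,
            list(coeffs[1::2]) + [0.0] * width]
    rows = [r[:width] for r in rows]
    for _ in range(n - 1):
        a, b = rows[-2], rows[-1]
        if b[0] == 0:
            raise ValueError("zero pivot: auxiliary-polynomial handling omitted")
        # standard Routh array recurrence for the next row
        rows.append([(b[0] * a[j + 1] - a[0] * b[j + 1]) / b[0]
                     for j in range(width - 1)] + [0.0])
    return all(row[0] > 0 for row in rows)

print(routh_stable([1, 3, 3, 1]))  # True: (s + 1)^3, all roots at s = -1
print(routh_stable([1, 1, 1, 2]))  # False: two sign changes, two RHP roots
```

Counting sign changes in the first column, rather than just testing positivity, additionally gives the number of right-half-plane roots.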
DEFF Research Database (Denmark)
Ryberg, Thomas; Dirckinck-Holmfeld, Lone
2008-01-01
This paper sets out to problematize generational categories such as ‘Power Users’ or ‘New Millennium Learners’ by discussing these in the light of recent research on youth and ICT. We then suggest analytic and conceptual pathways to engage in more critical and empirically founded studies of young...... people’s learning in technology and media-rich settings. Based on a study of a group of young ‘Power Users’ it is argued, that conceptualising and analysing learning as a process of patchworking can enhance our knowledge of young people’s learning in such settings. We argue that the analytical approach...... gives us ways of critically investigating young people’s learning in technology and media-rich settings, and study if these are processes of critical, reflexive enquiry where resources are creatively re-appropriated. With departure in an analytical example the paper presents the proposed metaphor...
Analytical Approaches to Understanding the Role of Non-carbohydrate Components in Wood Biorefinery
Leskinen, Timo Ensio
This dissertation describes the production and analysis of wood subjected to a novel electron beam-steam explosion (EB-SE) pretreatment, with the aim of evaluating its suitability for the production of bioethanol. The goals of these studies were to: 1) develop analytical methods for the investigation of depolymerization of wood components under pretreatments, 2) analyze the effects of EB-SE pretreatment on the pretreated biomass, 3) define how lignin and extractive components affect the action of enzymes on cellulosic substrates, and 4) examine how changes in lignin structure impact its isolation and potential conversion into value-added chemicals. The first section of the work describes the development of a size-exclusion chromatography (SEC) methodology for molecular weight analysis of native and pretreated wood. The selective analysis of carbohydrates and lignin from native wood was made possible by the combination of two selective derivatization methods, ionic liquid assisted benzoylation of the carbohydrate fraction and acetobromination of the lignin in acetic acid media. This method was then used to examine changes in softwood samples after the EB-SE pretreatment. The methodology was shown to be effective for monitoring changes in the molecular weight profiles of the pretreated wood. The second section of the work investigates synergistic effects of the EB-SE pretreatment on the molecular-level structures of wood components and the significance of these alterations in terms of enzymatic digestibility. The two pretreatment steps depolymerized cell wall components in different fashion, while showing synergistic effects. Hardwood and softwood species responded differently to similar treatment conditions, which was attributed to the well-known differences in the structure of their lignin and hemicellulose fractions. The relatively crosslinked lignin in softwood appeared to limit swelling and subsequent depolymerization in comparison to hardwood
Directory of Open Access Journals (Sweden)
Jingjing Feng
2016-01-01
Full Text Available In dynamic systems, some nonlinearities generate special connection problems of non-Z2 symmetric homoclinic and heteroclinic orbits. Such orbits are important for analyzing problems of global bifurcation and chaos. In this paper, a general analytical method, based on the undetermined Padé approximation method, is proposed to construct non-Z2 symmetric homoclinic and heteroclinic orbits which are affected by nonlinearity factors. Geometric and symmetrical characteristics of non-Z2 heteroclinic orbits are analyzed in detail. An undetermined frequency coefficient and a corresponding new analytic expression are introduced to improve the accuracy of the orbit trajectory. The proposed method shows high-precision results for the Nagumo system (one single orbit); general types of non-Z2 symmetric nonlinear quintic systems (orbit with one cusp); and a Z2 symmetric system with high-order nonlinear terms (orbit with two cusps). Finally, numerical simulations are used to verify the techniques and demonstrate the enhanced efficiency and precision of the proposed method.
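For the classic Z2-symmetric benchmark x'' = x^3 - x, the heteroclinic connection between the saddles x = +-1 has the closed form x(t) = tanh(t/sqrt(2)); the Padé construction above targets the non-Z2 cases where no such closed form exists. A quick residual check of the benchmark (this example is a textbook case, not taken from the paper):

```python
import math

def x(t):
    """Closed-form heteroclinic orbit of x'' = x**3 - x joining x = -1 to x = 1."""
    return math.tanh(t / math.sqrt(2.0))

def residual(t, h=1e-4):
    """Central-difference check that x(t) satisfies x'' - (x**3 - x) = 0."""
    xdd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    return xdd - (x(t) ** 3 - x(t))

worst = max(abs(residual(t)) for t in (-3.0, -1.0, 0.0, 1.0, 3.0))
print(worst < 1e-5)  # True: the orbit satisfies the ODE to discretization error
```

Padé approximants of such tanh-type profiles are what the paper generalizes to asymmetric (non-Z2) connection problems.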
International Nuclear Information System (INIS)
Mikhailovskii, A.B.; Shirokov, M.S.; Konovalov, S.V.; Tsypin, V.S.
2005-01-01
Transport threshold models of neoclassical tearing modes in tokamaks are investigated analytically. An analysis is made of the competition between strong transverse heat transport, on the one hand, and longitudinal heat transport, longitudinal heat convection, longitudinal inertial transport, and rotational transport, on the other hand, which leads to the establishment of the perturbed temperature profile in magnetic islands. It is shown that, in all these cases, the temperature profile can be found analytically by using rigorous solutions to the heat conduction equation in the near and far regions of a chain of magnetic islands and then by matching these solutions. Analytic expressions for the temperature profile are used to calculate the contribution of the bootstrap current to the generalized Rutherford equation for the island width evolution, with the aim of constructing particular transport threshold models of neoclassical tearing modes. Four transport threshold models, differing in the underlying competing mechanisms, are analyzed: collisional, convective, inertial, and rotational models. The collisional model constructed analytically is shown to coincide exactly with that calculated numerically; the reason is that the analytical temperature profile turns out to be the same as the numerical profile. The results obtained can be useful in developing the next generation of general threshold models. The first steps toward such models have already been made.
Fahad, M.; Iqbal, Y.; Riaz, M.; Ubic, R.; Redfern, S. A. T.
2015-12-01
...628 °C. The multi-analytical approach applied in the present study allows the best possible discrimination. The detailed databank relating to the quarried material, created here for the first time, provides a solid basis for possible studies on the provenance and distribution of building stones from these areas.
International Nuclear Information System (INIS)
Liu Hongzhun; Pan Zuliang; Li Peng
2006-01-01
In this article, we derive an equality that must be admitted by the Taylor series expansion around ε = 0 of any asymptotic analytical solution of a perturbed partial differential equation (PDE) with perturbation parameter ε. By making use of this equality, we obtain a transformation which directly maps the analytical solutions of a given unperturbed PDE to the asymptotic analytical solutions of the corresponding perturbed one. The notion of Lie-Baecklund symmetries is introduced in order to obtain more transformations. Hence, we can directly create more transformations by virtue of known Lie-Baecklund symmetries and recursion operators of the corresponding unperturbed equation. The perturbed Burgers equation and the perturbed Korteweg-de Vries (KdV) equation are used as examples.
International Nuclear Information System (INIS)
Kay, N.R.; Ghosh, S.; Guven, I.; Madenci, E.
2006-01-01
This study concerns the development of a combined experimental and analytical technique to determine the critical values of fracture parameters for interfaces between dissimilar materials in electronic packages. The technique utilizes specimens from post-production electronic packages. The mechanical testing is performed inside a scanning electron microscope, while the measurements are achieved by means of digital image correlation. The measured displacements around the crack tip are used as the boundary conditions for the analytical model to compute the energy release rate. The critical energy release rate values obtained from post-production package specimens are found to be lower than those of laboratory specimens.
Elder, D P; Snodin, D; Teasdale, A
2010-04-06
This review summarizes the analytical approaches reported in the literature relating to epoxide and hydroperoxide impurities. It is intended to provide guidance for analysts faced with the need to control such impurities, particularly where this is due to concerns relating to their potential genotoxicity. An extensive search of the literature relating to this class of impurities revealed a large number of references relating to the analysis of epoxides/hydroperoxides associated with herbal remedies. Given the general applicability of the analytical methodology and the widespread use of herbal products, the authors decided to include herbal medicines in this review. The review also reflects on the very different approaches taken in terms of the assessment/control of genotoxic impurities for such herbal remedies compared to those required for pharmaceutical products. Copyright 2009 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Klemmensen, Charlotte Marie Bisgaard
part of where the participants mainly are persons with acquired brain damage and occupational therapists. I will discuss how a new approach to sense-making practice may be designed in order to study more closely a participant's perspective in unique situations as they arise. I am interested......The approach of language psychology is grounded in the persons communicating; whereas the approach of discursive psychology is grounded in social interaction. There is a lack of scientific knowledge on the social/communicative/interactional challenges of communication difficulties and brain injury...... in everyday life. A sense-making-in-practice approach may help form a new discourse. How may a new analytical approach be designed? May ‘communication’ be described as ‘participation abilities’, using the framework from language psychology combined with discursive psychology and the conventions...
International Nuclear Information System (INIS)
Bakulev, Alexander P.
2010-01-01
Using the results on the electromagnetic pion Form Factor (FF) obtained in the O(α_s) QCD sum rules with non-local condensates [A.P. Bakulev, A.V. Pimikov, and N.G. Stefanis, Phys. Rev. D79 (2009) 093010], we determine the effective continuum threshold for the local duality approach. We then apply it to construct the O(α_s^2) estimate of the pion FF in the framework of fractional analytic perturbation theory.
Salehi, Mona; Keramati, Abbas
2009-01-01
This study proposes a framework to investigate website success factors and their relative importance in selecting the most preferred e-banking website. First, the updated DeLone and McLean IS success model is chosen to extract significant website success factors in the context of e-banking in Iran. Second, the updated DeLone and McLean IS success model is extended by applying an analytic network process (ANP) approach in order to investigate the relative importance of each factor and ...
Shillito, Lisa-Marie; Blong, John C; Jenkins, Dennis L; Stafford Jr, Thomas W; Whelton, Helen; McDonough, Katelyn; Bull, Ian
2018-01-01
Paisley Caves in Oregon has become well known due to early dates, and human presence in the form of coprolites, found to contain ancient human DNA. Questions remain over whether the coprolites themselves are human, or whether the DNA is mobile in the sediments. This brief introduces new research applying an integrated analytical approach combining sediment micromorphology and lipid biomarker analysis, which aims to resolve these problems.
Combined multi-analytical approach for study of pore system in bricks: How much porosity is there?
Energy Technology Data Exchange (ETDEWEB)
Coletti, Chiara, E-mail: chiara.coletti@studenti.unipd.it [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy); Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Cultrone, Giuseppe [Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Maritan, Lara; Mazzoli, Claudio [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy)
2016-11-15
During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Particularly as regards pore-size distribution and the interconnection model, porosity is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element that correlates building materials with their environment (especially in cases of aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with “direct” and “indirect” techniques: traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electrons and computerised X-ray micro-tomography, respectively). The comparison of results from different analytical methods in the “overlapping ranges” of porosity and the careful reconstruction of a cumulative curve allowed us to overcome their specific limitations and achieve better knowledge of the pore system of bricks. - Highlights: •Pore-size distribution and structure of the pore system in four commercial bricks •A multi-analytical approach combining “direct” and “indirect” techniques •Traditional methods vs. new approaches based on 2D/3D digital image reconstruction •The use of “overlapping ranges” to overcome the limitations of various techniques.
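The "overlapping ranges" idea, trusting each technique in its own pore-size window and blending where the windows overlap, can be sketched on made-up cumulative curves. All radii, volume fractions, and cutoffs below are illustrative, not the paper's measurements:

```python
import numpy as np

# Hypothetical cumulative pore-volume fractions vs. pore radius (µm)
r_mip = np.array([0.01, 0.1, 1.0, 10.0, 100.0])      # mercury intrusion window
v_mip = np.array([0.05, 0.30, 0.60, 0.80, 1.00])
r_n2 = np.array([0.001, 0.005, 0.01, 0.05, 0.1])     # nitrogen adsorption window
v_n2 = np.array([0.00, 0.02, 0.05, 0.20, 0.30])

def interp(rq, rs, vs):
    """Linear interpolation of a cumulative curve in log-radius."""
    return np.interp(np.log(rq), np.log(rs), vs)

# Trust N2 below the overlap, MIP above it, and average inside [0.01, 0.1] µm
r = np.unique(np.concatenate([r_mip, r_n2]))
v = np.where(r < 0.01, interp(r, r_n2, v_n2),
    np.where(r > 0.1, interp(r, r_mip, v_mip),
             0.5 * (interp(r, r_n2, v_n2) + interp(r, r_mip, v_mip))))

print(bool(np.all(np.diff(v) >= 0)))  # True: merged cumulative curve is monotone
```

A real merge also has to reconcile the different physical quantities each technique probes (intruded volume vs. adsorbed gas), which is why the paper cross-checks against 2D/3D image-based porosity.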
International Nuclear Information System (INIS)
Aufiero, Manuele; Brovchenko, Mariya; Cammi, Antonio; Clifford, Ivor; Geoffroy, Olivier; Heuer, Daniel; Laureau, Axel; Losa, Mario; Luzzi, Lelio; Merle-Lucotte, Elsa; Ricotti, Marco E.; Rouch, Hervé
2014-01-01
Highlights: • Calculation of effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for β_eff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test-case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (β_eff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursors drift, according to the fuel velocity field. The forward and adjoint eigenvalue multi-group diffusion problems are implemented and solved adopting the multi-physics tool-kit OpenFOAM, by taking into account the convective and turbulent diffusive terms in the precursors balance. These two approaches show good agreement in the whole range of the MSFR operating conditions. An analytical formula for the circulating-to-static conditions β_eff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of in-core recirculation vortex and turbulent diffusion are finally analysed and discussed
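As a back-of-the-envelope companion to the abstract, the following toy Monte Carlo (not the SERPENT-2 extension described above; plug flow, a single precursor group and uniform in-core precursor birth are all simplifying assumptions) estimates the fraction of delayed neutrons emitted inside the core when precursors drift with the circulating fuel:

```python
import math
import random

def in_core_fraction(lam, tau_core, tau_loop, n=100_000, seed=0):
    """Toy Monte Carlo: fraction of delayed-neutron precursors (decay
    constant lam, 1/s) that decay inside the core when the fuel makes a
    closed loop with in-core transit time tau_core and out-of-core
    transit time tau_loop (plug flow, uniform in-core birth assumed)."""
    rng = random.Random(seed)
    period = tau_core + tau_loop
    in_core = 0
    for _ in range(n):
        birth = rng.uniform(0.0, tau_core)       # born while in the core
        decay = birth + rng.expovariate(lam)     # decay instant
        if decay % period < tau_core:            # core occupies [0, tau_core)
            in_core += 1
    return in_core / n

# Short-lived precursors decay near their birth point (inside the core);
# long-lived ones are smeared over the whole loop, reducing beta_eff.
f_short = in_core_fraction(10.0, 4.0, 4.0)
f_long = in_core_fraction(0.01, 4.0, 4.0)
```

In this toy model the circulating-to-static correction factor for a single precursor group is just this fraction, compared with 1 for static fuel.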
Modal instability of rod fiber amplifiers: a semi-analytic approach
DEFF Research Database (Denmark)
Jørgensen, Mette Marie; Hansen, Kristian Rymann; Laurila, Marko
2013-01-01
The modal instability (MI) threshold is estimated for four rod fiber designs by combining a semi-analytic model with the finite element method. The thermal load due to the quantum defect is calculated and used to numerically determine the mode distributions on which the expression for the onset o...
Assessment of Learning in Digital Interactive Social Networks: A Learning Analytics Approach
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen
2016-01-01
This paper summarizes initial field-test results from data analytics used in the work of the Assessment and Teaching of 21st Century Skills (ATC21S) project, on the "ICT Literacy--Learning in digital networks" learning progression. This project, sponsored by Cisco, Intel and Microsoft, aims to help educators around the world enable…
A Social Media Practicum: An Action-Learning Approach to Social Media Marketing and Analytics
Atwong, Catherine T.
2015-01-01
To prepare students for the rapidly evolving field of digital marketing, which requires more and more technical skills every year, a social media practicum creates a learning environment in which students can apply marketing principles and become ready for collaborative work in social media marketing and analytics. Using student newspapers as…
Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation
DEFF Research Database (Denmark)
Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.
2015-01-01
Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting needle-shaped crystals and estimating...
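A minimal sketch of one way such image-analytical counting can work (a hypothetical pipeline, not the authors' method): label connected components in a binarised micrograph, keep only elongated ones, and report the bounding-box diagonal as a crude length estimate.

```python
import math

def label_components(grid):
    """4-connected component labelling on a binary grid (list of lists)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    comps = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not labels[i][j]:
                stack, pixels = [(i, j)], []
                labels[i][j] = len(comps) + 1
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = len(comps) + 1
                            stack.append((ny, nx))
                comps.append(pixels)
    return comps

def needle_lengths(grid, min_aspect=3.0):
    """Count needle-like components; estimate each length as the
    bounding-box diagonal.  Components below min_aspect are ignored."""
    out = []
    for pixels in label_components(grid):
        ys = [p[0] for p in pixels]
        xs = [p[1] for p in pixels]
        dy = max(ys) - min(ys) + 1
        dx = max(xs) - min(xs) + 1
        if max(dy, dx) / min(dy, dx) >= min_aspect:
            out.append(math.hypot(dy, dx))
    return out

# Tiny synthetic image: one 6-px needle and one 2x2 blob (filtered out).
grid = [
    [0, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
]
lengths = needle_lengths(grid)
```

Real needle images would need skeletonisation to handle crossing and curved needles; the aspect-ratio filter here is only the simplest proxy for "needle-shaped".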
Cheung, Mike W. L.; Chan, Wai
2009-01-01
Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…
Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud
2017-01-01
Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and
New analytic approach to the theory of charge exchange in atom-multiply charged ion collisions
International Nuclear Information System (INIS)
Presnyakov, L.P.; Uskov, D.B.; Janev, R.K.
1981-01-01
A new method is discussed for the solution of many-level charge-exchange problems. The results provide the distribution of the final electronic states over the angular quantum numbers in analytical form. The obtained Z oscillations (Z is the ion charge) of the cross sections are found to be in good agreement with recent experimental data. (orig.)
Wasser, L. A.; Gold, A. U.
2017-12-01
There is a deluge of earth systems data available to address cutting-edge science problems, yet specific skills are required to work with these data. The Earth Analytics education program, a core component of Earth Lab at the University of Colorado - Boulder, is building a data-intensive program that provides training in realms including 1) interdisciplinary communication and collaboration, 2) earth science domain knowledge, including geospatial science and remote sensing, and 3) reproducible, open science workflows ("earth analytics"). The earth analytics program includes an undergraduate internship, undergraduate and graduate level courses, and a professional certificate / degree program. All programs share the goal of preparing a STEM workforce for successful earth-analytics-driven careers. We are developing a program-wide evaluation framework that assesses the effectiveness of data-intensive instruction combined with domain science learning, to better understand and improve data-intensive teaching approaches using blends of online, in situ, asynchronous and synchronous learning. We are using targeted search engine optimization (SEO) to increase visibility and in turn program reach. Finally, our design targets longitudinal program impacts on participant career tracks over time. Here we present results from the evaluation of both an interdisciplinary undergraduate / graduate level earth analytics course and an undergraduate internship. Early results suggest that a blended approach to learning and teaching, including both synchronous in-person teaching and active hands-on classroom learning combined with asynchronous learning in the form of online materials, leads to student success. Further, we will present our model for longitudinal tracking of participants' career focus over time to better understand long-term program impacts. We also demonstrate the impact of SEO on online content reach and program visibility.
Prabhu, Gurpur Rakesh D; Witek, Henryk A; Urban, Pawel L
2018-05-31
Most analytical methods are based on "analogue" inputs from sensors of light, electric potentials, or currents. The signals obtained by such sensors are processed using certain calibration functions to determine concentrations of the target analytes. The signal readouts are normally done after an optimised and fixed time period, during which an assay mixture is incubated. This minireview covers another, somewhat unusual, analytical strategy, which relies on the measurement of the time interval between the occurrences of two distinguishable states in the assay reaction. These states manifest themselves via abrupt changes in the properties of the assay mixture (e.g. change of colour, appearance or disappearance of luminescence, change in pH, variations in optical activity or mechanical properties). In some cases, a correlation between the time of appearance/disappearance of a given property and the analyte concentration can also be observed. An example of an assay based on time measurement is an oscillating reaction, in which the period of oscillations is linked to the concentration of the target analyte. A number of chemo-chronometric assays, relying on existing (bio)transformations or artificially designed reactions, were disclosed in the past few years. They are very attractive from the fundamental point of view but, so far, only a few of them have been validated and used to address real-world problems. Can chemo-chronometric assays, then, become a practical tool for chemical analysis? Is there a need for further development of such assays? We aim to answer these questions.
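A minimal numeric sketch of a chemo-chronometric calibration, under the common clock-reaction assumption (an illustrative model, not one from the review) that the induction time is inversely proportional to analyte concentration, t = k/C:

```python
def calibrate_clock(conc, times):
    """Least-squares fit of the toy model t = k / C on its linearised
    form 1/t = C / k (conc: known concentrations, times: measured
    induction times until the abrupt state change)."""
    inv_t = [1.0 / t for t in times]
    return sum(c * c for c in conc) / sum(c * i for c, i in zip(conc, inv_t))

def concentration_from_time(k, t):
    """Invert the calibration: unknown concentration from measured time."""
    return k / t

# Hypothetical calibration data following t = 12 / C exactly.
k = calibrate_clock([1.0, 2.0, 3.0, 4.0], [12.0, 6.0, 4.0, 3.0])
```

The only instrument this "assay" needs is a stopwatch: once k is known, a measured induction time maps directly to a concentration.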
Analytical approach to cross-layer protocol optimization in wireless sensor networks
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes, subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and is ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in
International Nuclear Information System (INIS)
Ranaivo Nomenjanahary, F.; Rakoto, H.; Ratsimbazafy, J.B.
1994-08-01
This paper is concerned with resistivity sounding measurements performed from a single site (vertical sounding) or from several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. The achievement of this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which should take into account the probable effect of a heterogeneous structure. Faced with such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation), and using the rough but correct model obtained as an a-priori model for any more refined interpretation. Related to this point of view, special attention should be paid to the inverse problem applied to the resistivity sounding data. This inverse problem is nonlinear, despite the linearity inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Some illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and at Mahitsy area (several sites) will be given. (author). 28 refs, 7 figs
Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"
Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.
2018-03-01
In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass and size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the form of phase current that makes the best possible use of the active materials of the "semiconductor converter-electric drive complex" differs from the sinusoidal form. It is shown that under certain restrictions on the phase current form, it is possible to obtain an analytical solution. In particular, if one assumes the shape of the phase current to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.
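The advantage of a non-sinusoidal phase current can be illustrated numerically. The sketch below is a toy comparison, not the article's model: assuming an ideally rectangular back-EMF (an assumption), a rectangular current delivering the same average power as a sinusoidal one has lower copper loss by the classical factor π²/8 ≈ 1.23.

```python
import math

N = 100_000
# Ideally rectangular back-EMF of unit amplitude (illustrative assumption).
emf = [1.0 if math.sin(2 * math.pi * i / N) >= 0 else -1.0 for i in range(N)]

def avg_power(current):
    """Mean electrical power over one period for the unit back-EMF."""
    return sum(e * c for e, c in zip(emf, current)) / N

def copper_loss(current):
    """Copper loss is proportional to the mean squared current."""
    return sum(c * c for c in current) / N

rect = emf[:]                                  # rectangular current, amplitude 1
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]
scale = 1.0 / avg_power(sine)                  # rescale sine to the same power
sine = [scale * s for s in sine]

loss_ratio = copper_loss(sine) / copper_loss(rect)   # -> pi^2 / 8
```

With a sinusoidal back-EMF the comparison reverses, which is why the optimal current shape depends on the machine's EMF waveform (and, in the article, on the interpolar gap width).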
Practical approach to a procedure for judging the results of analytical verification measurements
International Nuclear Information System (INIS)
Beyrich, W.; Spannagel, G.
1979-01-01
For practical safeguards a particularly transparent procedure is described to judge analytical differences between declared and verified values based on experimental data relevant to the actual status of the measurement technique concerned. Essentially it consists of two parts: Derivation of distribution curves for the occurrence of interlaboratory differences from the results of analytical intercomparison programmes; and judging of observed differences using criteria established on the basis of these probability curves. By courtesy of the Euratom Safeguards Directorate, Luxembourg, the applicability of this judging procedure has been checked in practical data verification for safeguarding; the experience gained was encouraging and implementation of the method is intended. Its reliability might be improved further by evaluation of additional experimental data. (author)
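The judging step can be sketched as follows, assuming (as a simplification of the empirically derived distribution curves described above) that interlaboratory differences are normally distributed with a spread sigma estimated from intercomparison programmes:

```python
import math

def judge_difference(d, sigma, alpha=0.05):
    """Judge an observed declared-vs-verified difference d against the
    spread sigma of interlaboratory differences (assumed N(0, sigma^2)).
    Returns the two-sided tail probability and the accept verdict."""
    z = abs(d) / sigma
    p = math.erfc(z / math.sqrt(2.0))   # P(|X| >= |d|) for X ~ N(0, sigma^2)
    return p, p >= alpha

p_small, accepted = judge_difference(0.5, 1.0)    # well within normal spread
p_large, still_ok = judge_difference(3.0, 1.0)    # 3-sigma discrepancy
```

A difference is flagged only when it is improbably large relative to what intercomparison exercises show laboratories normally disagree by; the normality assumption here stands in for the empirical probability curves of the procedure.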
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2017-01-01
There has been an immense amount of visibility of doping issues on the international stage over the past 12 months with the complexity of doping controls reiterated on various occasions. Hence, analytical test methods continuously being updated, expanded, and improved to provide specific, sensitive, and comprehensive test results in line with the World Anti-Doping Agency's (WADA) 2016 Prohibited List represent one of several critical cornerstones of doping controls. This enterprise necessitates expediting the (combined) exploitation of newly generated information on novel and/or superior target analytes for sports drug testing assays, drug elimination profiles, alternative test matrices, and recent advances in instrumental developments. This paper is a continuation of the series of annual banned-substance reviews appraising the literature published between October 2015 and September 2016 concerning human sports drug testing in the context of WADA's 2016 Prohibited List. Copyright © 2016 John Wiley & Sons, Ltd.
An analytical approach to activating demand elasticity with a demand response mechanism
International Nuclear Information System (INIS)
Clastres, Cedric; Khalfallah, Haikel
2015-01-01
The aim of this work is to demonstrate analytically the conditions under which activating the elasticity of consumer demand could benefit social welfare. We have developed an analytical equilibrium model to quantify the effect of deploying demand response on social welfare and energy trade. The novelty of this research is that it demonstrates the existence of an optimal area for the price signal in which demand response enhances social welfare. This optimal area is negatively correlated to the degree of competitiveness of generation technologies and the market size of the system. In particular, it should be noted that the value of un-served energy or energy reduction which the producers could lose from such a demand response scheme would limit its effectiveness. This constraint is even greater if energy trade between countries is limited. Finally, we have demonstrated scope for more aggressive demand response, when only considering the impact in terms of consumer surplus. (authors)
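The existence of an "optimal area" for the price signal can be illustrated with a toy quadratic model (the functional forms and all parameter values are illustrative assumptions, not the authors' equilibrium model):

```python
def welfare_gain(s, eps=0.2, p0=100.0, q0=1.0, mc_peak=180.0):
    """Toy welfare gain of a demand-response price signal s (EUR/MWh):
    curtailment dq = eps * q0 * s / p0; benefit = avoided peak generation
    cost at mc_peak; cost = utility lost by curtailed consumers, valued
    at p0 + s/2 per unit under a linear demand curve."""
    dq = eps * q0 * s / p0
    return (mc_peak - p0 - s / 2.0) * dq

# Gain is positive only on the interval 0 < s < 2 * (mc_peak - p0):
# too weak a signal curtails nothing, too strong a one destroys more
# consumer value than the generation cost it avoids.
```

In this sketch the optimal area shrinks as mc_peak approaches p0, mirroring the paper's finding that the optimal signal range narrows when generation is more competitive.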
International Nuclear Information System (INIS)
Moraes, Pedro Gabriel B.; Leite, Michel C.A.; Barros, Ricardo C.
2013-01-01
In this work we developed software to model one-dimensional neutron transport problems in the multigroup energy formulation and to generate results in tables and graphs. The numerical method we use to solve the neutron diffusion problem is analytical, thus eliminating the truncation errors that appear in classical numerical methods, e.g., the finite difference method. This analytical numerical method increases computational efficiency, since no refined spatial discretization is necessary: for any spatial discretization grid used, the numerical result generated at a given point of the domain remains unchanged, apart from the rounding errors of finite computational arithmetic. We chose to develop the computational application on the MatLab platform for numerical computation; the program interface is simple and easy to use. We consider it important to model this neutron transport problem with a fixed source in the context of shielding calculations for the radiation protection of the biosphere, which is sensitive to ionizing radiation
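For the one-group, one-dimensional case, the kind of analytical solution alluded to above can be written in closed form. The sketch below is a textbook special case (homogeneous slab, uniform fixed source, zero-flux boundaries), not the multigroup solver described in the abstract, and it checks the closed form against the diffusion equation numerically:

```python
import math

def slab_flux(x, S, D, siga, a):
    """Closed-form flux for one-group diffusion in a slab |x| <= a with
    a uniform fixed source S and zero-flux boundaries:
        -D * phi'' + siga * phi = S,   phi(-a) = phi(a) = 0."""
    L = math.sqrt(D / siga)                          # diffusion length
    return (S / siga) * (1.0 - math.cosh(x / L) / math.cosh(a / L))

# Verify the ODE residual at an interior point by central differences.
S, D, siga, a = 1.0, 1.0, 0.5, 10.0
h, x = 1e-3, 2.0
phi = slab_flux(x, S, D, siga, a)
d2 = (slab_flux(x + h, S, D, siga, a) - 2.0 * phi
      + slab_flux(x - h, S, D, siga, a)) / h ** 2
residual = -D * d2 + siga * phi - S
```

As the abstract notes for the analytical approach in general, the value of slab_flux at any point is independent of any spatial grid; the finite-difference stencil here is used only as a consistency check, not as the solution method.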
Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M.
2014-01-01
The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social...
International Nuclear Information System (INIS)
Gao, J.
1993-09-01
Starting from a single resonant rf cavity, disk-loaded travelling (forward or backward) wave accelerating structures' properties are determined by rather simple analytical formulae. They include the coupling coefficient K in the dispersion relation, group velocity v_g, shunt impedance R, wake potential W (longitudinal and transverse), the coupling coefficient β of the coupler cavity and the coupler cavity axis shift δ_r, which is introduced to compensate the asymmetry caused by the coupling aperture. (author) 12 refs., 18 figs
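The relation between the coupling coefficient K and the group velocity v_g can be sketched with a generic coupled-cavity dispersion curve (the functional form and all parameter values below are illustrative assumptions, not the paper's formulae):

```python
import math

W0 = 2 * math.pi * 3e9      # cell resonant angular frequency, rad/s (assumed)
K = 0.02                    # cell-to-cell coupling coefficient (assumed)
D_CELL = 0.035              # cell length in metres (assumed)

def omega(phi):
    """Generic coupled-cavity dispersion; phi is the phase advance per
    cell, phi = k * d."""
    return W0 * math.sqrt(1.0 + K * (1.0 - math.cos(phi)))

def group_velocity(phi, h=1e-6):
    """v_g = d(omega)/dk evaluated by central difference, with k = phi/d."""
    return (omega(phi + h) - omega(phi - h)) / (2.0 * h) * D_CELL

vg_mid = group_velocity(math.pi / 2)    # energy flows fastest mid-band
```

In this toy dispersion v_g vanishes at the band edges (phi = 0 and pi) and peaks mid-band, and a larger K widens the passband and raises v_g, which is the qualitative role K plays in the analytical formulae of the paper.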
An analytical approach to the solution of in-itself strong focusing beam
International Nuclear Information System (INIS)
Paulin, A.; Ticar, I.; Zoric, T.; Znidarsic, K.; Bezic, N.
1981-01-01
The aim of this paper is to describe the problem of how to represent a high-current, high-current-density charged particle beam with straightforward analytical expressions. The principal difficulties in solving the differential equation for the stationary axial and radial distribution of charged particles in such a beam are discussed. In all derivations, full compensation of space-charge effects by a suitably combined beam of oppositely charged particles is assumed. (author)
The Usefulness of Analytical Procedures - An Empirical Approach in the Auditing Sector in Portugal
Directory of Open Access Journals (Sweden)
Carlos Pinho
2014-08-01
The conceptual conflict between efficiency and efficacy in financial auditing arises from the fact that resources are scarce, both in terms of the time available to carry out the audit and the quality and timeliness of the information available to the external auditor. Audits tend to be more efficient the lower the combination of inherent risk and control risk is assessed to be, allowing the auditor to carry out less extensive and less timely auditing tests, meaning that in some cases analytical audit procedures are a good tool to support the opinions formed by the auditor. This research, by means of an empirical study of financial auditing in Portugal, aims to evaluate the extent to which analytical procedures are used during a financial audit engagement in Portugal, throughout the different phases of the audit. The conclusions point to the fact that, in general terms and regardless of the size of the audit company and the way in which professionals work, Portuguese auditors use analytical procedures more frequently during the planning phase than during the phases of evidence gathering and opinion formation.
Analytical methods in sphingolipidomics: Quantitative and profiling approaches in food analysis.
Canela, Núria; Herrero, Pol; Mariné, Sílvia; Nadal, Pedro; Ras, Maria Rosa; Rodríguez, Miguel Ángel; Arola, Lluís
2016-01-08
In recent years, sphingolipidomics has emerged as an interesting omic science that encompasses the study of the full sphingolipidome characterization, content, structure and activity in cells, tissues or organisms. Like other omics, it has the potential to impact biomarker discovery, drug development and systems biology knowledge. Concretely, dietary food sphingolipids have gained considerable importance due to their extensively reported bioactivity. Because of the complexity of this lipid family and their diversity among foods, powerful analytical methodologies are needed for their study. The analytical tools developed in the past have been improved with the enormous advances made in recent years in mass spectrometry (MS) and chromatography, which allow the convenient and sensitive identification and quantitation of sphingolipid classes and form the basis of current sphingolipidomics methodologies. In addition, novel hyphenated nuclear magnetic resonance (NMR) strategies, new ionization strategies, and MS imaging are outlined as promising technologies to shape the future of sphingolipid analyses. This review traces the analytical methods of sphingolipidomics in food analysis concerning sample extraction, chromatographic separation, the identification and quantification of sphingolipids by MS and their structural elucidation by NMR. Copyright © 2015 Elsevier B.V. All rights reserved.