An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Directory of Open Access Journals (Sweden)
Mosaffa Amirhossein
2013-01-01
Full Text Available Results are reported of an investigation of the solidification of a phase change material (PCM) in a cylindrical shell thermal energy storage with radial internal fins. An approximate analytical solution is presented for two cases. In case 1, the inner wall is kept at a constant temperature and, in case 2, a constant heat flux is imposed on the inner wall. In both cases, the outer wall is insulated. The results are compared to those of a numerical approach based on an enthalpy method. The results show that the analytical model satisfactorily estimates the solid-liquid interface. In addition, a comparative study is reported of the solidified fraction of encapsulated PCM for different geometric configurations of finned storage having the same volume and surface area of heat transfer.
Analytical approximations of Chandrasekhar's H-Function
International Nuclear Information System (INIS)
Simovic, R.; Vukanic, J.
1995-01-01
Analytical approximations of Chandrasekhar's H-function are derived in this paper by using ordinary and modified DPN methods. The accuracy of the approximations is discussed and the energy dependent albedo problem is treated. (author)
Analytical Ballistic Trajectories with Approximately Linear Drag
Directory of Open Access Journals (Sweden)
Giliam J. P. de Carpentier
2014-01-01
Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
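For a linear drag model, the trajectory described above does have a closed form. As an illustrative sketch (not the paper's planning algorithms), the following pure-Python snippet evaluates the closed-form state under constant wind and gravity and cross-checks it against direct numerical integration; the names here (`c` for the drag coefficient per unit mass, `w` for wind) are choices made for this sketch, not taken from the paper.

```python
import math

def trajectory(p0, v0, c, g=(0.0, -9.81), w=(0.0, 0.0)):
    """Closed-form 2D state for linear drag, dv/dt = g - c*(v - w).

    With terminal velocity v_t = w + g/c:
        v(t) = v_t + (v0 - v_t) * exp(-c*t)
        p(t) = p0 + v_t*t + (v0 - v_t) * (1 - exp(-c*t)) / c
    Returns a function t -> (position, velocity).
    """
    vt = tuple(wi + gi / c for wi, gi in zip(w, g))  # terminal velocity
    def state(t):
        e = math.exp(-c * t)
        v = tuple(vti + (v0i - vti) * e for vti, v0i in zip(vt, v0))
        p = tuple(p0i + vti * t + (v0i - vti) * (1.0 - e) / c
                  for p0i, vti, v0i in zip(p0, vt, v0))
        return p, v
    return state

def euler(p0, v0, c, t, dt=1e-4, g=(0.0, -9.81), w=(0.0, 0.0)):
    """Reference numerical integration of the same ODE (simple Euler)."""
    p, v = list(p0), list(v0)
    for _ in range(int(round(t / dt))):
        a = [gi - c * (vi - wi) for gi, vi, wi in zip(g, v, w)]
        p = [pi + vi * dt for pi, vi in zip(p, v)]
        v = [vi + ai * dt for vi, ai in zip(v, a)]
    return tuple(p), tuple(v)
```

Because the drag is linear, the "analytic" solution here is exact for this model; the approximation in the paper lies in using a linear drag law for real projectiles.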
An analytical approximation for resonance integral
International Nuclear Information System (INIS)
Magalhaes, C.G. de; Martinez, A.S.
1985-01-01
A method is developed to obtain an analytical solution for the resonance integral. The problem formulation is completely theoretical and based on general physical concepts. The analytical expression for the integral does not involve any empirical correlation or parameter. Results of the approximation are compared with reference values for each individual resonance and for the sum of all resonances. (M.C.K.)
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with feces, urine or through bites of infected rodents and indirectly via water contaminated with urine and droppings from them. Significant increase in the number of leptospirosis cases in Malaysia caused by the recent severe floods were recorded during heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations have been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight on the spread of leptospirosis infection.
Analytic approximate radiation effects due to Bremsstrahlung
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Analytic approximate radiation effects due to Bremsstrahlung
International Nuclear Information System (INIS)
Ben-Zvi, I.
2012-01-01
The purpose of this note is to provide analytic approximate expressions that give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
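The paper's uniform result is not reproduced here, but the flavor of such Bessel-function approximations can be illustrated with the classical Mehler-Heine limit for the special case m1 = m2 = 0, where d^j_{00}(θ) = P_j(cos θ) ≈ J_0((j + 1/2)θ) at small θ. A pure-Python sketch, using the Bonnet recurrence for P_j and the standard integral representation of J_0:

```python
import math

def legendre(n, x):
    """P_n(x) via the three-term (Bonnet) recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def bessel_j0(x, m=2000):
    """J_0(x) from (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    h = math.pi / m
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(m)) * h / math.pi

# Small-angle Bessel approximation of a Wigner d-matrix element:
# d^j_00(theta) = P_j(cos theta) ~ J_0((j + 1/2) * theta)
j, theta = 50, 0.05
exact = legendre(j, math.cos(theta))
approx = bessel_j0((j + 0.5) * theta)
```

Even for this crude special case the two values agree closely, which is the behavior the paper's uniform approximation generalizes to arbitrary m1, m2.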
Analytic Approximation to Radiation Fields from Line Source Geometry
International Nuclear Information System (INIS)
Michieli, I.
2000-01-01
Line sources with slab shields represent a typical source-shield configuration in gamma-ray attenuation problems. Such shielding problems often lead to generalized Secant integrals of a specific form. Besides the numerical integration approach, various expansions and rational approximations with limited applicability are in use for computing the value of such integral functions. Recently, the author developed a rapidly convergent infinite series representation of generalized Secant Integrals involving incomplete Gamma functions. Validity of this representation was established for zero and positive values of the integral parameter a (a ≥ 0). In this paper recurrence relations for generalized Secant Integrals are derived, allowing simple approximate analytic calculation of the integral for arbitrary values of a. It is demonstrated how the truncated series representation can be used as the basis for such calculations when possibly negative values of a are encountered. (author)
Analytical gradients for density functional calculations with approximate spin projection.
Saito, Toru; Thiel, Walter
2012-11-08
We have derived and implemented analytical gradients for broken-symmetry unrestricted density functional calculations (BS-UDFT) with removal of spin contamination by Yamaguchi's approximate spin projection method. Geometry optimizations with these analytical gradients (AGAP-opt) yield results consistent with those obtained with the previously available numerical gradients (NAP-opt). The AGAP-opt approach is found to be more precise, efficient, and robust than NAP-opt. It allows full geometry optimizations for large open-shell systems. We report results for three types of organic diradicals and for a binuclear vanadium(II) complex to demonstrate the merits of removing the spin contamination effects during geometry optimization (AGAP-opt vs BS-UDFT) and to illustrate the superior performance of the analytical gradients (AGAP-opt vs NAP-opt). The results for the vanadium(II) complex indicate that the AGAP-opt method is capable of handling pronounced spin contamination effects in large binuclear transition metal complexes with two magnetic centers.
A new analytical approximation to the Duffing-harmonic oscillator
International Nuclear Information System (INIS)
Fesanghary, M.; Pirbodaghi, T.; Asghari, M.; Sojoudi, H.
2009-01-01
In this paper, a novel analytical approximation to the nonlinear Duffing-harmonic oscillator is presented. The variational iteration method (VIM) is used to obtain some accurate analytical results for frequency. The accuracy of the results is excellent in the whole range of oscillation amplitude variations.
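The VIM results themselves are not reproduced here. As a hedged illustration of what such frequency approximations look like for the Duffing-harmonic oscillator ẍ + x³/(1 + x²) = 0, the standard first-order harmonic-balance estimate (substituting x = A cos ωt into the equivalent form ẍ(1 + x²) + x³ = 0 and keeping the fundamental harmonic) gives ω²(A) = (3A²/4)/(1 + 3A²/4). The sketch below checks this against the period obtained from energy conservation with V(x) = x²/2 − ln(1 + x²)/2:

```python
import math

def hb_omega(A):
    """First-order harmonic balance for x'' (1 + x^2) + x^3 = 0 with
    x = A cos(w t):  w^2 = (3 A^2 / 4) / (1 + 3 A^2 / 4)."""
    return math.sqrt((0.75 * A * A) / (1.0 + 0.75 * A * A))

def period_by_quadrature(A, m=20000):
    """Period from energy conservation, V(x) = x^2/2 - ln(1+x^2)/2:
    T = 4 * integral_0^{pi/2} A cos(phi) / sqrt(2 (V(A) - V(A sin phi))) dphi,
    evaluated with the midpoint rule (which avoids the turning point)."""
    V = lambda x: 0.5 * x * x - 0.5 * math.log1p(x * x)
    h = (math.pi / 2.0) / m
    s = 0.0
    for i in range(m):
        phi = (i + 0.5) * h
        x = A * math.sin(phi)
        s += A * math.cos(phi) / math.sqrt(2.0 * (V(A) - V(x)))
    return 4.0 * s * h

A = 1.0
T_hb = 2.0 * math.pi / hb_omega(A)
T_num = period_by_quadrature(A)
```

The first-order estimate is already within a few percent of the quadrature period; the VIM refinements in the paper aim to tighten exactly this kind of gap.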
Analytical approximation formulae for hydrogen diffusion in a metal slab
International Nuclear Information System (INIS)
Pohl, F.; Bohdansky, J.
1984-12-01
This report treats hydrogen diffusion in the first wall of a fusion machine (INTOR, reactor, etc.), taking the thermal load into account. Analytical approximation formulae are given for the concentration and flux density of hydrogen diffusing through a plane metal slab. The re-emission flux, particularly during the dwell time(s) of machine operation, is also described with analytical formulae. The analytical formulae are compared with numerical calculations for steel as first wall material. (orig.)
Nonlinear ordinary differential equations analytical approximation and numerical methods
Hermann, Martin
2016-01-01
The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strong nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-depending ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...
Directory of Open Access Journals (Sweden)
M. T. Mustafa
2014-01-01
Full Text Available A new approach for generating approximate analytic solutions of transient nonlinear heat conduction problems is presented. It is based on an effective combination of Lie symmetry method, homotopy perturbation method, finite element method, and simulation based error reduction techniques. Implementation of the proposed approach is demonstrated by applying it to determine approximate analytic solutions of real life problems consisting of transient nonlinear heat conduction in semi-infinite bars made of stainless steel AISI 304 and mild steel. The results from the approximate analytical solutions and the numerical solution are compared indicating good agreement.
Precise analytic approximations for the Bessel function J1 (x)
Maass, Fernando; Martin, Pablo
2018-03-01
Precise and straightforward analytic approximations for the Bessel function J1(x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions, and it is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with an error of less than 0.04 per cent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
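The paper's specific multipoint quasirational formula is not reproduced here, but the bridging problem it addresses can be shown by comparing the two expansions it interpolates between: the small-x power series and the leading large-x asymptotic form, each checked against a quadrature reference built from the standard integral representation of J1:

```python
import math

def j1_reference(x, m=4000):
    """J_1(x) from (1/pi) * integral_0^pi cos(t - x sin t) dt (midpoint rule)."""
    h = math.pi / m
    return sum(math.cos((i + 0.5) * h - x * math.sin((i + 0.5) * h))
               for i in range(m)) * h / math.pi

def j1_series(x):
    """Leading power-series terms: x/2 - x^3/16 + x^5/384 (accurate for small x)."""
    return x / 2.0 - x**3 / 16.0 + x**5 / 384.0

def j1_asymptotic(x):
    """Leading asymptotic term sqrt(2/(pi x)) * cos(x - 3 pi/4) (accurate for large x)."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - 0.75 * math.pi)
```

The series is excellent near the origin and useless for large x, while the asymptotic term behaves in the opposite way; a single bridging approximation of the kind described in the paper removes the need to switch between the two regimes.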
Approximation of Analytic Functions by Bessel's Functions of Fractional Order
Directory of Open Access Journals (Sweden)
Soon-Mo Jung
2011-01-01
Full Text Available We will solve the inhomogeneous Bessel differential equation x²y″(x) + xy′(x) + (x² − ν²)y(x) = ∑_{m=0}^{∞} a_m x^m, where ν is a positive nonintegral number, and apply this result to the approximation of analytic functions of a special type by Bessel functions of fractional order.
Padé approximants and efficient analytic continuation of a power series
International Nuclear Information System (INIS)
Suetin, S P
2002-01-01
This survey reflects the current state of the theory of Padé approximants, that is, best rational approximations of power series. The main focus is on the so-called inverse problems of this theory, in which one must make deductions about the analytic continuation of a given power series on the basis of the known asymptotic behaviour of the poles of some sequence of Padé approximants of this series. Row and diagonal sequences are studied from this point of view. Gonchar's and Rakhmanov's fundamental results of inverse nature are presented along with results of the author.
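As a minimal illustration of the analytic continuation the survey discusses (a generic construction, not any specific result from it), the following sketch computes a diagonal Padé approximant with exact rational arithmetic and evaluates it well outside the disc of convergence of the underlying series, log(1 + x), whose radius of convergence is 1:

```python
from fractions import Fraction as F

def pade(c, m, n):
    """[m/n] Pade approximant of a series with coefficients c[0], c[1], ...
    Returns (p, q) coefficient lists with q[0] = 1; needs len(c) >= m + n + 1."""
    # Denominator: solve c[m+i] + sum_{j=1..n} q[j] * c[m+i-j] = 0 for i = 1..n.
    A = [[c[m + i - j] if 0 <= m + i - j < len(c) else F(0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [-c[m + i] for i in range(1, n + 1)]
    # Naive Gaussian elimination in exact rational arithmetic.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * p for a, p in zip(A[r], A[col])]
            b[r] -= f * b[col]
    q_tail = [F(0)] * n
    for r in range(n - 1, -1, -1):
        q_tail[r] = (b[r] - sum(A[r][j] * q_tail[j] for j in range(r + 1, n))) / A[r][r]
    q = [F(1)] + q_tail
    # Numerator from the Cauchy product truncated at degree m.
    p = [sum(q[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return p, q

def horner(coeffs, x):
    v = 0.0
    for ck in reversed(coeffs):
        v = v * x + float(ck)
    return v

# log(1+x) = x - x^2/2 + x^3/3 - ...  (radius of convergence 1)
c = [F(0)] + [F((-1) ** (k + 1), k) for k in range(1, 6)]
p, q = pade(c, 2, 2)
val = horner(p, 3.0) / horner(q, 3.0)  # evaluate at x = 3, far outside |x| < 1
```

The [2/2] approximant works out to x(1 + x/2)/(1 + x + x²/6) and stays close to log(4) at x = 3, where the degree-5 Taylor polynomial has already diverged badly; this is the continuation phenomenon the survey's inverse problems probe via pole locations.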
Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels
DEFF Research Database (Denmark)
Khorunzhina, Natalia; Richard, Jean-Francois
The objective of the paper is that of constructing finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at a time and the mixture is fully re-optimized at each step using a distance measure that approximates the corresponding importance sampling variance. All functions of interest are evaluated under Gaussian quadrature rules. Examples include a sequential (filtering) evaluation of the likelihood function of a stochastic volatility model where all relevant densities (filtering, predictive...
Large deflection of clamped circular plate and accuracy of its approximate analytical solutions
Zhang, Yin
2016-02-01
A different set of governing equations for the large deflection of plates is derived by the principle of virtual work (PVW), which also leads to a different set of boundary conditions. Boundary conditions play an important role in determining the computational accuracy of the large deflection of plates. Our boundary conditions are shown to be more appropriate by analyzing their difference from the previous ones. The accuracy of approximate analytical solutions is important to bulge/blister tests and the application of various sensors with a plate structure. Different approximate analytical solutions are presented and their accuracies are evaluated by comparing them with the numerical results. The error sources are also analyzed. A new approximate analytical solution is proposed and shown to provide a better approximation. The approximate analytical solution offers a much simpler and more direct framework for studying the plate-membrane transition behavior of deflection than the previous approaches of complex numerical integration.
Comparison of Two Approaches to Approximated Reasoning
van den Broek, P.M.; Wagenknecht, Michael; Hampel, Rainer
A comparison is made of two approaches to approximate reasoning: Mamdani's interpolation method and the implication method. Both approaches are variants of Zadeh's compositional rule of inference. It is shown that the approaches are not equivalent. A correspondence between the approaches is
Analytical approximation and numerical simulations for periodic travelling water waves.
Kalimeris, Konstantinos
2018-01-28
We present recent analytical and numerical results for two-dimensional periodic travelling water waves with constant vorticity. The analytical approach is based on novel asymptotic expansions. We obtain numerical results in two different ways: the first is based on the solution of a constrained optimization problem, and the second is realized as a numerical continuation algorithm. Both methods are applied to some examples of non-constant vorticity. This article is part of the theme issue 'Nonlinear water waves'. © 2017 The Author(s).
Polynomial approximation approach to transient heat conduction ...
African Journals Online (AJOL)
This work reports a polynomial approximation approach to transient heat conduction in a long slab, a long cylinder and a sphere with linear internal heat generation. It has been shown that the polynomial approximation method is able to calculate the average temperature as a function of time for higher values of the Biot number.
Analytical approximations for stick-slip vibration amplitudes
DEFF Research Database (Denmark)
Thomsen, Jon Juel; Fidlin, A.
2003-01-01
The classical "mass-on-moving-belt" model for describing friction-induced vibrations is considered, with a friction law describing friction forces that first decrease and then increase smoothly with relative interface speed. Approximate analytical expressions are derived for the conditions … and periodicity. The results are illustrated and tested by time series, phase plots and amplitude response diagrams, which compare very favorably with results obtained by numerical simulation of the equation of motion, as long as the difference in static and kinetic friction is not too large.
Lee, Ping I
2011-10-10
The purpose of this review is to provide an overview of approximate analytical solutions to the general moving boundary diffusion problems encountered during the release of a dispersed drug from matrix systems. Starting from the theoretical basis of the Higuchi equation and its subsequent improvement and refinement, available approximate analytical solutions for the more complicated cases involving heterogeneous matrix, boundary layer effect, finite release medium, surface erosion, and finite dissolution rate are also discussed. Among various modeling approaches, the pseudo-steady state assumption employed in deriving the Higuchi equation and related approximate analytical solutions appears to yield reasonably accurate results in describing the early stage release of a dispersed drug from matrices of different geometries whenever the initial drug loading (A) is much larger than the drug solubility (C(s)) in the matrix (or A≫C(s)). However, when the drug loading is not in great excess of the drug solubility (i.e. low A/C(s) values) or when the drug loading approaches the drug solubility (A→C(s)), which occurs often with drugs of high aqueous solubility, approximate analytical solutions based on the pseudo-steady state assumption tend to fail, with the Higuchi equation for planar geometry exhibiting an 11.38% error as compared with the exact solution. In contrast, approximate analytical solutions to this problem without making the pseudo-steady state assumption, based on either the double-integration refinement of the heat balance integral method or the direct simplification of available exact analytical solutions, show close agreement with the exact solutions in different geometries, particularly in the case of low A/C(s) values or drug loading approaching the drug solubility (A→C(s)). However, the double-integration heat balance integral approach is generally more useful in obtaining approximate analytical solutions especially when exact solutions are not
Fast semi-analytical solution of Maxwell's equations in Born approximation for periodic structures.
Pisarenco, Maxim; Quintanilha, Richard; van Kraaij, Mark G M M; Coene, Wim M J
2016-04-01
We propose a fast semi-analytical approach for solving Maxwell's equations in Born approximation based on the Fourier modal method (FMM). We show that, as a result of Born approximation, most matrices in the FMM algorithm become diagonal, thus allowing a reduction of computational complexity from cubic to linear. Moreover, due to the analytical representation of the solution in the vertical direction, the number of degrees of freedom in this direction is independent of the wavelength. The method is derived for planar illumination with two basic polarizations (TE/TM) and an arbitrary 2D geometry infinitely periodic in one horizontal direction.
Analytically approximate screened calculations of atomic-field pair production cross sections
International Nuclear Information System (INIS)
Dugne, J.J.
1976-01-01
A new method is described to obtain analytically approximate screened cross sections for atomic-field pair production. The Thomas-Fermi-Csavinszky potential model is expanded to first order and substituted for the point Coulomb potential in the Dirac equation. The method can be very useful for calculating approximate screened cross sections in the intermediate photon energy range (5m₀c² to about 50m₀c²), where numerically exact screened cross sections require prohibitive computer time and where the form factor approach based on the Born approximation is not always valid. (Auth.)
Proteomics - new analytical approaches
International Nuclear Information System (INIS)
Hancock, W.S.
2001-01-01
Full text: Recent developments in the sequencing of the human genome have indicated that the number of coding gene sequences may be as few as 30,000. It is clear, however, that the complexity of the human species is dependent on the much greater diversity of the corresponding protein complement. Estimates of the diversity (discrete protein species) of the human proteome range from 200,000 to 300,000 at the lower end to 2,000,000 to 3,000,000 at the high end. In addition, proteomics (the study of the protein complement to the genome) has been subdivided into two main approaches. Global proteomics refers to a high throughput examination of the full protein set present in a cell under a given environmental condition. Focused proteomics refers to a more detailed study of a restricted set of proteins that are related to a specified biochemical pathway or subcellular structure. While many of the advances in proteomics will be based on the sequencing of the human genome, de novo characterization of protein microheterogeneity (glycosylation, phosphorylation and sulfation as well as the incorporation of lipid components) will be required in disease studies. To characterize these modifications it is necessary to digest the protein mixture with an enzyme to produce the corresponding mixture of peptides. In a process analogous to sequencing of the genome, shot-gun sequencing of the proteome is based on the characterization of the key fragments produced by such a digest. Thus, a glycopeptide and hence a specific glycosylation motif will be identified by a unique mass and then a diagnostic MS/MS spectrum. Mass spectrometry will be the preferred detector in these applications because of the unparalleled information content provided by one or more dimensions of mass measurement. In addition, highly efficient separation processes are an absolute requirement for advanced proteomic studies. For example, a combination of the orthogonal approaches, HPLC and HPCE, can be very powerful
A Varifold Approach to Surface Approximation
Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon
2017-11-01
We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.
Approximate Analytical Solutions for Hypersonic Flow Over Slender Power Law Bodies
Mirels, Harold
1959-01-01
Approximate analytical solutions are presented for two-dimensional and axisymmetric hypersonic flow over slender power law bodies. Both zero-order (M → ∞) and first-order (small but nonvanishing values of 1/(MΔ)²) solutions are presented, where M is the free-stream Mach number and Δ is a characteristic slope. These solutions are compared with exact numerical integration of the equations of motion and appear to be accurate, particularly when the shock is relatively close to the body.
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how to estimate whether an approximation over- or under-fits the original model, to invalidate an approximation, and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
Analytic approximations for integrated electron-atom excitations
International Nuclear Information System (INIS)
McCarthy, I.E.; Saha, B.C.; Stelbovics, A.T.
1980-09-01
Accurate calculations of atomic excitations require estimates of the effect of higher excitations on the effective (optical) potential coupling various reaction channels. The total cross section for a particular excitation is proportional to the maximum contribution of that excitation to the imaginary part of the elastic momentum-space optical potential, and is typical of the contribution to the potential in general. Analytic expressions relevant to the calculation of optical potentials are given. Their validity is estimated by comparison with more-accurate calculations and with experimental excitation cross sections
Dataset concerning the analytical approximation of the Ae3 temperature
Directory of Open Access Journals (Sweden)
B.L. Ennis
2017-02-01
The dataset includes the terms of the function and the values of the polynomial coefficients for the major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full-length article entitled “The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel” (doi: 10.1016/j.actamat.2016.05.046; Ennis et al., 2016) [1].
Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity
Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad
2017-12-01
A second-order ordinary differential equation is considered whose anti-symmetric quadratic nonlinearity changes sign with the displacement. Because of this, such oscillators are assumed to behave differently in the positive and negative directions, and the Harmonic Balance Method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique, based on the HBM, for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with anti-symmetric quadratic nonlinearity. Applying the HBM yields a set of complicated nonlinear algebraic equations, and a purely analytical approach is not always fruitful for solving such equations. In this article, two small parameters are identified for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than the existing ones.
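As a sketch of the harmonic-balance idea (our toy illustration, not the paper's actual derivation), take the oscillator x'' + x + εx|x| = 0. Substituting x = A·cos(ωt) and balancing the first harmonic, the Fourier cosine coefficient of A²cosθ|cosθ| is 8A²/(3π), giving ω² ≈ 1 + 8εA/(3π). The example checks this against direct numerical integration:

```python
import math

def hbm_omega(A, eps):
    """First-order harmonic-balance frequency for x'' + x + eps*x*|x| = 0.
    First-harmonic Fourier coefficient of A^2*cos(t)*|cos(t)| is 8*A^2/(3*pi)."""
    return math.sqrt(1.0 + 8.0 * eps * A / (3.0 * math.pi))

def numerical_period(A, eps, dt=1e-4):
    """RK4 from x(0)=A, v(0)=0; by odd symmetry the first zero of x is T/4."""
    def acc(x):
        return -(x + eps * x * abs(x))
    x, v, t = A, 0.0, 0.0
    while x > 0.0:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x_new = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v_new = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if x_new <= 0.0:
            t += dt * x / (x - x_new)   # linear interpolation of the crossing
            return 4.0 * t
        x, v, t = x_new, v_new, t + dt

A, eps = 1.0, 0.2
T_hbm = 2.0 * math.pi / hbm_omega(A, eps)
T_num = numerical_period(A, eps)
rel_err = abs(T_hbm - T_num) / T_num
print(f"T_hbm={T_hbm:.4f}  T_num={T_num:.4f}  rel_err={rel_err:.4%}")
```

Even this one-term balance typically lands within a percent of the numerical period; the paper's power-series refinement targets the cases where such leading-order balances are not enough.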
Directory of Open Access Journals (Sweden)
Giorgos Minas
2017-07-01
In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) by remaining uniformly accurate for long times, while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
Analytic approximation for the modified Bessel function I -2/3(x)
Martin, Pablo; Olivares, Jorge; Maass, Fernando
2017-12-01
In the present work an analytic approximation to the modified Bessel function of negative fractional order I-2/3(x) is presented. The approximation is valid for every positive value of the independent variable. The accuracy is high despite the small number of parameters used (four). The approximation is a combination of elementary functions with rational ones. Power series and asymptotic expansions are used simultaneously to obtain the approximation.
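The two ingredients the authors combine can be checked numerically. The sketch below (our illustration, not the paper's actual approximant) evaluates the power series of I_{-2/3}(x) and compares it with the leading asymptotic form e^x/√(2πx) at moderately large x:

```python
import math

NU = -2.0 / 3.0

def I_nu_series(x, nu=NU, terms=60):
    """Power series: I_nu(x) = sum_k (x/2)**(2k+nu) / (k! * Gamma(k+nu+1))."""
    return sum((x / 2.0) ** (2 * k + nu)
               / (math.factorial(k) * math.gamma(k + nu + 1.0))
               for k in range(terms))

def I_asymptotic(x):
    """Leading large-x behaviour, independent of nu: e**x / sqrt(2*pi*x)."""
    return math.exp(x) / math.sqrt(2.0 * math.pi * x)

# Small x: the leading series term (x/2)**nu / Gamma(nu+1) dominates.
small_ratio = I_nu_series(0.01) / ((0.01 / 2.0) ** NU / math.gamma(NU + 1.0))

# Moderately large x: the series approaches the asymptotic form; the next
# asymptotic correction, -(4*nu**2 - 1)/(8*x), is about -0.01 at x = 10.
large_ratio = I_nu_series(10.0) / I_asymptotic(10.0)
print(small_ratio, large_ratio)   # both ≈ 1
```

A small set of rational-plus-elementary terms bridging these two regimes is exactly the kind of approximant the paper constructs.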
Liu, Qiang; Van Mieghem, Piet
2017-04-01
One of the most important quantities of the exact Markovian SIS epidemic process is the time-dependent prevalence, which is the average fraction of infected nodes. Unfortunately, the Markovian SIS epidemic model features an exponentially increasing computational complexity with growing network size N. In this paper, we evaluate a recently proposed analytic approximate prevalence function introduced in Van Mieghem (2016). We compare the approximate function with the N-Intertwined Mean-Field Approximation (NIMFA) and with simulation of the Markovian SIS epidemic process. The results show that the new analytic prevalence function is comparable with other approximate methods.
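NIMFA, the mean-field baseline mentioned above, replaces the 2^N-state Markov chain by N coupled ODEs for the infection probabilities v_i. A minimal sketch (complete graph, forward-Euler integration; all parameter values are illustrative assumptions, not from the paper):

```python
# NIMFA for SIS: dv_i/dt = beta*(1 - v_i)*sum_j a_ij*v_j - delta*v_i
N, beta, delta = 20, 0.1, 1.0
A = [[1 if i != j else 0 for j in range(N)] for i in range(N)]  # complete graph K_N

v = [0.1] * N          # initial infection probabilities
dt, T = 0.01, 15.0
for _ in range(int(T / dt)):
    dv = [beta * (1 - v[i]) * sum(A[i][j] * v[j] for j in range(N)) - delta * v[i]
          for i in range(N)]
    v = [v[i] + dt * dv[i] for i in range(N)]

prevalence = sum(v) / N
# For K_N above the epidemic threshold, the NIMFA steady-state prevalence is
# v_inf = 1 - delta / (beta * (N - 1))
v_inf = 1 - delta / (beta * (N - 1))
print(prevalence, v_inf)
```

The analytic prevalence function evaluated in the paper aims to capture the full time dependence of this curve in closed form, rather than just its steady state.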
Directory of Open Access Journals (Sweden)
Xiao-Ying Qin
2014-01-01
An Adomian decomposition method (ADM) is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation to eliminate the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into the other equations and boundary conditions of the model generates function identities involving the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions to the Stefan and inverse Stefan problems.
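The flavour of ADM can be shown on a toy initial-value problem rather than the Stefan problem itself. For y' = y², y(0) = 1, the Adomian polynomials of the nonlinearity N(y) = y² are A_n = Σᵢ yᵢ·y_{n−i}, and each component is y_{n+1}(t) = ∫₀ᵗ A_n ds; the partial sums reproduce the exact solution 1/(1−t):

```python
def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists (index = power of t)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_integrate(p):
    """Definite integral from 0 to t: t**k -> t**(k+1)/(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adm_components(n_terms):
    """ADM for y' = y**2, y(0)=1: y_{k+1} = integral of A_k, A_k = sum_i y_i*y_{k-i}."""
    ys = [[1.0]]                            # y_0 = initial condition
    for k in range(n_terms - 1):
        A_k = [0.0]
        for i in range(k + 1):
            A_k = poly_add(A_k, poly_mul(ys[i], ys[k - i]))
        ys.append(poly_integrate(A_k))
    return ys

def eval_sum(ys, t):
    return sum(sum(c * t ** k for k, c in enumerate(y)) for y in ys)

ys = adm_components(12)       # components come out as y_n = t**n here
approx = eval_sum(ys, 0.2)
print(approx, 1 / (1 - 0.2))  # partial sum ≈ exact solution 1/(1-t)
```

For the Stefan problem the same recursion runs on the temperature fields and the boundary functions, which is where the polynomial boundary approximations of the paper come in.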
Fast and Analytical EAP Approximation from a 4th-Order Tensor
Directory of Open Access Journals (Sweden)
Aurobrata Ghosh
2012-01-01
Generalized diffusion tensor imaging (GDTI) was developed to model the complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.
Analytic approximate eigenvalues by a new technique. Application to sextic anharmonic potentials
Diaz Almeida, D.; Martin, P.
2018-03-01
A new technique for obtaining analytic approximants for eigenvalues, based on the simultaneous use of power series and asymptotic expansions, is presented. The analytic approximation acts as a bridge between the two expansions: rational functions, as in Padé approximants, are combined with elementary functions. Improvements over previous methods, such as the multipoint quasirational approximation (MPQA), are also developed. The application of the method is presented in detail for the 1-D Schrödinger equation with an anharmonic sextic potential of the form V(x) = x² + λx⁶, for both the ground state and the first excited state of the anharmonic oscillator.
Number-conserving random phase approximation with analytically integrated matrix elements
International Nuclear Information System (INIS)
Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.
1990-01-01
In the present paper a number-conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry-projected quasiparticle mean fields. All the integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number-projected variational equation for the number-conserving pairing problem.
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
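A minimal PLSM-style sketch (our own illustration, not the authors' implementation) for the pantograph equation y'(t) = −y(t) + ½·y(t/2), y(0) = 1: substitute a polynomial ansatz y = 1 + Σ c_k·t^k, note that the residual is linear in the coefficients, and minimize the summed squared residual over a grid via the normal equations:

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Residual of y' + y - 0.5*y(t/2) with y = 1 + sum_k c_k*t**k is linear in c:
#   r(t) = 0.5 + sum_k c_k * (k*t**(k-1) + t**k - 0.5*(t/2)**k)
deg = 4
ts = [i / 20 for i in range(21)]                     # grid on [0, 1]
A = [[k * t ** (k - 1) + t ** k - 0.5 * (t / 2) ** k
      for k in range(1, deg + 1)] for t in ts]
b = [-0.5] * len(ts)

# Normal equations (A^T A) c = A^T b
AtA = [[sum(A[i][p] * A[i][q] for i in range(len(ts))) for q in range(deg)]
       for p in range(deg)]
Atb = [sum(A[i][p] * b[i] for i in range(len(ts))) for p in range(deg)]
c = solve_linear(AtA, Atb)

def y(t):
    return 1.0 + sum(c[k - 1] * t ** k for k in range(1, deg + 1))

residual = max(abs(0.5 + sum(c[k - 1] * row[k - 1] for k in range(1, deg + 1)))
               for row in A)
print("c =", c, " max residual =", residual)
```

The delayed argument y(t/2) poses no difficulty because a polynomial evaluated at t/2 is still a polynomial in t; that is the structural point the method exploits.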
International Nuclear Information System (INIS)
Kurnia, W; Tan, P C; Yeo, S H; Wong, M
2008-01-01
Theoretical models have been used to predict process performance measures in electrical discharge machining (EDM), namely the material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR). However, these contributions are mainly applicable to conventional EDM due to limits on the range of energy and pulse-on-time adopted by the models. This paper proposes an analytical approximation of micro-EDM performance measures, based on crater prediction using a developed theoretical model. The results show that the analytical approximation of the MRR and TWR provides a close match with the experimental data. The approximation results for the MRR and TWR are found to vary by up to 30% and 24%, respectively, from their associated experimental values. Since the voltage and current input used in the computation are captured in real time, the method can be applied as a reliable online monitoring system for the micro-EDM process.
Analytical approach for the Floquet theory of delay differential equations.
Simmendinger, C; Wunderlin, A; Pelster, A
1999-05-01
We present an analytical approach to deal with nonlinear delay differential equations close to instabilities of time periodic reference states. To this end we start with approximately determining such reference states by extending the Poincaré-Lindstedt and the Shohat expansions, which were originally developed for ordinary differential equations. Then we systematically elaborate a linear stability analysis around a time periodic reference state. This allows us to approximately calculate the Floquet eigenvalues and their corresponding eigensolutions by using matrix valued continued fractions.
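For intuition about the first step, here is a Poincaré-Lindstedt expansion on an ordinary (non-delay) example, our illustration rather than the paper's delay-equation computation: for the Duffing oscillator x'' + x + εx³ = 0 with x(0) = A, x'(0) = 0, the first-order expansion predicts ω ≈ 1 + 3εA²/8, so the orbit should (nearly) close after one predicted period:

```python
import math

def duffing_rk4(A, eps, T, dt=1e-3):
    """Integrate x'' + x + eps*x**3 = 0 from x=A, v=0 for time T (classical RK4)."""
    def acc(x):
        return -(x + eps * x ** 3)
    x, v = A, 0.0
    n = int(round(T / dt))
    h = T / n
    for _ in range(n):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * h * k1v, acc(x + 0.5 * h * k1x)
        k3x, k3v = v + 0.5 * h * k2v, acc(x + 0.5 * h * k2x)
        k4x, k4v = v + h * k3v, acc(x + h * k3x)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x, v

A, eps = 1.0, 0.1
omega_lp = 1.0 + 3.0 * eps * A ** 2 / 8.0   # first-order Poincare-Lindstedt frequency
x_T, v_T = duffing_rk4(A, eps, 2.0 * math.pi / omega_lp)
print(x_T, v_T)   # close to the initial state (A, 0): the orbit has nearly closed
```

The paper extends exactly this kind of expansion to delay differential equations, where the reference state itself must first be constructed.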
Zhoujin Cui; Zisen Mao; Sujuan Yang; Pinneng Yu
2013-01-01
The approximate analytical solutions of differential equations with fractional time derivative are obtained with the help of a general framework of the reduced differential transform method (RDTM) and the homotopy perturbation method (HPM). RDTM technique does not require any discretization, linearization, or small perturbations and therefore it reduces significantly the numerical computation. Comparing the methodology (RDTM) with some known technique (HPM) shows that the present approach is ...
Note on the Calculation of Analytical Hessians in the Zeroth-Order Regular Approximation (ZORA)
van Lenthe, J.H.; van Lingen, J.N.J.
2006-01-01
The previously proposed atomic zeroth-order regular approximation (ZORA) approach, which was shown to eliminate the gauge-dependent effect on gradients and to be remarkably accurate for geometry optimization, is tested for the calculation of analytical second derivatives. It is shown that the
Analytic approximations for the elastic moduli of two-phase materials
DEFF Research Database (Denmark)
Zhang, Z. J.; Zhu, Y. K.; Zhang, P.
2017-01-01
Based on the models of series and parallel connections of the two phases in a composite, analytic approximations are derived for the elastic constants (Young's modulus, shear modulus, and Poisson's ratio) of elastically isotropic two-phase composites containing second phases of various volume...
Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method
Directory of Open Access Journals (Sweden)
De-Gang Wang
2012-01-01
A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations. It is easy to implement and to extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A.P. [Instituto Federal do Rio de Janeiro, Nilopolis, RJ (Brazil)], e-mail: dpalmaster@gmail.com; Silva, Adilson C. da; Goncalves, Alessandro C.; Martinez, Aquilino S. [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: asilva@con.ufrj.br, e-mail: agoncalves@con.ufrj.br, e-mail: aquilino@lmp.ufrj.br
2009-07-01
The analytical solution of point kinetics equations with a group of delayed neutrons is useful in predicting neutron density variation during the operation of a nuclear reactor. Although different approximate solutions for the system of point kinetics equations with temperature feedback may be found in literature, some of them do not present an explicit dependence in time, which makes the computing implementation difficult and, as a result, its applicability in practical cases. The present paper uses the polynomial adjustment technique to overcome this problem in the analytical approximation as proposed by Nahla. In a systematic comparison with other existing approximations it is concluded that the method is adequate, presenting small deviations in relation to the reference values obtained from the reference numerical method. (author)
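The system being approximated, point kinetics with one delayed-neutron group, is small enough to integrate directly for reference. The sketch below (illustrative parameter values, not from the paper) verifies the critical steady state and the prompt-jump response to a small reactivity step:

```python
# Point kinetics with one delayed group:
#   dn/dt = ((rho - beta)/Lam) * n + lam * C
#   dC/dt = (beta/Lam) * n - lam * C
beta, lam, Lam = 0.0065, 0.08, 1e-4   # illustrative values

def integrate(rho, n0=1.0, t_end=1.0, dt=1e-5):
    n = n0
    C = beta * n0 / (lam * Lam)       # precursor equilibrium for rho = 0
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n, C = n + dt * dn, C + dt * dC
    return n

n_crit = integrate(rho=0.0)          # stays at the critical steady state
n_super = integrate(rho=0.1 * beta)  # supercritical: prompt jump, then slow rise
print(n_crit, n_super)
```

Analytical approximations such as the polynomial-adjusted one discussed above aim to reproduce curves like n(t) explicitly in time, avoiding this kind of fine-step numerical integration.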
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Silva, Adilson C. da; Goncalves, Alessandro C.; Martinez, Aquilino S.
2009-01-01
The analytical solution of point kinetics equations with a group of delayed neutrons is useful in predicting neutron density variation during the operation of a nuclear reactor. Although different approximate solutions for the system of point kinetics equations with temperature feedback may be found in literature, some of them do not present an explicit dependence in time, which makes the computing implementation difficult and, as a result, its applicability in practical cases. The present paper uses the polynomial adjustment technique to overcome this problem in the analytical approximation as proposed by Nahla. In a systematic comparison with other existing approximations it is concluded that the method is adequate, presenting small deviations in relation to the reference values obtained from the reference numerical method. (author)
DEFF Research Database (Denmark)
Bees, Martin Alan; Hill, N.A.; Pedley, T.J.
1998-01-01
Analytical approximations are obtained to solutions of the steady Fokker-Planck equation describing the probability density function for the orientation of dipolar particles in a steady, low-Reynolds-number shear flow and a uniform external field. Exact computer algebra is used to solve the equation in terms of a truncated spherical harmonic expansion. It is demonstrated that very low orders of approximation are required for spheres but that spheroids introduce resolution problems in certain flow regimes. Moments of the orientation probability density function are derived and applications...
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention
Directory of Open Access Journals (Sweden)
Samar Al-Hajj
2017-09-01
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem-solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention. Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow-up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods. Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and, ultimately, collaboration on problem solving and decision-making. Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.
Directory of Open Access Journals (Sweden)
Md. Alal Hosen
2016-01-01
In the present paper, a novel analytical approximation technique based on the energy balance method (EBM) is proposed to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions for the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of the natural frequency found for the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using existing methods. Remarkably, the approximate natural frequency remains highly accurate over the whole range of large oscillation amplitudes when compared with the exact values. The very simple solution procedure and the high accuracy found for the three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
Directory of Open Access Journals (Sweden)
M. Bishehniasar
2017-01-01
The demand of many scientific areas for the use of fractional partial differential equations (FPDEs) to describe their real-world systems has been broadly recognized. The solutions may portray the dynamical behaviors of various particles such as chemicals and cells. The desire to obtain approximate solutions of these equations stems from the mathematical complexity of modeling the relevant phenomena in nature. This research proposes a promising approximate-analytical scheme that is an accurate technique for solving a variety of noninteger partial differential equations (PDEs). The proposed strategy is based on approximating the fractional-order derivative and reducing the problem to a corresponding partial differential equation (PDE). The approximating PDE is then solved using a separation-of-variables technique. The method can be applied straightforwardly to nonhomogeneous problems and reduces the computational cost while achieving an approximate-analytical solution that is in excellent concurrence with the exact solution of the original problem. In addition, to demonstrate the efficiency of the method, it is compared with two finite difference methods, a nonstandard finite difference (NSFD) method and a standard finite difference (SFD) technique, which are popular in the literature for solving engineering problems.
Directory of Open Access Journals (Sweden)
S. Das
2013-12-01
In this article, the optimal homotopy-analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which describe the faster convergence of the solution. The effects of these parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
We have recently addressed the problem of the determination of the nuclear surface energy for symmetric nuclei in the framework of the extended Thomas-Fermi (ETF) approximation using Skyrme functionals. We presently extend this formalism to the case of asymmetric nuclei and the question of the surface symmetry energy. We propose an approximate expression for the diffuseness and the surface energy. These quantities are analytically related to the parameters of the energy functional. In particular, the influence of the different equation of state parameters can be explicitly quantified. Detailed analyses of the different energy components (local/non-local, isoscalar/isovector, surface/curvature and higher order) are also performed. Our analytical solution of the ETF integral improves previous models and leads to a precision of better than 200 keV per nucleon in the determination of the nuclear binding energy for dripline nuclei.
Revisiting the approximate analytical solution of fractional-order gas dynamics equation
Directory of Open Access Journals (Sweden)
Mohammad Tamsir
2016-06-01
In this paper, an approximate analytical solution of the time-fractional gas dynamics equation arising in shock fronts is obtained using a recent semi-analytical method referred to as the fractional reduced differential transform method. The fractional derivatives are considered in the Caputo sense. To validate the efficiency and reliability of the method, four numerical examples of linear and nonlinear gas dynamics equations are considered. Computed results are compared with results available in the literature, and the obtained results are found to agree excellently with those of DTM and FHATM. The solution behavior and its effects for different values of the fractional order are shown graphically. The main advantages of the method are its ease of implementation and small computational cost; hence, it is a very effective and efficient semi-analytical method for solving the fractional-order gas dynamics equation.
Hofman, Radek; Seibert, Petra; Kovalets, Ivan; Andronopoulos, Spyros
2015-04-01
We are concerned with source term retrieval in the case of an accident in a nuclear power plant with off-site consequences. The goal is to optimize atmospheric dispersion model inputs using inverse modeling of gamma dose rate measurements (instantaneous or time-integrated). These are the most abundant type of measurements provided by various radiation monitoring networks across Europe and are available continuously in near-real time. Usually, the source term of an accidental release comprises a mixture of nuclides. Unfortunately, gamma dose rate measurements do not provide direct information on the source term composition; however, physical properties of the respective nuclides (deposition properties, decay half-life) can yield some insight. In the method presented, we assume that nuclide ratios are known at least approximately, e.g. from nuclide-specific observations or from reactor inventory and assumptions on the accident type. The source term can be in multiple phases, each characterized by constant nuclide ratios. The method is an extension of a well-established source term inversion approach based on the optimization of an objective function (minimization of a cost function). This function has two quadratic terms: the mismatch between model and measurements weighted by an observation error covariance matrix, and the deviation of the solution from a first guess weighted by the first-guess error covariance matrix. For simplicity, both error covariance matrices are approximated as diagonal. Analytical minimization of the cost function leads to a linear system of equations. Possible negative parts of the solution are iteratively removed by means of first-guess error variance reduction. Nuclide ratios enter the problem in the form of additional linear equations, where the deviations from prescribed ratios are weighted by factors; the corresponding error variance allows us to control how strongly we want to impose the prescribed ratios. This introduces some freedom into the
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in
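The two-asset Kirk approximation that the thesis generalizes can be sketched as follows (standard formulas shown for illustration; parameter values are arbitrary). A useful sanity check is that at K = 0 it collapses to the exact Margrabe exchange-option price:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kirk_spread_call(F1, F2, K, sig1, sig2, rho, r, T):
    """Kirk's approximation for a call on the spread F1 - F2 with strike K."""
    a = F2 / (F2 + K)
    sig = math.sqrt(sig1 ** 2 - 2.0 * rho * sig1 * sig2 * a + (sig2 * a) ** 2)
    d1 = (math.log(F1 / (F2 + K)) + 0.5 * sig ** 2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return math.exp(-r * T) * (F1 * Phi(d1) - (F2 + K) * Phi(d2))

def margrabe(F1, F2, sig1, sig2, rho, r, T):
    """Exact price of the option to exchange asset 2 for asset 1 (K = 0 case)."""
    sig = math.sqrt(sig1 ** 2 - 2.0 * rho * sig1 * sig2 + sig2 ** 2)
    d1 = (math.log(F1 / F2) + 0.5 * sig ** 2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return math.exp(-r * T) * (F1 * Phi(d1) - F2 * Phi(d2))

params = dict(F1=110.0, F2=100.0, sig1=0.3, sig2=0.25, rho=0.5, r=0.05, T=1.0)
p_k5 = kirk_spread_call(K=5.0, **params)   # approximate spread-option price
p_k0 = kirk_spread_call(K=0.0, **params)
p_margrabe = margrabe(**params)
print(p_k5, p_k0, p_margrabe)              # p_k0 and p_margrabe coincide
```

Because the formula is in closed form, Greeks follow by differentiation, which is the computational advantage the thesis builds on for the multi-asset basket-spread case.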
Analytical approximate solutions of the time-domain diffusion equation in layered slabs.
Martelli, Fabrizio; Sassaroli, Angelo; Yamada, Yukio; Zaccanti, Giovanni
2002-01-01
Time-domain analytical solutions of the diffusion equation for photon migration through highly scattering two- and three-layered slabs have been obtained. The effect of the refractive-index mismatch with the external medium is taken into account, and approximate boundary conditions at the interface between the diffusive layers have been considered. A Monte Carlo code for photon migration through a layered slab has also been developed. Comparisons with the results of Monte Carlo simulations showed that the analytical solutions correctly describe the mean path length followed by photons inside each diffusive layer and the shape of the temporal profile of received photons, while discrepancies are observed for the continuous-wave reflectance or transmittance.
International Nuclear Information System (INIS)
Roteta, M.; Baro, J.; Fernandez-Varea, J.M.; Salvat, F.
1994-01-01
The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within approximately 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included
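The integration strategy described (a fixed-order Gauss rule plus adaptive interval splitting to a preset tolerance) can be sketched with a 5-point Gauss-Legendre rule standing in for PHOTAC's 20-point rule:

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = (-0.9061798459386640, -0.5384693101056831, 0.0,
          0.5384693101056831,  0.9061798459386640)
WEIGHTS = (0.23692688505618908, 0.47862867049936647, 0.5688888888888889,
           0.47862867049936647, 0.23692688505618908)

def gauss5(f, a, b):
    """Fixed-order Gauss-Legendre estimate of the integral of f on [a, b]."""
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * x) for x, w in zip(NODES, WEIGHTS))

def adaptive_gauss(f, a, b, tol=1e-10):
    """Bisect the interval until the whole-interval estimate agrees with
    the sum over the two halves to within tol."""
    whole = gauss5(f, a, b)
    m = 0.5 * (a + b)
    halves = gauss5(f, a, m) + gauss5(f, m, b)
    if abs(whole - halves) < tol:
        return halves
    return adaptive_gauss(f, a, m, tol / 2) + adaptive_gauss(f, m, b, tol / 2)

I_sin = adaptive_gauss(math.sin, 0.0, math.pi)       # ≈ 2.0
I_cube = adaptive_gauss(lambda x: x ** 3, 0.0, 1.0)  # ≈ 0.25
print(I_sin, I_cube)
```

The adaptive splitting is what lets a smooth-kernel rule handle the sharply peaked differential cross sections near thresholds without a prohibitively fine global grid.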
Analytic number theory, approximation theory, and special functions in honor of Hari M. Srivastava
Rassias, Michael
2014-01-01
This book, in honor of Hari M. Srivastava, discusses essential developments in mathematical research in a variety of problems. It contains thirty-five articles, written by eminent scientists from the international mathematical community, including both research and survey works. Subjects covered include analytic number theory, combinatorics, special sequences of numbers and polynomials, analytic inequalities and applications, approximation of functions and quadratures, orthogonality, and special and complex functions. The mathematical results and open problems discussed in this book are presented in a simple and self-contained manner. The book contains an overview of old and new results, methods, and theories toward the solution of longstanding problems in a wide scientific field, as well as new results in rapidly progressing areas of research. The book will be useful for researchers and graduate students in the fields of mathematics, physics, and other computational and applied sciences.
Tao, Wanghai; Wang, Quanjiu; Lin, Henry
2018-03-01
Soil and water loss from farmland causes land degradation and water pollution, so continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β (in the kinematic wave model) approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating the excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) are given in this study so that the model is easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient means of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
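As a minimal sketch of the kinematic-wave ingredient described above, with q = α h^β, β taken as exactly two, and constant excess rainfall r, the rising limb of the outlet hydrograph has a closed form; α and all variable names here are illustrative assumptions, not the paper's notation.

```python
def outlet_discharge(t, L, r, alpha, beta=2.0):
    """Kinematic-wave outlet discharge for a plane of length L under
    constant excess rainfall r. Along characteristics the depth grows as
    h = r*t, so q = alpha*(r*t)**beta until the equilibrium discharge
    q_e = r*L is reached at the time of concentration."""
    return min(alpha * (r * t) ** beta, r * L)
```

At equilibrium the unit discharge grows linearly with distance at rate r, consistent with the constant change-rate assumption quoted above.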
Energy Technology Data Exchange (ETDEWEB)
Roteta, M.; Baro, J.; Fernandez-Varea, J. M.; Salvat, F.
1994-07-01
The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within about 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
Approximation of orbital elements of telluric planets by compact analytical series
Kudryavtsev, S.
2014-12-01
We take the long-term numerical ephemeris of the major planets DE424 (Folkner 2011) and approximate the orbital elements of the telluric planets from that ephemeris by trigonometric series. Amplitudes of the series' terms are second- or third-degree polynomials of time, and the arguments are fourth-degree time polynomials. The resulting series are precise and compact; in particular, the maximum deviation of the planetary mean longitude calculated by the analytical series from that given by DE424 remains small over the interval [-3000; 3000].
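The structure of such a compact series, polynomial amplitudes with polynomial arguments, is straightforward to evaluate numerically. The sketch below uses invented two-term coefficients purely for illustration, not values from the actual ephemeris fit.

```python
import numpy as np

def eval_series(t, terms):
    """Evaluate sum_k A_k(t) * cos(phi_k(t)), where each term supplies
    polynomial coefficients (highest degree first, numpy convention)
    for the amplitude A_k and the argument phi_k."""
    total = 0.0
    for amp_coeffs, arg_coeffs in terms:
        amp = np.polyval(amp_coeffs, t)     # second/third-degree amplitude
        phase = np.polyval(arg_coeffs, t)   # fourth-degree argument
        total += amp * np.cos(phase)
    return total

# Hypothetical two-term series: quadratic amplitudes, quartic arguments,
# mirroring the shape of the compact series described above.
terms = [([0.0, 1e-3, 1.2], [0.0, 0.0, 0.0, 6.28, 0.1]),
         ([1e-4, 0.0, 0.3], [1e-6, 0.0, 0.0, 1.57, 0.0])]
value = eval_series(0.5, terms)
```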
ANALYTIC APPROXIMATE SEISMOLOGY OF PROPAGATING MAGNETOHYDRODYNAMIC WAVES IN THE SOLAR CORONA
Energy Technology Data Exchange (ETDEWEB)
Goossens, M.; Soler, R. [Centre for Mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium); Arregui, I. [Instituto de Astrofisica de Canarias, Via Lactea s/n, E-38205 La Laguna, Tenerife (Spain); Terradas, J., E-mail: marcel.goossens@wis.kuleuven.be [Solar Physics Group, Departament de Fisica, Universitat de les Illes Balears, E-07122 Palma de Mallorca (Spain)
2012-12-01
Observations show that propagating magnetohydrodynamic (MHD) waves are ubiquitous in the solar atmosphere. The technique of MHD seismology uses the wave observations combined with MHD wave theory to indirectly infer physical parameters of the solar atmospheric plasma and magnetic field. Here, we present an analytical seismological inversion scheme for propagating MHD waves. This scheme uses the observational information on wavelengths and damping lengths in a consistent manner, along with observed values of periods or phase velocities, and is based on approximate asymptotic expressions for the theoretical values of wavelengths and damping lengths. The applicability of the inversion scheme is discussed and an example is given.
An integrated approach of analytical chemistry
Directory of Open Access Journals (Sweden)
Guardia Miguel de la
1999-01-01
The tremendous development of physical methods of analysis offers an impressive number of tools to simultaneously determine a large number of elements and compounds at very low concentration levels. Today's Analytical Chemistry provides appropriate means to solve technical problems and to obtain correct information about chemical systems, in order to take the most appropriate decisions for problem solving. In recent years, the development of new strategies for sampling, sample treatment and data exploitation, through research on field sampling, microwave-assisted procedures and chemometrics, together with the revolution in analytical methodology brought by flow analysis concepts and process analysis strategies, offers a link between modern instrumentation and social or technological problems. The integrated approach of Analytical Chemistry requires correctly incorporating the developments in all of these fields, basic chemistry, instrumentation and information theory alike, in a scheme which considers all aspects of data acquisition and interpretation, while also taking into consideration the side effects of chemical measurements. In this paper, new ideas and tools for trace analysis, speciation, surface analysis, data acquisition and data treatment, automation and decontamination are presented within the frame of Analytical Chemistry as a problem-solving strategy focused on the chemical composition of systems and on the specific figures of merit of analytical measurements, such as accuracy, precision, sensitivity and selectivity, but also speed and cost. Technological and industrial, as well as environmental, health and social problems are considered as challenges for whose solution the chemist should select the most appropriate tools and develop an appropriate strategy.
Approximate analytical solution to the Boussinesq equation with a sloping water-land boundary
Tang, Yuehao; Jiang, Qinghui; Zhou, Chuangbing
2016-04-01
An approximate solution is presented to the 1-D Boussinesq equation (BEQ) characterizing transient groundwater flow in an unconfined aquifer subject to a constant water variation at the sloping water-land boundary. The flow equation is decomposed to a linearized BEQ and a head correction equation. The linearized BEQ is solved using a Laplace transform. By means of the frozen-coefficient technique and Gauss function method, the approximate solution for the head correction equation can be obtained, which is further simplified to a closed-form expression under the condition of local energy equilibrium. The solutions of the linearized and head correction equations are discussed from physical concepts. Especially for the head correction equation, the well posedness of the approximate solution obtained by the frozen-coefficient method is verified to demonstrate its boundedness, which can be further embodied as the upper and lower error bounds to the exact solution of the head correction by statistical analysis. The advantage of this approximate solution is in its simplicity while preserving the inherent nonlinearity of the physical phenomenon. Comparisons between the analytical and numerical solutions of the BEQ validate that the approximation method can achieve desirable precisions, even in the cases with strong nonlinearity. The proposed approximate solution is applied to various hydrological problems, in which the algebraic expressions that quantify the water flow processes are derived from its basic solutions. The results are useful for the quantification of stream-aquifer exchange flow rates, aquifer response due to the sudden reservoir release, bank storage and depletion, and front position and propagation speed.
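For orientation, the linearized step of such an analysis has a classical closed form: for h_t = D h_xx on a semi-infinite aquifer with a sudden head change dh at the boundary, the response is an erfc profile. This is a textbook sketch of the linearized part only, not the paper's head-correction scheme, and D, h0 and dh are assumed constants.

```python
import math

def linearized_beq_head(x, t, D, h0, dh):
    """Head for the linearized Boussinesq equation h_t = D*h_xx in a
    semi-infinite aquifer after a sudden boundary change of dh at x = 0:
    h(x, t) = h0 + dh * erfc(x / (2*sqrt(D*t)))."""
    if t <= 0:
        return h0  # initial condition: undisturbed head everywhere
    return h0 + dh * math.erfc(x / (2.0 * math.sqrt(D * t)))
```

The profile equals h0 + dh at the boundary, decays monotonically with distance, and spreads as sqrt(D*t), which is the behaviour the nonlinear correction then refines.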
An approximate analytic model of a star cluster with potential escapers
Daniel, Kathryne J.; Heggie, Douglas C.; Varri, Anna Lisa
2017-06-01
In the context of a star cluster moving on a circular galactic orbit, a 'potential escaper' is a cluster star that has orbital energy greater than the escape energy, and yet is confined within the Jacobi radius of the stellar system. On the other hand, analytic models of stellar clusters typically have a truncation energy equal to the cluster escape energy, and therefore explicitly exclude these energetically unbound stars. Starting from the landmark analysis performed by Hénon of periodic orbits of the circular Hill equations, we present a numerical exploration of the population of 'non-escapers', defined here as those stars that remain within two Jacobi radii for several galactic periods, with energy above the escape energy. We show that they can be characterized by the Jacobi integral and two further approximate integrals, which are based on perturbation theory and ideas drawn from Lidov-Kozai theory. Finally, we use these results to construct an approximate analytic model that includes a phase-space description of a population resembling that of potential escapers, in addition to the usual bound population.
International Nuclear Information System (INIS)
Boisseau, Bruno; Forgacs, Peter; Giacomini, Hector
2007-01-01
A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable for both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to finding the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energy of the skyrmion and of the monopole is also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision. (fast track communication)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the Von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán plate equations in differential form is just a special case of the HAM for the Von Kármán plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
Directory of Open Access Journals (Sweden)
Zhoujin Cui
2013-01-01
The approximate analytical solutions of differential equations with fractional time derivative are obtained with the help of a general framework of the reduced differential transform method (RDTM) and the homotopy perturbation method (HPM). The RDTM technique does not require any discretization, linearization, or small perturbations, and therefore significantly reduces the numerical computation. Comparing the methodology (RDTM) with a known technique (HPM) shows that the present approach is effective and powerful. The numerical calculations are carried out for initial conditions in the form of periodic functions, and the results are depicted through graphs. Two different cases have been studied, showing that the method is extremely effective due to its simplicity and performance.
Fighting falsified medicines: The analytical approach.
Rebiere, Hervé; Guinot, Pauline; Chauvey, Denis; Brenier, Charlotte
2017-08-05
Given the harm to human health, the fight against falsified medicines has become a priority issue that involves numerous actors. Analytical laboratories contribute by performing analyses to chemically characterise falsified samples and assess their hazards for patients. A wide range of techniques can be used to obtain individual information on the organic and inorganic composition, the presence of an active substance or impurities, or the crystalline arrangement of the formulation's compound. After a presentation of these individual techniques, this review puts forward a methodology to combine them. In order to illustrate this approach, examples from the scientific literature (products used for erectile dysfunction treatment, weight loss and malaria) are placed in the centre of the proposed methodology. Combining analytical techniques allows the analyst to conclude on the falsification of a sample, on its compliance in terms of pharmaceutical quality and finally on the safety for patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Approximate analytical solution for the fractional modified KdV by differential transform method
Kurulay, Muhammet; Bayram, Mustafa
2010-07-01
In this paper, the fractional modified Korteweg-de Vries equation (fmKdV) and fKdV are introduced via fractional derivatives. The approach rests mainly on the two-dimensional differential transform method (DTM), which is one of the approximate methods. The method can easily be applied to many problems and is capable of reducing the size of the computational work. The fractional derivative is described in the Caputo sense. Some illustrative examples are presented.
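The integer-order, one-dimensional special case of the DTM gives a feel for the transform: for y' = y with y(0) = 1, the transform Y(k) = y^(k)(0)/k! satisfies a one-term recurrence. This sketch omits the fractional (Caputo) and two-dimensional machinery the paper actually uses; the function names are assumptions.

```python
import math

def dtm_exponential(n_terms):
    """Differential transform of y' = y, y(0) = 1. With
    Y(k) = y^(k)(0)/k!, the ODE becomes (k+1)*Y(k+1) = Y(k),
    so Y(k) = 1/k!."""
    Y = [0.0] * n_terms
    Y[0] = 1.0  # from the initial condition y(0) = 1
    for k in range(n_terms - 1):
        Y[k + 1] = Y[k] / (k + 1)
    return Y

def dtm_eval(Y, x):
    """Reconstruct y(x) = sum_k Y(k) * x**k from its transform."""
    return sum(c * x**k for k, c in enumerate(Y))
```

With 20 terms the reconstruction at x = 1 reproduces e to near machine precision, illustrating how the recurrence replaces discretization.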
DEFF Research Database (Denmark)
Pedersen, Thomas Quistgaard
In this paper we derive an approximate analytical solution to the optimal consumption and portfolio choice problem of an infinitely-lived investor with power utility defined over the difference between consumption and an external habit. The investor is assumed to have access to two tradable assets: a risk-free asset with constant return and a risky asset with a time-varying premium. We extend the approach proposed by Campbell and Viceira (1999), which builds on log-linearizations of the Euler equation, intertemporal budget constraint, and portfolio return, and introduce an additional component that works as a hedge against changes in the investor's habit level. In an empirical application, we calibrate the model to U.S. data and show that habit formation has significant effects on both the optimal consumption and portfolio choice compared to a standard CRRA specification.
Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations
Castrillon, Julio
2016-03-02
In this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem is remapped onto a corresponding PDE with a fixed deterministic domain. We show that the solution can be analytically extended to a well-defined region in C^N with respect to the random variables. A sparse grid stochastic collocation method is then used to compute the mean and variance of the QoI. Finally, convergence rates for the mean and variance of the QoI are derived and compared to those obtained in numerical experiments.
Energy Technology Data Exchange (ETDEWEB)
Zou, Li [Dalian Univ. of Technology, Dalian City (China). State Key Lab. of Structural Analysis for Industrial Equipment; Liang, Songxin; Li, Yawei [Dalian Univ. of Technology, Dalian City (China). School of Mathematical Sciences; Jeffrey, David J. [Univ. of Western Ontario, London (Canada). Dept. of Applied Mathematics
2017-06-01
Nonlinear boundary value problems arise frequently in physical and mechanical sciences. An effective analytic approach with two parameters is first proposed for solving nonlinear boundary value problems. It is demonstrated that solutions given by the two-parameter method are more accurate than solutions given by the Adomian decomposition method (ADM). It is further demonstrated that solutions given by the ADM can also be recovered from the solutions given by the two-parameter method. The effectiveness of this method is demonstrated by solving some nonlinear boundary value problems modeling beam-type nano-electromechanical systems.
Liu, Jie; Liang, WanZhen
2011-07-07
We present the analytical expression and computer implementation for the second-order energy derivatives of the electronic excited state with respect to the nuclear coordinates in time-dependent density functional theory (TDDFT) with Gaussian atomic orbital basis sets. Here, the Tamm-Dancoff approximation to the full TDDFT is adopted, and therefore the formulation of the TDDFT excited-state Hessian is similar to that of the configuration interaction singles (CIS) Hessian. However, because the Hartree-Fock exchange integrals in CIS are replaced with exchange-correlation kernels in TDDFT, many quantitative changes in the derived equations arise. The replacement also causes additional technical difficulties associated with the calculation of a large number of multiple-order functional derivatives with respect to the density variables and the nuclear coordinates. Numerical tests on a set of test molecules are performed. The excited-state vibrational frequencies simulated by the analytical Hessian approach are compared with those computed by CIS and the finite-difference method. It is found that the analytical Hessian method is superior to the finite-difference method in terms of computational accuracy and efficiency. The numerical differentiation can be difficult due to root flipping for excited states that are close in energy. TDDFT yields more accurate excited-state vibrational frequencies than CIS, which usually overestimates the values.
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
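While EVSI itself requires the metamodel and preposterior steps described above, the simpler EVPI illustrates the PSA-based core of all VOI quantities: the expected gain from resolving all uncertainty. The array layout below is an assumption for this sketch, not the authors' implementation.

```python
import numpy as np

def evpi(nb):
    """Expected value of perfect information from a PSA sample.
    nb: (n_samples, n_strategies) array of net benefits.
    EVPI = E[max_d NB(d)] - max_d E[NB(d)]: the mean payoff of always
    picking the per-sample best strategy, minus the payoff of the
    strategy that is best on average."""
    return np.mean(np.max(nb, axis=1)) - np.max(np.mean(nb, axis=0))
```

EVPI is zero when one strategy dominates in every PSA draw, and grows with the probability and cost of choosing the wrong strategy under current information.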
A top-down approach for approximate data anonymisation
Li, JianQiang; Yang, Ji-Jiang; Zhao, Yu; Liu, Bo
2013-08-01
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be uniquely distinguished from at least k-1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge cutting algorithm is designed to divide the complete graph into multiple trees/components. The large components with size bigger than 2k-1 are subsequently split to guarantee that each resulting component has a vertex number between k and 2k-1. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). The empirical experiments show that our approach yields substantial improvements over the baseline heuristic algorithms, as well as over the bottom-up approach with the same approximation bound O(k). Compared to the baseline bottom-up O(logk)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy makes our approach achieve similar performance in terms of information loss while spending much less computing time. This demonstrates that our approach is the best choice for the k-anonymity problem when both data utility and runtime need to be considered, especially when k is set to a value smaller than 50 and the record set is big enough that runtime must be taken into account.
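The property that the component-splitting algorithm must guarantee is easy to state in code: after generalisation, every quasi-identifier combination must occur at least k times. This checker is a generic sketch with invented record and field names, not the authors' algorithm.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check that every combination of quasi-identifier values occurs
    in at least k records, i.e. no record is distinguishable from
    fewer than k-1 others on the quasi-identifiers."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

# Hypothetical generalised release: ages and ZIP codes partially masked.
recs = [{"age": "2*", "zip": "477**"}, {"age": "2*", "zip": "477**"},
        {"age": "3*", "zip": "478**"}, {"age": "3*", "zip": "478**"}]
```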
Directory of Open Access Journals (Sweden)
Yongliang Wang
2015-01-01
Tilting pad bearings offer unique dynamic stability, enabling successful deployment of high-speed rotating machinery. The model of dynamic stiffness, damping, and added mass coefficients is often used for rotordynamic analyses, but this method does not suffice to describe the dynamic behaviour due to the nonlinear effects of the oil film force under large shaft vibration or vertical rotor conditions. The objective of this paper is to present a nonlinear oil film force model for finite-length tilting pad journal bearings. An approximate analytic oil film force model was established by analysing the dynamic characteristics of the oil film of a single-pad journal bearing using the variable separation method under the dynamic π oil film boundary condition, and an oil film force model of a four-tilting-pad journal bearing was established by using the pad assembly technique and considering the pad tilting angle. The validity of the model was demonstrated by analysing the distribution of oil film pressure and the locus of the journal centre for tilting pad journal bearings, and by comparing it with a model established using the finite difference method.
A transformed analytical model for thermal noise of FinFET based on fringing field approximation
Madhulika Sharma, Savitesh; Dasgupta, S.; Kartikeyant, M. V.
2016-09-01
This paper delineates the effect of the nonplanar structure of FinFETs on noise performance. We demonstrate a thermal noise analytical model derived by taking into account the presence of an additional inverted region in the extended (underlap) S/D region due to the finite gate electrode thickness. The noise investigation includes the effects of source/drain resistances, which become significant as the channel length becomes shorter. In this paper, we evaluate the additional noise caused by the three-dimensional (3-D) structure of a single-fin device and then extend the analysis to multi-fin and multi-finger structures. The addition of the fringe field increases the minimum noise figure and noise resistance by approximately 1 dB and 100 Ω, respectively, and the optimum admittance increases to 5.45 m℧ at 20 GHz for a device operating in the saturation region. Hence, our transformed model plays a significant role in the evaluation of accurate noise performance at circuit level. Project supported in part by the All India Council for Technical Education (AICTE).
Behavior-analytic approaches to decision making.
Fantino, Edmund
2004-06-30
Behavior analysis has much to offer the study of phenomena in the area of judgement and decision making. We review several research areas that should continue to profit from a behavior-analytic approach, including the relative merit of contingency-based and rule-governed instruction of solving algebra and analogy problems, and the role of conditioned reinforcement and the inter-trial interval in a type of Prisoner's Dilemma Game. We focus on two additional areas: (1) the study of base-rate neglect, a notorious reasoning fallacy and (2) the study of the sunk-cost effect, which characterizes ill-conceived investment decisions. In each of these two cases we review studies with humans and pigeons as subjects.
Pazmiño, R. A.; García-Peñalvo, F. J.; Conde, M. Á.
2017-01-01
Bichsel proposes an analytics maturity model used to evaluate progress in the use of academic and learning analytics. There are positive results, but most institutions are below the 80% level; most institutions also scored low for data analytics tools, reporting, and expertise. In addition, a task for the methods of Data Mining and Learning Analytics is to analyze them (precision, accuracy, sensitivity, coherence, fitness measures, cosine, confidence, lift, similarity weight...
Analytical and Numerical Approaches to Mathematical Relativity
Energy Technology Data Exchange (ETDEWEB)
Stewart, John M [Department of Applied Mathematics and Theoretical Physics-CMS, Wilberforce Road, Cambridge CB3 0BA (United Kingdom)
2007-08-07
The 319th Wilhelm-and-Else-Heraeus Seminar 'Mathematical Relativity: New Ideas and Developments' took place in March 2004. Twelve of the invited speakers have expanded their one hour talks into the papers appearing in this volume, preceded by a foreword by Roger Penrose. The first group consists of four papers on 'differential geometry and differential topology'. Paul Ehrlich opens with a very witty review of global Lorentzian geometry, which caused this reviewer to think more carefully about how he uses the adjective 'generic'. Robert Low addresses the issue of causality with a description of the 'space of null geodesics' and a tentative proposal for a new definition of causal boundary. The underlying review of global Lorentzian geometry is continued by Antonio Masiello, looking at variational approaches (actually valid for more general semi-Riemannian manifolds). This group concludes with a very clear review of pp-wave spacetimes from Jose Flores and Miguel Sanchez. (This reviewer was delighted to see a reproduction of Roger Penrose's seminal (1965) picture of null geodesics in plane wave spacetimes which attracted him into the subject.) Robert Beig opens the second group 'analytic methods and differential equations' with a brief but careful discussion of symmetric (regular) hyperbolicity for first (second) order systems, respectively, of partial differential equations. His description is peppered with examples, many specific to relativistic continuum mechanics. There follows a succinct review of linear elliptic boundary value problems with applications to general relativity from Sergio Dain. The numerous examples he provides are thought-provoking. The 'standard cosmological model' has been well understood for three quarters of a century. However recent observations suggest that the expansion in our Universe may be accelerating. Alan Rendall provides a careful discussion of the changes, both
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha
2013-02-01
This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then among these rules describe all rules with maximum coverage; we can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
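The uncertainty measure R(T) depends only on the multiset of decisions in the (sub)table, so it can be computed from decision-class counts rather than by enumerating pairs. A small sketch, with the function name assumed:

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows with different decisions.
    Computed as total pairs minus same-decision pairs, using the size
    of each decision class."""
    n = len(decisions)
    counts = Counter(decisions)
    same = sum(c * (c - 1) // 2 for c in counts.values())
    return n * (n - 1) // 2 - same
```

R(T) = 0 exactly when all rows share one decision, which is why partitioning stops once the subtable's uncertainty falls to at most β.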
Rankin, Blake M; Ben-Amotz, Dor; Widom, B
2015-09-14
Molecular processes, ranging from hydrophobic aggregation and protein binding to mesoscopic self-assembly, are typically driven by a delicate balance of energetic and entropic non-covalent interactions. Here, we focus on a broad class of such processes in which multiple ligands bind to a central solute molecule as a result of solute-ligand (direct) and/or ligand-ligand (cooperative) interaction energies. Previously, we described a weighted random mixing (WRM) mean-field model for such processes and compared the resulting adsorption isotherms and aggregate size distributions with exact finite lattice (FL) predictions, for lattices with up to n = 20 binding sites. Here, we compare FL predictions obtained using both Bethe-Guggenheim (BG) and WRM approximations, and find that the latter two approximations are complementary, as they are each most accurate in different aggregation regimes. Moreover, we describe a computationally efficient method for exhaustively counting nearest neighbors in FL configurations, thus making it feasible to obtain FL predictions for systems with up to n = 48 binding sites, whose properties approach the thermodynamic (infinite lattice) limit. We further illustrate the applicability of our results by comparing lattice model and molecular dynamics simulation predictions pertaining to the aggregation of methane around neopentane.
International Nuclear Information System (INIS)
Rekab, S.; Zenine, N.
2006-01-01
We consider the three-dimensional non-relativistic eigenvalue problem in the case of a Coulomb potential plus linear and quadratic radial terms. In the framework of Rayleigh-Schrödinger perturbation theory, using a specific choice of the unperturbed Hamiltonian, we obtain approximate analytic expressions for the eigenvalues of orbital excitations. The implications and the range of validity of the obtained analytic expressions are discussed.
Directory of Open Access Journals (Sweden)
Birol İbiş
2014-12-01
The purpose of this paper was to obtain the analytical approximate solution of the time-fractional Fornberg–Whitham equation involving Jumarie's modified Riemann–Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides the solution in the form of a convergent series with easily calculable terms. The obtained approximate solutions are compared with the exact or existing numerical results in the literature to verify the applicability, efficiency and accuracy of the method.
Approximated and User Steerable tSNE for Progressive Visual Analytics
Pezzotti, N.; Lelieveldt, B.P.F.; van der Maaten, L.J.P.; Hollt, T.; Eisemann, E.; Vilanova Bartroli, A.
2016-01-01
Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and
Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur
2018-03-01
Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it also allows for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced through a numerical solution using a statistical model. This error-correcting process models numerical errors that accumulate forward in time and the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
Earth's core convection: Boussinesq approximation or incompressible approach?
Czech Academy of Sciences Publication Activity Database
Anufriev, A. P.; Hejda, Pavel
2010-01-01
Roč. 104, č. 1 (2010), s. 65-83 ISSN 0309-1929 R&D Projects: GA AV ČR IAA300120704 Grant - others:INTAS(XE) 03-51-5807 Institutional research plan: CEZ:AV0Z30120515 Keywords : geodynamic models * core convection * Boussinesq approximation Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.831, year: 2010
Education: Holistic Approach Urged for Teaching Analytical Chemistry.
Chemical and Engineering News, 1983
1983-01-01
Recommends teaching analytical chemistry using an approach that emphasizes the problem as well as the sample. This problem-solving approach would complement and not replace the study of fundamental and applied aspects of chemical determinations. Also considers four components of analytical chemistry: analysis, research, development, and education.…
Analytic approximation to the largest eigenvalue distribution of a white Wishart matrix
CSIR Research Space (South Africa)
Vlok, JD
2012-08-14
The approximation offers largely simplified computation and provides statistics such as the mean value and region of support of the largest eigenvalue distribution. Numeric results from the literature are compared with the approximation and Monte Carlo simulation results...
Directory of Open Access Journals (Sweden)
A. Beléndez
2012-01-01
Accurate approximate closed-form solutions for the cubic-quintic Duffing oscillator are obtained in terms of elementary functions. To do this, we use the previous results obtained using a cubication method in which the restoring force is expanded in Chebyshev polynomials and the original nonlinear differential equation is approximated by a cubic Duffing equation. Explicit approximate solutions are then expressed as a function of the complete elliptic integral of the first kind and the Jacobi elliptic function cn. Then we obtain other approximate expressions for these solutions, which are expressed in terms of elementary functions. To do this, the relationship between the complete elliptic integral of the first kind and the arithmetic-geometric mean is used and the rational harmonic balance method is applied to obtain the periodic solution of the original nonlinear oscillator.
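The relationship the abstract exploits between the complete elliptic integral of the first kind and the arithmetic-geometric mean is the standard identity K(k) = π / (2·AGM(1, √(1−k²))), which can be evaluated directly (an illustrative sketch of the identity only, not the authors' cubication procedure; function names are ours):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of positive a and b: iterate the
    arithmetic and geometric means until they coincide."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind (modulus k),
    via the identity K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

print(ellipk(0.0))          # -> pi/2 = 1.5707963...
print(round(ellipk(0.5), 6))
```

The AGM iteration converges quadratically, which is why this route yields fast, elementary-function approximations of the elliptic-integral solutions.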
Bessel collocation approach for approximate solutions of Hantavirus infection model
Directory of Open Access Journals (Sweden)
Suayip Yuzbasi
2017-11-01
In this study, a collocation method is introduced to find approximate solutions of the Hantavirus infection model, which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations and collocation points. This method converts the Hantavirus infection model into a matrix equation in terms of the Bessel functions of the first kind, matrix operations and collocation points. The matrix equation corresponds to a system of nonlinear equations with the unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications, and all numerical calculations have been done using a program written in Maple.
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan
2011-05-14
We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude for the time step over the entire simulation.
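The kind of error-controlled time-step adaptivity described above can be sketched generically with a step-doubling estimator (a textbook-style controller on a toy ODE, not the authors' estimator or their shallow-water discretization; all names and tolerances are illustrative):

```python
def adaptive_step(f, t, y, dt, tol):
    """One attempted step: compare a full forward-Euler step with two
    half steps; accept when their difference is within tol, and rescale
    dt toward the error target either way."""
    full = y + dt * f(t, y)
    half = y + (dt / 2) * f(t, y)
    two_half = half + (dt / 2) * f(t + dt / 2, half)
    err = abs(two_half - full)
    # controller: dt_new ~ dt * (tol/err)^(1/2) for a first-order
    # method, damped by 0.9 and clamped for robustness
    factor = 0.9 * (tol / err) ** 0.5 if err > 0 else 2.0
    dt_new = dt * min(4.0, max(0.25, factor))
    if err <= tol:
        return t + dt, two_half, dt_new, True    # step accepted
    return t, y, dt_new, False                   # step rejected, retry

# drive y' = -y from y(0) = 1 to t = 1; exact answer is exp(-1)
t, y, dt = 0.0, 1.0, 0.5
while t < 1.0:
    dt = min(dt, 1.0 - t)
    t, y, dt, accepted = adaptive_step(lambda s, u: -u, t, y, dt, 1e-5)
print(y)  # close to exp(-1) = 0.36788
```

The clamp on the rescaling factor is what permits the large but controlled step-size swings the abstract reports (several orders of magnitude over a simulation).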
Phase diagram of charged dumbbells: a random phase approximation approach.
Kudlay, Alexander; Ermoshkin, Alexander V; de la Cruz, Monica Olvera
2004-08-01
The phase diagram of the charged hard dumbbell system (hard spheres of opposite unit charge fixed at contact) is obtained with the use of the random phase approximation (RPA). The effect of the impenetrability of charged spheres on charge-charge fluctuations is described by introduction of a modified electrostatic potential. The correlations of ions in a pair are included via a correlation function in the RPA. The coexistence curve is in good agreement with Monte Carlo simulations. The relevance of the theory to the restricted primitive model is discussed.
Barlow, Nathaniel S; Schultz, Andrew J; Weinstein, Steven J; Kofke, David A
2015-08-21
The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.
International Nuclear Information System (INIS)
Kazarnovskij, M.V.; Matushko, G.K.; Matushko, V.L.; Par'ev, Eh.Ya.; Serezhnikov, S.V.
1981-01-01
The problem of the propagation of the internuclear cascade initiated by nucleons of 0.1-1 GeV energy in accelerator shielding is solved approximately in analytical form. Analytical expressions for the function of spatial, angular and energy distribution of the flux density of nucleons with energy above 20 MeV, and some functionals of it, are obtained. The results of the calculations obtained by the developed methods are compared with calculations obtained by the method of direct simulation. It is shown that at the atomic mass of shielding material
DEFF Research Database (Denmark)
Kimiaeifar, Amin; Lund, Erik; Thomsen, Ole Thybo
2010-01-01
In this work, an analytical method referred to as the parameter-expansion method is used to obtain the exact solution for the problem of nonlinear vibrations of an inextensible beam. It is shown that one term in the series expansion is sufficient to obtain a highly accurate solution, which
Analytical approximations of diving-wave imaging in constant-gradient medium
Stovas, Alexey
2014-06-24
Full-waveform inversion (FWI) in practical applications is currently used to invert the direct arrivals (diving waves, no reflections) using relatively long offsets. This is driven mainly by the high nonlinearity introduced to the inversion problem when reflection data are included, which in some cases requires extremely low frequencies for convergence. However, analytical insights into diving waves have lagged behind this sudden interest. We use analytical formulas that describe the diving wave's behavior and traveltime in a constant-gradient medium to develop insights into the traveltime moveout of diving waves and the image (model) point dispersal (residual) when the wrong velocity is used. The explicit formulations that describe these phenomena reveal the high dependence of diving-wave imaging on the gradient and the initial velocity. The analytical image-point residual equation can further be used to scan for the best-fit linear velocity model, which is now commonly used as an initial velocity model for FWI. We determined the accuracy and versatility of these analytical formulas through numerical tests.
Kuznetsov, G. N.; Stepanov, A. N.
2017-11-01
We obtain, and compare with exact solutions, the approximate analytic relations that determine, for increasing distance, irregularities of attenuation in the regular sound pressure components and orthogonal projections of the oscillation velocity vectors of low-frequency signals formed in a waveguide by various multipoles. We show that the mentioned field characteristics essentially depend on the type of multipole, the distance between the source and receivers, and the specific features of the received scalar or vector field components. It is established that the approximating dependences agree well with the exact laws of attenuation in the field and, despite the variety of dependences, they are divided into three compact groups with uniform characteristics.
Płociniczak, Łukasz
2015-07-01
In this paper we investigate the porous medium equation with a time-fractional derivative. We justify that the resulting equation emerges when we consider a waiting-time (or trapping) phenomenon that can have its place in the medium. Our deterministic derivation is dual to the stochastic CTRW framework and can include nonlinear effects. With the use of the previously developed method we approximate the investigated equation along with constant-flux boundary conditions and obtain a very accurate solution. Moreover, we generalise the approximation method and provide explicit formulas which can be readily used in applications. The subdiffusive anomalies in some porous media such as construction materials have recently been verified by experiment. Our simple approximate solution of the time-fractional porous medium equation accurately fits sample data from one of these experiments.
An analytical approximation for the prediction of transients with temperature feedback
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Martinez, Aquilino S.
2010-01-01
In the present paper a new analytical solution for the point kinetics equation system with temperature feedback is presented. This solution is based on the expansion of the neutron density in terms of the generation time of prompt neutrons (Nahla, 2009) and presents the advantage of being explicit in time and having a simple functional form in comparison with other existing formulations in supercritical transients. (orig.)
An analytical approximation for the prediction of transients with temperature feedback
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A.P. [Instituto Federal do Rio de Janeiro (IFRJ), RJ (Brazil); Martinez, Aquilino S. [COPPE/UFRJ, RJ (Brazil). Programa de Engenharia Nuclear
2010-05-15
In the present paper a new analytical solution for the point kinetics equation system with temperature feedback is presented. This solution is based on the expansion of the neutron density in terms of the generation time of prompt neutrons (Nahla, 2009) and presents the advantage of being explicit in time and having a simple functional form in comparison with other existing formulations in supercritical transients. (orig.)
H. Saberi-Nik; S. Effati; R. Buzhabadi
2010-01-01
In this paper, we give an analytical approximate solution for an integro-differential equation that describes charged-particle motion in certain configurations of oscillating magnetic fields. The homotopy analysis method (HAM) is used for solving this equation. Several examples are given to reconfirm the efficiency of these algorithms. The results of applying this procedure to the integro-differential equation with time-periodic coefficients show the high accuracy, simplicity and efficiency of this method.
Directory of Open Access Journals (Sweden)
H. Saberi-Nik
2013-07-01
In this paper, we give an analytical approximate solution for an integro-differential equation that describes charged-particle motion in certain configurations of oscillating magnetic fields. The homotopy analysis method (HAM) is used for solving this equation. Several examples are given to reconfirm the efficiency of these algorithms. The results of applying this procedure to the integro-differential equation with time-periodic coefficients show the high accuracy, simplicity and efficiency of this method.
Analytical Approximation Methods for the Stabilizing Solution of the Hamilton-Jacobi Equation
Sakamoto, Noboru; van der Schaft, Arjan J.
2008-01-01
In this paper, two methods for approximating the stabilizing solution of the Hamilton-Jacobi equation are proposed using symplectic geometry and a Hamiltonian perturbation technique as well as stable manifold theory. The first method uses the fact that the Hamiltonian lifted system of an integrable
Analytical Approximation Methods for the Stabilizing Solution of the Hamilton–Jacobi Equation
Sakamoto, Noboru; Schaft, Arjan J. van der
2008-01-01
In this paper, two methods for approximating the stabilizing solution of the Hamilton–Jacobi equation are proposed using symplectic geometry and a Hamiltonian perturbation technique as well as stable manifold theory. The first method uses the fact that the Hamiltonian lifted system of an integrable
A simple analytic approximation to the Rayleigh-Bénard stability threshold
Prosperetti, Andrea
2011-01-01
The Rayleigh-Bénard linear stability problem is solved by means of a Fourier series expansion. It is found that truncating the series to just the first term gives an excellent explicit approximation to the marginal stability relation between the Rayleigh number and the wave number of the
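For the stress-free variant of the problem discussed above, the marginal-stability relation between the Rayleigh number and the wave number is the classical closed form Ra(a) = (π² + a²)³ / a², and a quick scan reproduces the well-known threshold (this illustrates the type of explicit relation at issue, not Prosperetti's rigid-boundary approximation):

```python
import math

def marginal_rayleigh(a):
    """Classical marginal-stability relation for stress-free
    Rayleigh-Benard convection: Ra(a) = (pi^2 + a^2)^3 / a^2."""
    return (math.pi ** 2 + a ** 2) ** 3 / a ** 2

# scan wave numbers to locate the critical (minimum) Rayleigh number
wavenumbers = [i * 1e-4 for i in range(1, 100001)]   # 0.0001 .. 10
a_c = min(wavenumbers, key=marginal_rayleigh)
ra_c = marginal_rayleigh(a_c)
print(round(a_c, 3), round(ra_c, 2))  # a_c = pi/sqrt(2) ~ 2.221, Ra_c = 27*pi^4/4 ~ 657.51
```

The rigid-boundary threshold (Ra_c ≈ 1708) has no such elementary closed form, which is what makes one-term truncations like the one in the abstract useful.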
Analytical approximations for prices of swap rate dependent embedded options in insurance products
Plat, R.; Pelsser, A.
2009-01-01
Life insurance products have profit sharing features in combination with guarantees. These so-called embedded options are often dependent on or approximated by forward swap rates. In practice, these kinds of options are mostly valued by Monte Carlo simulations. However, for risk management
Forecasting Hotspots-A Predictive Analytics Approach.
Maciejewski, R; Hafen, R; Rudolph, S; Larew, S G; Mitchell, M A; Cleveland, W S; Ebert, D S
2011-04-01
Current visual analytics systems provide users with the means to explore trends in their data. Linked views and interactive displays provide insight into correlations among people, events, and places in space and time. Analysts search for events of interest through statistical tools linked to visual displays, drill down into the data, and form hypotheses based upon the available information. However, current systems stop short of predicting events. In spatiotemporal data, analysts are searching for regions of space and time with unusually high incidences of events (hotspots). In the cases where hotspots are found, analysts would like to predict how these regions may grow in order to plan resource allocation and preventative measures. Furthermore, analysts would also like to predict where future hotspots may occur. To facilitate such forecasting, we have created a predictive visual analytics toolkit that provides analysts with linked spatiotemporal and statistical analytic views. Our system models spatiotemporal events through the combination of kernel density estimation for event distribution and seasonal trend decomposition by loess smoothing for temporal predictions. We provide analysts with estimates of error in our modeling, along with spatial and temporal alerts to indicate the occurrence of statistically significant hotspots. Spatial data are distributed based on a modeling of previous event locations, thereby maintaining a temporal coherence with past events. Such tools allow analysts to perform real-time hypothesis testing, plan intervention strategies, and allocate resources to correspond to perceived threats.
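The spatial half of the pipeline described above — kernel density estimation over event locations, with the densest cell flagged as a hotspot candidate — reduces to a few lines (an illustrative sketch of the KDE step only; the toy events, grid, and bandwidth are made up, and the loess-based temporal modeling is omitted):

```python
import math

def kde_grid(events, grid, bandwidth):
    """Gaussian kernel density estimate of 2-D event locations,
    evaluated at each grid point."""
    norm = 1.0 / (2 * math.pi * bandwidth ** 2 * len(events))
    densities = []
    for gx, gy in grid:
        s = sum(math.exp(-((gx - ex) ** 2 + (gy - ey) ** 2)
                         / (2 * bandwidth ** 2))
                for ex, ey in events)
        densities.append(norm * s)
    return densities

# toy data: three events clustered near (0, 0), one isolated at (5, 5)
events = [(0.1, 0.0), (-0.2, 0.1), (0.0, -0.1), (5.0, 5.0)]
grid = [(0.0, 0.0), (5.0, 5.0), (2.5, 2.5)]
density = kde_grid(events, grid, bandwidth=0.5)

# the cluster cell dominates -> hotspot candidate at grid index 0
print(max(range(len(grid)), key=lambda i: density[i]))  # -> 0
```

A production system would add statistical-significance testing of candidate cells, as the abstract notes, rather than thresholding raw density.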
Directory of Open Access Journals (Sweden)
G. H. Gudmundsson
2008-07-01
New analytical solutions describing the effects of small-amplitude perturbations in boundary data on flow in the shallow-ice-stream approximation are presented. These solutions are valid for a non-linear Weertman-type sliding law and for Newtonian ice rheology. Comparison is made with corresponding solutions of the shallow-ice-sheet approximation, and with solutions of the full Stokes equations. The shallow-ice-stream approximation is commonly used to describe large-scale ice stream flow over a weak bed, while the shallow-ice-sheet approximation forms the basis of most current large-scale ice sheet models. It is found that the shallow-ice-stream approximation overestimates the effects of bed topography perturbations on surface profile for wavelengths less than about 5 to 10 ice thicknesses, the exact number depending on values of surface slope and slip ratio. For high slip ratios, the shallow-ice-stream approximation gives a very simple description of the relationship between bed and surface topography, with the corresponding transfer amplitudes being close to unity for any given wavelength. The shallow-ice-stream estimates for the timescales that govern the transient response of ice streams to external perturbations are considerably more accurate than those based on the shallow-ice-sheet approximation. In particular, in contrast to the shallow-ice-sheet approximation, the shallow-ice-stream approximation correctly reproduces the short-wavelength limit of the kinematic phase speed given by solving a linearised version of the full Stokes system. In accordance with the full Stokes solutions, the shallow-ice-sheet approximation predicts surface fields to react weakly to spatial variations in basal slipperiness with wavelengths less than about 10 to 20 ice thicknesses.
Directory of Open Access Journals (Sweden)
Mohammad Mehdi Rashidi
2008-01-01
The flow of a viscous incompressible fluid between two parallel plates due to the normal motion of the plates is investigated. The unsteady Navier-Stokes equations are reduced to a nonlinear fourth-order differential equation by using similarity solutions. The homotopy analysis method (HAM) is used to solve this nonlinear equation analytically. The convergence of the obtained series solution is carefully analyzed. The validity of our solutions is verified by the numerical results obtained by a fourth-order Runge-Kutta method.
Directory of Open Access Journals (Sweden)
Hua Yang
2012-01-01
We are concerned with the stochastic differential delay equations with Poisson jump and Markovian switching (SDDEsPJMSs). Most SDDEsPJMSs cannot be solved explicitly as stochastic differential equations. Therefore, numerical solutions have become an important issue in the study of SDDEsPJMSs. The key contribution of this paper is to investigate the strong convergence between the true solutions and the numerical solutions to SDDEsPJMSs when the drift and diffusion coefficients are Taylor approximations.
A semi-analytic approximation of charge induction in monolithic pixelated CdZnTe radiation detectors
International Nuclear Information System (INIS)
Bale, Derek S.
2010-01-01
A semi-analytic approximation to the weighting potential within monolithic pixelated CdZnTe radiation detectors is presented. The approximation is based on solving the multi-dimensional Laplace equation that results upon replacing rectangular pixels with equal-area circular pixels. Further, we utilize the simplicity of the resulting approximate weighting potential to extend the well-known Hecht equation, describing charge induction in a parallel-plate detector, to an approximation of the multi-dimensional charge induction within a pixelated detector. These newly found expressions for the weighting potential and charge induction in a pixelated detector are compared throughout to full 3D electrostatic and Monte Carlo simulations using eVDSIM (eV Microelectronics Device SIMulator). The semi-analytic expressions derived in this paper can be evaluated quickly, and can therefore be used to efficiently reduce the size and dimensionality of the parameter space on which a detailed 3D numerical analysis is needed for pixelated detector design in a wide range of applications.
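The parallel-plate baseline that the paper extends, the single-carrier Hecht relation η = (λ/L)(1 − e^(−d/λ)) with mean drift length λ = μτE, is straightforward to evaluate (only the classical planar formula is sketched here, with illustrative CdZnTe-like parameter values; the pixel-geometry extension itself is not reproduced):

```python
import math

def hecht_efficiency(drift_distance, thickness, mu_tau, field):
    """Single-carrier Hecht charge collection efficiency for a
    parallel-plate detector: eta = (lam/L) * (1 - exp(-d/lam)),
    where lam = mu*tau*E is the carrier mean drift length."""
    lam = mu_tau * field
    return (lam / thickness) * (1.0 - math.exp(-drift_distance / lam))

# illustrative numbers: L = 1 cm, E = 1000 V/cm,
# electron mu*tau = 1e-2 cm^2/V  ->  lam = 10 cm
eta = hecht_efficiency(drift_distance=1.0, thickness=1.0,
                       mu_tau=1e-2, field=1000.0)
print(round(eta, 3))  # -> 0.952
```

In the pixelated case the same induction integral is taken against the (non-linear) small-pixel weighting potential instead of the planar one, which is the extension the paper derives.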
Transition Studies: Basic Ideas and Analytical Approaches
Grin, J.; Brauch, H.G.; Oswald Spring, Ú.; Grin, J.; Scheffran, J.
2016-01-01
As a background to later contributions, this chapter provides a concise introduction to different approaches to (i) understanding and (ii) shaping transition dynamics: (1) A sociotechnical approach, with the multilevel perspective as its main concept, and strategic niche management as its governance
Teaching Analytical Chemistry to Pharmacy Students: A Combined, Iterative Approach
Masania, Jinit; Grootveld, Martin; Wilson, Philippe B.
2018-01-01
Analytical chemistry has often been a difficult subject to teach in a classroom or lecture-based context. Numerous strategies for overcoming the inherently practical-based difficulties have been suggested, each with differing pedagogical theories. Here, we present a combined approach to tackling the problem of teaching analytical chemistry, with…
Semiconductor quantum wells with BenDaniel–Duke boundary conditions: approximate analytical results
International Nuclear Information System (INIS)
Barsan, Victor; Ciornei, Mihaela-Cristina
2017-01-01
The Schrödinger equation for a particle moving in a square well potential with BenDaniel–Duke boundary conditions is solved. Using algebraic approximations for trigonometric functions, the transcendental equations of the bound states energy are transformed into tractable, algebraic equations. For the ground state and the first excited state, they are cubic equations; we obtain simple formulas for their physically interesting roots. The case of higher excited states is also analysed. Our results have direct applications in the physics of type I and type II semiconductor heterostructures. (paper)
Semiconductor quantum wells with BenDaniel-Duke boundary conditions: approximate analytical results
Barsan, Victor; Ciornei, Mihaela-Cristina
2017-01-01
The Schrödinger equation for a particle moving in a square well potential with BenDaniel-Duke boundary conditions is solved. Using algebraic approximations for trigonometric functions, the transcendental equations of the bound states energy are transformed into tractable, algebraic equations. For the ground state and the first excited state, they are cubic equations; we obtain simple formulas for their physically interesting roots. The case of higher excited states is also analysed. Our results have direct applications in the physics of type I and type II semiconductor heterostructures.
Spark - a modern approach for distributed analytics
CERN. Geneva; Kothuri, Prasanth
2016-01-01
The Hadoop ecosystem is the leading open-source platform for distributed storage and processing of big data. It is a very popular system for implementing data warehouses and data lakes. Spark has also emerged as one of the leading engines for data analytics. The Hadoop platform is available at CERN as a central service provided by the IT department. By attending the session, a participant will acquire knowledge of the essential concepts needed to benefit from the parallel data processing offered by the Spark framework. The session is structured around practical examples and tutorials. Main topics: architecture overview (work distribution, concepts of a worker and a driver); computing concepts of transformations and actions; data processing APIs - RDD, DataFrame, and SparkSQL.
SU-F-T-144: Analytical Closed Form Approximation for Carbon Ion Bragg Curves in Water
Energy Technology Data Exchange (ETDEWEB)
Tuomanen, S; Moskvin, V; Farr, J [St. Jude Children’s Research Hospital, Memphis, TN (United States)
2016-06-15
Purpose: Semi-empirical modeling is a powerful computational method in radiation dosimetry. A set of approximations exists for the proton depth dose distribution (DDD) in water. However, the modeling is more complicated for carbon ions due to fragmentation. This study addresses this by providing and evaluating a new methodology for DDD modeling of carbon ions in water. Methods: The FLUKA Monte Carlo (MC) general-purpose transport code was used to simulate carbon DDDs for energies of 100-400 MeV in water as reference data for model benchmarking. Taking Thomas Bortfeld's closed-form equation approximating proton Bragg curves as a basis, we derived the critical constants for a beam of carbon ions by applying the radiation transport models of Lee et al. and Geiger to our simulated carbon curves. We hypothesized that including a new exponential residual-distance parameter (κ) in Bortfeld's fluence reduction relation would improve DDD modeling for carbon ions. We introduce an additional term, added to Bortfeld's equation, to describe the fragmentation tail. This term accounts for the pre-peak dose from nuclear fragments (NF). In the post-peak region, the NF transport is treated as new beams, utilizing the Glauber model for interaction cross sections and the abrasion-ablation fragmentation model. Results: The carbon-beam-specific constants in the developed model were determined to be p = 1.75, β = 0.008 cm-1, γ = 0.6, α = 0.0007 cm MeV, σmono = 0.08, and the new exponential parameter κ = 0.55. This produced a close match for the plateau part of the curve (maximum deviation 6.37%). Conclusion: The derived semi-empirical model provides an accurate approximation of the MC-simulated clinical carbon DDDs. This is the first direct semi-empirical simulation for the dosimetry of therapeutic carbon ions. The accurate modeling of the NF tail in the carbon DDD will provide key insight into distal edge dose deposition formation.
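The power-law range-energy relation underlying Bortfeld-style models, R₀ = αEᵖ, combined with the constants quoted in the abstract (α = 0.0007, p = 1.75), gives the Bragg peak depth directly (a sketch only; reading the energy as MeV per nucleon and α in cm/MeVᵖ are our assumptions about the quoted units, and the full depth-dose expression is not reproduced):

```python
def bragg_peak_depth(energy_mev_u, alpha=0.0007, p=1.75):
    """Range in water (cm) from the power-law range-energy relation
    R0 = alpha * E^p, using the carbon constants quoted above."""
    return alpha * energy_mev_u ** p

for e in (100, 200, 400):
    print(e, "MeV/u ->", round(bragg_peak_depth(e), 2), "cm")
```

These depths land in the few-to-tens-of-centimeters span relevant to the 100-400 MeV simulations the abstract benchmarks against.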
Andrei, R.M.; Smith, C.S.; Fraanje, P.R.; Verhaegen, M.; Korkiakoski, V.A.; Keller, C.U.; Doelman, N.J.
2012-01-01
In this paper we give a new wavefront estimation technique that overcomes the main disadvantages of the phase diversity (PD) algorithms, namely the large computational complexity and the fact that the solutions can get stuck in a local minima. Our approach gives a good starting point for an
Pedoinformatics Approach to Soil Text Analytics
Furey, J.; Seiter, J.; Davis, A.
2017-12-01
The several extant schemas for the classification of soils rely on differing criteria, but the major soil science taxonomies, including the United States Department of Agriculture (USDA) and the international harmonized World Reference Base for Soil Resources systems, are based principally on inferred pedogenic properties. These taxonomies largely result from compiled individual observations of soil morphologies within soil profiles, and the vast majority of this pedologic information is contained in qualitative text descriptions. We present text mining analyses of hundreds of gigabytes of parsed text and other data in the digitally available USDA soil taxonomy documentation, the Soil Survey Geographic (SSURGO) database, and the National Cooperative Soil Survey (NCSS) soil characterization database. These analyses implemented IPython calls to Gensim modules for topic modelling, with latent semantic indexing completed down to paragraphs at the lowest taxon level (soil series). Via a custom extension of the Natural Language Toolkit (NLTK), approximately one percent of the USDA soil series descriptions were used to train a classifier for the remainder of the documents, essentially by treating soil science words as comprising a novel language. While location-specific descriptors at the soil series level are amenable to geomatics methods, unsupervised clustering of the occurrence of other soil science words did not closely follow the usual hierarchy of soil taxa. We present preliminary phrasal analyses that may account for some of these effects.
An extended analytical approach for diffuse optical imaging.
Erkol, H; Nouizi, F; Unlu, M B; Gulsen, G
2015-07-07
In this work, we introduce an analytical method to solve the diffusion equation in a cylindrical geometry. This method is based on an integral approach to derive the Green's function for specific boundary conditions. Using our approach, we obtain comprehensive analytical solutions with the Robin boundary condition for diffuse optical imaging in both two and three dimensions. The solutions are expressed in terms of the optical properties of tissue and the amplitude and position of the light source. Our method not only works well inside the tissue but provides very accurate results near the tissue boundaries as well. The results obtained by our method are first compared with those obtained by a conventional analytical method then validated using numerical simulations. Our new analytical method allows not only implementation of any boundary condition for a specific problem but also fast simulation of light propagation making it very suitable for iterative image reconstruction algorithms.
International Nuclear Information System (INIS)
Doroshenko, A.Yu.; Tarasko, M.Z.; Piksaikin, V.M.
2002-01-01
The energy spectrum of the delayed neutrons is the most poorly known of all the input data required in the calculation of the effective delayed neutron fractions. In addition to delayed neutron spectra based on aggregate spectrum measurements, there are two different approaches for deriving the delayed neutron energy spectra. Both of them are based on data for the delayed neutron spectra of individual delayed-neutron precursors. In the present work these two data sets were compared with the help of an approximation by a gamma function. The choice of this approximation function, instead of a Maxwellian or evaporation-type distribution, is substantiated. (author)
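One reason a gamma function is a natural fitting form for such spectra is that both the Maxwellian and the evaporation distributions are special cases of the gamma family (shape parameter k = 3/2 and k = 2, respectively). A short sketch; the temperature parameter here is an arbitrary illustrative value, not a fitted one:

```python
import math

def gamma_spectrum(E, k, theta):
    """Gamma-family spectrum f(E) = E^(k-1) exp(-E/theta) / (Gamma(k) theta^k)."""
    return E**(k - 1) * math.exp(-E / theta) / (math.gamma(k) * theta**k)

T = 0.45  # hypothetical temperature parameter, MeV
E = 0.5   # evaluation energy, MeV

# Maxwellian spectrum ~ sqrt(E) exp(-E/T), normalized on [0, inf)
maxwellian = (2 / math.sqrt(math.pi)) * math.sqrt(E) / T**1.5 * math.exp(-E / T)
# Evaporation spectrum ~ E exp(-E/T), normalized on [0, inf)
evaporation = (E / T**2) * math.exp(-E / T)

# Both are recovered as gamma spectra with k = 3/2 and k = 2:
assert abs(gamma_spectrum(E, 1.5, T) - maxwellian) < 1e-12
assert abs(gamma_spectrum(E, 2.0, T) - evaporation) < 1e-12
```

Fitting the shape parameter k then interpolates smoothly between (and beyond) the two classical forms, which is one way to motivate the gamma-function choice.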
A new embedded-atom method approach based on the pth moment approximation
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-01
Large-scale atomistic simulations with suitable interatomic potentials are widely employed by scientists and engineers in many areas, so the quick generation of high-quality interatomic potentials is urgently needed. This largely relies on developments in potential construction methods and algorithms. Many interatomic potential models have been proposed and parameterized with various methods, such as analytic methods, the force-matching approach and multi-object optimization, in order to make the potentials more transferable. Without appreciably lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the number of FPs helps in understanding the underlying physics of simulated systems and in improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term, based on the pth moment approximation to tight-binding theory and the general transformation invariance of EAM potentials, and an energy modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials for aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing precision, our aluminum potential has fewer potential parameters and a smaller cutoff distance than some commonly used aluminum potentials. This is because several physical quantities, which usually serve as target quantities to match in other potentials, appear to depend uniquely on quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
39 (APPROXIMATE ANALYTICAL SOLUTION)
African Journals Online (AJOL)
Rotating machines like motors, turbines, compressors etc. are generally subjected to periodic forces and the system parameters remain more or less constant. ... parameters change and, consequently, the natural frequencies too, due to reasons of changing gyroscopic moments, centrifugal forces, bearing characteristics,.
Towards a set Theoretical Approach to Big Data Analytics
DEFF Research Database (Denmark)
Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi
2014-01-01
Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal...... the Social Data Analytics Tool (SODATO) that realizes the conceptual model in software and provisions social data analysis based on the conceptual and formal models. Fourth and last, based on the formal model and sentiment analysis of text, we present a method for profiling of artifacts and actors and apply...
Kim, Jaehoon; Jung, Yousung
2015-01-13
We present a systematic derivation of double-hybrid density functionals (DHDFs) based on the polynomial series expansion of the adiabatic connection formula in the closed interval λ = [0,1] without loss of generality. Because Wλ tends to have a small (but not negligible) curvature at equilibrium, we first evaluate the chemical validity of a quadratic approximation for Wλ using the large GMTKN30 benchmark database. The resulting functional, obtained analytically and denoted quadratic adiabatic connection functional-PT2 (QACF-2), is found to be robust and accurate (2.35 kcal/mol weighted total mean absolute deviation, WTMAD), comparable to or slightly improved over other flavors of existing parameter-free DHDFs (2.45 or 3.29 kcal/mol for PBE0-2 or PBE0-DH, respectively). The nonlocal expansion coefficients obtained for the current QACF-2 (aHF = 2/3, aPT2 = 1/3) also offer an interesting observation, in that these analytical coefficients are very similar to the empirically optimized coefficients in some of today's most accurate DHDFs (1.5 kcal/mol). The effects of the quadratic truncation in QACF-2 have been further assessed and justified by estimating the higher-order corrections to be as much as 0.54 kcal/mol. The present derivation and numerical experiments suggest that the quadratic λ dependence, despite its simplicity, is a surprisingly good approximation to the adiabatic connection that can serve as a good starting point for further development of accurate parameter-free density functionals.
A general approach for cache-oblivious range reporting and approximate range counting
DEFF Research Database (Denmark)
Afshani, Peyman; Hamilton, Chris; Zeh, Norbert
2010-01-01
We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range...
Ship Impact Study: Analytical Approaches and Finite Element Modeling
Directory of Open Access Journals (Sweden)
Pawel Woelke
2012-01-01
The current paper presents the results of a ship impact study conducted using various analytical approaches available in the literature, compared with the results obtained from detailed finite element analysis. Considering a typical container vessel impacting a rigid wall with an initial speed of 10 knots, the study investigates the forces imparted on the struck obstacle, the energy dissipated through inelastic deformation, penetration, local deformation patterns, and local failure of the ship elements. The main objective of the paper is to study the accuracy and generality of the predictions of the vessel collision forces obtained by means of analytical closed-form solutions, in reference to detailed finite element analyses. The results show that significant discrepancies between simplified analytical approaches and detailed finite element analyses can occur, depending on the specific impact scenarios under consideration.
Energy Technology Data Exchange (ETDEWEB)
Delgado-Aparicio, L.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.; Hill, K.; Bitter, M.
2010-08-26
A new set of analytic formulae describes the transmission of soft X-ray (SXR) continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single Heaviside function [S. von Goeler, Rev. Sci. Instrum., 20, 599, (1999)]. The new analytic formulae can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
Kronecker Product Analytical Approach to ANOVA of Surface ...
African Journals Online (AJOL)
Kronecker Product Analytical Approach to ANOVA of Surface Roughness Optimization. ... Journal of the Nigerian Association of Mathematical Physics ... Using the new method, the combination of controllable variables that optimized most the surface finish of machined workpiece materials was determined with Kronecker ...
Integrated analytical approaches towards toxic algal natural products discovery
DEFF Research Database (Denmark)
Larsen, Thomas Ostenfeld; Rasmussen, Silas Anselm; Gedsted Andersen, Mikael
the structures of already known compounds (3). When likely unknown compounds have been identified, we use E-SPE results (4) to predict a fast and optimal purification strategy towards the pure novel compounds for NMR characterization. This presentation will highlight our integrated analytical approaches...
International Nuclear Information System (INIS)
Khan, S.H.; Ivanov, A.A.
1993-01-01
This paper describes an approximate method for calculating the static characteristics of linear step motors (LSM) being developed for control rod drives (CRD) in large nuclear reactors. The static characteristic of such an LSM, given by the variation of electromagnetic force with armature displacement, determines the motor performance in its standing and dynamic modes. The approximate calculation of these characteristics is based on the permeance analysis method applied to the phase magnetic circuit of the LSM. This is a simple, fast and efficient analytical approach which gives satisfactory results for small stator currents and weak iron saturation, typical of the standing mode of operation of an LSM. The method is validated by comparing theoretical results with experimental ones. (Author)
Multi-analytical Approaches Informing the Risk of Sepsis
Gwadry-Sridhar, Femida; Lewden, Benoit; Mequanint, Selam; Bauer, Michael
Sepsis is a significant cause of mortality and morbidity and is often associated with increased hospital resource utilization and prolonged intensive care unit (ICU) and hospital stay. The economic burden associated with sepsis is substantial. With advances in medicine, there are now aggressive goal-oriented treatments that can be used to help these patients. If we could predict which patients are at risk for sepsis, we could start treatment early and potentially reduce the risk of mortality and morbidity. Analytic methods currently used in clinical research to determine the risk of a patient developing sepsis may be further enhanced by using multi-modal analytic methods that together provide greater precision. Researchers commonly use univariate and multivariate regressions to develop predictive models. We hypothesized that such models could be enhanced by combining multiple analytic methods to provide greater insight. In this paper, we analyze data about patients with and without sepsis using a decision tree approach and a cluster analysis approach. A comparison with a regression approach shows strong similarity among the variables identified, though not an exact match. We compare the variables identified by the different approaches and draw conclusions about their respective predictive capabilities, while considering their clinical significance.
Salama, Amgad
2013-09-01
In this work the problem of flow in a three-dimensional, axisymmetric, heterogeneous porous medium domain is investigated numerically. For this system it is natural to use a cylindrical coordinate system, which is useful for describing phenomena that have some rotational symmetry about the longitudinal axis. This can happen in porous media, for example, in the vicinity of production/injection wells. The basic feature of this system is the fact that the flux component (volume flow rate per unit area) in the radial direction changes because of the continuous change of area. In this case, variables change rapidly closer to the axis of symmetry, which requires the mesh to be denser there. In this work, we generalize a methodology that allows a coarser mesh to be used and yet yields accurate results. The method is based on constructing a local analytical solution in each cell in the radial direction and moving the derivatives in the other directions to the source term. A new expression for the harmonic mean of the hydraulic conductivity in the radial direction is developed. Notably, this approach conforms to the analytical solution for unidirectional flow in the radial direction in homogeneous porous media. When the porous medium is heterogeneous or the boundary conditions are more complex, this approach requires only a coarser mesh to arrive at the mesh-independent solution, while traditional methods require a much denser mesh. Comparisons for different hydraulic conductivity scenarios and boundary conditions are also presented. © 2013 Elsevier B.V.
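The role of a radial harmonic mean can be illustrated as follows. For steady radial flow through two concentric rings of different conductivity, flux continuity yields a log-weighted harmonic mean of the two conductivities. This standard form is shown only as a sketch of the kind of expression involved; the paper's own new expression is not reproduced here:

```python
import math

def radial_harmonic_k(r1, rm, r2, k1, k2):
    """Log-weighted harmonic mean of hydraulic conductivity for steady
    radial flow through two concentric rings: r1->rm with k1, rm->r2 with k2."""
    return math.log(r2 / r1) / (math.log(rm / r1) / k1 + math.log(r2 / rm) / k2)

# Homogeneous medium: the effective conductivity reduces to k itself.
assert abs(radial_harmonic_k(0.1, 0.5, 1.0, 2.0, 2.0) - 2.0) < 1e-12

# Heterogeneous rings: the result is dominated by the less conductive ring.
keff = radial_harmonic_k(0.1, 0.5, 1.0, 10.0, 0.1)
print(round(keff, 4))
```

The logarithmic weights arise because the pressure drop across a ring of inner radius a and outer radius b at constant volumetric rate scales as ln(b/a)/k, which is why a plain arithmetic or harmonic mean over thickness is inaccurate in the radial direction.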
Bridging analytical approaches for low-carbon transitions
Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.
2016-06-01
Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.
Mie scattering of highly focused, scalar fields: an analytic approach.
Moore, Nicole J; Alonso, Miguel A
2016-07-01
We present a method for modeling the scattering of a focused scalar field incident on a spherical particle. This approach involves the expansion of the incident field in an orthonormal basis of closed-form solutions of the Helmholtz equation which are nonparaxial counterparts of Laguerre-Gaussian beams. This method also allows for the analytic calculation of the forces and torques exerted on a particle at any position with respect to the beam's focus.
Merging Belief Propagation and the Mean Field Approximation: A Free Energy Approach
DEFF Research Database (Denmark)
Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro
2013-01-01
We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al. We show that the message passing fixed-point equations obtained with this combination...... correspond to stationary points of a constrained region-based free energy approximation. Moreover, we present a convergent implementation of these message passing fixed-point equations, provided that the underlying factor graph fulfills certain technical conditions. In addition, we show how to include hard...
Caprini, Chiara; Servant, Géraldine
2008-01-01
Gravitational wave production from bubble collisions was calculated in the early nineties using numerical simulations. In this paper, we present an alternative analytic estimate, relying on a different treatment of stochasticity. In our approach, we provide a model for the bubble velocity power spectrum, suitable for both detonations and deflagrations. From this, we derive the anisotropic stress and analytically solve the gravitational wave equation. We provide analytical formulae for the peak frequency and the shape of the spectrum which we compare with numerical estimates. In contrast to the previous analysis, we do not work in the envelope approximation. This paper focuses on a particular source of gravitational waves from phase transitions. In a companion article, we will add together the different sources of gravitational wave signals from phase transitions: bubble collisions, turbulence and magnetic fields and discuss the prospects for probing the electroweak phase transition at LISA.
Hard ellipsoids: Analytically approaching the exact overlap distance
Guevara-Rodríguez, F. de J.; Odriozola, G.
2011-08-01
Following previous work [G. Odriozola and F. de J. Guevara-Rodríguez, J. Chem. Phys. 134, 201103 (2011); doi:10.1063/1.3596728], the replica exchange Monte Carlo technique is used to produce the equation of state of hard 1:5 aspect-ratio oblate ellipsoids for a wide density range. Here, in addition to the analytical approximation of the overlap distance given by Berne and Pechukas (BP) and the exact numerical solution of Perram and Wertheim, we tested a simple modification of the original BP approximation (MBP) which corrects the known T-shape mismatch of BP for all aspect ratios. We found that the MBP equation of state shows a very good quantitative agreement with the exact solution. The MBP analytical expression allowed us to study size effects on the previously reported results. For the thermodynamic limit, we estimated the exact 1:5 hard ellipsoid isotropic-nematic transition at the volume fraction 0.343 ± 0.003, and the nematic-solid transition in the volume fraction interval (0.592 ± 0.006) - (0.634 ± 0.008).
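For reference, the original BP approximation gives the contact (overlap) distance of two identical uniaxial ellipsoids in closed form. The sketch below implements the standard BP formula (not the MBP correction, which is given in the cited paper) and checks its exact side-by-side and end-to-end limits for parallel particles:

```python
import math

def bp_contact(rhat, u1, u2, sigma_perp, kappa):
    """Berne-Pechukas contact distance for two identical uniaxial ellipsoids.
    rhat: unit center-to-center direction; u1, u2: unit symmetry axes;
    sigma_perp: breadth; kappa: length-to-breadth aspect ratio."""
    chi = (kappa**2 - 1) / (kappa**2 + 1)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a, b, c = dot(rhat, u1), dot(rhat, u2), dot(u1, u2)
    term = (a + b)**2 / (1 + chi * c) + (a - b)**2 / (1 - chi * c)
    return sigma_perp / math.sqrt(1 - 0.5 * chi * term)

# Exact limits for parallel prolate ellipsoids with kappa = 5:
side = bp_contact((1, 0, 0), (0, 0, 1), (0, 0, 1), 1.0, 5.0)  # side by side
end  = bp_contact((0, 0, 1), (0, 0, 1), (0, 0, 1), 1.0, 5.0)  # end to end
assert abs(side - 1.0) < 1e-12 and abs(end - 5.0) < 1e-9
```

The known BP deficiency appears for non-parallel T-shaped configurations, which is exactly what the MBP modification tested in the paper corrects.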
Elements of a function analytic approach to probability.
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo
2017-04-01
Predicting solute displacement in ecosystems is a complex task because of the heterogeneity of hydrogeological properties and limited financial resources for characterization. As a consequence, solute transport model predictions are subject to uncertainty and probabilistic methods are invoked. Despite the significant theoretical advances in subsurface hydrology, there is a compelling need to transfer this specialized know-how into an easy-to-use practical tool. The deterministic approach captures some features of the transport behavior, but its adoption in practical applications (e.g. remediation strategies or health risk assessment) is often inadequate because of its inability to accurately model the phenomena triggered by spatial heterogeneity. The rigorous evaluation of the local contaminant concentration in natural aquifers requires an accurate estimate of the domain properties and huge computational times; these aspects limit the adoption of fully 3D numerical models. In this presentation, we illustrate a physically based methodology for analytically estimating the statistics of the solute concentration in natural aquifers and the related health risk. Our methodology aims to provide a simple tool for a quick assessment of the contamination level in aquifers, as a function of a few relevant, physically based parameters such as the log-conductivity variance, the mean flow velocity and the Péclet number. Solutions of the 3D analytical model build on the results of previous works: the transport model is based on the solutions proposed by Zarlenga and Fiori (2013, 2014), where semi-analytical relations for the statistics of the local contaminant concentration are carried out through a Lagrangian first-order model. As suggested in de Barros and Fiori (2014), the Beta distribution is assumed for the concentration cumulative density function (CDF). We illustrate the use of the closed-form equations for the probability of local contaminant concentration and health risk in a
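Assuming a Beta distribution for the normalized concentration means its two parameters follow directly from the first two concentration moments. A minimal moment-matching sketch (the mean and variance values are illustrative, not taken from the cited models):

```python
def beta_params(mean, var):
    """Moment-match a Beta(a, b) distribution to a given mean and variance
    of the normalized concentration C in [0, 1].
    Requires var < mean * (1 - mean), i.e. less spread than a Bernoulli."""
    nu = mean * (1 - mean) / var - 1
    if nu <= 0:
        raise ValueError("variance too large for a Beta distribution")
    return mean * nu, (1 - mean) * nu

a, b = beta_params(0.2, 0.01)

# Recover the moments to confirm the matching:
mean = a / (a + b)
var = a * b / ((a + b)**2 * (a + b + 1))
assert abs(mean - 0.2) < 1e-12 and abs(var - 0.01) < 1e-12
```

The mean and variance themselves come from the Lagrangian first-order transport model; once a and b are fixed, exceedance probabilities for risk assessment reduce to evaluating the Beta CDF.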
xQuake: A Modern Approach to Seismic Network Analytics
Johnson, C. E.; Aikin, K. E.
2017-12-01
While seismic networks have expanded over the past few decades, and social needs for accurate and timely information have increased dramatically, approaches to the operational needs of both global and regional seismic observatories have been slow to adopt new technologies. This presentation describes the xQuake system, which provides a fresh approach to seismic network analytics based on complexity theory and an adaptive architecture of streaming connected microservices as diverse data (picks, beams, and other data) flow into a final, curated catalog of events. The foundation for xQuake is the xGraph (executable graph) framework, which is essentially a self-organizing graph database. An xGraph instance provides both the analytics and the data storage capabilities at the same time. Much of the analytics, such as synthetic annealing in the detection process and an evolutionary programming approach to event evolution, draws from the recent GLASS 3.0 seismic associator developed by and for the USGS National Earthquake Information Center (NEIC). In some respects xQuake is reminiscent of the Earthworm system, in that it comprises processes interacting through store-and-forward rings; not surprising, as the first author was the lead architect of the original Earthworm project when it was known as "Rings and Things". While Earthworm components can easily be integrated into the xGraph processing framework, the architecture and analytics are more current (e.g. using a Kafka broker for store-and-forward rings). The xQuake system is being released under an unrestricted open-source license to encourage and enable seismic-community support in further development of its capabilities.
Fractal approach to computer-analytical modelling of tree crown
International Nuclear Information System (INIS)
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to modeling tree crown development. These approaches are experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. The different stages of the above-mentioned approaches are described, and the experimental data for spruce, a description of the computer modeling system and a variant of the computer model are presented. (author). 9 refs, 4 figs
Energy Technology Data Exchange (ETDEWEB)
Gonchar, Andrei A; Rakhmanov, Evguenii A; Suetin, Sergey P
2011-12-31
Padé-Chebyshev approximants are considered for multivalued analytic functions that are real-valued on the unit interval [-1,1]. The focus is mainly on non-linear Padé-Chebyshev approximants. For such rational approximations an analogue is found of Stahl's theorem on convergence in capacity of the Padé approximants in the maximal domain of holomorphy of the given function. The rate of convergence is characterized in terms of the stationary compact set for the mixed equilibrium problem of Green-logarithmic potentials. Bibliography: 79 titles.
Big data analytics in immunology: a knowledge-based approach.
Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir
2014-01-01
With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.
Big Data Analytics in Immunology: A Knowledge-Based Approach
Directory of Open Access Journals (Sweden)
Guang Lan Zhang
2014-01-01
With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.
Bronstein, Leo; Koeppl, Heinz
2018-01-01
Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.
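The simplest instance of the moment-closure idea the paper revisits: for a birth process plus pair annihilation, the equation for the mean involves the second factorial moment, which a product-Poisson ansatz closes as ⟨n(n−1)⟩ ≈ ⟨n⟩². A minimal sketch with forward-Euler integration (the rates are illustrative, and this is the textbook closure, not the variational or entropic-matching schemes of the article):

```python
def poisson_closed_mean(k, c, n0, dt, steps):
    """Mean copy number for birth (0 -> A at rate k) plus pair annihilation
    (A + A -> 0 at rate c per pair), with the second factorial moment closed
    by the Poisson ansatz <n(n-1)> = <n>^2 (the simplest moment closure)."""
    m = n0
    for _ in range(steps):
        # Closed mean equation: d<n>/dt = k - 2 c <n(n-1)> ~ k - 2 c m^2
        m += dt * (k - 2 * c * m * m)
    return m

# The closed equation has the steady state m* = sqrt(k / (2 c)).
m = poisson_closed_mean(k=2.0, c=0.25, n0=0.0, dt=0.01, steps=20000)
assert abs(m - (2.0 / 0.5) ** 0.5) < 1e-6
```

At low copy numbers this closure can misbehave, which is the kind of failure mode the mixture-of-product-Poisson ansatz in the article is designed to repair.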
Sub-optimal Hankel norm approximation problem : a frequency-domain approach
Iftime, OV; Sasane, AJ
We obtain a simple solution for the sub-optimal Hankel norm approximation problem for the Wiener class of matrix-valued functions. The approach is via J-spectral factorization and frequency-domain techniques. (C) 2003 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2001-01-01
We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where the knowledge of the distribution of couplings between the random variables...
An analytical approach to 3D orthodontic load systems.
Katona, Thomas R; Isikbay, Serkis C; Chen, Jie
2014-09-01
To present and demonstrate a pseudo three-dimensional (3D) analytical approach for the characterization of orthodontic load (force and moment) systems. Previously measured 3D load systems were evaluated and compared using the traditional two-dimensional (2D) plane approach and the newly proposed vector method. Although both methods demonstrated that the loop designs were not ideal for translatory space closure, they did so for entirely different and conflicting reasons. The traditional 2D approach to the analysis of 3D load systems is flawed, but the established 2D orthodontic concepts can be substantially preserved and adapted to 3D with the use of a modified coordinate system that is aligned with the desired tooth translation.
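The vector treatment of a 3D load system amounts to carrying the force and the moment as full 3D vectors and transferring the moment between reference points with a cross product, rather than projecting onto a 2D plane. A minimal sketch (the geometry and load values are hypothetical, not from the measured systems):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def transfer_moment(force, moment_at_a, r_ab):
    """Moment of the same 3D load system about point B,
    where r_ab is the vector from B to the original reference point A."""
    mx = cross(r_ab, force)
    return tuple(m + dm for m, dm in zip(moment_at_a, mx))

# Hypothetical example: a 1 N lingual force measured at the sensor origin (A),
# re-expressed at a bracket (B) located 10 mm away along z.
F = (0.0, 1.0, 0.0)    # N
M_A = (0.0, 0.0, 0.0)  # N*mm, pure force at A
r = (0.0, 0.0, 10.0)   # mm, from bracket B to sensor A
print(transfer_moment(F, M_A, r))  # -> (-10.0, 0.0, 0.0)
```

Working with the full vectors keeps all six load components in play, which is exactly what a plane-by-plane 2D decomposition can misrepresent.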
The behavior-analytic approach to emotional self-control
Directory of Open Access Journals (Sweden)
Jussara Rocha Batista
2012-12-01
Some psychological approaches distinguish behavioral self-control from emotional self-control, the latter approached with reference to inner events controlled by the individual himself. This paper offers some directions for a behavior-analytic approach to what has been referred to as emotional self-control. According to behavior analysis, no new process is involved in emotional self-control, only components additional to those found in behavioral self-control, which require appropriate treatment. The paper highlights some determinants of behavioral repertoires taken as instances of emotional self-control: the social context in which self-control is produced and maintained; the conflicts between consequences for the individual and for the group; and the degree of participation of the motor apparatus in the emission of emotional responses. Keywords: emotional self-control; emotional responses; inner world; behavior analysis.
Walks in the quarter plane: Analytic approach and applications
Directory of Open Access Journals (Sweden)
Raschel Kilian
2014-01-01
In this survey we present an analytic approach to solve problems concerning (deterministic or random) walks in the quarter plane. We illustrate the recent breakthroughs in that domain with two examples. The first one concerns the combinatorics of walks confined to the quarter plane, and more precisely the numbers of walks evolving in the quarter plane and having given length, starting and ending points. We show how to obtain exact and asymptotic expressions for these numbers, and how to find the algebraic nature of their generating function. The second example deals with population biology, and more specifically with the extinction probabilities of certain flower populations.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
Energy Technology Data Exchange (ETDEWEB)
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.; Bruckner-Lea, Cindy J.; Marks, James D.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
Developing An Analytic Approach to Understanding the Patient Care Experience
Springman, Mary Kate; Bermeo, Yalissa; Limper, Heather M
2016-01-01
The amount of data available to health-care institutions regarding the patient care experience has grown tremendously. Purposeful approaches to condensing, interpreting, and disseminating these data are becoming necessary to further understand how clinical and operational constructs relate to patient satisfaction with their care, identify areas for improvement, and accurately measure the impact of initiatives designed to improve the patient experience. We set out to develop an analytic reporting tool deeply rooted in the patient voice that would compile patient experience data obtained throughout the medical center. PMID:28725852
Energy Technology Data Exchange (ETDEWEB)
FEDOROVA,A.; ZEITLIN,M.; PARSA,Z.
2000-03-31
In this paper the authors present applications of methods from wavelet analysis to polynomial approximations for a number of accelerator physics problems. Following a variational approach, in the general case they obtain the solution as a multiresolution (multiscale) expansion in a basis of compactly supported wavelets. They give an extension of their results to the cases of periodic orbital particle motion and arbitrary variable coefficients. They then consider a more flexible variational method based on a biorthogonal wavelet approach, as well as a different variational approach that is applied to each scale.
Analytical approaches for the characterization of nickel proteome.
Jiménez-Lamana, Javier; Szpunar, Joanna
2017-08-16
The use of nickel in modern industry and in consumer products poses health risks to humans. Nickel allergy and nickel carcinogenicity are well-known health effects related to human exposure to nickel, either during the production of nickel-containing products or by direct contact with the final item. In this context, the study of nickel toxicity and carcinogenicity involves understanding their molecular mechanisms and hence characterizing the nickel-binding proteins in different biological samples. During the last 50 years, a broad range of analytical techniques, ranging from the first chromatographic columns to the latest generation of mass spectrometers, have been used to fully characterize the nickel proteome. The aim of this review is to present a critical view of the different analytical approaches that have been applied for the purification, isolation, detection and identification of nickel-binding proteins. The different analytical techniques used are discussed from a critical point of view, highlighting advantages and limitations.
Stability analysis of magnetized neutron stars - a semi-analytic approach
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about the stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme to polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields, testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
Marinca, Vasile; Herisanu, Nicolae
2017-07-01
In the present paper, the Optimal Homotopy Asymptotic Method (OHAM) is applied to determine approximate analytic solutions for steady MHD flow and heat transfer of a third-grade fluid, considering constant viscosity. The effect of the magnetic parameter is shown. Some examples are given, and the results obtained reveal that the proposed method is effective and easy to use.
ANALYTICAL APPROACHES TO THE STUDY OF EXPORT TRANSACTIONS
Directory of Open Access Journals (Sweden)
Ekaterina Viktorovna Medvedeva
2015-12-01
Analytical approaches to the study of export operations depend on the terms contained in individual foreign-trade contracts with foreign buyers, as well as on the form in which the Russian supplier of export goods enters a foreign market. By means of analytical procedures it is possible to foresee and predict adverse situations that can affect the financial position of the economic entity. An economic entity engaged in foreign economic activity must carry out not only an analysis of its current activity but also an analysis of its export operations. The article considers analytical approaches to the analysis of export operations, presents an example of analyzing export operations over time, and recommends formulas for evaluating exports in dynamics. For comparative analysis, export volume is estimated in comparable prices. For commodity groups comprising goods that are commensurable both quantitatively and qualitatively, an index of quantitative structure is calculated, along with a coefficient of delay in the delivery of goods relative to other periods. Such analysis makes it possible to identify trends in export deliveries over the analyzed period for management decision-making. Purpose: to define the methods and techniques applied when analyzing export operations. Methodology: economic-mathematical and statistical methods of analysis were used. Results: the most informative parameters reflecting key aspects of the analysis of export operations are obtained. Practical implications: the results can usefully be applied by economic entities engaged in foreign economic activity, of which export operations are one element.
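The notion of estimating exports "in comparable prices" can be illustrated with a standard base-price revaluation (a Laspeyres-type volume index). This is a hedged sketch of the kind of calculation the article alludes to; the commodity data below are invented, and the article's own formulas are not reproduced.

```python
# Revalue each period's export quantities at base-period prices so that
# volume changes can be compared free of price effects. All figures are
# hypothetical illustration data, not from the article.
base_prices = {"steel": 500.0, "grain": 200.0}      # per tonne, base year
exports = {
    2013: {"steel": 120, "grain": 300},             # tonnes shipped
    2014: {"steel": 150, "grain": 280},
}

def volume_at_base_prices(quantities):
    """Export volume in comparable (base-year) prices."""
    return sum(base_prices[g] * q for g, q in quantities.items())

v13 = volume_at_base_prices(exports[2013])          # 500*120 + 200*300
v14 = volume_at_base_prices(exports[2014])          # 500*150 + 200*280
growth = v14 / v13
print(round(v13), round(v14), round(growth, 3))     # 120000 131000 1.092
```

The growth factor of about 1.092 here reflects a pure volume change, since both years are valued at the same base-year prices.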
Towards a Set Theoretical Approach to Big Data Analytics
DEFF Research Database (Denmark)
Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi
Formal methods, models and tools for social big data analytics are largely limited to graph-theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the application of this technique to the data analysis of big social data collected from the Facebook page of the fast fashion company, H&M.
Cognitive neuroscience robotics B: analytic approaches to human understanding
Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi
2016-01-01
Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.
The Navier-Stokes equations an elementary functional analytic approach
Sohr, Hermann
2001-01-01
The primary objective of this monograph is to develop an elementary and self-contained approach to the mathematical theory of a viscous incompressible fluid in a domain Ω of the Euclidean space ℝⁿ, described by the Navier-Stokes equations. The book is mainly directed to students familiar with basic functional analytic tools in Hilbert and Banach spaces. However, for readers' convenience, in the first two chapters we collect without proof some fundamental properties of Sobolev spaces, distributions, operators, etc. Another important objective is to formulate the theory for a completely general domain Ω. In particular, the theory applies to arbitrary unbounded, non-smooth domains. For this reason, in the nonlinear case, we have to restrict ourselves to space dimensions n = 2, 3 that are also most significant from the physical point of view. For mathematical generality, we will develop the linearized theory for all n ≥ 2. Although the functional-analytic approach developed here is, in principle, known ...
Managing knowledge business intelligence: A cognitive analytic approach
Surbakti, Herison; Ta'a, Azman
2017-10-01
The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) in order to achieve a competitive edge in the context of intellectual capital. The methodology includes a review of the literature, analysis of interview data from managers in the corporate sector, and models established by different authors. BI technologies have a strong association with the process of KM for attaining competitive advantage. KM is strongly influenced by human and social factors, which it turns into the most valuable assets when the system runs efficiently under BI tactics and technologies. The term predictive analytics, however, stems from the field of BI. Extracting tacit knowledge as a new source for BI analysis is a big challenge. Advanced analytic methods that address the diversity of the data corpus - structured and unstructured - require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive and prescriptive results. This remains a major challenge, and this paper aims to elaborate on it in detail as an initial work.
Jaramillo, Juan; Gomez, Juan; Saenz, Mario; Vergara, Juan
2013-03-01
The scattering induced by surface topographies of arbitrary shapes, submitted to horizontally polarized shear waves (SH) is studied analytically. In particular, we propose an analysis technique based on a representation of the scattered field like the superposition of incident, reflected and diffracted rays. The diffraction contribution is the result of the interaction of the incident and reflected waves, with the geometric singularities present in the surface topography. This splitting of the solution into different terms, makes the difference between our method and alternative numerical/analytical approaches, where the complete field is described by a single term. The contribution from the incident and reflected fields is considered using standard techniques, while the diffracted field is obtained using the idea of a ray as was introduced by the geometrical theory of diffraction. Our final solution however, is an approximation in the sense that, surface-diffracted rays are neglected while we retain the contribution from corner-diffracted rays and its further diffraction. These surface rays are only present when the problem has smooth boundaries combined with shadow zones, which is far from being the typical scenario in far-field earthquake engineering. The proposed technique was tested in the study of a combined hill-canyon topography and the results were compared with those of a boundary element algorithm. After considering only secondary sources of diffraction, a difference of 0.09 per cent (with respect to the incident field amplitude) was observed. The proposed analysis technique can be used in the interpretation of numerical and experimental results and in the preliminary prediction of the response in complex topographies.
Rough Set Approach to Approximation Reduction in Ordered Decision Table with Fuzzy Decision
Directory of Open Access Journals (Sweden)
Xiaoyan Zhang
2011-01-01
In practice, some information systems are based on dominance relations, and the values of the decision attribute are fuzzy. It is therefore meaningful to study attribute reductions in ordered decision tables with fuzzy decision. In this paper, upper and lower approximation reductions are proposed for this kind of complicated decision table, and some important properties are discussed. The judgement theorems and discernibility matrices associated with the two reductions are obtained, from which the theory of attribute reductions in ordered decision tables with fuzzy decision is derived. Moreover, a rough set approach to upper and lower approximation reductions in ordered decision tables with fuzzy decision is presented as well. An example illustrates the validity of the approach, and the results show that it is an efficient tool for knowledge discovery in ordered decision tables with fuzzy decision.
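The rough-set construction underlying these reductions can be sketched in a few lines. The snippet below is a hypothetical illustration of the classical crisp case (not the paper's ordered/fuzzy variant): lower and upper approximations of a target concept under an indiscernibility partition.

```python
# Classical rough-set lower/upper approximations under an equivalence
# (indiscernibility) relation; the ordered/fuzzy reductions in the
# abstract refine this basic construction.

def partition(universe, key):
    """Group objects into equivalence classes by an attribute function."""
    classes = {}
    for x in universe:
        classes.setdefault(key(x), set()).add(x)
    return list(classes.values())

def lower_approx(classes, target):
    """Union of classes certainly contained in the target concept."""
    return set().union(*([c for c in classes if c <= target] or [set()]))

def upper_approx(classes, target):
    """Union of classes possibly overlapping the target concept."""
    return set().union(*([c for c in classes if c & target] or [set()]))

# Toy decision table: objects 0..5, one condition attribute = value mod 3.
U = {0, 1, 2, 3, 4, 5}
classes = partition(U, key=lambda x: x % 3)    # {0,3}, {1,4}, {2,5}
X = {0, 3, 4}                                  # a (crisp) decision concept
print(lower_approx(classes, X) == {0, 3})            # True
print(upper_approx(classes, X) == {0, 1, 3, 4})      # True
```

Objects in the lower approximation certainly belong to the concept given the available attributes; those in the upper approximation only possibly belong, and the gap between the two is the boundary region that reduction methods try to preserve.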
International Nuclear Information System (INIS)
Zheng Renhui; Jing Yuanyuan; Chen Liping; Shi Qiang
2011-01-01
Graphical abstract: An analytically solvable model was employed to study proton-coupled electron transfer reactions. Approximate theories are assessed, and vibrational coherence is observed in the case of small reorganization energy. Research highlights: the Duschinsky rotation effect in PCET reactions; assessment of the Born-Oppenheimer approximation for proton motion using an analytically solvable model; vibrational coherence in PCET in the case of small reorganization energy. - Abstract: By employing an analytically solvable model including the Duschinsky rotation effect, we investigated the applicability of the commonly used Born-Oppenheimer (BO) approximation for separating the proton and proton donor-acceptor motions in theories of proton-coupled electron transfer (PCET) reactions. Comparison with theories based on the BO approximation shows that the BO approximation for the proton coordinate is generally valid, while some further approximations may become inaccurate in certain ranges of parameters. We have also investigated the effect of vibrationally coherent tunneling in the case of small reorganization energy, and shown that it plays an important role in the rate constant and kinetic isotope effect.
Transverse plane wave analysis of short elliptical chamber mufflers: An analytical approach
Mimani, A.; Munjal, M. L.
2011-03-01
Short elliptical chamber mufflers are often used in modern automotive exhaust systems. The acoustic analysis of such short chamber mufflers is facilitated by considering a transverse plane wave propagation model along the major axis up to the low frequency limit. The one-dimensional differential equation governing the transverse plane wave propagation in such short chambers is solved using segmentation approaches, which are inherently numerical schemes, wherein the transfer matrix relating the upstream state variables to the downstream variables is obtained. An analytical solution of the transverse plane wave model used to analyze such short chambers has not been reported in the literature so far. The present work is thus an attempt to fill this lacuna, whereby a Frobenius solution of the differential equation governing the transverse plane wave propagation is obtained. By taking a sufficient number of terms of the infinite series, the approximate analytical solution so obtained shows good convergence up to about 1300 Hz and also covers most of the range of muffler dimensions used in practice. The transmission loss (TL) performance of the muffler configurations computed by this analytical approach agrees excellently with that computed by the Matrizant approach used earlier by the authors, thereby offering a faster and more elegant alternative method to analyze short elliptical muffler configurations.
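The series-solution idea can be illustrated on a deliberately simple equation. The sketch below applies a truncated power-series (Frobenius-type) expansion to y'' + y = 0 rather than the paper's transverse plane-wave equation, showing how taking enough terms of the series yields convergence to the exact solution.

```python
import math

# Truncated power-series solution of y'' + y = 0, y(0)=1, y'(0)=0.
# Substituting y = sum(a[k] x^k) into the ODE gives the recurrence
# a[n+2] = -a[n] / ((n+1)(n+2)); the exact solution is cos(x).

def series_solution(x, n_terms=20):
    a = [1.0, 0.0]                                # a0, a1 from the ICs
    for n in range(n_terms - 2):
        a.append(-a[n] / ((n + 1) * (n + 2)))     # series recurrence
    return sum(c * x ** k for k, c in enumerate(a))

x = 1.2
print(abs(series_solution(x) - math.cos(x)) < 1e-12)   # True
```

With 20 coefficients the remainder is of order x^20/20!, which is far below 1e-12 on this interval; the paper's statement about convergence "by taking a sufficient number of terms" is exactly this truncation argument applied to its own governing equation.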
Analytical approach for confirming the achievement of LMFBR reliability goals
International Nuclear Information System (INIS)
Ingram, G.E.; Elerath, J.G.; Wood, A.P.
1981-01-01
The approach, recommended by GE-ARSD, for confirming the achievement of LMFBR reliability goals relies upon a comprehensive understanding of the physical and operational characteristics of the system and the environments to which the system will be subjected during its operational life. This kind of understanding is required for an approach based on system hardware testing or analyses, as recommended in this report. However, for a system as complex and expensive as the LMFBR, an approach which relies primarily on system hardware testing would be prohibitive both in cost and time to obtain the required system reliability test information. By using an analytical approach, results of tests (reliability and functional) at a low level within the specific system of interest, as well as results from other similar systems can be used to form the data base for confirming the achievement of the system reliability goals. This data, along with information relating to the design characteristics and operating environments of the specific system, will be used in the assessment of the system's reliability
Berlyand, Leonid; Owhadi, Houman
2010-11-01
We consider linear divergence-form scalar elliptic equations and vectorial equations for elasticity with rough (L^∞(Ω), Ω ⊂ ℝ^d) coefficients a(x) that, in particular, model media with non-separated scales and high contrast in material properties. While the homogenization of PDEs with periodic or ergodic coefficients and well separated scales is now well understood, we consider here the most general case of arbitrary bounded coefficients. For such problems, we introduce explicit and optimal finite dimensional approximations of solutions that can be viewed as a theoretical Galerkin method with controlled error estimates, analogous to classical homogenization approximations. In particular, this approach allows one to analyze a given medium directly without introducing the mathematical concept of an ε-family of media as in classical homogenization. We define the flux norm as the L^2 norm of the potential part of the fluxes of solutions, which is equivalent to the usual H^1-norm. We show that in the flux norm, the error associated with approximating, in a properly defined finite-dimensional space, the set of solutions of the aforementioned PDEs with rough coefficients is equal to the error associated with approximating the set of solutions of the same type of PDEs with smooth coefficients in a standard space (for example, piecewise polynomial). We refer to this property as the transfer property. A simple application of this property is the construction of finite dimensional approximation spaces with errors independent of the regularity and contrast of the coefficients and with optimal and explicit convergence rates. This transfer property also provides an alternative to the global harmonic change of coordinates for the homogenization of elliptic operators that can be extended to elasticity equations. The proofs of these homogenization results are based on a new class of elliptic inequalities. These inequalities play the same role in our approach
Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds
Johnson, C. E.
2017-12-01
Modern seismic networks present a number of challenges, but perhaps the most notable are those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work still remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.
Energy Technology Data Exchange (ETDEWEB)
Si, F.Q.; Romero, C.E.; Yao, Z.; Xu, Z.G.; Morey, R.L.; Liebowitz, B.N. [Lehigh University, Bethlehem, PA (United States). Energy Research Center
2009-05-15
In the scheme of boiler combustion optimization, a group of optimal controller settings is found to provide recommendations that balance the desired thermal efficiency and the lowest emissions limit. Characteristic functions between particular objectives and controlling variables can be approximated based on data sets obtained from field tests. These relationships can change with variations in coal quality, slag/soot deposits and the condition of plant equipment, which cannot be sampled on-line. Thus, approximation relationships based on test conditions could have little applicability for on-line optimization of the combustion process. In this paper, a new approach is proposed to adaptively perform function approximation based on a modified accurate on-line support vector regression method. Two modified criteria are proposed for selecting unwanted trained samples for removal. A structural matrix is used to process and save the model parameters and training data sets, which can be adaptively regulated by the on-line learning method. The proposed method is illustrated with an example and is also applied successfully to real boiler data. The results demonstrate its validity for the prediction of NOx emissions and for function approximation, correctly adapting to actual variable operating conditions in the boiler.
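The add-one/remove-one flavor of on-line learning described here can be caricatured with a sliding-window kernel ridge regressor. This is an illustrative stand-in, not the paper's accurate on-line SVR or its removal criteria; all data and parameter values below are invented.

```python
import numpy as np

# A regressor that, like the on-line method in the text, learns sample
# by sample and removes an unwanted (here simply the oldest) trained
# sample once the window is full, then refits on the current window.

class SlidingWindowKRR:
    def __init__(self, width=0.3, lam=1e-3, window=50):
        self.width, self.lam, self.window = width, lam, window
        self.X, self.y = [], []

    def _kernel(self, A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * self.width ** 2))   # Gaussian RBF kernel

    def update(self, x, y):
        self.X.append(x); self.y.append(y)
        if len(self.X) > self.window:               # drop unwanted sample
            self.X.pop(0); self.y.pop(0)
        X = np.asarray(self.X)
        K = self._kernel(X, X)                      # refit on the window
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)),
                                     np.asarray(self.y))

    def predict(self, x):
        k = self._kernel(np.asarray([x]), np.asarray(self.X))
        return (k @ self.alpha)[0]

# Toy data stream standing in for boiler measurements.
rng = np.random.default_rng(0)
model = SlidingWindowKRR()
for _ in range(200):
    u = rng.uniform(0, 1, size=2)
    model.update(u, np.sin(3 * u[0]) + u[1] ** 2)
pred = model.predict(np.array([0.5, 0.5]))
print(round(pred, 3))    # target is sin(1.5) + 0.25, roughly 1.247
```

The design point mirrored here is that the model stays cheap to refit as conditions drift, because the training set is kept small by discarding samples; the paper's contribution lies in choosing *which* sample to discard, which this sketch deliberately simplifies.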
A Visual Analytics Approach for Correlation, Classification, and Regression Analysis
Energy Technology Data Exchange (ETDEWEB)
Steed, Chad A [ORNL; SwanII, J. Edward [Mississippi State University (MSU); Fitzpatrick, Patrick J. [Mississippi State University (MSU); Jankun-Kelly, T.J. [Mississippi State University (MSU)
2012-02-01
New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive, parallel-coordinates-based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.
Energy Technology Data Exchange (ETDEWEB)
Petracca, S. [Salerno Univ. (Italy)
1996-08-01
Debye potentials, the Lorentz reciprocity theorem, and (extended) Leontovich boundary conditions can be used to obtain simple and accurate analytic estimates of the longitudinal and transverse coupling impedances of (piecewise longitudinally uniform) multi-layered pipes with non simple transverse geometry and/or (spatially inhomogeneous) boundary conditions. (author)
International Nuclear Information System (INIS)
Aboanber, A E; Nahla, A A
2002-01-01
A method based on the Pade approximations is applied to the solution of the point kinetics equations with a time varying reactivity. The technique consists of treating explicitly the roots of the inhour formula. A significant improvement has been observed by treating explicitly the most dominant roots of the inhour equation, which usually would make the Pade approximation inaccurate. Also the analytical inversion method which permits a fast inversion of polynomials of the point kinetics matrix is applied to the Pade approximations. Results are presented for several cases of Pade approximations using various options of the method with different types of reactivity. The formalism is applicable equally well to non-linear problems, where the reactivity depends on the neutron density through temperature feedback. It was evident that the presented method is particularly good for cases in which the reactivity can be represented by a series of steps and performed quite well for more general cases
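A minimal sketch of a Padé-based step for the point kinetics equations, assuming one delayed-neutron group, constant reactivity, and illustrative parameter values (not taken from the paper, which treats time-varying reactivity and explicit inhour roots): the diagonal Padé (1,1) approximant of the matrix exponential advances the state and is checked against the exact eigen-solution.

```python
import numpy as np

# One-group point kinetics: d/dt [n, c] = A [n, c], stepped with the
# Pade (1,1) approximant exp(A*dt) ~ (I - dt*A/2)^(-1) (I + dt*A/2).
beta, lam, Lambda, rho = 0.0065, 0.08, 1e-4, 0.003   # illustrative values
A = np.array([[(rho - beta) / Lambda, lam],
              [beta / Lambda,        -lam]])

def pade11_step(n, dt):
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ n)

n0 = np.array([1.0, beta / (lam * Lambda)])   # equilibrium precursors
dt, T = 1e-4, 0.1
n = n0.copy()
for _ in range(int(round(T / dt))):
    n = pade11_step(n, dt)

# Reference: exact eigen-solution of the constant-rho system.
w, V = np.linalg.eig(A)
ref = np.real(V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V) @ n0)
print(np.allclose(n, ref, rtol=1e-4))         # True
```

The (1,1) approximant is A-stable, which is why it copes with the stiff prompt eigenvalue of the point kinetics matrix; the paper's refinement of treating the dominant inhour roots explicitly addresses exactly the regime where a plain low-order Padé approximant loses accuracy.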
Analytical and unitary approach in mesons electromagnetic form factor applications
International Nuclear Information System (INIS)
Liptaj, A.
2010-07-01
could be related to a very different type of experiment, a direct lifetime measurement, that was predominantly used to obtain the Γ(π⁰→γγ) value (unlike in the case of our evaluation or in the case of the PDG values for Γ(η→γγ) and Γ(η′→γγ)). We look forward to analyzing this issue and contributing to its solution. We finally study the behavior of the elastic pion EM form factor in the space-like domain. In this case we aimed to minimize the model dependence and based our approach only on the analytic properties of the form factor and the precise data in the time-like region. Our motivation was the data in the space-like region that, we believe, cannot be fully trusted. Further, we wanted to compare our prediction to other QCD-inspired models. We have shown that the prediction we obtain has only a small model dependence. By making a prediction in the time-like region we have also shown that our approach is self-consistent: the prediction describes well the data points that were initially used to obtain it. Eventually we observed that our prediction is close to the most recent result obtained in the framework of the AdS/CFT theory. The obtained results allow us to conclude that the unitary and analytic model, and the approach as such, are correct tools to study meson form factors, and we have shown that they have big potential to give important results in several domains of particle physics. (author)
An intrinsic robust rank-one-approximation approach for currencyportfolio optimization
Directory of Open Access Journals (Sweden)
Hongxuan Huang
2018-03-01
A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, and which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank-one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts. Firstly, under the assumptions about the currency market, the currency portfolio optimization problem is formulated as the basic model, in which there are two types of variables, describing currency amounts in portfolios and the amount of each currency exchanged into another, respectively. Secondly, the rank-one approximation problem and its variants are formulated to approximate a foreign exchange rate matrix, whose performance is measured by the Frobenius norm or the 2-norm of a residual matrix. The intrinsic robustness of the rank-one approximation is proved, together with summarizing properties of the basic ROA problem and designing a modified power method to search for the virtual exchange rates hidden in a foreign exchange rate matrix. Thirdly, a technique for decision-variable reduction is presented to attack the currency portfolio optimization. The reduced formulation is referred to as the ROA model, which keeps only the variables describing currency amounts in portfolios. The optimal solution to the ROA model also induces a feasible solution to the basic model of the currency portfolio problem by integrating forex operations from the ROA model with practical forex rates. Finally, numerical examples are presented to verify the feasibility and efficiency of the intrinsic robust rank-one approximation approach. They also indicate that there exists an objective measure for evaluating and optimizing currency portfolios over time, which is related to the virtual standard currency and independent of any real currency selected specially for measurement.
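The power-method search for virtual exchange rates can be sketched as a standard rank-one approximation by alternating power iteration. This is a hedged illustration on synthetic data: the rates below are invented, and the authors' specific modification of the power method is not reproduced.

```python
import numpy as np

# A noiseless cross-rate matrix satisfies R[i, j] = v[i] / v[j] and is
# exactly rank one; with noise, its best rank-one approximation (the
# dominant singular pair) recovers the hidden "virtual rates" direction.
rng = np.random.default_rng(1)
v = np.array([1.0, 0.92, 110.0, 0.79])            # hypothetical base values
R = np.outer(v, 1.0 / v) * (1 + 0.01 * rng.standard_normal((4, 4)))

def rank_one(A, iters=100):
    """Best rank-one approximation of A via alternating power iteration."""
    y = np.ones(A.shape[1])
    for _ in range(iters):                        # power method on A^T A
        x = A @ y; x /= np.linalg.norm(x)
        y = A.T @ x; y /= np.linalg.norm(y)
    sigma = x @ A @ y                             # dominant singular value
    return sigma * np.outer(x, y)                 # optimal in ||.||_F

A1 = rank_one(R)

# Cross-check against the truncated SVD, which is provably optimal.
U, s, Vt = np.linalg.svd(R)
svd1 = s[0] * np.outer(U[:, 0], Vt[0])
print(np.allclose(A1, svd1, atol=1e-8))           # True
```

Because the clean rate matrix is exactly rank one, the dominant singular vectors are (up to scale) the virtual rate vector and its reciprocal, which is the structure the ROA model exploits to shrink the decision space.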
International Nuclear Information System (INIS)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar; Schramm, Marcelo
2016-01-01
In this work, we report a method to solve the Neutron Point Kinetics Equations applying the Polynomial Approach Method. The main idea is to expand the neutron density and the delayed neutron precursor concentrations as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions, and analytical continuation is used to determine the solutions of the next intervals. A genuine error control is developed based on an analogy with the Remainder Theorem. For illustration, we also report simulations for different approximation orders (linear, quadratic and cubic). The results obtained by numerical simulations for the linear approximation are compared with results in the literature.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution of the Neutron Point Kinetics Equations obtained by applying the Polynomial Approach Method. The main idea is to expand the neutron density and the delayed neutron precursor concentrations as power series, treating the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval the initial conditions are applied, and analytical continuation is used to determine the solutions of the subsequent intervals. A genuine error control is developed based on an analogy with the Remainder Theorem. For illustration, we also report simulations for different approximation types (linear, quadratic and cubic). The results obtained by numerical simulations for the linear approximation are compared with results in the literature.
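As an illustration of the piecewise power-series idea (a sketch, not the authors' code), the snippet below integrates the one-delayed-group point kinetics equations with a truncated Taylor series on short intervals, using the end value of each interval as the initial condition of the next; the kinetics parameters are invented for the example.

```python
def point_kinetics_series(rho, beta, lam, Lam, n0, c0, t_end, dt=1e-3, order=6):
    """Piecewise power-series (Taylor) solution of
       dn/dt = ((rho - beta)/Lam) n + lam C,
       dC/dt = (beta/Lam) n - lam C,
    with reactivity rho held constant on each short interval; analytic
    continuation carries the solution from one interval to the next."""
    a, b = (rho - beta) / Lam, beta / Lam
    n, c, t = n0, c0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        nk, ck = n, c              # series coefficients n_k, C_k (k = 0)
        hk = 1.0
        for k in range(order):
            # Recurrence from matching powers of t in the ODEs.
            nk, ck = (a * nk + lam * ck) / (k + 1), (b * nk - lam * ck) / (k + 1)
            hk *= h
            n += nk * hk
            c += ck * hk
        t += h
    return n, c

# Slightly supercritical example (hypothetical one-group parameters).
n1, c1 = point_kinetics_series(rho=0.001, beta=0.0065, lam=0.08,
                               Lam=1e-4, n0=1.0, c0=812.5, t_end=1.0)
```

A useful sanity check is the critical steady state: with rho = 0 and C0 = beta/(Lam*lam), the series coefficients vanish and the neutron density stays constant.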
Towards Big Earth Data Analytics: The EarthServer Approach
Baumann, Peter
2013-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly consist of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data
Directory of Open Access Journals (Sweden)
Roberta Briesemeister
2017-01-01
Full Text Available Cross-docking is a logistics management concept in which products are temporarily unloaded at intermediate facilities and loaded onto output trucks to be sent to their final destination. In this paper, we propose an approximate nonstationary queuing model to size the number of docks to receive the trucks, so that their unloading time at the receiving dock is as short as possible, thus making the cross-docking process more efficient. It is observed that the stochastic queuing process may not reach the steady equilibrium state; therefore, a type of modeling that does not depend on the stationary characteristics of the process is developed and applied. In order to measure the efficiency and performance of the algorithm, and possible adjustments of its parameters, an alternative simulation model is proposed using the Arena® software. The simulation uses analytic tools to make the problem more detailed, which is not allowed in the theoretical model. The computational analysis compares the results of the simulated model with the ones obtained with the theoretical algorithm, considering the queue length and the average waiting time of the trucks. Based on the results obtained, the simulation represented the proposed problem very well, and possible changes can be easily detected with small adjustments in the simulated model.
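A minimal discrete-event sketch of the dock-sizing question: trucks arrive with a nonstationary (peaked) rate and are unloaded first-come-first-served at whichever dock frees up first. The arrival and service parameters below are invented for illustration and are not the authors' model.

```python
import random

def simulate_waits(arrivals, services, docks):
    """Average waiting time under FCFS: each truck is assigned to the
    dock that becomes free earliest."""
    free_at = [0.0] * docks
    waits = []
    for t_arr, svc in zip(arrivals, services):
        i = min(range(docks), key=lambda k: free_at[k])
        start = max(t_arr, free_at[i])
        waits.append(start - t_arr)
        free_at[i] = start + svc
    return sum(waits) / len(waits)

random.seed(1)
# Nonstationary arrivals: a busy early period (short gaps), then a calm one.
arrivals, t = [], 0.0
for _ in range(200):
    rate = 2.0 if t < 30.0 else 0.5       # trucks per time unit (hypothetical)
    t += random.expovariate(rate)
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in arrivals]  # mean unload time 1.0

wait_2 = simulate_waits(arrivals, services, docks=2)
wait_4 = simulate_waits(arrivals, services, docks=4)
```

Running the same arrival and service realization with different dock counts shows how extra docks shorten the average wait, which is the trade-off the sizing model quantifies.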
Directory of Open Access Journals (Sweden)
Alsaedi Ahmed
2009-01-01
Full Text Available A generalized quasilinearization technique is developed to obtain a sequence of approximate solutions converging monotonically and quadratically to a unique solution of a boundary value problem involving a Duffing-type nonlinear integro-differential equation with integral boundary conditions. The order of convergence for the sequence of iterates is also established. It is found that the work presented in this paper not only produces new results but also yields several old results in certain limits.
BOOK REVIEW Analytical and Numerical Approaches to Mathematical Relativity
Stewart, John M.
2007-08-01
The 319th Wilhelm-and-Else-Heraeus Seminar 'Mathematical Relativity: New Ideas and Developments' took place in March 2004. Twelve of the invited speakers have expanded their one hour talks into the papers appearing in this volume, preceded by a foreword by Roger Penrose. The first group consists of four papers on 'differential geometry and differential topology'. Paul Ehrlich opens with a very witty review of global Lorentzian geometry, which caused this reviewer to think more carefully about how he uses the adjective 'generic'. Robert Low addresses the issue of causality with a description of the 'space of null geodesics' and a tentative proposal for a new definition of causal boundary. The underlying review of global Lorentzian geometry is continued by Antonio Masiello, looking at variational approaches (actually valid for more general semi-Riemannian manifolds). This group concludes with a very clear review of pp-wave spacetimes from José Flores and Miguel Sánchez. (This reviewer was delighted to see a reproduction of Roger Penrose's seminal (1965) picture of null geodesics in plane wave spacetimes which attracted him into the subject.) Robert Beig opens the second group 'analytic methods and differential equations' with a brief but careful discussion of symmetric (regular) hyperbolicity for first (second) order systems, respectively, of partial differential equations. His description is peppered with examples, many specific to relativistic continuum mechanics. There follows a succinct review of linear elliptic boundary value problems with applications to general relativity from Sergio Dain. The numerous examples he provides are thought-provoking. The 'standard cosmological model' has been well understood for three quarters of a century. However, recent observations suggest that the expansion in our Universe may be accelerating. Alan Rendall provides a careful discussion of the changes, both mathematical and physical, to the standard model which might be needed
Linear response theory an analytic-algebraic approach
De Nittis, Giuseppe
2017-01-01
This book presents a modern and systematic approach to Linear Response Theory (LRT) by combining analytic and algebraic ideas. LRT is a tool to study systems that are driven out of equilibrium by external perturbations. In particular the reader is provided with a new and robust tool to implement LRT for a wide array of systems. The proposed formalism in fact applies to periodic and random systems in the discrete and the continuum. After a short introduction describing the structure of the book, its aim and motivation, the basic elements of the theory are presented in chapter 2. The mathematical framework of the theory is outlined in chapters 3–5: the relevant von Neumann algebras, noncommutative $L^p$- and Sobolev spaces are introduced; their construction is then made explicit for common physical systems; the notion of isospectral perturbations and the associated dynamics are studied. Chapter 6 is dedicated to the main results, proofs of the Kubo and Kubo-Streda formulas. The book closes with a chapter about...
Analytic game-theoretic approach to ground-water extraction
Loáiciga, Hugo A.
2004-09-01
The roles of cooperation and non-cooperation in the sustainable exploitation of a jointly used groundwater resource have been quantified mathematically using an analytical game-theoretic formulation. Cooperative equilibrium arises when ground-water users respect water-level constraints and consider mutual impacts, which allows them to derive economic benefits from ground-water indefinitely, that is, to achieve sustainability. This work shows that cooperative equilibrium can be obtained from the solution of a quadratic programming problem. For cooperative equilibrium to hold, however, enforcement must be effective. Otherwise, according to the commonized costs-privatized profits paradox, there is a natural tendency towards non-cooperation and non-sustainable aquifer mining, of which overdraft is a typical symptom. Non-cooperative behavior arises when at least one ground-water user neglects the externalities of his adopted ground-water pumping strategy. In this instance, water-level constraints may be violated in a relatively short time and the economic benefits from ground-water extraction fall below those obtained with cooperative aquifer use. One example illustrates the game theoretic approach of this work.
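As a toy stand-in for the paper's quadratic-programming formulation, the sketch below uses hypothetical quadratic benefit functions with a pumping externality and compares the cooperative (joint-welfare) solution with the non-cooperative (best-response) equilibrium; the functional form and coefficients are assumptions for illustration only.

```python
# Hypothetical quadratic benefits for two ground-water users:
#   B_i(q_i, q_j) = a*q_i - b*q_i**2 - c*q_i*(q_i + q_j),
# where the last term is the externality of joint drawdown.
a, b, c = 10.0, 1.0, 0.5

def benefit(qi, qj):
    return a * qi - b * qi**2 - c * qi * (qi + qj)

def best_response(qj):
    # From dB_i/dq_i = a - 2*(b + c)*q_i - c*q_j = 0.
    return max(0.0, (a - c * qj) / (2 * (b + c)))

# Non-cooperative (Nash) equilibrium via best-response iteration.
q1 = q2 = 1.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)
nash_total = benefit(q1, q2) + benefit(q2, q1)

# Cooperative solution: maximize joint benefit (symmetric case; a 1-D
# grid search stands in for the quadratic programming solver).
best_q, coop_total = 0.0, float("-inf")
for k in range(2001):
    q = k * 0.005
    w = 2 * benefit(q, q)
    if w > coop_total:
        best_q, coop_total = q, w
```

With these numbers the cooperative solution pumps less per user than the Nash equilibrium yet yields a higher total benefit, which is the qualitative point of the abstract.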
Uncertainties in workplace external dosimetry--an analytical approach.
Ambrosi, P
2006-01-01
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determine the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461 Radiation Protection Instrumentation-Determination of Uncertainty in Measurement. Both this paper and the report cannot eliminate the fact that the determination of the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement.
Uncertainties in workplace external dosimetry - An analytical approach
International Nuclear Information System (INIS)
Ambrosi, P.
2006-01-01
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determine the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461 Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Both this paper and the report cannot eliminate the fact that the determination of the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
International Nuclear Information System (INIS)
Delgado-Aparicio, L.; Hill, K.; Bitter, M.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.
2010-01-01
A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures in contrast with the solutions obtained when using a transmission approximated by a single-Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute in obtaining fast temperature measurements in between intermittent Thomson scattering data.
Altajskij, M V; Erokhin, N S; Zolnikova, N N; Mikhajlovskaya, L A; Moiseev, S S
2002-01-01
Approximation formulae are developed for modeling the interaction of fast alpha-particles and superthermal electrons with the solid-state plasma of the emitter films of a secondary-emission radioisotope current source. On this basis, estimates are made of the characteristic interaction parameters, including the effective braking efficiency of the emitter composite medium, the alpha-particle range, the optimal emitter thickness and the maximum number of binary current cells. The results obtained may be used for optimizing the parameters of an experimental sample of such a source and for the analysis of problems connected with its operation.
Learning Analytics for Online Discussions: Embedded and Extracted Approaches
Wise, Alyssa Friend; Zhao, Yuting; Hausknecht, Simone Nicole
2014-01-01
This paper describes an application of learning analytics that builds on an existing research program investigating how students contribute and attend to the messages of others in asynchronous online discussions. We first overview the E-Listening research program and then explain how this work was translated into analytics that students and…
A Progressive Approach to Teaching Analytics in the Marketing Curriculum
Liu, Yiyuan; Levin, Michael A.
2018-01-01
With the emerging use of analytics tools and methodologies in marketing, marketing educators have provided students training and experiences beyond the soft skills associated with understanding consumer behavior. Previous studies have only discussed how to apply analytics in course designs, tools, and related practices. However, there is a lack of…
Geovisual Analytics Approach to Exploring Public Political Discourse on Twitter
Directory of Open Access Journals (Sweden)
Jonathan K. Nelson
2015-03-01
Full Text Available We introduce spatial patterns of Tweets visualization (SPoTvis), a web-based geovisual analytics tool for exploring messages on Twitter (or “tweets”) collected about political discourse, and illustrate the potential of the approach with a case study focused on a set of linked political events in the United States. In October 2013, the U.S. Congressional debate over the allocation of funds to the Patient Protection and Affordable Care Act (commonly known as the ACA or “Obamacare”) culminated in a 16-day government shutdown. Meanwhile, the online health insurance marketplace related to the ACA was making a public debut hampered by performance and functionality problems. Messages on Twitter during this time period included sharply divided opinions about these events, with many people angry about the shutdown and others supporting the delay of the ACA implementation. SPoTvis supports the analysis of these events using an interactive map connected dynamically to a term polarity plot; through the SPoTvis interface, users can compare the dominant subthemes of tweets in any two states or congressional districts. Demographic attributes and political information on the display, coupled with functionality to show (dis)similar features, enrich users’ understandings of the units being compared. Relationships among places, politics and discourse on Twitter are quantified using statistical analyses and explored visually using SPoTvis. A two-part user study evaluates SPoTvis’ ability to enable insight discovery, as well as the tool’s design, functionality and applicability to other contexts.
Qian, Youhua; Chen, Shengmin
2010-10-01
In this paper, the homotopy analysis method (HAM) is presented to establish accurate approximate analytical solutions for multi-degree-of-freedom (MDOF) nonlinear coupled oscillators. The periodic solutions for the three-degree-of-freedom (3DOF) coupled van der Pol-Duffing oscillators are applied to illustrate the validity and great potential of this method. For given physical parameters of nonlinear systems and with different initial conditions, the frequency ω and the displacements x1(t), x2(t) and x3(t) can be explicitly obtained. In addition, comparisons are conducted between the results obtained by the HAM and the numerical integration (i.e. Runge-Kutta) method. It is shown that the analytical solutions of the HAM are in excellent agreement with the numerical integration solutions, even if time t progresses to a certain large domain in the time history responses. Finally, the homotopy Padé technique is used to accelerate the convergence of the solutions.
Multi-analytical approach for profiling some essential medical drugs
International Nuclear Information System (INIS)
Abubakar, M.
2015-07-01
Counterfeit and substandard pharmaceutical drugs are especially rampant in developing countries due to inadequate analytical facilities and lack of regulatory oversight. The production of counterfeit or substandard drugs is broadly problematic: underestimating it leads to morbidity, mortality, drug resistance, introduction of toxic substances into the body and loss of confidence in health care systems. Medical drugs that are often counterfeited range from antimalarial drugs to antiretroviral drugs, with antibiotics being counterfeited the most. This research work, therefore, aims at contributing towards the establishment of measures/processes for distinguishing between fake and genuine amoxicillin drugs. This was achieved by the identification and quantification of the Active Pharmaceutical Ingredient (API) and the excipients in the drug formulation. The major analytical techniques employed for this research work were Instrumental Neutron Activation Analysis (INAA), X-ray Powder Diffraction (XRD), High Performance Liquid Chromatography (HPLC) and the in vitro Dissolution Test. The amoxicillin samples analyzed were the foreign generic amoxicillin purchased from Ernest Chemists pharmacy at East Legon, Accra, the National Health Insurance Scheme (NHIS) amoxicillin purchased at Fair Mile pharmacy at West Legon, Accra, and the Suspected Fake amoxicillin purchased at Okaishi market. For the establishment of a fingerprint for identification of substandard amoxicillin, INAA was used to qualitatively determine the short-lived radionuclides (excipients), which then facilitated the correct identification of the API and the excipient phases in each of the amoxicillin groups. The phases identified were Amoxicillin Trihydrate as the API, and Magnesium Stearate (hydrated) and Magnesium Stearate (anhydrous) as the excipients. For quality control purposes, the High Performance Liquid Chromatography approach and also the in vitro Dissolution test were conducted on each of the groups of
Dariescu, Marina-Aura; Dariescu, Ciprian
2012-10-01
Working with a magnetic field periodic along Oz and decaying in time, we deal with the Dirac-type equation characterizing the fermions evolving in a magnetar's crust. For ultra-relativistic particles, one can employ the perturbative approach to compute the conserved current density components. If the magnetic field is frozen and the magnetar is treated as a stationary object, the fermion's wave function is expressed in terms of confluent Heun functions. Finally, we extend some previous investigations on the linearly independent fermionic mode solutions to the Mathieu equation, and we discuss the energy spectrum and the Mathieu characteristic exponent.
Energy Technology Data Exchange (ETDEWEB)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m$, $w_0$, $\alpha$, $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best\,fit} = -0.036\pm0.109$ (a $\sim$11% 1$\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055\pm0.068$ (a $\sim$7% 1$\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062\pm0.132$ (a $\sim$14% 1$\sigma$ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
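The core ABC idea the abstract relies on, forward simulation plus a distance metric in place of an explicit likelihood, can be shown with a toy rejection sampler. The example below infers the mean of a Gaussian; it is a minimal sketch with made-up data and tolerances, not the superABC pipeline.

```python
import random
import statistics

random.seed(42)

# "Observed" data from a hypothetical true model (mean 2.0, sd 1.0).
obs = [random.gauss(2.0, 1.0) for _ in range(200)]
obs_mean = statistics.fmean(obs)

def forward_model(theta, n=200):
    # Forward simulation replaces an explicit likelihood evaluation.
    return [random.gauss(theta, 1.0) for _ in range(n)]

accepted = []
for _ in range(5000):
    theta = random.uniform(-5.0, 5.0)          # flat prior
    sim = forward_model(theta)
    # Distance metric on a summary statistic (here: the sample mean).
    if abs(statistics.fmean(sim) - obs_mean) < 0.1:
        accepted.append(theta)

posterior_mean = statistics.fmean(accepted)
```

Systematic effects would be handled by injecting them into forward_model and letting the accepted samples marginalize over them, which is the mechanism the abstract describes.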
A novel approach for choosing summary statistics in approximate Bayesian computation.
Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas
2012-11-01
The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10^-4 and 3.5 × 10^-3 per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.
Zubov, N. O.; Kaban'kov, O. N.; Yagov, V. V.; Sukomel, L. A.
2017-12-01
Wide use of natural circulation loops operating at low reduced pressures generates a real need for reliable methods of predicting flow regimes and friction pressure drop for two-phase flows in this region of parameters. Although water-air flows at close-to-atmospheric pressures are the most widely studied subject in the field of two-phase hydrodynamics, the problem of reliably calculating friction pressure drop can hardly be regarded as fully solved. The specific volumes of liquid differ very much from those of steam (gas) under such conditions, due to which even a small change in flow quality may cause the flow pattern to alter very significantly. Frequently made attempts to use some universal approach to calculating friction pressure drop in a wide range of steam quality values do not seem to be justified and yield predicted values that are poorly consistent with experimentally measured data. The article analyzes the existing methods used to calculate friction pressure drop for two-phase flows at low pressures by comparing their results with the experimentally obtained data. The advisability of elaborating calculation procedures for determining the friction pressure drop and void fraction for two-phase flows taking their pattern (flow regime) into account is demonstrated. It is shown that, for flows characterized by low reduced pressures, satisfactory results are obtained from using a homogeneous model for quasi-homogeneous flows, whereas satisfactory results are obtained from using an annular flow model for flows characterized by high values of void fraction. Recommendations for making a shift from one model to another in carrying out engineering calculations are formulated and tested. By using the modified annular flow model, it is possible to obtain reliable predictions for not only the pressure gradient but also the liquid film thickness; the consideration of droplet entrainment and deposition phenomena allows reasonable
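The homogeneous model the article favors for quasi-homogeneous flows treats the two-phase mixture as a single fluid with a quality-weighted specific volume. The sketch below is one common form of that calculation (Blasius friction factor, liquid-viscosity Reynolds number); the property values are illustrative, roughly air-water near atmospheric pressure, and are not taken from the article.

```python
def homogeneous_friction_gradient(G, x, D, rho_l, rho_g, mu_l):
    """Frictional pressure gradient (Pa/m) from the homogeneous model,
    dp/dz = 2 f G^2 / (D rho_h), with Fanning friction factor f."""
    # Homogeneous (mixture) density: 1/rho_h = x/rho_g + (1 - x)/rho_l
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)
    Re = G * D / mu_l                 # Reynolds number with liquid viscosity
    f = 0.079 * Re ** -0.25           # Blasius correlation (turbulent flow)
    return 2.0 * f * G ** 2 / (D * rho_h)

# Air-water-like properties near atmospheric pressure (hypothetical values).
dp_low = homogeneous_friction_gradient(G=500.0, x=0.01, D=0.02,
                                       rho_l=998.0, rho_g=1.2, mu_l=1e-3)
dp_high = homogeneous_friction_gradient(G=500.0, x=0.10, D=0.02,
                                        rho_l=998.0, rho_g=1.2, mu_l=1e-3)
```

Because the gas specific volume dominates at low pressure, even a modest rise in quality x sharply lowers the mixture density and raises the friction gradient, which is the sensitivity the article emphasizes.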
An Analytical Model for Learning: An Applied Approach.
Kassebaum, Peter Arthur
A mediated-learning package, geared toward non-traditional students, was developed for use in the College of Marin's cultural anthropology courses. An analytical model for learning was used in the development of the package, utilizing concepts related to learning objectives, programmed instruction, Gestalt psychology, cognitive psychology, and…
Nonlinear analysis of doubly curved shells: An analytical approach
Indian Academy of Sciences (India)
is a time-honoured need to have an efficient analytical methodology of solution, which could serve as a bench-mark solution for the numerical methods. Chebyshev polynomials are orthogonal functions and have the property of minimax (Fox & Parker 1968). Sādhanā, Vol. 25, Part 4, August 2000, pp. 343-352.
Lyophilization: a useful approach to the automation of analytical processes?
de Castro, M. D. Luque; Izquierdo, A.
1990-01-01
An overview of the state-of-the-art in the use of lyophilization for the pretreatment of samples and standards prior to their storage and/or preconcentration is presented. The different analytical applications of this process are dealt with according to the type of material (reagent, standard, samples) and matrix involved.
An analytical approach to managing complex process problems
Energy Technology Data Exchange (ETDEWEB)
Ramstad, Kari; Andersen, Espen; Rohde, Hans Christian; Tydal, Trine
2006-03-15
The oil companies are continuously investing time and money to ensure optimum regularity on their production facilities. High regularity increases profitability, reduces workload on the offshore organisation and, most important, reduces discharge to air and sea. There are a number of mechanisms and tools available in order to achieve high regularity. Most of these are related to maintenance, system integrity, well operations and process conditions. However, all of these tools will only be effective if quick and proper analysis of fluids and deposits is carried out. In fact, analytical backup is a powerful tool used to maintain optimised oil production, and should as such be given high priority. The present Operator (Hydro Oil and Energy) and the Chemical Supplier (MI Production Chemicals) have developed a cooperation to ensure that analytical backup is provided efficiently to the offshore installations. The Operator's Research and Development (R and D) departments and the Chemical Supplier have complementary specialties in both personnel and equipment, and this is utilized to give the best possible service when required by production technologists or operations. In order for the Operator's Research departments, Health, Safety and Environment (HSE) departments and Operations to approve analytical work performed by the Chemical Supplier, a number of analytical tests are carried out following procedures agreed by both companies. In the present paper, three field case examples of analytical cooperation for managing process problems will be presented: 1) Deposition in a Complex Platform Processing System; 2) Contaminated Production Chemicals; 3) Improved Monitoring of Scale Inhibitor, Suspended Solids and Ions. In each case the Research Centre, Operations and the Chemical Supplier have worked closely together to achieve fast solutions and Best Practice. (author) (tk)
Beam steering in superconducting quarter-wave resonators: An analytical approach
Directory of Open Access Journals (Sweden)
Alberto Facco
2011-07-01
Full Text Available Beam steering in superconducting quarter-wave resonators (QWRs), which is mainly caused by magnetic fields, was pointed out in 2001 in an early work [A. Facco and V. Zviagintsev, in Proceedings of the Particle Accelerator Conference, Chicago, IL, 2001 (IEEE, New York, 2001), p. 1095], where an analytical formula describing it was proposed and the influence of cavity geometry was discussed. Since then, the importance of this effect was recognized and effective correction techniques have been found [P. N. Ostroumov and K. W. Shepard, Phys. Rev. ST Accel. Beams 4, 110101 (2001)]. This phenomenon was further studied in the following years, mainly with numerical methods. In this paper we intend to go back to the original approach and, using well established approximations, derive a simple analytical expression for QWR steering which includes correction methods and reproduces the data starting from a few calculable geometrical constants which characterize every cavity. This expression, of the type of the Panofsky equation, can be a useful tool in the design of superconducting quarter-wave resonators and in the definition of their limits of application with different beams.
How recent history affects perception: the normative approach and its heuristic approximation.
Directory of Open Access Journals (Sweden)
Ofri Raviv
Full Text Available There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the "contraction bias", in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in their failure to fully adapt to novel environments.
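The heuristic model described above, an estimate formed by mixing the current tone with an exponentially decaying average of past tones, reproduces the contraction bias in a few lines. The mixing weight and decay constant below are hypothetical choices for illustration, not the fitted values from the study.

```python
import random

random.seed(0)

def run_trials(n_trials=20000, decay=0.8, w=0.3):
    """Simulate estimates of the first tone as a weighted mix of the true
    tone and an exponentially decaying average of past tones, then report
    the mean bias for small (< 0.5) and large (>= 0.5) magnitudes."""
    history = 0.5                      # running average of past tones
    low_bias, high_bias = [], []
    for _ in range(n_trials):
        s = random.random()            # tone magnitude drawn from [0, 1)
        estimate = (1.0 - w) * s + w * history
        bias = estimate - s
        (low_bias if s < 0.5 else high_bias).append(bias)
        # Exponentially decaying history, as in the proposed model.
        history = decay * history + (1.0 - decay) * s
    return sum(low_bias) / len(low_bias), sum(high_bias) / len(high_bias)

bias_small, bias_large = run_trials()
```

Because the history term sits near the middle of the stimulus range, small magnitudes are pulled up and large magnitudes are pulled down, which is exactly the contraction bias pattern.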
How Recent History Affects Perception: The Normative Approach and Its Heuristic Approximation
Raviv, Ofri; Ahissar, Merav; Loewenstein, Yonatan
2012-01-01
There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of Bayesian inference. We propose a biologically plausible model, in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations, reflected in their failure to fully adapt to novel environments. PMID:23133343
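The comparison-to-a-decaying-average model described in this abstract can be sketched in a few lines. All parameter values (mixing weight, decay rate, noise levels, stimulus statistics) are illustrative assumptions of this sketch, not values fitted in the paper:

```python
import random

def simulate(n_trials=40000, w=0.4, d=0.3, noise=0.3, alpha=0.2, seed=7):
    """Two-tone discrimination with a comparison to a decaying average.

    tone1 is drawn from N(0, 1) in log-magnitude units; tone2 differs by
    +/- d.  The percept of tone1 mixes the true value with an
    exponentially-decaying average of past first tones (weight w), so
    extreme values are contracted toward the long-run mean.
    """
    rng = random.Random(seed)
    running = 0.0  # exponentially-decaying average of past first tones
    hits = {"first_higher": [0, 0], "second_higher": [0, 0]}
    for _ in range(n_trials):
        tone1 = rng.gauss(0.0, 1.0)
        first_higher = rng.random() < 0.5
        tone2 = tone1 - d if first_higher else tone1 + d
        percept1 = (1 - w) * tone1 + w * running + rng.gauss(0.0, noise)
        percept2 = tone2 + rng.gauss(0.0, noise)
        answer_first = percept1 > percept2
        if tone1 < -1.0:  # score only low-magnitude trials
            key = "first_higher" if first_higher else "second_higher"
            hits[key][0] += int(answer_first == first_higher)
            hits[key][1] += 1
        running = (1 - alpha) * running + alpha * tone1
    return {k: c / n for k, (c, n) in hits.items()}

acc = simulate()
```

On low-magnitude trials the running average pulls the percept of the first tone up toward the long-run mean, so "first higher" trials become easier and "second higher" trials harder: the contraction bias.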
An analytical approach to studying non-exponential decay
Petridis, Athanasios; Luban, Marshall; Vermedahl, Jon; Staunton, Lawrence
2002-04-01
Deviations from exponential decay have been numerically established for wavefunctions initially set inside potential wells of finite depth. The survival probability features oscillations about an initially non-exponential median curve. An analytical solution is developed for certain even-parity potentials to further understand this behavior. A connection between single-particle and multi-particle systems is investigated and shown to lead to the known exponential decay law for large systems.
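A hedged toy illustration of one route to non-exponential decay: when the initial state overlaps several decay channels with different widths, the averaged survival probability falls slower than any single exponential at late times (by Jensen's inequality). The rate values below are arbitrary; this is not the paper's finite-well calculation:

```python
import math

def survival_mixture(rates, t):
    """Averaged survival probability of a superposition of decay channels."""
    return sum(math.exp(-g * t) for g in rates) / len(rates)

rates = [0.5 + 0.1 * k for k in range(11)]   # a spread of decay widths
gbar = sum(rates) / len(rates)               # mean width (here 1.0)

# The mixture exceeds the single exponential exp(-gbar*t), and the gap
# grows with t: the effective decay rate decreases at late times.
ratio_early = survival_mixture(rates, 1.0) / math.exp(-gbar * 1.0)
ratio_late = survival_mixture(rates, 20.0) / math.exp(-gbar * 20.0)
```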
Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey
2008-08-01
An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| (approximately kT per unit charge) for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. A detailed accuracy
A conceptual approach to approximate tree root architecture in infinite slope models
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) Type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to adequately implement root systems and the connected hydrological and mechanical attributes in a 3-dimensional slope stability model. To this end, a spatio-dynamic vegetation module must cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope environment. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimensions of the latter define the shape of a taproot-system or a shallow-root-system respectively; ii) elliptic
Byrne, Barbara M.; van de Vijver, Fons J. R.
2017-01-01
Background: The impracticality of using the confirmatory factor analytic (CFA) approach in testing measurement invariance across many groups is now well known. A concerted effort to addressing these encumbrances over the last decade has resulted in a new generation of alternative methodological
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory
Horowitz, Gary L.; Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughp...
Asgharzadeh, Hafez; Borazjani, Iman
2016-01-01
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal Jacobian when the stretching factor was increased, respectively. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future. PMID:28042172
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal Jacobian when the stretching factor was increased, respectively. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
Directory of Open Access Journals (Sweden)
M. M. Rashidi
2012-01-01
Full Text Available In this study, a steady, incompressible, and laminar free-convective flow of a two-dimensional electrically conducting viscoelastic fluid over a moving stretching surface through a porous medium is considered. The boundary-layer equations are derived by considering Boussinesq and boundary-layer approximations. The nonlinear ordinary differential equations for the momentum and energy equations are obtained and solved analytically by using the homotopy analysis method (HAM with two auxiliary parameters for two classes of viscoelastic fluid (Walters’ liquid B and second-grade fluid). It is clear that by the use of the second auxiliary parameter, the straight-line region in the ℏ-curve increases and the convergence accelerates. This research is performed by considering two different boundary conditions: (a) prescribed surface temperature (PST) and (b) prescribed heat flux (PHF). The effect of involved parameters on velocity and temperature is investigated.
Energy Technology Data Exchange (ETDEWEB)
Binotti, M.; Zhu, G.; Gray, A.; Manzollini, G.
2012-04-01
An analytical approach, as an extension of one newly developed method -- First-principle OPTical Intercept Calculation (FirstOPTIC) -- is proposed to treat the geometrical impact of three-dimensional (3-D) effects on parabolic trough optical performance. The mathematical steps of this analytical approach are presented and implemented numerically as part of the suite of FirstOPTIC code. In addition, the new code has been carefully validated against ray-tracing simulation results and available numerical solutions. This new analytical approach to treating 3-D effects will facilitate further understanding and analysis of the optical performance of trough collectors as a function of incidence angle.
An approach for the evaluation of observables in analytic versions of QCD
International Nuclear Information System (INIS)
Cvetic, Gorazd; Valenzuela, Cristian
2006-01-01
We present two variants of an approach for the evaluation of observables in analytic QCD models. The approach is motivated by the skeleton expansion in a certain class of schemes. We then evaluate the Adler function at low energies in one variant of this approach, in various analytic QCD models for the coupling parameter, and compare it with perturbative QCD predictions and the experimental results. We introduce two analytic QCD models for the coupling parameter which reproduce the measured value of the semihadronic τ decay ratio. Further, we evaluate the Bjorken polarized sum rule at low energies in both variants of the evaluation approach, using for the coupling parameter the analytic QCD model of Shirkov and Solovtsov, and compare with values obtained by the evaluation approach of Milton et al. and Shirkov. (letter to the editor)
ANALYTIC CAUSATIVES IN JAVANESE: A LEXICAL- FUNCTIONAL APPROACH
Directory of Open Access Journals (Sweden)
Agus Subiyanto
2014-01-01
Full Text Available Analytic causatives are the type of causatives formed by separate predicates expressing the cause and the effect, that is, the causing notion is realized by a word separate from the word denoting the caused activity. This paper aims to discuss the forms and syntactic structure of analytic causatives in Javanese. To discuss the syntactic structure, the theory of lexical functional grammar (LFG) is employed. The data used in this study are the 'ngoko' level of Javanese of the Surakarta dialect. By using a negation marker and modals as the syntactic operators to test mono- or bi-clausality of analytic causatives, the writer found that analytic causatives in Javanese form biclausal constructions. These constructions have an X-COMP structure, in that the SUBJ of the second verb is controlled by the OBJ of the causative verb (ngawe 'make'). In terms of the constituent structure, analytic causatives have two kinds of structures, which are V-cause OBJ X-COMP and V-cause X-COMP OBJ. Analytic causatives are a type of causative formed by two separate predicates, or two separate words, to express the meanings of cause and effect; that is, the meaning of cause is realized by a word different from the word expressing the meaning of effect. This paper discusses the form and syntactic structure of analytic causatives in Javanese. Lexical Functional Grammar is used to explain the syntactic structure. The data used in this study are the ngoko register of the Surakarta dialect of Javanese. Using negation markers and modality as diagnostics, the author found that analytic causatives in Javanese form biclausal structures. These constructions have an X
Thorsland, Martin N.; Novak, Joseph D.
1974-01-01
Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…
'Texts' and 'signs': Criteria for choosing an analytical approach
Directory of Open Access Journals (Sweden)
Trocuk Irina V.
2014-01-01
Full Text Available At least for two and a half decades, the concepts 'narrative' and 'narrative analysis', 'discourse' and 'discourse analysis', 'text', 'context', 'signs' and 'semiotic analysis' have been extremely popular in the humanities and social sciences, but they still have not received precise definitions and are interpreted quite arbitrarily, based on the conceptual and methodological preferences of the researcher, as well as the goals and objectives of the particular applied or fundamental sociological research project. The author proposes a way to structure the field of textual analysis in sociology that goes far beyond even the broadest interpretations of the content analysis method. Undoubtedly, we need to develop clear criteria for at least the correct naming of different formats of analytical work with textual data; otherwise, we run the risk of writing not scientific articles but rather 'original discursive collages' skillfully juggling an ambiguous and diverse terminology of textual analysis.
DEFF Research Database (Denmark)
Hulbæk, Mette; Primdahl, Jette; Nielsen, Jesper Bo
The Development of a Decision Aid with a Multi Criterial Analytic Approach for Women with Pelvic Organ Prolapse.
Hankel-norm approximation of FIR filters: a descriptor-systems based approach
Halikias, George; Tsoulkas, Vasilis; Pantelous, Athanasios; Milonidis, Efstathios
2010-09-01
We propose a new method for approximating a matrix finite impulse response (FIR) filter by an infinite impulse response (IIR) filter of lower McMillan degree. This is based on a technique for approximating discrete-time descriptor systems and requires only standard linear algebraic routines, while avoiding altogether the solution of two matrix Lyapunov equations which is computationally expensive. Both the optimal and the suboptimal cases are addressed using a unified treatment. A detailed solution is developed in state-space or polynomial form, using only the Markov parameters of the FIR filter which is approximated. The method is finally applied to the design of scalar IIR filters with specified magnitude frequency-response tolerances and approximately linear-phase characteristics. A priori bounds on the magnitude and phase errors are obtained which may be used to select the reduced-order IIR filter order which satisfies the specified design tolerances. The effectiveness of the method is illustrated with a numerical example. Additional applications of the method are also briefly discussed.
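The Hankel singular values that underlie this kind of reduction can be read off directly from the FIR Markov parameters; by the Adamyan-Arov-Krein result, sigma_{k+1} lower-bounds the Hankel-norm error of any degree-k approximant. A minimal numpy sketch (the coefficients are illustrative, and this computes only the singular values, not the approximant itself):

```python
import numpy as np

def hankel_singular_values(h):
    """Hankel singular values of an FIR filter from its Markov parameters.

    The impulse response taps h[1], h[2], ... (h[0] is the direct
    feedthrough) fill a finite Hankel matrix; its singular values
    sigma_1 >= sigma_2 >= ... lower-bound the Hankel-norm error of an
    IIR approximant of McMillan degree k by sigma_{k+1}.
    """
    h = np.asarray(h, dtype=float)
    n = len(h) - 1
    H = np.array([[h[i + j + 1] if i + j + 1 < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    return np.linalg.svd(H, compute_uv=False)

# Illustrative length-8 lowpass-like FIR filter (not from the paper)
h = [0.1, 0.25, 0.3, 0.25, 0.1, 0.05, 0.02, 0.01]
sv = hankel_singular_values(h)
```

A rapid drop in `sv` after index k suggests a degree-k IIR approximant can match the FIR filter closely in the Hankel norm.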
Unemployment and Causes of Hospital Admission Considering Different Analytical Approaches
DEFF Research Database (Denmark)
Berg-Beckhoff, Gabriele; Gulis, Gabriel; Kronborg Bak, Carsten
2016-01-01
The association between unemployment and hospital admission is known, but the causal relationship is still under discussion. The aim of the present analysis is to compare results of a cross-sectional and a cohort approach considering overall hospital admission and hospital admission due to cancer... Social welfare compensated unemployment and both types of disease-specific hospital admission were significantly associated in the cross-sectional analysis. With regard to circulatory disease, the cohort approach suggests that social welfare compensated unemployment might lead to hospital admission due...
An analytic approach to understanding and predicting healthcare coverage.
Delen, Dursun; Fuller, Christie
2013-01-01
The inequality in the level of healthcare coverage among the people in the US is a pressing issue. Unfortunately, many people do not have healthcare coverage and much research is needed to identify the factors leading to this phenomenon. Hence, the goal of this study is to examine the healthcare coverage of individuals by applying popular analytic techniques to a wide variety of predictive factors. A large and feature-rich dataset is used in conjunction with four popular data mining techniques (artificial neural networks, decision trees, support vector machines and logistic regression) to develop prediction models. Applying sensitivity analysis to the developed prediction models, the ranked importance of variables is determined. The experimental results indicated that the most accurate classifier for this phenomenon was the support vector machine, which had an overall classification accuracy of 82.23% on the 10-fold holdout/test sample. The most important predictive factors emerged as income, employment status, education, and marital status. The ability to identify and explain the reasoning of those likely to be without healthcare coverage through the application of accurate classification models can potentially be used in reducing the disparity in health care coverage.
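As a stand-in for the four heavier model families, a minimal logistic-regression sketch on synthetic data shows the same workflow of fitting a classifier and inspecting predictor importance; the synthetic generating rule (income dominating education) is an assumption of this sketch, chosen only to mirror the paper's reported ranking:

```python
import math
import random

def train_logistic(data, lr=0.1, epochs=200):
    """Minimal logistic regression trained by stochastic gradient descent."""
    n_features = len(data[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

# Synthetic individuals: (income, education) scaled to [0, 1]; label 1
# means "has coverage".  The generating rule makes income dominate.
rng = random.Random(0)
data = []
for _ in range(2000):
    income, edu = rng.random(), rng.random()
    p_true = 1.0 / (1.0 + math.exp(-(4.0 * income + 1.5 * edu - 2.5)))
    data.append(([income, edu], int(rng.random() < p_true)))

w, b = train_logistic(data)
predict = lambda x: 1.0 / (1.0 + math.exp(-(b + w[0] * x[0] + w[1] * x[1])))
acc = sum(int((predict(x) > 0.5) == bool(y)) for x, y in data) / len(data)
```

The learned weight on income exceeding the weight on education plays the role of the paper's sensitivity-based variable ranking.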
A tiered analytical approach for investigating poor quality emergency contraceptives.
Directory of Open Access Journals (Sweden)
María Eugenia Monge
Full Text Available Reproductive health has been deleteriously affected by poor quality medicines. Emergency contraceptive pills (ECPs are an important birth control method that women can use after unprotected coitus for reducing the risk of pregnancy. In response to the detection of poor quality ECPs commercially available in the Peruvian market, we developed a tiered multi-platform analytical strategy. In a survey to assess ECP medicine quality in Peru, 7 out of 25 different batches showed inadequate release of levonorgestrel by dissolution testing or improper amounts of active ingredient. One batch was found to contain a wrong active ingredient, with no detectable levonorgestrel. By combining ultrahigh performance liquid chromatography-ion mobility spectrometry-mass spectrometry (UHPLC-IMS-MS and direct analysis in real time MS (DART-MS the unknown compound was identified as the antibiotic sulfamethoxazole. Quantitation by UHPLC-triple quadrupole tandem MS (QqQ-MS/MS indicated that the wrong ingredient was present in the ECP sample at levels which could have significant physiological effects. Further chemical characterization of the poor quality ECP samples included the identification of the excipients by 2D Diffusion-Ordered Nuclear Magnetic Resonance Spectroscopy (DOSY) 1H NMR, indicating the presence of lactose and magnesium stearate.
Analytic Black Hole Perturbation Approach to Gravitational Radiation.
Sasaki, Misao; Tagoshi, Hideyuki
2003-01-01
We review the analytic methods used to perform the post-Newtonian expansion of gravitational waves induced by a particle orbiting a massive, compact body, based on black hole perturbation theory. There exist two different methods of performing the post-Newtonian expansion. Both are based on the Teukolsky equation. In one method, the Teukolsky equation is transformed into a Regge-Wheeler type equation that reduces to the standard Klein-Gordon equation in the flat-space limit, while in the other method (which was introduced by Mano, Suzuki, and Takasugi relatively recently), the Teukolsky equation is used directly in its original form. The former's advantage is that it is intuitively easy to understand how various curved space effects come into play. However, it becomes increasingly complicated when one goes to higher and higher post-Newtonian orders. In contrast, the latter's advantage is that a systematic calculation to higher post-Newtonian orders can be implemented relatively easily, but otherwise, it is so mathematical that it is hard to understand the interplay of higher order terms. In this paper, we review both methods so that their pros and cons may be seen clearly. We also review some results of calculations of gravitational radiation emitted by a particle orbiting a black hole.
Analytical approaches for arsenic determination in air: A critical review
Energy Technology Data Exchange (ETDEWEB)
Sánchez-Rodas, Daniel, E-mail: rodas@uhu.es [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain); Department of Chemistry and Materials Science, University of Huelva, 21071 Huelva (Spain); Sánchez de la Campa, Ana M. [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain); Department of Mining, Mechanic and Energetic Engineering, ETSI, University of Huelva, 21071 Huelva (Spain); Alsioufi, Louay [Centre for Research in Sustainable Chemistry-CIQSO, Associated Unit CSIC-University of Huelva “Atmospheric Pollution”, Campus El Carmen, University of Huelva, 21071 Huelva (Spain)
2015-10-22
This review describes the different steps involved in the determination of arsenic in air, considering the particulate matter (PM) and the gaseous phase. The review focuses on sampling, sample preparation and instrumental analytical techniques for both total arsenic determination and speciation analysis. The origin, concentration and legislation concerning arsenic in ambient air are also considered. The review intends to describe the procedures for sample collection of total suspended particles (TSP) or particles with a certain diameter expressed in microns (e.g. PM10 and PM2.5), or the collection of the gaseous phase containing gaseous arsenic species. Sample digestion of the collecting media for PM is described, indicating proposed and established procedures that use acids or mixtures of acids aided with different heating procedures. The detection techniques are summarized and compared (ICP-MS, ICP-OES and ET-AAS), as well as those techniques capable of direct analysis of the solid sample (PIXE, INAA and XRF). The studies about speciation in PM are also discussed, considering the initial works that employed a cold trap in combination with atomic spectroscopy detectors, or the more recent studies based on chromatography (GC or HPLC) combined with atomic or mass detectors (AFS, ICP-MS and MS). Further trends and challenges about determination of As in air are also addressed. - Highlights: • Review about arsenic in the air. • Sampling, sample treatment and analysis of arsenic in particulate matter and gaseous phase. • Total arsenic determination and arsenic speciation analysis.
Analytical and statistical approaches in the characterization of synthetic polymers
Dimzon, I.K.
2015-01-01
Polymers vary in terms of the monomer/s used; the number, distribution and type of linkage of monomers per molecule; and the side chains and end groups attached. Given this diversity, traditional single-technique approaches to characterization often give limited and inadequate information about a
Radio drama adaptations: an approach towards an analytical methodology
Huwiler, E.
2010-01-01
This article establishes a methodology with which radio drama pieces can be analysed. It thereby integrates all features the art form has to offer: voices, music, noises, but also technical features like cutting and mixing contribute to the narrative that is being told. This approach emphasizes the
An intrinsic robust rank-one-approximation approach for currencyportfolio optimization
Hongxuan Huang; Zhengjun Zhang
2018-01-01
A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem ...
Spacecraft formation control using analytical finite-duration approaches
Ben Larbi, Mohamed Khalil; Stoll, Enrico
2018-03-01
This paper derives a control concept for formation flight (FF) applications assuming circular reference orbits. The paper focuses on a general impulsive control concept for FF which is then extended to the more realistic case of non-impulsive thrust maneuvers. The control concept uses a description of the FF in relative orbital elements (ROE) instead of the classical Cartesian description since the ROE provide a direct insight into key aspects of the relative motion and are particularly suitable for relative orbit control purposes and collision avoidance analysis. Although Gauss' variational equations were first derived to offer a mathematical tool for processing orbit perturbations, they are suitable for several different applications. If the perturbation acceleration is due to a control thrust, Gauss' variational equations show the effect of such a control thrust on the Keplerian orbital elements. Integrating Gauss' variational equations offers a direct relation between velocity increments in the local vertical local horizontal frame and the subsequent change of Keplerian orbital elements. For proximity operations, these equations can be generalized from describing the motion of a single spacecraft to the description of the relative motion of two spacecraft. This will be shown for impulsive and finite-duration maneuvers. Based on that, an analytical tool to estimate the error induced through impulsive maneuver planning is presented. The resulting control schemes are simple and effective and thus also suitable for on-board implementation. Simulations show that the proposed concept improves the timing of the thrust maneuver executions and thus reduces the residual error of the formation control.
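Integrating Gauss' variational equations across an instantaneous burn gives, at first order for a near-circular orbit, closed-form element changes; two of them are sketched below. The Earth constant and orbit radius are illustrative, and the radial, eccentricity-vector and RAAN terms are omitted from this sketch:

```python
import math

MU_EARTH = 3.986004418e14  # gravitational parameter, m^3/s^2

def impulsive_element_changes(a, u, dv_t, dv_n):
    """First-order element changes from an impulsive burn, near-circular orbit.

    Obtained by evaluating Gauss' variational equations at e ~ 0 for an
    instantaneous delta-v (transverse dv_t, normal dv_n, in m/s) applied
    at argument of latitude u.
    """
    n = math.sqrt(MU_EARTH / a**3)     # mean motion, rad/s
    da = 2.0 * dv_t / n                # semi-major axis change, m
    di = math.cos(u) * dv_n / (n * a)  # inclination change, rad
    return da, di

# A 1 m/s tangential burn at a = 7000 km raises the semi-major axis by
# roughly 1.9 km, the familiar LEO rule of thumb.
da, di = impulsive_element_changes(7000e3, 0.0, 1.0, 0.0)
```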
Directory of Open Access Journals (Sweden)
Ishak Altun
2016-01-01
Full Text Available We provide sufficient conditions for the existence of a unique common fixed point for a pair of mappings T,S:X→X, where X is a nonempty set endowed with a certain metric. Moreover, a numerical algorithm is presented in order to approximate such a solution. Our approach differs from the methods usually used in the literature.
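For the classical Banach-contraction special case of such results, the approximating algorithm is plain Picard iteration; a minimal sketch (the paper's common-fixed-point conditions are more general than a single metric-space contraction):

```python
import math

def picard(f, x0, tol=1e-12, max_iter=200):
    """Approximate a fixed point of f by iterating x <- f(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction in a neighborhood of its unique fixed point, so
# the iterates converge to the Dottie number, about 0.739085.
x_star = picard(math.cos, 1.0)
```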
International Nuclear Information System (INIS)
Zimerman, R.W.; Bodvarsson, G.S.
1990-01-01
Various analytical and numerical approaches are presented for the study of unsaturated flow processes in the vicinity of Yucca Mountain, Nevada, the proposed site of an underground radioactive waste repository. Approximate analytical methods are used to study absorption of water from a saturated fracture into the adjacent rock. These solutions are incorporated into a numerical simulator as fracture/matrix interaction terms to treat problems such as flow along a fracture with transverse leakage into the matrix. An automatic fracture/matrix mesh generator is described; it allows for more efficient mesh generation for fractured/porous media, and consequently leads to large savings in computational time and cost. 21 refs., 6 figs
Co-evolving prisoner's dilemma: Performance indicators and analytic approaches
Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.
2017-02-01
Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), non-cooperative strategy has an edge and dominates in a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to include more spatial correlation into a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
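The two performance indicators can be made concrete on a tiny static network: with weak prisoner's-dilemma payoffs (R=1, T=b, S=P=0), the same configuration ranks agents differently under payoff-per-opponent (Case A) and total payoff (Case B). The graph and the value of b below are illustrative:

```python
def payoffs(adj, strategy, b=1.5):
    """Accumulated PD payoffs on a network (R=1, T=b, S=P=0).

    Returns the two indicators from the abstract: total payoff over all
    opponents (Case B) and payoff per opponent (Case A).
    """
    total = {}
    for i, nbrs in adj.items():
        p = 0.0
        for j in nbrs:
            if strategy[i] == "C" and strategy[j] == "C":
                p += 1.0  # mutual cooperation: reward R
            elif strategy[i] == "D" and strategy[j] == "C":
                p += b    # exploiting a cooperator: temptation T
            # S = P = 0 for the remaining pairings
        total[i] = p
    per_opponent = {i: total[i] / len(adj[i]) for i in adj}
    return total, per_opponent

# A cooperator hub (node 0) with three cooperating neighbors, and a lone
# defector (node 4) exploiting one cooperator.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
strategy = {0: "C", 1: "C", 2: "C", 3: "C", 4: "D"}
total, per_opponent = payoffs(adj, strategy)
```

The lone defector tops the per-opponent ranking while the well-connected cooperator tops the total-payoff ranking, matching the abstract's observation that Case A gives defection an edge and Case B gives cooperation an edge.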
Cosmic Reionization after Planck and before JWST: An Analytic Approach
Madau, Piero
2017-12-01
The reionization of cosmic hydrogen marks a critical juncture in the history of structure formation. Here we present a new formulation of the standard reionization equation for the evolution of the volume-averaged H II fraction that is more consistent with the accepted conceptual model of inhomogeneous intergalactic absorption. The revised equation explicitly accounts for the presence of the optically thick “Lyman-limit systems” that are known to determine the mean free path of ionizing radiation after overlap. Integration of this equation provides a better characterization of the timing of reionization by smoothly linking the pre-overlap with the post-overlap phases of such a process. We confirm the validity of the quasi-instantaneous approximation as a predictor of reionization completion/maintenance and discuss new insights on the sources of cosmic reionization using the improved formalism. A constant emission rate into the intergalactic medium (IGM) of three Lyman continuum (LyC) photons per atom per gigayear leads to a reionization history that is consistent with a number of observational constraints on the ionization state of the z = 5–9 universe. While star-forming galaxies can dominate the reionization process if the luminosity-weighted fraction of LyC photons that escape into the IGM, f_esc, exceeds 15% (for a faint magnitude cut-off of the galaxy UV luminosity function of M_lim = -13 and a LyC photon yield per unit 1500 Å luminosity of ξ_ion = 10^25.3 erg^-1 Hz), simple models where the product of the two unknowns f_esc ξ_ion is not evolving with redshift fail to reproduce the changing neutrality of the IGM observed at these epochs.
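The standard reionization equation referred to here has the form dQ/dt = ṅ_ion/⟨n_H⟩ − Q/t̄_rec for the volume-averaged H II fraction Q. A minimal integration sketch, assuming (for illustration only) the abstract's constant source of three LyC photons per atom per gigayear and a constant effective recombination time of 1 Gyr (the paper instead evolves the sink term with redshift via the Lyman-limit systems):

```python
# Minimal sketch of the standard reionization equation
#   dQ/dt = n_dot_ion / <n_H> - Q / t_rec,
# with a constant source of 3 LyC photons per hydrogen atom per Gyr and,
# purely for illustration, a constant recombination time t_rec = 1 Gyr.
def reionization_history(n_dot=3.0, t_rec=1.0, t_end=1.0, dt=1e-4):
    """Euler-integrate the volume-averaged HII fraction Q(t); times in Gyr."""
    q, t = 0.0, 0.0
    while t < t_end:
        dq = (n_dot - q / t_rec) * dt
        q = min(q + dq, 1.0)  # Q is a volume fraction, capped at full overlap
        t += dt
    return q

print(round(reionization_history(t_end=0.2), 3))  # partial ionization before overlap
print(reionization_history(t_end=1.0))            # overlap reached, Q pinned at 1
```

With these toy parameters the analytic solution is Q(t) = 3(1 − e^−t), so overlap (Q = 1) is reached at t ≈ 0.41 Gyr, after which the cap holds Q at unity.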
Relativistic bulk viscosity in the relaxation time approximation: a chaotic velocities approach
International Nuclear Information System (INIS)
García-Perciante, A L; Méndez, A R; Sandoval-Villalbazo, A
2015-01-01
In this short note, the bulk viscosity for a high-temperature dilute gas is calculated by applying the Chapman-Enskog method within Marle's relaxation time approximation. The expression for the stress tensor established in Ref. [1], which makes explicit use of the concept of chaotic velocity, is used to obtain the transport coefficient. The result is compared with previous expressions obtained by other authors using similar methods, and emphasis is placed on the agreement when a corrected relaxation parameter is considered. (paper)
Directory of Open Access Journals (Sweden)
Haiwen Xu
2016-01-01
The time-band approximation model for flight operations recovery following disruption (Bard, Yu, Arguello, IIE Transactions, 33, 931–947, 2001) is constructed by partitioning the recovery period into time bands and by approximating the delay costs associated with the possible flight connections. However, for disruptions occurring in a hub-and-spoke network, a large number of possible flight connections must be constructed throughout the entire flight schedule to obtain an approximate optimum. In this paper, we show the application of the simplex group cycle approach to hub-and-spoke airlines in China, along with the related weighted threshold necessary for controlling the computation time and the flight disruption scope and depth. Subsequently, we present the weighted time-band approximation model for flight operations recovery, which incorporates the simplex group cycle approach. Simple numerical experiments using actual data from Air China show that the weighted time-band approximation model is feasible, and the results of stochastic experiments using actual data from Sichuan Airlines show that the flight disruption and computation time can be controlled by the airline operations control center, which aims to achieve a balance between the flight disruption scope and depth, computation time, and recovery value.
Directory of Open Access Journals (Sweden)
John C. Yi
2016-08-01
We investigate the impact of early adoption of an innovative analytics approach on organizational analytics maturity and sustainability. With sales operation planning involving the accurate determination of physician detailing frequency, multiple product sequencing, nonlinear promotional response functions, and achievement of the right level of share of voice (SOV), an analytical approach was developed by integrating domain knowledge, the pattern-recognition capability of neural networks (NNs), and nonlinear mathematical programming to address these challenges. A pharmaceutical company headquartered in the U.S. championed this initial research in 2005 and became the first major firm to implement the recommendations. The company improved its profitability by 12% when the approach was piloted in a sales district with 481 physicians; it then launched this approach nationally. In 2014, the firm again gave us its data, the performance of the analytical approach, and access to key stakeholders to better understand the changes in the pharmaceutical sales operations landscape, the firm's analytics maturity, and the sustainability of its analytics. Results suggest that being the early adopter of innovation doubled the firm's technology utilization from 2005 to 2014, as well as doubling the firm's ability to continuously improve the sales operations process; it outperformed standard industry practice by 23%. Moreover, the infusion of analytics from the corporate office to sales, improvement in management commitment to analytics, increased communication for continuous process improvement, and the successes from this approach have created the environment for sustainable organizational growth in analytics.
Analytical Approach to Eigen-Emittance Evolution in Storage Rings
Energy Technology Data Exchange (ETDEWEB)
Nash, Boaz; /SLAC
2006-05-16
This dissertation develops the subject of beam evolution in storage rings with nearly uncoupled symplectic linear dynamics. Linear coupling and dissipative/diffusive processes are treated perturbatively. The beam distribution is assumed Gaussian and a function of the invariants. The development requires two pieces: the global invariants and the local stochastic processes which change the emittances, or averages of the invariants. A map-based perturbation theory is described, providing explicit expressions for the invariants near each linear resonance, where small perturbations can have a large effect. Emittance evolution is determined by the damping and diffusion coefficients. The discussion is divided into the cases of uniform and non-uniform stochasticity, synchrotron radiation being an example of the former and intrabeam scattering of the latter. For the uniform case, the beam dynamics is captured by a global diffusion coefficient and damping decrement for each eigen-invariant. Explicit expressions for these quantities near coupling resonances are given. In many cases, they are simply related to the uncoupled values. Near a sum resonance, it is found that one of the damping decrements becomes negative, indicating an anti-damping instability. The formalism is applied to a number of examples, including synchrobetatron coupling caused by a crab cavity, a case of current interest where there is concern about operation near half-integer ν_x. In the non-uniform case, the moment evolution is computed directly, which is illustrated through the example of intrabeam scattering. Our approach to intrabeam scattering damping and diffusion has the advantage of not requiring a loosely defined Coulomb logarithm. It is found that in some situations there is a small difference between our results and the standard approaches such as Bjorken-Mtingwa, which is illustrated by comparison of the two approaches and with a measurement of Au evolution in RHIC. Finally, in combining IBS…
Nuclear emergency response planning based on participatory decision analytic approaches
International Nuclear Information System (INIS)
Sinkko, K.
2004-10-01
This work was undertaken in order to develop methods and techniques for evaluating, systematically and comprehensively, protective action strategies in the case of a nuclear or radiation emergency. This was done in a way that the concerns and issues of all key players related to decisions on protective actions could be aggregated into decision-making transparently and in an equal manner. An approach called the facilitated workshop, based on the theory of decision analysis, was tailored and tested in the planning of actions to be taken. The work builds on case studies in which it was assumed that a hypothetical accident in a nuclear power plant had led to a release of considerable amounts of radionuclides and that therefore different types of protective actions should be considered. Altogether six workshops were organised in which all key players were represented, i.e., the authorities, expert organisations, industry and agricultural producers. The participants were those responsible for preparing advice or presenting matters for those responsible for the formal decision-making. Many preparatory meetings were held with various experts to prepare information for the workshops. It was considered essential that the set-up strictly follow the decision-making process to which the key players are accustomed. Key players or stakeholders comprise responsible administrators and organisations, politicians, as well as representatives of the citizens affected and other persons who will and are likely to take part in decision-making in nuclear emergencies. The realistic nature and the disciplined process of a facilitated workshop and commitment to decision-making yielded insight into many radiation protection issues. The objectives and attributes considered in a decision on protective actions were discussed on many occasions and were defined for the different accident scenarios. In the workshops, intervention levels were derived according to justification and optimisation…
Analytic and probabilistic approaches to dynamics in negative curvature
Peigné, Marc; Sambusetti, Andrea
2014-01-01
The work of E. Hopf and G.A. Hedlund, in the 1930s, on transitivity and ergodicity of the geodesic flow for hyperbolic surfaces, marked the beginning of the investigation of the statistical properties and stochastic behavior of the flow. The first central limit theorem for the geodesic flow was proved in the 1960s by Y. Sinai for compact hyperbolic manifolds. Since then, strong relationships have been found between the fields of ergodic theory, analysis, and geometry. Different approaches and new tools have been developed to study the geodesic flow, including measure theory, thermodynamic formalism, transfer operators, Laplace operators, and Brownian motion. All these different points of view have led to a deep understanding of more general dynamical systems, in particular the so-called Anosov systems, with applications to geometric problems such as counting, equirepartition, mixing, and recurrence properties of the orbits. This book comprises two independent texts that provide a self-contained introduction t...
Examining roles pharmacists assume in disasters: a content analytic approach.
Ford, Heath; Dallas, Cham E; Harris, Curt
2013-12-01
Numerous practice reports recommend roles pharmacists may adopt during disasters. This study examines the peer-reviewed literature for factors that explain the roles pharmacists assume in disasters and the differences in roles and disasters when stratified by time. Quantitative content analysis was used to gather data consisting of words and phrases from peer-reviewed pharmacy literature regarding pharmacists' roles in disasters. Negative binomial regression and Kruskal-Wallis nonparametric models were applied to the data. Pharmacists' roles in disasters have not changed significantly since the 1960s. Pharmaceutical supply remains their preferred role, while patient management and response integration roles decrease in the context of common, geographically widespread disasters. Policy coordination roles, however, significantly increase in nuclear terrorism planning. Pharmacists' adoption of nonpharmaceutical supply roles may represent a problem of accepting a paradigm shift to nontraditional roles. Possible shortages of personnel in future disasters may change pharmacists' approach to disaster management.
Analytical and computational approaches to define the Aspergillus niger secretome
Energy Technology Data Exchange (ETDEWEB)
Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.
2009-03-01
We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.
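The abstract's tallies (proteins detected under all six conditions versus only one) amount to simple set bookkeeping over per-condition detection lists. A toy sketch with hypothetical protein and medium names, not the study's data:

```python
# Toy sketch (hypothetical data): tallying which secreted proteins appear in
# all, some, or only one culture condition, as in the abstract's summary counts.
from collections import Counter

detected = {
    "medium1": {"glaA", "bglA", "xynB"},
    "medium2": {"glaA", "pelA"},
    "medium3": {"glaA", "xynB"},
}
counts = Counter(p for prots in detected.values() for p in prots)
in_all = [p for p, c in counts.items() if c == len(detected)]
in_one = [p for p, c in counts.items() if c == 1]
print(sorted(in_all), sorted(in_one))  # ['glaA'] ['bglA', 'pelA']
```

Applied to the study's six media, the same bookkeeping would yield the reported 39 ubiquitous and 74 condition-specific proteins.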
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Pardue, Harry L.; Woo, Jannie
1984-01-01
Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)
DEFF Research Database (Denmark)
Popov, Vladislav; Lavrinenko, Andrei; Novitsky, Andrey
2016-01-01
We elaborate on an operator approach to effective medium theory for homogenization of periodic multilayered structures composed of nonmagnetic isotropic materials, which is based on equating the spatial evolution operators for the original structure and its effective alternative. We show […] of the conventional effective medium theory depending on the ratio of unit-cell length to the wavelength, the number of unit cells, and the angle of incidence. The operator approach to effective medium theory is applicable for periodic and nonperiodic layered systems, being a fruitful tool in the fields […]
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data have confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], only little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework for the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to the important parameters that influence the test outcomes. First, starting from a pure…
Visual Analytics approach for Lightning data analysis and cell nowcasting
Peters, Stefan; Meng, Liqiu; Betz, Hans-Dieter
2013-04-01
Thunderstorms and their ground effects, such as flash floods, hail, lightning, strong wind and tornadoes, are responsible for most weather damages (Bonelli & Marcacci 2008). It is therefore essential to understand, identify, track and predict lightning cells. An important aspect for decision makers is an appropriate visualization of weather analysis results, including the representation of dynamic lightning cells. This work focuses on the visual analysis of lightning data and lightning cell nowcasting, which aims to detect and understand spatio-temporal patterns of moving thunderstorms. Lightning strikes are described by 3D coordinates and their exact occurrence times. The three-dimensionally resolved total lightning data used in our experiment are provided by the European lightning detection network LINET (Betz et al. 2009). In all previous works, lightning point data, detected lightning cells and derived cell tracks are visualized in 2D; lightning cells are displayed as 2D convex hulls with or without the underlying lightning point data. Due to recent improvements in lightning detection and accuracy, there is a growing demand for multidimensional and interactive visualization, in particular for decision makers. In a first step, lightning cells are identified and tracked. Then an interactive graphical user interface (GUI) is developed to investigate the dynamics of the lightning cells: e.g. changes of cell density, location and extension, as well as merging and splitting behavior in 3D over time. In particular, a space-time cube approach is highlighted along with statistical analysis. Furthermore, lightning cell nowcasting is conducted and visualized. The idea thereby is to predict the following cell features for the next 10-60 minutes: location, centre, extension, density, area, volume, lifetime and cell feature probabilities. The main focus is set on a suitable interactive visualization of the predicted features within the GUI. The developed visual…
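Grouping strikes into cells from point data can be sketched as a simple distance-threshold (single-linkage) clustering; LINET's actual cell-detection algorithm is more sophisticated, so the threshold and coordinates below are purely illustrative:

```python
# Illustrative sketch only: grouping lightning strikes into "cells" by
# single-linkage clustering with a distance threshold (union-find).
# All values are hypothetical, not LINET data.
from math import dist

def cluster_strikes(points, max_gap=10.0):
    """Group 2D strike locations into cells; strikes within max_gap (km) link up."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= max_gap:
                parent[find(i)] = find(j)  # merge the two components
    cells = {}
    for i, p in enumerate(points):
        cells.setdefault(find(i), []).append(p)
    return list(cells.values())

strikes = [(0, 0), (3, 4), (50, 50), (52, 49)]
cells = cluster_strikes(strikes)
print(len(cells))  # 2
```

Running the same clustering on successive time windows and matching overlapping cells would give the cell tracks whose density, extension, merging and splitting the GUI visualizes.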
Directory of Open Access Journals (Sweden)
N. V. V. S. S. Raman
2015-01-01
The pharmaceutical industry has been growing rapidly over the last decade by focusing on product quality, safety, and efficacy. Pharmaceutical firms have increased their product development output using scientific tools such as QbD (Quality by Design) and PAT (Process Analytical Technology). ICH guidelines Q8 to Q11 discuss QbD implementation in API synthetic processes and formulation development; ICH Q11 clearly discusses the QbD approach for API synthesis with examples. Generic companies are implementing the QbD approach in formulation development, and it is even mandatory from the USFDA's perspective. As of now, there are no specific requirements for AQbD (Analytical Quality by Design) and PAT in analytical development from any regulatory agency. In this review, the authors discuss the simultaneous implementation of QbD and AQbD for API synthetic processes and analytical method development. The key AQbD tools are identification of the ATP (Analytical Target Profile), CQAs (Critical Quality Attributes) with risk assessment, method optimization and development with DoE, the MODR (method operable design region), control strategy, AQbD method validation, and Continuous Method Monitoring (CMM). Simultaneous implementation of QbD activities in synthetic and analytical development will provide the highest quality product by minimizing risks, and also provides very good input for the PAT approach.
DIS structure functions in the NLO approximation of the Parton Reggeization Approach
Nefedov, Maxim; Saleev, Vladimir
2017-10-01
The main ideas of NLO calculations in the Parton Reggeization Approach (PRA) are illustrated on the example of the simplest NLO subprocess contributing to DIS. The double counting with the LO contribution is resolved. The problem of matching the NLO results for single-scale observables in PRA onto the corresponding NLO results in the collinear parton model is considered. In the developed framework, the usual NLO PDFs in the -scheme can be consistently used as the collinear input for the NLO calculation in PRA.
DEFF Research Database (Denmark)
Shuai, Hang; Ai, Xiaomeng; Wen, Jinyu
2017-01-01
This paper proposes a hybrid approximate dynamic programming (ADP) approach for the multiple time-period optimal power flow in integrated gas and power systems. ADP successively solves Bellman's equation to make decisions according to the current state of the system, so the updated near-future forecast information is not fully utilized, while model predictive control (MPC), as a look-ahead policy, can integrate the updated forecast in the optimization process. The proposed hybrid optimization approach makes full use of the advantages of ADP and MPC to obtain a better solution by using the real…
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
An analytical approach to the CMB polarization in a spatially closed background
Niazy, Pedram; Abbassi, Amir H.
2018-03-01
The scalar-mode polarization of the cosmic microwave background is derived in a spatially closed universe from the Boltzmann equation using the line-of-sight integral method. The EE and TE multipole coefficients have been extracted analytically by considering some tolerable approximations, such as treating the evolution of perturbations hydrodynamically and assuming a sudden transition from opacity to transparency at the time of last scattering. As the major advantage of analytic expressions, C^S_{EE,ℓ} and C_{TE,ℓ} explicitly show the dependencies on the baryon density Ω_B, matter density Ω_M, curvature Ω_K, primordial spectral index n_s, primordial power spectrum amplitude A_s, optical depth τ_reion, recombination width σ_t and recombination time t_L. Using a realistic set of cosmological parameters taken from a fit to data from Planck, the closed-universe EE and TE power spectra in the scalar mode are compared with numerical results from the CAMB code and also with the latest observational data. The analytic results agree with the numerical ones on large and moderate scales. The peak positions are in good agreement with the numerical results on these scales, while the peak heights agree to within 20% due to the approximations considered in these derivations. Also, several interesting properties of CMB polarization are revealed by the analytic spectra.
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method is developed for studying the light distribution on the output end of a hollow n-sided polygonal light pipe illuminated by a light source with a Gaussian distribution. The mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is demonstrated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and the Gaussian light source. The analytical approach will be useful for designing and choosing hollow n-sided polygonal light pipes, especially for high-power laser beam homogenization techniques.
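The core of the virtual-image construction is mirror reflection of the source across each pipe wall. A minimal 2D sketch for a square pipe (the half-width, source position, and restriction to first-order reflections are assumptions for illustration; the paper's method handles general n-sided polygons, higher reflection orders via transformation matrices, and the removal of void images):

```python
# Hedged sketch: first-order virtual images of a point source inside a square
# light pipe of half-width a, obtained by reflecting the source across each wall.
def reflect_across_line(p, n, d):
    """Reflect point p across the line n·x = d, where n is a unit normal."""
    px, py = p
    nx, ny = n
    t = px * nx + py * ny - d      # signed distance from p to the line
    return (px - 2 * t * nx, py - 2 * t * ny)

a = 1.0                            # half-width of the square pipe (assumed units)
walls = [((1, 0), a), ((-1, 0), a), ((0, 1), a), ((0, -1), a)]
source = (0.2, 0.3)
images = [reflect_across_line(source, n, d) for n, d in walls]
print(images)
```

Summing the (Gaussian-weighted) contributions of the source and all effective virtual images at the output plane yields the light distribution whose uniformity the paper analyzes.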
Salmin, Vadim V.
2017-01-01
Low-thrust flight mechanics is a new chapter of space flight mechanics, encompassing the whole range of trajectory optimization problems, motion control laws, and spacecraft design parameters. Tasks associated with taking additional factors into account in mathematical models of spacecraft motion become increasingly important, as do additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods for finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for the absolute minimum. The developed estimation procedures make it possible to determine how close a found solution is to the optimum and indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between circular non-coplanar orbits, optimizing the control angle and trajectory of the spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth-satellite orbits.
Jolani, Shahab
2018-03-01
In health and medical sciences, multiple imputation (MI) is now becoming popular for obtaining valid inferences in the presence of missing data. However, MI of clustered data such as multicenter studies and individual participant data meta-analysis requires advanced imputation routines that preserve the hierarchical structure of data. In clustered data, a specific challenge is the presence of systematically missing data, when a variable is completely missing in some clusters, and sporadically missing data, when it is partly missing in some clusters. Unfortunately, little is known about how to perform MI when both types of missing data occur simultaneously. We develop a new class of hierarchical imputation approaches based on chained equations methodology that simultaneously impute systematically and sporadically missing data while allowing for arbitrary patterns of missingness among them. Here, we use a random effects imputation model and adopt a simplification over fully Bayesian techniques such as the Gibbs sampler to directly obtain draws of parameters within each step of the chained equations. We justify through theoretical arguments and extensive simulation studies that the proposed imputation methodology has good statistical properties in terms of bias and coverage rates of parameter estimates. An illustration is given in a case study with eight individual participant datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Siragusa, Mattia; Baiocco, Giorgio; Fredericia, Pil M; Friedland, Werner; Groesser, Torsten; Ottolenghi, Andrea; Jensen, Mikael
2017-08-01
COmputation Of Local Electron Release (COOLER) is a software program designed for dosimetry assessment at the cellular/subcellular scale for a given distribution of administered low-energy electron-emitting radionuclides in cellular compartments, which remains a critical step in risk/benefit analysis for advancements in internal radiotherapy. The software is intended to overcome the main limitations of the medical internal radiation dose (MIRD) formalism for calculations of cellular S-values (i.e., dose to a target region in the cell per decay in a given source region), namely, the use of the continuous slowing down approximation (CSDA) and the assumption of a spherical cell geometry. To this aim, we developed an analytical approach, implemented in a MATLAB-based program, using as input simulated data for electron spatial energy deposition directly derived from full Monte Carlo track structure calculations with PARTRAC. Results from PARTRAC calculations on electron range, stopping power and residual energy versus traveled distance curves are presented and, when useful for implementation in COOLER, analytical fit functions are given. Example configurations for cells in different culture conditions (V79 cells in suspension or adherent culture) with realistic geometrical parameters are implemented for use in the tool. Finally, cellular S-value predictions by the newly developed code are presented for different cellular geometries and activity distributions (uniform activity in the nucleus, in the entire cell or on the cell surface), validated against full Monte Carlo calculations with PARTRAC, and compared to MIRD standards, as well as results based on different track structure calculations (Geant4-DNA). The largest discrepancies between COOLER and MIRD predictions were generally found for electrons between 25 and 30 keV, where the magnitude of disagreement in S-values can vary from 50 to 100%, depending on the activity distribution. In calculations for
A Multi-Level Middle-Out Cross-Zooming Approach for Large Graph Analytics
Energy Technology Data Exchange (ETDEWEB)
Wong, Pak C.; Mackey, Patrick S.; Cook, Kristin A.; Rohrer, Randall M.; Foote, Harlan P.; Whiting, Mark A.
2009-10-11
This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is developed in collaboration with researchers and users, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools.
International Nuclear Information System (INIS)
Lorenzana, J.; Grynberg, M.D.; Yu, L.; Yonemitsu, K.; Bishop, A.R.
1992-11-01
The ground state energy and the static and dynamic correlation functions are investigated in the inhomogeneous Hartree-Fock (HF) plus random phase approximation (RPA) approach applied to a one-dimensional spinless fermion model showing self-trapped doping states at the mean field level. Results are compared with homogeneous HF and exact diagonalization. RPA fluctuations added to the generally inhomogeneous HF ground state allow the computation of dynamical correlation functions that compare well with exact diagonalization results. The RPA correction to the ground state energy agrees well with the exact results in the strong- and weak-coupling limits. We also compare it with a related quasi-boson approach. The instability towards self-trapped behaviour is signaled by an RPA mode with frequency approaching zero. (author). 21 refs, 10 figs
DuBois, Frank L.
1999-01-01
Describes use of the analytic hierarchy process (AHP) as a teaching tool to illustrate the complexities of decision making in an international environment. The AHP approach uses managerial input to develop pairwise comparisons of relevant decision criteria to efficiently generate an appropriate solution. (DB)
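The pairwise-comparison step at the heart of AHP can be sketched in a few lines. The 3x3 comparison matrix and its criteria below are hypothetical, not taken from the article; the priority extraction follows Saaty's standard eigenvector method:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three decision criteria:
# entry A[i][j] says how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# The principal eigenvector gives the priority weights (Saaty's method).
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                       # normalize weights to sum to 1

# Consistency check: CI = (lambda_max - n)/(n - 1); RI(n=3) = 0.58.
# A consistency ratio CR below 0.1 is conventionally acceptable.
n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)
cr = ci / 0.58
print(w, cr)
```

The weights then rank the alternatives; inconsistent managerial input shows up immediately as a large consistency ratio.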
Knight, David B.; Brozina, Cory; Novoselich, Brian
2016-01-01
This paper investigates how first-year engineering undergraduates and their instructors describe the potential for learning analytics approaches to contribute to student success. Results of qualitative data collection in a first-year engineering course indicated that both students and instructors emphasized a preference for learning analytics…
Using Configural Frequency Analysis as a Person-Centered Analytic Approach with Categorical Data
Stemmler, Mark; Heine, Jörg-Henrik
2017-01-01
Configural frequency analysis and log-linear modeling are presented as person-centered analytic approaches for the analysis of categorical or categorized data in multi-way contingency tables. Person-centered developmental psychology, based on the holistic interactionistic perspective of the Stockholm working group around David Magnusson and Lars…
Methodological Demonstration of a Text Analytics Approach to Country Logistics System Assessments
DEFF Research Database (Denmark)
Kinra, Aseem; Mukkamala, Raghava Rao; Vatrapu, Ravi
2017-01-01
The purpose of this study is to develop and demonstrate a semi-automated text analytics approach for the identification and categorization of information that can be used for country logistics assessments. In this paper, we develop the methodology on a set of documents for 21 countries using mach...... and the text analyst. Implications are discussed and future work is outlined....
Yogev, Sara; Brett, Jeanne
This paper offers a conceptual framework for the intersection of work and family roles based on the constructs of work involvement and family involvement. The theoretical and empirical literature on the intersection of work and family roles is reviewed from two analytical approaches. From the individual level of analysis, the literature reviewed…
Petrova, N.; Zagidullin, A.; Nefedyev, Y.; Kosulin, V.; Andreev, A.
2017-11-01
Observing physical librations of celestial bodies and the Moon represents one of the astronomical methods of remotely assessing the internal structure of a celestial body without conducting expensive space experiments. The paper contains a review of recent advances in studying the Moon's structure using various methods of obtaining and applying the lunar physical librations (LPhL) data. In this article LPhL simulation methods of assessing viscoelastic and dissipative properties of the lunar body and lunar core parameters, whose existence has been recently confirmed during the seismic data reprocessing of the "Apollo" space mission, are described. Much attention is paid to physical interpretation of the free librations phenomenon and the methods for its determination. In the paper the practical application of the most accurate analytical LPhL tables (Rambaux and Williams, 2011) is discussed. The tables were built on the basis of complex analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421. In the paper an efficiency analysis of two approaches to LPhL theory is conducted: the numerical and the analytical one. It is shown that in lunar investigations the two approaches complement each other in various aspects: the numerical approach provides the high accuracy of the theory required for proper processing of modern observations, while the analytical approach makes it possible to comprehend the essence of the phenomena in lunar rotation and to predict and interpret new effects in observations of the lunar body and lunar core parameters.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Schildcrout, Jonathan S.; Basford, Melissa A.; Pulley, Jill M.; Masys, Daniel R.; Roden, Dan M.; Wang, Deede; Chute, Christopher G.; Kullo, Iftikhar J.; Carrell, David; Peissig, Peggy; Kho, Abel; Denny, Joshua C.
2010-01-01
We describe a two-stage analytical approach for characterizing morbidity profile dissimilarity among patient cohorts using electronic medical records. We capture morbidities using International Statistical Classification of Diseases and Related Health Problems (ICD-9) codes. In the first stage of the approach, separate logistic regression analyses for ICD-9 sections (e.g., “hypertensive disease” or “appendicitis”) are conducted, and the odds ratios that describe adjusted differences in pre...
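The two-stage idea can be sketched on synthetic data. For brevity this sketch uses crude 2x2-table odds ratios rather than the covariate-adjusted logistic regressions of the paper, and the section names, effect sizes, and stage-two summary score are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: cohort label (0/1) and presence of three hypothetical
# ICD-9 sections per patient.
n = 2000
cohort = rng.integers(0, 2, n)
sections = {
    "hypertensive disease": rng.random(n) < np.where(cohort == 1, 0.30, 0.15),
    "appendicitis":         rng.random(n) < 0.05,   # no cohort effect
    "diabetes":             rng.random(n) < np.where(cohort == 1, 0.20, 0.10),
}

# Stage 1: one odds ratio per ICD-9 section comparing the two cohorts.
def odds_ratio(y, g):
    a = np.sum((g == 1) & y); b = np.sum((g == 1) & ~y)
    c = np.sum((g == 0) & y); d = np.sum((g == 0) & ~y)
    return (a * d) / (b * c)

log_ors = {name: np.log(odds_ratio(y, cohort)) for name, y in sections.items()}

# Stage 2: aggregate the per-section effects into a single
# morbidity-profile dissimilarity score between the cohorts
# (mean absolute log odds ratio, an illustrative choice).
dissimilarity = float(np.mean([abs(v) for v in log_ors.values()]))
print(log_ors, round(dissimilarity, 3))
```

Sections with no cohort effect contribute log odds ratios near zero, while genuinely different morbidities drive the stage-two score up.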
Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania
2015-04-15
In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.
Amin, Talha
2013-01-01
In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
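The uncertainty measure and a greedy construction of a γ-decision rule can be sketched directly; the toy decision table below is invented for illustration (the paper's experiments use UCI Machine Learning Repository tables):

```python
from collections import Counter

# Toy decision table: each row is (attribute values..., decision).
# Uncertainty R(T) = number of rows minus rows with the most common decision.
table = [
    (1, 0, 'a'), (1, 1, 'a'), (0, 1, 'b'), (0, 0, 'b'), (1, 0, 'a'),
]

def uncertainty(rows):
    if not rows:
        return 0
    counts = Counter(r[-1] for r in rows)
    return len(rows) - max(counts.values())

# Greedy construction of a gamma-decision rule for a given row:
# repeatedly add the condition (attribute = value from that row) that
# minimizes the uncertainty of the localized subtable, until it is <= gamma.
def greedy_rule(rows, row, gamma=0):
    sub, conds, free = rows, [], list(range(len(row) - 1))
    while uncertainty(sub) > gamma:
        best = min(free, key=lambda i:
                   uncertainty([r for r in sub if r[i] == row[i]]))
        sub = [r for r in sub if r[best] == row[best]]
        conds.append((best, row[best]))
        free.remove(best)
    decision = Counter(r[-1] for r in sub).most_common(1)[0][0]
    return conds, decision

print(uncertainty(table))            # 2: five rows, majority decision 'a' (3 rows)
print(greedy_rule(table, table[0]))  # one condition suffices for gamma = 0
```

Raising γ shortens the rules at the cost of allowing some misclassified rows in the localized subtable, which is the trade-off the paper studies.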
International Nuclear Information System (INIS)
Klüser, Lars; Di Biagio, Claudia; Kleiber, Paul D.; Formenti, Paola; Grassian, Vicki H.
2016-01-01
Optical properties (extinction efficiency, single scattering albedo, asymmetry parameter and scattering phase function) of five different desert dust minerals have been calculated with an asymptotic approximation approach (AAA) for non-spherical particles. The AAA method combines Rayleigh-limit approximations with an asymptotic geometric optics solution in a simple and straightforward formulation. The simulated extinction spectra have been compared with classical Lorenz–Mie calculations as well as with laboratory measurements of dust extinction. This comparison has been done for single minerals and with bulk dust samples collected from desert environments. It is shown that the non-spherical asymptotic approximation improves the spectral extinction pattern, including the position of the extinction peaks, compared to the Lorenz–Mie calculations for spherical particles. Squared correlation coefficients from the asymptotic approach range from 0.84 to 0.96 for the mineral components, whereas the corresponding numbers for Lorenz–Mie simulations range from 0.54 to 0.85. Moreover, the blue shift typically found in Lorenz–Mie results is not present in the AAA simulations. The comparison of spectra simulated with the AAA for different shape assumptions suggests that the differences mainly stem from the assumption of the particle shape and not from the formulation of the method itself. It has been shown that the choice of particle shape strongly impacts the quality of the simulations. Additionally, the comparison of simulated extinction spectra with bulk dust measurements indicates that within airborne dust the composition may be inhomogeneous over the range of dust particle sizes, making the calculation of reliable radiative properties of desert dust even more complex. Highlights: • A fast and simple method for estimating optical properties of dust. • Can be used with non-spherical particles of arbitrary size distributions. • Comparison with Mie simulations and
Analytical Approach for Load Capacity of Large Diameter Bored Piles Using Field Data
Directory of Open Access Journals (Sweden)
Alaa Dawood Salman
2015-08-01
Full Text Available An analytical approach based on field data was used to determine the strength capacity of large diameter bored type piles. Deformations and settlements were also evaluated for both vertical and lateral loadings. The analytical predictions are compared to field data obtained from a prototype test pile used at the Tharthar-Tigris canal bridge and were found to be in acceptable agreement, with 12% deviation. Following ASTM standard D1143M-07e1 (2010), a test schedule of five loading cycles was proposed for vertical loads, and a series of cyclic loads was applied to simulate horizontal loading. The load test results and analytical data for the 1.95 m diameter test pile showed that it can efficiently carry a working load of 450 tons. The calculated lateral displacements, based on a specified coefficient of subgrade reaction, are compared to the values measured by dial gauges and strain gauges placed at various locations along the length of the pile.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
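The classifier-based approximation of a likelihood ratio can be illustrated on a toy problem where the exact ratio is known. The Gaussian "simulator" settings and the plain-NumPy logistic regression below are illustrative stand-ins, not the MEM/ABC machinery of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from two "simulator" settings: p0 = N(0,1), p1 = N(1,1).
# Here the exact likelihood ratio p1(x)/p0(x) = exp(x - 0.5) is known,
# so the classifier-based approximation can be checked against it.
x0 = rng.normal(0.0, 1.0, 5000)
x1 = rng.normal(1.0, 1.0, 5000)
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(5000), np.ones(5000)])

# Logistic regression s(x) = sigmoid(w0 + w1*x) fit by gradient descent.
w0, w1 = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w0 + w1 * X)))
    g0, g1 = np.mean(s - y), np.mean((s - y) * X)
    w0 -= 0.5 * g0
    w1 -= 0.5 * g1

# Likelihood-ratio trick: p1/p0 is approximated by s/(1-s) = exp(w0 + w1*x).
est = np.exp(w0 + w1 * 1.0)     # estimated ratio at x = 1
true = np.exp(1.0 - 0.5)        # exact ratio at x = 1
print(w0, w1, est, true)
```

With equal-sized samples the fitted classifier recovers the log-ratio coefficients (here roughly w1 ≈ 1, w0 ≈ -0.5), which is what makes the trick usable when only a simulator is available.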
Energy Technology Data Exchange (ETDEWEB)
Randrianalisoa, Jaona [Centre de Thermique de Lyon, Villeurbanne (France); CETHIL UMR5008, CNRS, INSA-Lyon Universite Lyon 1, Villeurbanne (France); Baillis, Dominique [CETHIL UMR5008, CNRS, INSA-Lyon Universite Lyon 1, Villeurbanne (France)
2009-10-15
A combination of analytical and phonon-tracking approaches is proposed to predict the thermal conductivity of porous nanostructured thick materials. The analytical approach derives the thermal conductivity as a function of the intrinsic properties of the material and the properties characterizing phonon interaction with pore walls. (Abstract Copyright [2009], Wiley Periodicals, Inc.)
Energy Technology Data Exchange (ETDEWEB)
Kandemir, B S; Keskin, M [Department of Physics, Faculty of Sciences, Ankara University, 06100 Tandogan, Ankara (Turkey)
2008-08-13
In this paper, exact analytical expressions for the entire phonon spectra in single-walled carbon nanotubes with zigzag geometry are presented by using a new approach, originally developed by Kandemir and Altanhan. This approach is based on the concept of construction of a classical lattice Hamiltonian of single-walled carbon nanotubes, wherein the nearest and next nearest neighbor and bond bending interactions are all included, then its quantization and finally diagonalization of the resulting second quantized Hamiltonian. Furthermore, within this context, explicit analytical expressions for the relevant electron-phonon interaction coefficients are also investigated for single-walled carbon nanotubes having this geometry, by the phonon modulation of the hopping interaction.
International Nuclear Information System (INIS)
Kandemir, B S; Keskin, M
2008-01-01
In this paper, exact analytical expressions for the entire phonon spectra in single-walled carbon nanotubes with zigzag geometry are presented by using a new approach, originally developed by Kandemir and Altanhan. This approach is based on the concept of construction of a classical lattice Hamiltonian of single-walled carbon nanotubes, wherein the nearest and next nearest neighbor and bond bending interactions are all included, then its quantization and finally diagonalization of the resulting second quantized Hamiltonian. Furthermore, within this context, explicit analytical expressions for the relevant electron-phonon interaction coefficients are also investigated for single-walled carbon nanotubes having this geometry, by the phonon modulation of the hopping interaction
Directory of Open Access Journals (Sweden)
H. K. Hetman
2011-01-01
Full Text Available A number of functions for approximating the universal magnetic curve and its derivatives have been studied, together with their accuracy and conformity to the requirements put forward by the authors.
Pakkar, Mohammad Sadegh
2017-01-01
Purpose: This paper aims to propose an integration of the analytic hierarchy process (AHP) and data envelopment analysis (DEA) methods in a multiattribute grey relational analysis (GRA) methodology in which the attribute weights are completely unknown and the attribute values take the form of fuzzy numbers. Design/methodology/approach: This research has been organized to proceed along the following steps: computing the grey relational coefficients for alternatives with respect to each attribu...
International Nuclear Information System (INIS)
1979-01-01
Analytical procedures were refined for the Structural Assessment Approach for assessing the Material Control and Accounting systems at facilities that contain special nuclear material. Requirements were established for an efficient, feasible algorithm to be used in evaluating system performance measures that involve the probability of detection. Algorithm requirements to calculate the probability of detection for a given type of adversary and the target set are described
HPTLC fingerprint: a modern approach for the analytical determination of botanicals
Directory of Open Access Journals (Sweden)
Marcello Nicoletti
2011-07-01
Full Text Available Rapid and reliable methods for detecting the quality of plant raw materials and botanicals are urgently needed. The recent HPTLC instrumentation allows one to obtain fingerprints useful for ascertaining identity and composition. The results of direct application of HPTLC devices in selected cases, using the fingerprint approach, are reported here, considered and compared with other methods. HPTLC is proposed as a useful tool for analytical validation of the novel forms of natural products.
HPTLC fingerprint: a modern approach for the analytical determination of botanicals
Directory of Open Access Journals (Sweden)
Marcello Nicoletti
2011-10-01
Full Text Available Rapid and reliable methods for detecting the quality of plant raw materials and botanicals are urgently needed. The recent HPTLC instrumentation allows one to obtain fingerprints useful for ascertaining identity and composition. The results of direct application of HPTLC devices in selected cases, using the fingerprint approach, are reported here, considered and compared with other methods. HPTLC is proposed as a useful tool for analytical validation of the novel forms of natural products.
HPTLC fingerprint: a modern approach for the analytical determination of botanicals
Nicoletti,Marcello
2011-01-01
Rapid and reliable methods for detecting the quality of plant raw materials and botanicals are urgently needed. The recent HPTLC instrumentation allows one to obtain fingerprints useful for ascertaining identity and composition. The results of direct application of HPTLC devices in selected cases, using the fingerprint approach, are reported here, considered and compared with other methods. HPTLC is proposed as a useful tool for analytical validation of the novel forms of natural produ...
A Streamlined Approach to Solving Simple and Complex Kinetic Systems Analytically
Andraos, John
1999-11-01
The use of Laplace transforms and integration techniques for the solution of simultaneous differential equations is demonstrated for obtaining the analytical solutions of simple and complex kinetic systems well known to students of the chemical sciences. These techniques learned in core first- and second-year mathematics courses provide a firm grounding in students' ability to understand the derivation of various rate expressions, and illustrate the value of cross-disciplinary approaches to education between the chemical and mathematical sciences.
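The Laplace-transform route the paper teaches can be sketched for the classic consecutive reaction A → B → C, a standard textbook system chosen here for illustration (not necessarily one of the paper's worked examples); SymPy carries out the inverse transform:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
k1, k2, A0 = sp.symbols('k1 k2 A0', positive=True)

# Consecutive first-order kinetics A -> B -> C with A(0) = A0, B(0) = 0.
# Laplace-transforming A' = -k1*A and B' = k1*A - k2*B turns the
# simultaneous ODEs into simple algebra in the s-domain:
A_s = A0 / (s + k1)
B_s = k1 * A_s / (s + k2)

# Inverting recovers the familiar closed-form concentration profiles.
A_t = sp.inverse_laplace_transform(A_s, s, t)
B_t = sp.simplify(sp.inverse_laplace_transform(B_s, s, t))

print(A_t)   # A0*exp(-k1*t)
# B_t simplifies to A0*k1*(exp(-k1*t) - exp(-k2*t))/(k2 - k1)
print(B_t)
```

The same transform-then-invert pattern extends mechanically to larger kinetic networks, which is the pedagogical point of the approach.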
Rebenda, Josef; Šmarda, Zdeněk
2017-07-01
In the paper, we propose a correct and efficient semi-analytical approach to solve the initial value problem for systems of functional differential equations with delay. The idea is to combine the method of steps and the differential transformation method (DTM). In the latter, formulas for proportional arguments and nonlinear terms are used. An example of using this technique for a system with constant and proportional delays is presented.
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
Directory of Open Access Journals (Sweden)
S. Wüst
2017-09-01
Full Text Available Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals – the subtraction of the spline from the original time series – are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach
Nandi, Ashutosh; Pandey, Nilesh
2017-11-01
An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using the Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach to solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict the channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with TCAD device simulations.
Directory of Open Access Journals (Sweden)
Larissa B. Del Piero
2016-06-01
Full Text Available Early neuroimaging studies suggested that adolescents show early development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation between adolescents and either children or adults. We delineated different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize the diverse results. The methods for assessing change (i.e., analytical approach and cohort characteristics (e.g., age range were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and non-linear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are also made for evaluating gender and biological markers of development beyond chronological age.
Terasaki, J.
2018-03-01
The nuclear matrix elements (NMEs) of the neutrinoless and two-neutrino double-β decays of 48Ca are calculated by the quasiparticle random-phase approximation (QRPA) with emphasis on consistency examinations of this calculation method. The main new examination points are the consistency of two ways to treat the intermediate-state energies in the two-neutrino double-β NME and comparison with the experimental charge-exchange strength functions obtained from 48Ca(p ,n ) and 48Ti(n ,p ) reactions. No decisive problem preventing use of the QRPA approach is found. The obtained neutrinoless double-β NME, adjusted by the ratio of the effective and bare axial-vector current couplings, is the lowest among those calculated by different groups and close to one of the QRPA values obtained by another group.
Design of laser-generated shockwave experiments. An approach using analytic models
International Nuclear Information System (INIS)
Lee, Y.T.; Trainor, R.J.
1980-01-01
Two of the target-physics phenomena which must be understood before a clean experiment can be confidently performed are preheating due to suprathermal electrons and shock decay due to a shock-rarefaction interaction. Simple analytic models are described for these two processes and the predictions of these models are compared with those of the LASNEX fluid physics code. We have approached this work not with the view of surpassing or even approaching the reliability of the code calculations, but rather with the aim of providing simple models which may be used for quick parameter-sensitivity evaluations, while providing physical insight into the problems
Tschandl, P; Kittler, H; Schmid, K; Zalaudek, I; Argenziano, G
2015-06-01
There are two strategies for approaching the dermatoscopic diagnosis of pigmented skin tumours, namely the verbal-based analytic and the more visual-global heuristic method. It is not known whether one or the other is more efficient in teaching dermatoscopy. To compare the two teaching methods in short-term training of dermatoscopy for medical students, fifty-seven medical students in the last year of the curriculum were given a 1-h lecture of either the heuristic-based or the analytic-based teaching of dermatoscopy. Before and after this session, they were shown the same 50 lesions and asked to diagnose them and to rate the chance of malignancy. Test lesions consisted of melanomas, basal cell carcinomas, nevi, seborrhoeic keratoses, benign vascular tumours and dermatofibromas. Performance measures were diagnostic accuracy regarding malignancy, as measured by the area under receiver operating characteristic curves (range: 0-1), as well as per cent correct diagnoses (range: 0-100%). Diagnostic accuracy and per cent correct diagnoses increased by +0.21 and +32.9% (heuristic teaching) and +0.19 and +35.7% (analytic teaching), respectively (P for all). The choice of the heuristic or the analytic method does not influence this effect in short-term training using common pigmented skin lesions. © 2014 European Academy of Dermatology and Venereology.
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to optimal, with much faster solving times than those obtained from the conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth
Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.
2018-04-01
A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).
Directory of Open Access Journals (Sweden)
Zanzi Luigi
2010-01-01
The two-step approach is a fast algorithm for 3D migration originally introduced to process zero-offset seismic data. Its application to monostatic GPR (Ground Penetrating Radar) data is straightforward. A direct extension of the algorithm for the application to bistatic radar data is possible provided that the TX-RX azimuth is constant. As for the zero-offset case, the two-step operator is exactly equivalent to the one-step 3D operator for a constant velocity medium and is an approximation of the one-step 3D operator for a medium where the velocity varies vertically. Two methods are explored for handling a heterogeneous medium; both are suitable for the application of the two-step approach, and they are compared in terms of accuracy of the final 3D operator. The aperture of the two-step operator is discussed, and a solution is proposed to optimize its shape. The analysis is of interest for any NDT application where the medium is expected to be heterogeneous, or where the antenna is not in direct contact with the medium (e.g., NDT of artworks, humanitarian demining, radar with air-launched antennas).
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
Analytical Features: A Knowledge-Based Approach to Audio Feature Generation
Directory of Open Access Journals (Sweden)
Pachet François
2009-01-01
We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.
2015-03-01
Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.
Takata, Masashi; Takagi, Kenichiro; Nagase, Takashi; Kobayashi, Takashi; Naito, Hiroyoshi
2016-04-01
An analytical expression for impedance spectra in the case of double injection (both electrons and holes are injected into an organic semiconductor thin film) has been derived from the basic transport equations (the current density equation, the continuity equation and Poisson's equation). Capacitance-frequency characteristics calculated from the analytical expression have been examined at different recombination constants and different values of mobility balance, defined as the ratio of electron mobility to hole mobility. Negative capacitance appears when the recombination constant is lower than the Langevin recombination constant and when the value of the mobility balance approaches unity. These results are consistent with the numerical results obtained by a device simulator (Atlas, Silvaco).
Pintér, Balázs; Erdélyi, R.
2018-01-01
Solar fundamental (f) acoustic mode oscillations are investigated analytically in a magnetohydrodynamic (MHD) model. The model consists of three layers in planar geometry, representing the solar interior, the magnetic atmosphere, and a transitional layer sandwiched between them. Since we focus on the fundamental mode here, we assume the plasma is incompressible. A horizontal, canopy-like, magnetic field is introduced to the atmosphere, in which degenerated slow MHD waves can exist. The global (f-mode) oscillations can couple to local atmospheric Alfvén waves, resulting, e.g., in a frequency shift of the oscillations. The dispersion relation of the global oscillation mode is derived, and is solved analytically for the thin-transitional layer approximation and for the weak-field approximation. Analytical formulae are also provided for the frequency shifts due to the presence of a thin transitional layer and a weak atmospheric magnetic field. The analytical results generally indicate that, compared to the fundamental value (ω =√{ gk }), the mode frequency is reduced by the presence of an atmosphere by a few per cent. A thin transitional layer reduces the eigen-frequencies further by about an additional hundred microhertz. Finally, a weak atmospheric magnetic field can slightly, by a few percent, increase the frequency of the eigen-mode. Stronger magnetic fields, however, can increase the f-mode frequency by even up to ten per cent, which cannot be seen in observed data. The presence of a magnetic atmosphere in the three-layer model also introduces non-permitted propagation windows in the frequency spectrum; here, f-mode oscillations cannot exist with certain values of the harmonic degree. The eigen-frequencies can be sensitive to the background physical parameters, such as an atmospheric density scale-height or the rate of the plasma density drop at the photosphere. Such information, if ever observed with high-resolution instrumentation and inverted, could help to
Integrated assessment of the global warming problem: A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1994-12-01
The multi-disciplinary character of the global warming problem asks for an integrated assessment approach for ordering and combining the various physical, ecological, economical, and sociological results. The Netherlands initiated their own National Research Program (NRP) on Global Air Pollution and Climate Change. The first phase (NRP-1) identified the integration theme as one of five central research themes. The second phase (NRP-2) shows a growing concern for integrated assessment issues. The current two-year research project 'Characterizing the risks: a comparative analysis of the risks of global warming and of relevant policy options', which started in September 1993, comes under the integrated assessment part of the Dutch NRP. The first part of the interim report describes the search for an integrated assessment methodology. It starts with emphasizing the need for integrated assessment at a relatively high level of aggregation and from a policy point of view. The conclusion will be that a decision-analytical approach might fit the purpose of a policy-oriented integrated modeling of the global warming problem. The discussion proceeds with an account of decision analysis and its explicit incorporation and analysis of uncertainty. Then influence diagrams, a relatively recent development in decision analysis, are introduced as a useful decision-analytical approach for integrated assessment. Finally, a software environment for creating and analyzing complex influence diagram models is discussed. The second part of the interim report provides a first, provisional integrated modeling of the global warming problem, emphasizing the illustration of the decision-analytical approach. Major problem elements are identified and an initial problem structure is developed. The problem structure is described in terms of hierarchical influence diagrams. At some places the qualitative structure is filled with quantitative data.
Carter, James L.; Resh, Vincent H.
2013-01-01
Biomonitoring programs based on benthic macroinvertebrates are well-established worldwide. Their value, however, depends on the appropriateness of the analytical techniques used. All United States State benthic macroinvertebrate biomonitoring programs were surveyed regarding the purposes of their programs, quality-assurance and quality-control procedures used, habitat and water-chemistry data collected, treatment of macroinvertebrate data prior to analysis, statistical methods used, and data-storage considerations. State regulatory mandates (59 percent of programs), biotic index development (17 percent), and Federal requirements (15 percent) were the most frequently reported purposes of State programs, with the specific tasks of satisfying the requirements for 305b/303d reports (89 percent), establishment and monitoring of total maximum daily loads, and developing biocriteria being the purposes most often mentioned. Most states establish reference sites (81 percent), but classify them using State-specific methods. The technique most often used for determining the appropriateness of a reference site was Best Professional Judgment (86 percent of these states). Macroinvertebrate samples are almost always collected by using a D-frame net, and duplicate samples are collected from approximately 10 percent of sites for quality-assurance and quality-control purposes. Most programs have macroinvertebrate samples processed by contractors (53 percent) and have identifications confirmed by a second taxonomist (85 percent). All States collect habitat data, with most using the Rapid Bioassessment Protocol visual-assessment approach, which requires ~1 h/site. Dissolved oxygen, pH, and conductivity are measured in more than 90 percent of programs. Wide variation exists in which taxa are excluded from analyses and in the level of taxonomic resolution used. Species traits, such as functional feeding groups, are commonly used (96 percent), as are tolerance values for organic pollution.
Al-Ababneh, Nedal
2014-07-01
We propose an accurate analytical model to calculate the optical crosstalk of a first-order free-space optical interconnect system that uses microlenses with circular apertures. The proposed model is derived by evaluating the resulting finite integral in terms of an infinite series of Bessel functions. Compared to the model that uses complex Gaussian functions to expand the aperture function, the proposed model is superior in estimating the crosstalk and provides more accurate results. Moreover, the proposed model gives results close to those of the numerical model with superior computational efficiency.
Energy Technology Data Exchange (ETDEWEB)
Yepes, P [Rice University, Houston, TX (United States); UT MD Anderson Cancer Center, Houston, TX (United States); Titt, U; Mirkovic, D; Liu, A; Frank, S; Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)
2016-06-15
Purpose: Evaluate the differences in dose distributions between the proton analytic semi-empirical dose calculation algorithm used in the clinic and Monte Carlo calculations for a sample of 50 head-and-neck (H&N) patients, and estimate the potential clinical significance of the differences. Methods: A cohort of 50 H&N patients, treated at the University of Texas Cancer Center with Intensity Modulated Proton Therapy (IMPT), was selected for evaluation of the clinical significance of approximations in computed dose distributions. The H&N site was selected because of the highly inhomogeneous nature of the anatomy. The Fast Dose Calculator (FDC), a fast track-repeating accelerated Monte Carlo algorithm for proton therapy, was utilized for the calculation of dose distributions delivered during treatment plans. Because of its short processing time, FDC allows for the processing of large cohorts of patients. FDC has been validated against GEANT4, a full Monte Carlo system, and against measurements in water and in inhomogeneous phantoms. A gamma-index analysis, DVHs, EUDs, and TCPs and NTCPs computed using published models were utilized to evaluate the differences between the Treatment Planning System (TPS) and FDC. Results: The Monte Carlo results systematically predict lower dose delivered in the target. The observed differences can be as large as 8 Gy and should have a clinical impact. Gamma analysis also showed significant differences between both approaches, especially for the target volumes. Conclusion: Monte Carlo calculation with fast algorithms is practical and should be considered for the clinic, at least as a treatment plan verification tool.
Schempf, Ashley H; Kaufman, Jay S
2012-10-01
A common epidemiologic objective is to evaluate the contribution of residential context to individual-level disparities by race or socioeconomic position. We reviewed analytic strategies to account for the total (observed and unobserved factors) contribution of environmental context to health inequalities, including conventional fixed effects (FE) and hybrid FE implemented within a random effects (RE) or a marginal model. To illustrate results and limitations of the various analytic approaches of accounting for the total contextual component of health disparities, we used data on births nested within neighborhoods as an applied example of evaluating neighborhood confounding of racial disparities in gestational age at birth, including both a continuous and a binary outcome. Ordinary and RE models provided disparity estimates that can be substantially biased in the presence of neighborhood confounding. Both FE and hybrid FE models can account for cluster level confounding and provide disparity estimates unconfounded by neighborhood, with the latter having greater flexibility in allowing estimation of neighborhood-level effects and intercept/slope variability when implemented in a RE specification. Given the range of models that can be implemented in a hybrid approach and the frequent goal of accounting for contextual confounding, this approach should be used more often. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Euro Beinat
2012-11-01
In this paper we present a visual analytics approach for deriving spatio-temporal patterns of collective human mobility from a vast mobile network traffic data set. More than 88 million movements between pairs of radio cells—so-called handovers—served as a proxy for more than two months of mobility within four urban test areas in Northern Italy. In contrast to previous work, our approach relies entirely on visualization and mapping techniques, implemented in several software applications. We purposefully avoid statistical or probabilistic modeling and, nonetheless, reveal characteristic and exceptional mobility patterns. The results show, for example, surprising similarities and symmetries amongst the total mobility and people flows between the test areas. Moreover, the exceptional patterns detected can be associated with real-world events such as soccer matches. We conclude that the visual analytics approach presented can shed new light on large-scale collective urban mobility behavior and thus helps to better understand the “pulse” of dynamic urban systems.
The Flipped MOOC: Using Gamification and Learning Analytics in MOOC Design—A Conceptual Approach
Directory of Open Access Journals (Sweden)
Roland Klemke
2018-02-01
Recently, research has highlighted the potential of Massive Open Online Courses (MOOCs) for education, as well as their drawbacks, which are well known. Several studies state that the main limitations of MOOCs are low completion and high dropout rates of participants. MOOCs also suffer from a lack of participant engagement and personalization; moreover, although several formats and types of MOOCs are reported in the literature, the majority of them contain a considerable amount of content that is mainly presented in a video format. This is in contrast to the results reported in other educational settings, where engagement and active participation are identified as success factors. We present the results of a study that involved educational experts and learning scientists, giving new and interesting insights towards the conceptualization of a new design approach, the flipped MOOC, which applies the flipped classroom approach to MOOC design and makes use of gamification and learning analytics. We found important indications, applicable to the concept of a flipped MOOC, which entails turning MOOCs from mainly content-oriented delivery machines into personalized, interactive, and engaging learning environments. Our findings support the idea that MOOCs can be enriched by the orchestration of a flipped classroom approach in combination with the support of gamification and learning analytics.
Directory of Open Access Journals (Sweden)
Thomas M Kessler
BACKGROUND: Overactive bladder (OAB) affects the lives of millions of people worldwide and antimuscarinics are the pharmacological treatment of choice. Meta-analyses of all currently used antimuscarinics for treating OAB found similar efficacy, making the choice dependent on their adverse event profiles. However, conventional meta-analyses often fail to quantify and compare adverse events across different drugs, dosages, formulations, and routes of administration. In addition, the assessment of the broad variety of adverse events is dissatisfying. Our aim was to compare adverse events of antimuscarinics using a network meta-analytic approach that overcomes shortcomings of conventional analyses. METHODS: The Cochrane Incontinence Group Specialized Trials Register, previous systematic reviews, conference abstracts, book chapters, and reference lists of relevant articles were searched. Eligible studies included randomized controlled trials comparing at least one antimuscarinic for treating OAB with placebo or with another antimuscarinic, and adverse events as outcome measures. Two authors independently extracted data. A network meta-analytic approach was applied allowing for joint assessment of all adverse events of all currently used antimuscarinics while fully maintaining randomization. RESULTS: 69 trials enrolling 26'229 patients were included. Similar overall adverse event profiles were found for darifenacin, fesoterodine, transdermal oxybutynin, propiverine, solifenacin, tolterodine, and trospium chloride, but not for orally administered oxybutynin, when currently used starting dosages were compared. CONCLUSIONS: The proposed generally applicable transparent network meta-analytic approach summarizes adverse events in an easy to grasp way, allowing straightforward benchmarking of antimuscarinics for treating OAB in clinical practice. Most currently used antimuscarinics seem to be equivalent first-choice drugs to start the treatment of OAB, except for orally administered oxybutynin.
Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach
Directory of Open Access Journals (Sweden)
Junjun Yin
2016-10-01
Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Regarding concerns of infringement on individual privacy, such as with mobile phone call records with restricted access, the dataset is collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering large volumes of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multi-level spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.
An analysis of beam parameters on proton-acoustic waves through an analytic approach.
Kipergil, Esra Aytac; Erkol, Hakan; Kaya, Serhat; Gulsen, Gultekin; Unlu, Mehmet Burcin
2017-06-21
It has been reported that acoustic waves are generated when a high-energy pulsed proton beam is deposited in a small volume within tissue. One possible application of proton-induced acoustics is to get real-time feedback for intra-treatment adjustments by monitoring such acoustic waves. A high spatial resolution in ultrasound imaging may reduce proton range uncertainty. Thus, it is crucial to understand the dependence of the acoustic waves on the proton beam characteristics. In this manuscript, firstly, an analytic solution for the proton-induced acoustic wave is presented to reveal the dependence of the signal on the beam parameters; then it is combined with an analytic approximation of the Bragg curve. The influence of the beam energy, pulse duration and beam diameter variation on the acoustic waveform is investigated. Further analysis is performed regarding the Fourier decomposition of the proton-acoustic signals. Our results show that a smaller spill time of the proton beam increases the amplitude of the acoustic wave for a constant number of protons, which is hence beneficial for dose monitoring. An increase in the energy of each individual proton in the beam leads to spatial broadening of the Bragg curve, which also yields acoustic waves of greater amplitude. The pulse duration and the beam width of the proton beam do not affect the central frequency of the acoustic wave, but they change the amplitude of the spectral components.
Analytic mappings: a new approach in particle production by accelerated observers
International Nuclear Information System (INIS)
Sanchez, N.
1982-01-01
This is a summary of the author's recent results on the physical consequences of analytic mappings in space-time. Classically, the mapping defines an accelerated frame. At the quantum level it gives rise to particle production. Statistically, the real singularities of the mapping have associated temperatures. This concerns a new approach to Q.F.T. as formulated in accelerated frames. It has been considered as a first step in the understanding of the deep connection that could exist between the structure (geometry and topology) of space-time and thermodynamics, mainly motivated by the works of Hawking since 1975. (Auth.)
An analytical approach to space charge distortions for time projection chambers
Rossegger, S; Riegler, W
2010-01-01
In a time projection chamber (TPC), the possible ion feedback and also the primary ionization of high multiplicity events result in accumulation of ionic charges inside the gas volume (space charge). This charge introduces electrical field distortions and modifies the cluster trajectory along the drift path, affecting the tracking performance of the detector. In order to calculate the track distortions due to an arbitrary space charge distribution in the TPC, novel representations of the Green's function for a TPC geometry were worked out. This analytical approach finally permits accurate predictions of track distortions due to an arbitrary space charge distribution by solving the Langevin equation.
A combined analytic-numeric approach for some boundary-value problems
Directory of Open Access Journals (Sweden)
Mustafa Turkyilmazoglu
2016-02-01
A combined analytic-numeric approach is undertaken in the present work for the solution of boundary-value problems in finite or semi-infinite domains. The equations treated arise specifically from the boundary layer analysis of some two- and three-dimensional flows in fluid mechanics. The purpose is to find quick but sufficiently accurate solutions. Taylor expansions at one boundary are computed and then matched to the asymptotic or exact conditions at the other boundary. The technique is applied to the well-known Blasius as well as Karman flows. Solutions obtained in terms of series compare favorably with the existing ones in the literature.
Semi-analytical approach to modelling the dynamic behaviour of soil excited by embedded foundations
DEFF Research Database (Denmark)
Bucinskas, Paulius; Andersen, Lars Vabbersgaard
2017-01-01
The underlying soil has a significant effect on the dynamic behaviour of structures. The paper proposes a semi-analytical approach based on a Green's function solution in the frequency-wavenumber domain. The procedure allows calculating the dynamic stiffness for points on the soil surface as well as for points inside the soil body. Different cases of soil stratification can be considered, with soil layers of different properties overlying a half-space of soil or bedrock. In this paper, the soil is coupled with piles and surface foundations. The effects of different foundation modelling configurations
Europe needs to take clear, analytical approach in considering future of nuclear energy
Energy Technology Data Exchange (ETDEWEB)
Shepherd, John [nuclear 24, Redditch (United Kingdom)
2016-11-15
Europe's political leaders have been accused of failing to offer a clear and comprehensive approach to the future of nuclear power in Europe. The criticism came in an opinion adopted recently by the European Economic and Social Committee (EESC). According to the EESC, the European Commission should propose "a clear analytical process and methodology which can offer a consistent, voluntary framework for national decision-making about the role - if any - of nuclear power in the energy mix".
Analytic network process (ANP) approach for product mix planning in the railway industry
Directory of Open Access Journals (Sweden)
Hadi Pazoki Toroudi
2016-08-01
Given the competitive environment in the global market in recent years, organizations need to plan for increased profitability and optimize their performance. Planning for an appropriate product mix plays an essential role in the success of most production units. This paper applies the analytic network process (ANP) approach to product mix planning for a part supplier in Iran. The proposed method uses four criteria: cost of production, sales figures, supply of raw materials and quality of products. In addition, the study proposes different sets of products as alternatives for production planning. The preliminary results indicate that the proposed approach could increase productivity significantly.
Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M.
2014-01-01
The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social representations. We then apply it to an exploration of international trade networks in terms of the complex interactions between spatial and social relationships. This exploration using the GeoSocialApp helps us develop a two-part hypothesis: international trade network clusters with structural equivalence are strongly ‘balkanized’ (fragmented) according to the geography of trading partners, and the geographical distance weighted by population within each network cluster has a positive relationship with the development level of countries. In addition to demonstrating the potential of visual analytics to provide insight concerning complex geo-social relationships at a global scale, the research also addresses the challenge of validating insights derived through interactive geovisual analytics. We develop two indicators to quantify the observed patterns, and then use a Monte-Carlo approach to support the hypothesis developed above. PMID:24558409
Engel, Kelly B; Vaught, Jim; Moore, Helen M
2014-04-01
Variable biospecimen collection, processing, and storage practices may introduce variability in biospecimen quality and analytical results. This risk can be minimized within a facility through the use of standardized procedures; however, analysis of biospecimens from different facilities may be confounded by differences in procedures and inferred biospecimen quality. Thus, a global approach to standardization of biospecimen handling procedures and their validation is needed. Here we present the first in a series of procedural guidelines that were developed and annotated with published findings in the field of human biospecimen science. The series of documents will be known as NCI Biospecimen Evidence-Based Practices, or BEBPs. Pertinent literature was identified via the National Cancer Institute (NCI) Biospecimen Research Database ( brd.nci.nih.gov ) and findings were organized by specific biospecimen pre-analytical factors and analytes of interest (DNA, RNA, protein, morphology). Meta-analysis results were presented as annotated summaries, which highlight concordant and discordant findings and the threshold and magnitude of effects when applicable. The detailed and adaptable format of the document is intended to support the development and execution of evidence-based standard operating procedures (SOPs) for human biospecimen collection, processing, and storage operations.
An analytical approach for a nodal scheme of two-dimensional neutron transport problems
International Nuclear Information System (INIS)
Barichello, L.B.; Cabrera, L.C.; Prolo Filho, J.F.
2011-01-01
Research highlights: → Nodal equations for a two-dimensional neutron transport problem. → Analytical Discrete Ordinates Method. → Numerical results compared with the literature. - Abstract: In this work, a solution for a two-dimensional neutron transport problem, in cartesian geometry, is proposed on the basis of nodal schemes. In this context, one-dimensional equations are generated by an integration process of the multidimensional problem. Here, the integration is performed over the whole domain such that no iterative procedure between nodes is needed. The ADO method is used to develop an analytical discrete ordinates solution for the one-dimensional integrated equations, such that the final solutions are analytical in terms of the spatial variables. The ADO approach, along with a level symmetric quadrature scheme, leads to a significant order reduction of the associated eigenvalue problems. Relations between the averaged fluxes and the unknown fluxes at the boundary are introduced as the auxiliary equations usually needed in nodal schemes. Numerical results are presented and compared with test problems.
Danaeifar, Mohammad; Granpayeh, Nosrat
2018-03-01
An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.
Schumacher, Axel; Rujan, Tamas; Hoefkens, Jens
2014-12-01
The integration and analysis of large datasets in translational research has become an increasingly challenging problem. We propose a collaborative approach to integrate established data management platforms with existing analytical systems to fill the hole in the value chain between data collection and data exploitation. Our proposal in particular ensures data security and provides support for widely distributed teams of researchers. As a successful example for such an approach, we describe the implementation of a unified single platform that combines capabilities of the knowledge management platform tranSMART and the data analysis system Genedata Analyst™. The combined end-to-end platform helps to quickly find, enter, integrate, analyze, extract, and share patient- and drug-related data in the context of translational R&D projects.
A collaborative approach to develop a multi-omics data analytics platform for translational research
Directory of Open Access Journals (Sweden)
Axel Schumacher
2014-12-01
Johnson, Sara B; Little, Todd D; Masyn, Katherine; Mehta, Paras D; Ghazarian, Sharon R
2017-06-01
Characterizing the determinants of child health and development over time, and identifying the mechanisms by which these determinants operate, is a research priority. The growth of precision medicine has increased awareness and refinement of conceptual frameworks, data management systems, and analytic methods for multilevel data. This article reviews key methodological challenges in cohort studies designed to investigate multilevel influences on child health and strategies to address them. We review and summarize methodological challenges that could undermine prospective studies of the multilevel determinants of child health and ways to address them, borrowing approaches from the social and behavioral sciences. Nested data, variation in intervals of data collection and assessment, missing data, construct measurement across development and reporters, and unobserved population heterogeneity pose challenges in prospective multilevel cohort studies with children. We discuss innovations in missing data, innovations in person-oriented analyses, and innovations in multilevel modeling to address these challenges. Study design and analytic approaches that facilitate the integration across multiple levels, and that account for changes in people and the multiple, dynamic, nested systems in which they participate over time, are crucial to fully realize the promise of precision medicine for children and adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
Poulsen, Stefan Othmar; Poulsen, Henning Friis
2014-01-01
The properties of compound refractive lenses (CRLs) of biconcave parabolic lenses for focusing and imaging synchrotron X-rays have been investigated theoretically by ray transfer matrix analysis and Gaussian beam propagation. We present approximate analytical expressions that allow fast estimati...
A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.
Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin
2017-02-01
Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback. For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies. In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.
Directory of Open Access Journals (Sweden)
Mohammad Sadegh Pakkar
2016-01-01
Full Text Available This research proposes a hierarchical aggregation approach using Data Envelopment Analysis (DEA) and the Analytic Hierarchy Process (AHP) for indicators. The core logic of the proposed approach is to reflect the hierarchical structures of indicators and their relative priorities in constructing composite indicators (CIs, simultaneously. Under hierarchical structures, the indicators of similar characteristics can be grouped into sub-categories and further into categories. According to this approach, we define a domain of composite losses, i.e., a reduction in CI values, based on two sets of weights. The first set represents the weights of indicators for each Decision Making Unit (DMU) with the minimal composite loss, and the second set represents the weights of indicators bounded by AHP with the maximal composite loss. Using a parametric distance model, we explore various ranking positions for DMUs while the indicator weights obtained from a three-level DEA-based CI model shift towards the corresponding weights bounded by AHP. An illustrative example of road safety performance indicators (SPIs) for a set of European countries highlights the usefulness of the proposed approach.
International Nuclear Information System (INIS)
Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.
2009-01-01
Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)
Approach of decision making based on the analytic hierarchy process for urban landscape management.
Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan
2013-03-01
This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.
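The aggregation step described above — combining individual AHP priority vectors with the weighted geometric mean method — can be sketched as follows. The decision-maker weights and priority vectors are invented for illustration, and the consensus convergence model that would supply the weights is not reproduced here.

```python
import math

def weighted_geometric_mean(priorities, weights):
    """Aggregate individual AHP priority vectors into a group vector
    using the weighted geometric mean, then renormalise to sum to 1."""
    n = len(priorities[0])
    agg = []
    for j in range(n):
        g = math.prod(p[j] ** w for p, w in zip(priorities, weights))
        agg.append(g)
    total = sum(agg)
    return [a / total for a in agg]

# Three decision makers, three alternatives (hypothetical priorities).
dm_priorities = [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2], [0.7, 0.2, 0.1]]
dm_weights = [0.4, 0.3, 0.3]   # e.g. output of a consensus convergence model
group = weighted_geometric_mean(dm_priorities, dm_weights)
```

The geometric (rather than arithmetic) mean preserves the reciprocity property of AHP judgments under aggregation, which is why it is the standard choice for group AHP.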
An Approach for Routine Analytical Detection of Beeswax Adulteration Using FTIR-ATR Spectroscopy
Directory of Open Access Journals (Sweden)
Svečnjak Lidija
2015-12-01
Full Text Available Although beeswax adulteration represents one of the main beeswax quality issues, there are still no internationally standardised analytical methods for routine quality control. The objective of this study was to establish an analytical procedure suitable for routine detection of beeswax adulteration using FTIR-ATR spectroscopy. For the purpose of this study, reference IR spectra of virgin beeswax, paraffin, and their mixtures containing different proportions of paraffin (5 - 95%), were obtained. Mixtures were used for the establishment of calibration curves. To determine the prediction strength of IR spectral data for the share of paraffin in mixtures, the Partial Least Squares Regression method was used. The same procedure was conducted on beeswax-beef tallow mixtures. The model was validated using comb foundation samples of an unknown chemical background which had been collected from the international market (n = 56). Selected physico-chemical parameters were determined for comparison purposes. Results revealed a strong predictive power (R2 = 0.999) of IR spectra for the paraffin and beef tallow share in beeswax. The results also revealed that the majority of the analysed samples (89%) were adulterated with paraffin; only 6 out of 56 (11%) samples were identified as virgin beeswax, 28% of the samples exhibited a higher level of paraffin adulteration (>46% of paraffin), while the majority of the analysed samples (50%) were found to be adulterated with 5 - 20% of paraffin. These results indicate an urgent need for routine beeswax authenticity control. In this study, we demonstrated that the analytical approach defining the standard curves for particular adulteration levels in beeswax, based on chemometric modelling of the specific IR spectral region indicative of adulteration, enables reliable determination of the adulterant proportions in beeswax.
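The calibration idea — known paraffin/beeswax mixtures mapped to a spectral response, then inverted for unknown samples — can be illustrated with a single-band straight-line fit. This is a deliberately simplified stand-in for the study's PLS regression on full IR spectra, and all intensity values below are invented.

```python
import numpy as np

# Hypothetical calibration set: peak intensity of a paraffin-indicative IR
# band measured for mixtures with known paraffin fractions (in %).
paraffin_pct = np.array([5, 20, 35, 50, 65, 80, 95], dtype=float)
band_intensity = np.array([0.11, 0.24, 0.37, 0.51, 0.63, 0.78, 0.92])

# Least-squares straight-line calibration (a univariate stand-in for the
# multivariate PLS model used in the study).
slope, intercept = np.polyfit(band_intensity, paraffin_pct, 1)

def predict_paraffin(intensity):
    """Invert the calibration line for an unknown sample."""
    return slope * intensity + intercept

unknown = predict_paraffin(0.45)   # estimated paraffin share, %
```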
Kolus, Ahmet; Dubé, Philippe-Antoine; Imbeau, Daniel; Labib, Richard; Dubeau, Denise
2014-11-01
In new approaches based on adaptive neuro-fuzzy systems (ANFIS) and an analytical method, heart rate (HR) measurements were used to estimate oxygen consumption (VO2). Thirty-five participants performed Meyer and Flenghi's step-test (eight of whom performed regeneration release work), during which heart rate and oxygen consumption were measured. Two individualized models and a General ANFIS model that does not require individual calibration were developed. Results indicated the superior precision achieved with individualized ANFIS modelling (RMSE = 1.0 and 2.8 ml/kg min in laboratory and field, respectively). The analytical model outperformed the traditional linear calibration and Flex-HR methods with field data. The General ANFIS model's estimates of VO2 were not significantly different from actual field VO2 measurements (RMSE = 3.5 ml/kg min). With its ease of use and low implementation cost, the General ANFIS model shows potential to replace any of the traditional individualized methods for VO2 estimation from HR data collected in the field. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
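The traditional individualized linear calibration that the ANFIS models are benchmarked against can be sketched in a few lines; the heart-rate/VO2 pairs below are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical calibration points from a step-test for one participant:
# heart rate (bpm) and measured oxygen consumption (ml/kg.min).
hr = np.array([70, 90, 110, 130, 150], dtype=float)
vo2 = np.array([5.0, 10.5, 16.0, 21.0, 26.5])

# Traditional individualized linear calibration: fit VO2 = a*HR + b,
# then estimate VO2 from field heart-rate recordings.
coef = np.polyfit(hr, vo2, 1)
estimate = np.polyval(coef, 120.0)   # estimated VO2 at 120 bpm
```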
Durante, Caterina; Baschieri, Carlo; Bertacchini, Lucia; Bertelli, Davide; Cocchi, Marina; Marchetti, Andrea; Manzini, Daniela; Papotti, Giulia; Sighinolfi, Simona
2015-04-15
Geographical origin and authenticity of food are topics of interest for both consumers and producers. Among the different indicators used for traceability studies, the (87)Sr/(86)Sr isotopic ratio has provided excellent results. In this study, two analytical approaches for wine sample pre-treatment, microwave and low temperature mineralisation, were investigated to develop an accurate and precise analytical method for (87)Sr/(86)Sr determination. The two procedures led to comparable results (paired t-test) for the wine samples processed in each batch (calculated Relative Standard Deviation, RSD%, equal to 0.002%). Lambrusco PDO (Protected Designation of Origin) wines coming from four different vintages (2009, 2010, 2011 and 2012) were pre-treated according to the best procedure and their isotopic values were compared with isotopic data coming from (i) soils of their territory of origin and (ii) wines obtained from the same grape varieties cultivated in different districts. The obtained results showed no significant variability among the different vintages of wines, and perfect agreement between the isotopic ranges of the soils and wines was observed. Nevertheless, the investigated indicator was not powerful enough to discriminate between similar products. In this regard, it is worth noting that more soil samples, as well as wines coming from different districts, will be considered to obtain more trustworthy results. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quasi-Steady Evolution of Hillslopes in Layered Landscapes: An Analytic Approach
Glade, R. C.; Anderson, R. S.
2018-01-01
Landscapes developed in layered sedimentary or igneous rocks are common on Earth, as well as on other planets. Features such as hogbacks, exposed dikes, escarpments, and mesas exhibit resistant rock layers adjoining more erodible rock in tilted, vertical, or horizontal orientations. Hillslopes developed in the erodible rock are typically characterized by steep, linear-to-concave slopes or "ramps" mantled with material derived from the resistant layers, often in the form of large blocks. Previous work on hogbacks has shown that feedbacks between weathering and transport of the blocks and underlying soft rock can create relief over time and lead to the development of concave-up slope profiles in the absence of rilling processes. Here we employ an analytic approach, informed by numerical modeling and field data, to describe the quasi-steady state behavior of such rocky hillslopes for the full spectrum of resistant layer dip angles. We begin with a simple geometric analysis that relates structural dip to erosion rates. We then explore the mechanisms by which our numerical model of hogback evolution self-organizes to meet these geometric expectations, including adjustment of soil depth, erosion rates, and block velocities along the ramp. Analytical solutions relate easily measurable field quantities such as ramp length, slope, block size, and resistant layer dip angle to local incision rate, block velocity, and block weathering rate. These equations provide a framework for exploring the evolution of layered landscapes and pinpoint the processes for which we require a more thorough understanding to predict their evolution over time.
Gallardo, Helena; Queralt, Ignasi; Tapias, Josefina; Candela, Lucila; Margui, Eva
2016-08-01
Monitoring total bromine and bromide concentrations in soils is significant in many environmental studies. Thus fast analytical methodologies that entail simple sample preparation and low-cost analyses are desired. In the present work, the possibilities and drawbacks of low-power total reflection X-ray fluorescence spectrometry (TXRF) for the determination of total bromine and bromide contents in soils were evaluated. Direct analysis of a solid suspension used 20 mg of finely ground soil. TXRF analysis can be directly performed by depositing 10 μL of the internal standardized soil extract sample on a quartz glass reflector, with a measuring time of 1500 s. The bromide limit of detection by this approach was 10 μg L(-1). Good agreement was obtained between the TXRF results for the total bromine and bromide determinations in soils and those obtained by other popular analytical techniques, e.g. energy dispersive X-ray fluorescence spectrometry (total bromine) and ionic chromatography (bromide). As a study case, the TXRF method was applied to study bromine accumulation in two agricultural soils fumigated with a methyl bromide pesticide and irrigated with regenerated waste water. Copyright © 2016 Elsevier Ltd. All rights reserved.
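Quantification in internal-standardized TXRF rests on a standard relative-sensitivity relation, C_i = C_IS · (N_i/S_i)/(N_IS/S_IS), where N are net peak intensities and S relative elemental sensitivities. The sketch below uses invented counts and sensitivities; the internal standard element and all numbers are assumptions, not values from the paper.

```python
# TXRF quantification by internal standardisation (illustrative numbers).
c_is = 1000.0                    # internal standard concentration, ug/L
n_br, n_is = 5400.0, 30000.0     # net Br and internal-standard peak counts
s_br, s_is = 1.2, 1.0            # relative elemental sensitivities (assumed)

# Concentration of the analyte relative to the spiked internal standard.
c_br = c_is * (n_br / s_br) / (n_is / s_is)   # ug/L
```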
A new analytical approach for humin determination in sediments and soils.
Calace, N; Petronio, B M; Persia, S; Pietroletti, M; Pacioni, D
2007-02-28
In this work a new analytical approach is proposed for the recovery of humin present in soil and sediments. The procedure is based on microwave oven treatment for humin deashing. In this way both the treatment time and the concentration of the HCl/HF mixture are significantly reduced (minutes rather than hours, 10% rather than concentrated). By means of the proposed scheme organic matter present in sediment and soil samples can be subdivided into the different fractions (hydrophobic and hydrophilic compounds, fulvic and humic acids, humin) making up the balance of organic carbon. Results obtained for samples characterised by different organic carbon content showed a loss of carbon ranging between 20% and 30%, consistent with previous reports about humin deashing.
Electromagnetic imaging of multiple-scattering small objects: non-iterative analytical approach
International Nuclear Information System (INIS)
Chen, X; Zhong, Y
2008-01-01
Multiple signal classification (MUSIC) imaging and the least squares method are applied to solve the electromagnetic inverse scattering problem of determining the locations and polarization tensors of a collection of small objects embedded in a known background medium. Based on the analysis of induced electric and magnetic dipoles, the proposed MUSIC method is able to deal with some special scenarios, arising from the shapes and materials of the objects, to which the standard MUSIC method does not apply. After the locations of the objects are obtained, the nonlinear inverse problem of determining the polarization tensors of the objects, accounting for multiple scattering between objects, is solved by a non-iterative analytical approach based on the least squares method.
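The MUSIC principle underlying the imaging method — split the data covariance into signal and noise subspaces, then locate sources where the steering vector is nearly orthogonal to the noise subspace — can be sketched on a classic uniform-linear-array direction-finding toy problem. This is a stand-in for the paper's dipole-based imaging operator, with all scenario parameters invented.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, T = 8, 0.5, 200                       # sensors, spacing (wavelengths), snapshots
true_angles = np.deg2rad([-20.0, 30.0])     # two point sources

def steering(theta):
    """Array response of a uniform linear array for direction theta."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
N = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + N                               # noisy snapshots

R = X @ X.conj().T / T                      # sample covariance
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :M - 2]                           # noise subspace (2 sources assumed)

# MUSIC pseudospectrum: peaks where steering vector ⟂ noise subspace.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])
est_deg = np.rad2deg(grid[np.argmax(pseudo)])   # strongest peak location
```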
[Approaches to shaping up a medical information-analytical system at a central district hospital].
Koretskiĭ, V L; Shegel'skaia, M N
2003-01-01
The paper addresses the management of a key element of district healthcare, the central district hospital (CDH), focusing on methods for setting up an information-and-analytical department at the CDH. An algorithm is designed for collecting and condensing the data needed for CDH management; approaches to defining the managerial parameters and to creating a district database are described. The paper and the algorithms outlined in it support restoring the managerial structure and ensuring high-quality medical care in rural patient-care facilities. The paper's concepts also aim to improve the accessibility and quality of specialized medical care at the CDH and to ensure up-to-date standards of communication and management, with resources shared rationally and a cost-saving mode of medical care maintained within district limits.
Battista, Natalia; Sergi, Manuel; Montesano, Camilla; Napoletano, Sabino; Compagnone, Dario; Maccarrone, Mauro
2014-01-01
Over the last two decades, the role played by phytocannabinoids and endocannabinoids in medicine has gained increasing interest in the scientific community. Upon identification of the plant compound Δ(9)-tetrahydrocannabinol (THC) and of the endogenous substance anandamide (AEA), different methodological approaches and innovative techniques have been developed, in order to evaluate the content of these molecules in various human matrices. In this review, we discuss the analytical methods that are currently used for the identification of phytocannabinoids and endocannabinoids, and we summarize the benefits and limitations of these procedures. Moreover, we provide an overview of the main biological matrices that have been analyzed to date for qualitative detection and quantitative determination of these compounds. Copyright © 2013 John Wiley & Sons, Ltd.
Olsen, Katharina Norgren; Ask, Kristine Skoglund; Pedersen-Bjergaard, Stig; Gjelstad, Astrid
2018-03-01
Liquid-liquid extraction is widely used in therapeutic drug monitoring of antipsychotics, but difficulties in automation of the technique can result in long operational times. In this paper, parallel artificial liquid membrane extraction was used for extraction of serotonin- and serotonin-norepinephrine reuptake inhibitors from human plasma, and an approach to automate the technique was investigated. Eight model analytes were extracted from 125 μl human plasma with recoveries in the range 72-111% (relative standard deviation [RSD] ≤12.8%). A semiautomated pipettor was successfully utilized in the procedure, reducing the manual handling time. Real patient samples were analyzed with satisfying accuracy. A semiautomated extraction of serotonin- and serotonin-norepinephrine reuptake inhibitors by parallel artificial liquid membrane extraction was successfully performed.
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independence of genome sequences from experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
Thermal fatigue crack growth in mixing tees nuclear piping - An analytical approach
International Nuclear Information System (INIS)
Radu, V.
2009-01-01
The assessment of fatigue crack growth due to cyclic thermal loads arising from turbulent mixing presents significant challenges, principally due to the difficulty of establishing the actual loading spectrum. So-called sinusoidal methods represent a simplified approach in which the entire spectrum is replaced by a sine-wave variation of the temperature at the inner pipe surface. The need for multiple calculations in this process has led to the development of analytical solutions for thermal stresses in a pipe subject to sinusoidal thermal loading, described in previous work performed at JRC IE Petten, The Netherlands, during the author's stage as seconded national expert. Based on these stress distribution solutions, the paper presents a methodology for assessment of the thermal fatigue crack growth life of mixing tees in nuclear piping. (author)
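The sinusoidal method relies on the strong attenuation of a thermal wave with depth into the pipe wall. For the textbook semi-infinite-solid case, T(x,t) = ΔT·exp(-x/δ)·sin(ωt - x/δ) with penetration depth δ = √(2κ/ω); this is a simplification of the paper's pipe-wall solutions, and the material and frequency values below are assumed for illustration.

```python
import math

# Thermal "skin depth" for a sinusoidal surface temperature variation on a
# semi-infinite solid (simplified stand-in for the pipe geometry).
kappa = 4.0e-6                    # thermal diffusivity, m^2/s (steel, approximate)
f = 0.1                           # loading frequency, Hz (assumed)
w = 2.0 * math.pi * f

delta = math.sqrt(2.0 * kappa / w)         # penetration depth, m
attenuation_at_5mm = math.exp(-0.005 / delta)   # amplitude ratio at 5 mm depth
```

Even a few millimetres into the wall, the cyclic temperature amplitude drops sharply, which is why high-frequency mixing fluctuations drive mainly near-surface stress cycling.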
Analytical slave-spin mean-field approach to orbital selective Mott insulators
Komijani, Yashar; Kotliar, Gabriel
2017-09-01
We use the slave-spin mean-field approach to study particle-hole symmetric one- and two-band Hubbard models in the presence of Hund's coupling interaction. By analytical analysis of the Hamiltonian, we show that the locking of the two orbitals vs orbital selective Mott transition can be formulated within a Landau-Ginzburg framework. By applying the slave-spin mean field to impurity problems, we are able to make a correspondence between impurity and lattice. We also consider the stability of the orbital selective Mott phase to the hybridization between the orbitals and study the limitations of the slave-spin method for treating interorbital tunnelings in the case of multiorbital Bethe lattices with particle-hole symmetry.
Esteve, Clara; D'Amato, Alfonsina; Marina, María Luisa; García, María Concepción; Righetti, Pier Giorgio
2013-10-30
Proteins in olive oil have been scarcely investigated probably due to the difficulty of working with such a lipidic matrix and the dramatically low abundance of proteins in this biological material. Additionally, this scarce information has generated contradictory results, thus requiring further investigations. This work treats this subject from a comprehensive point of view and proposes the use of different analytical approaches to delve into the characterization and identification of proteins in olive oil. Different extraction methodologies, including capture via combinatorial hexapeptide ligand libraries (CPLLs), were tried. A sequence of methodologies, starting with off-gel isoelectric focusing (IEF) followed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) or high-performance liquid chromatography (HPLC) using an ultraperformance liquid chromatography (UPLC) column, was applied to profile proteins from olive seed, pulp, and oil. Besides this, and for the first time, a tentative identification of oil proteins by mass spectrometry has been attempted.
Indian Academy of Sciences (India)
IAS Admin
V. S. Borkar is the Institute Chair Professor of Electrical Engineering at IIT Bombay. His research interests are stochastic optimization: theory, algorithms and applications. 'Markov Chain Monte Carlo' is another one (see [1]), not to mention schemes that combine both. Stochastic approximation is one of the unsung.
An Analytical Approach for Activity Determination of Extended Gas Ampoule Sources
International Nuclear Information System (INIS)
Nafee, S.S.; Abbas, M.
2009-01-01
The National Institute of Standards and Technology (NIST, Gaithersburg, MD 20878, USA) uses different extended sources, such as ampoule sources filled with mixed noble gases (133Xe and 85Kr), to calibrate the detection systems in nuclear power plants. These noble gases are produced by the fission of uranium and plutonium in nuclear reactors. Accurate activity determination is needed for these radioactive sources to be used in the calibration process. A straightforward theoretical approach is presented here to determine the activity of the gas ampoule sources using the NIST hyper-pure germanium (HPGe) cylindrical detectors. The validity of the present approach is tested through extensive comparisons with the activity values measured at NIST. The calculated activities show discrepancies of less than 1.5% and less than 2% from those measured with the NIST gas counting system for 133Xe and 85Kr, respectively. The comparisons indicate that the present analytical approach provides a useful methodology for traceability of radioactivity measurements to fissionable radioactive sources and nuclear power facilities.
A novel fast and accurate pseudo-analytical simulation approach for MOAO
Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-08-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
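The core of the tomographic approach above is linear-Gaussian estimation: the MMSE reconstructor is built from the phase/measurement covariances, and the residual covariance follows directly. A minimal sketch on a toy system (dimensions, generator matrix `G` and noise level are hypothetical; the real system inverts ~40 000 × 40 000 matrices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: phase modes to estimate, WFS measurements.
n_x, n_s = 8, 12

# Hypothetical joint statistics: s = G x + noise, with x and noise Gaussian.
G = rng.standard_normal((n_s, n_x))
C_xx = np.eye(n_x)                     # prior phase covariance (identity for simplicity)
C_nn = 0.1 * np.eye(n_s)               # WFS noise covariance
C_ss = G @ C_xx @ G.T + C_nn           # measurement covariance
C_xs = C_xx @ G.T                      # cross-covariance <x s^T>

# MMSE reconstructor: x_hat = R s
R = C_xs @ np.linalg.inv(C_ss)

# Covariance of the tomographic residual x - x_hat (propagated noise included);
# a PSF could then be derived from this matrix, as in PSF reconstruction.
C_res = C_xx - R @ C_xs.T
```

The residual covariance is symmetric positive semi-definite and its trace (total residual variance) is smaller than that of the prior, which is what makes the subsequent PSF estimation meaningful.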
Steinmetz, Philipp; Kellner, Michael; Hötzer, Johannes; Nestler, Britta
2018-02-01
For the analytical description of the relationship between undercoolings, lamellar spacings and growth velocities during the directional solidification of ternary eutectics in 2D and 3D, different extensions based on the theory of Jackson and Hunt are reported in the literature. Besides analytical approaches, the phase-field method has been established to study the spatially complex microstructure evolution during the solidification of eutectic alloys. The understanding of the fundamental mechanisms controlling the morphology development in multiphase, multicomponent systems is of high interest. For this purpose, a comparison is made between the analytical extensions and three-dimensional phase-field simulations of directional solidification in an ideal ternary eutectic system. Based on the observed accordance in two-dimensional validation cases, the experimentally reported, inherently three-dimensional chain-like pattern is investigated in extensive simulation studies. The results are quantitatively compared with the analytical results reported in the literature, and with a newly derived approach which uses equal undercoolings. A good accordance of the undercooling-spacing characteristics between the simulations and the analytical Jackson-Hunt approaches is found. The results show that the applied phase-field model, which is based on the grand-potential approach, is able to describe the analytically predicted relationship between the undercooling and the lamellar arrangements during the directional solidification of a ternary eutectic system in 3D.
An analytical approach to separate climate and human contributions to basin streamflow variability
Li, Changbin; Wang, Liuming; Wanrui, Wang; Qi, Jiaguo; Linshan, Yang; Zhang, Yuan; Lei, Wu; Cui, Xia; Wang, Peng
2018-04-01
Climate variability and anthropogenic regulations are two interwoven factors in the ecohydrologic system across large basins. Understanding the roles that these two factors play under various hydrologic conditions is of great significance for basin hydrology and sustainable water utilization. In this study, we present an analytical approach that couples the water balance method with the Budyko hypothesis to derive effectiveness coefficients (ECs) of climate change, as a way to disentangle the contributions of climate change and human activities to the variability of river discharge under different hydro-transitional situations. The climate-dominated streamflow change (ΔQc) from the EC approach was compared with estimates deduced by the elasticity method and sensitivity index. The results suggest that the EC approach is valid and applicable for hydrologic study at the large basin scale. Analyses of various scenarios revealed that the contributions of climate change and human activities to river discharge variation differed among the regions of the study area. Over the past several decades, climate change dominated hydro-transitions from dry to wet, while human activities played key roles in the reduction of streamflow during wet-to-dry periods. The remarkable decline of discharge upstream was mainly due to human interventions, although climate contributed more to runoff increases during dry periods in the semi-arid downstream. The induced effectiveness on streamflow changes indicated a contribution ratio of 49% for climate and 51% for human activities at the basin scale from 1956 to 2015. This simple approach, based on mathematical derivation, together with the case example of temporal segmentation and spatial zoning, could help people understand the variation of river discharge in more detail at a large basin scale against the background of climate change and human regulation.
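The attribution logic above reduces to simple arithmetic once the climate-induced change is estimated independently (via ECs or elasticity): the human contribution is the residual of the observed change. A hedged sketch with hypothetical numbers chosen to mirror the reported 49%/51% split (not the paper's actual data):

```python
# Attribute an observed streamflow change between climate and human drivers,
# given a climate-induced change estimated separately (e.g. via a Budyko-based
# effectiveness coefficient). All numbers here are hypothetical.

def attribute(delta_q_total, delta_q_climate):
    """Return percentage contributions (climate, human) to streamflow change."""
    delta_q_human = delta_q_total - delta_q_climate
    climate_pct = 100 * abs(delta_q_climate) / (abs(delta_q_climate) + abs(delta_q_human))
    return climate_pct, 100 - climate_pct

# Example: observed decline of 20 units, of which 9.8 attributed to climate.
clim, human = attribute(delta_q_total=-20.0, delta_q_climate=-9.8)
```

The two percentages sum to 100 by construction; the interesting modelling work is entirely in estimating `delta_q_climate` robustly.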
Schildcrout, Jonathan S; Basford, Melissa A; Pulley, Jill M; Masys, Daniel R; Roden, Dan M; Wang, Deede; Chute, Christopher G; Kullo, Iftikhar J; Carrell, David; Peissig, Peggy; Kho, Abel; Denny, Joshua C
2010-12-01
We describe a two-stage analytical approach for characterizing morbidity profile dissimilarity among patient cohorts using electronic medical records. We capture morbidities using the International Statistical Classification of Diseases and Related Health Problems (ICD-9) codes. In the first stage of the approach, separate logistic regression analyses for ICD-9 sections (e.g., "hypertensive disease" or "appendicitis") are conducted, and the odds ratios that describe adjusted differences in prevalence between two cohorts are displayed graphically. In the second stage, the results from the ICD-9 section analyses are combined into a general morbidity dissimilarity index (MDI). For illustration, we examine nine cohorts of patients representing six phenotypes (or controls) derived from five institutions, each a participant in the electronic MEdical REcords and GEnomics (eMERGE) network. The phenotypes studied include type II diabetes and type II diabetes controls, peripheral arterial disease and peripheral arterial disease controls, normal cardiac conduction as measured by electrocardiography, and senile cataracts. Copyright © 2010 Elsevier Inc. All rights reserved.
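The two-stage idea can be sketched with unadjusted 2×2 odds ratios per ICD-9 section, combined into a single index. This is a simplification: the paper uses adjusted logistic regression per section, and the exact form of its MDI is not reproduced here; the section names are from the abstract, the counts are invented.

```python
import math

def odds_ratio(cases_a, total_a, cases_b, total_b):
    """Unadjusted odds ratio of one ICD-9 section between cohorts A and B."""
    a, b = cases_a, total_a - cases_a
    c, d = cases_b, total_b - cases_b
    return (a * d) / (b * c)

# Stage 1: hypothetical section-level prevalences for two cohorts of 400 each.
sections = {
    "hypertensive disease": odds_ratio(120, 400, 60, 400),
    "appendicitis":         odds_ratio(10, 400, 12, 400),
}

# Stage 2: a simple dissimilarity index, here the mean absolute log-odds-ratio
# across sections (a stand-in for the paper's MDI; identical cohorts give 0).
mdi = sum(abs(math.log(or_)) for or_ in sections.values()) / len(sections)
```

Working on the log scale makes ORs of 2 and 0.5 contribute equally, so the index measures dissimilarity regardless of which cohort has the higher prevalence.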
An Analytical Approach for Fast Recovery of the LSI Properties in Magnetic Particle Imaging
Directory of Open Access Journals (Sweden)
Hamed Jabbari Asl
2016-01-01
Linearity and shift invariance (LSI) characteristics of magnetic particle imaging (MPI) are important properties for quantitative medical diagnosis applications. The MPI image equations have been theoretically shown to exhibit LSI; however, in practice, the necessary filtering action removes the first-harmonic information, which destroys the LSI characteristics. In the x-space reconstruction method, this lost information amounts to a constant offset. Available recovery algorithms, which are based on signal matching of multiple partial fields of view (pFOVs), require much processing time and a priori information at the start of imaging. In this paper, a fast analytical recovery algorithm is proposed to restore the LSI properties of x-space MPI images, representable as an image of discrete concentrations of magnetic material. The method utilizes the one-dimensional (1D) x-space imaging kernel and properties of the image and lost-image equations. The approach does not require overlapping of pFOVs, and its complexity depends only on a small-sized system of linear equations; therefore, it can reduce the processing time. Moreover, the algorithm only needs a priori information which can be obtained in a single imaging process. Considering different particle distributions, several simulations are conducted, and results of 1D and 2D imaging demonstrate the effectiveness of the proposed approach.
A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.
Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong
2017-01-01
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
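The preprocessing step described above, spectral-domain features from an inertial window plus complementary shallow time-domain features, can be sketched as follows. Window length, bin count and the specific shallow statistics are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def spectral_features(window, n_bins=16):
    """Pooled magnitude spectrum of one windowed inertial-sensor segment."""
    spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    pooled = spec[: n_bins * (len(spec) // n_bins)]        # drop the remainder bins
    return pooled.reshape(n_bins, -1).mean(axis=1)         # average within each band

def shallow_features(window):
    """Complementary time-domain statistics (hypothetical choice)."""
    return np.array([window.mean(), window.std(), np.abs(np.diff(window)).mean()])

rng = np.random.default_rng(0)
window = rng.standard_normal(128)          # one 128-sample accelerometer axis
x = np.concatenate([spectral_features(window), shallow_features(window)])
```

The concatenated vector `x` (16 spectral bands + 3 shallow statistics) is what would be fed to the on-node classifier; pooling the spectrum keeps the input small, which is the point of doing this preprocessing before the network.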
A Visual Analytics Approach for Station-Based Air Quality Data
Directory of Open Access Journals (Sweden)
Yi Du
2016-12-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
Contaminant ingress into multizone buildings: An analytical state-space approach
Parker, Simon
2013-08-13
The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
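The state-space formulation above is a linear ODE system, dc/dt = Ac + b·c_out, and the claim that interior exposure cannot exceed exterior exposure (absent removal mechanisms) can be checked numerically on a minimal two-zone cascade. Exchange rates and simulation horizon are illustrative values, not from the paper:

```python
import numpy as np

# Two-zone cascade: outdoor air enters zone 1 at rate lam1 (1/h), zone 1 air
# enters the inner zone 2 at rate lam2. Illustrative rates.
lam1, lam2 = 2.0, 0.5
A = np.array([[-lam1, 0.0],
              [lam2, -lam2]])
b = np.array([lam1, 0.0])

c_out = 1.0                        # step change in exterior concentration
c = np.zeros(2)                    # interior concentrations start clean
dt, T = 1e-3, 6.0                  # forward-Euler integration over 6 h
exposure_in = np.zeros(2)
for _ in range(int(T / dt)):
    c = c + dt * (A @ c + b * c_out)
    exposure_in += c * dt          # time-integrated interior concentration
exposure_out = c_out * T           # exterior exposure over the same period
```

The inner zone lags the outer one (it sees a smoothed, delayed version of the step), and both interior exposures stay below the exterior exposure, which is the bound the analysis derives in general.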
Field-driven chiral bubble dynamics analysed by a semi-analytical approach
Vandermeulen, J.; Leliaert, J.; Dupré, L.; Van Waeyenberge, B.
2017-12-01
Nowadays, field-driven chiral bubble dynamics in the presence of the Dzyaloshinskii-Moriya interaction are a topic of thorough investigation. In this paper, a semi-analytical approach is used to derive equations of motion that express the bubble wall (BW) velocity and the change in in-plane magnetization angle as a function of the micromagnetic parameters of the involved interactions, thereby taking into account the two-dimensional nature of the bubble wall. It is demonstrated that the equations of motion enable an accurate description of the expanding and shrinking convex bubble dynamics, and an expression for the transition field between shrinkage and expansion is derived. In addition, these equations of motion show that the BW velocity is not only dependent on the driving force, but also on the BW curvature. The absolute BW velocity increases for both a shrinking and an expanding bubble, but for different reasons: for expanding bubbles, it is due to the increasing importance of the driving force, while for shrinking bubbles, it is due to the increasing importance of contributions related to the BW curvature. Finally, using this approach we show how the recently proposed magnetic bubblecade memory can operate in the flow regime in the presence of a tilted sinusoidal magnetic field and at greatly reduced bubble sizes compared to the original device prototype.
Analytical quality-by-design approach for sample treatment of BSA-containing solutions.
Taevernier, Lien; Wynendaele, Evelien; D'Hondt, Matthias; De Spiegeleer, Bart
2015-02-01
The sample preparation of samples containing bovine serum albumin (BSA), e.g., as used in transdermal Franz diffusion cell (FDC) solutions, was evaluated using an analytical quality-by-design (QbD) approach. Traditional precipitation of BSA by adding an equal volume of organic solvent, often successfully used with conventional HPLC-PDA, was found insufficiently robust when novel fused-core HPLC and/or UPLC-MS methods were used. In this study, three factors (acetonitrile (%), formic acid (%) and boiling time (min)) were included in the experimental design to determine an optimal and more suitable sample treatment of BSA-containing FDC solutions. Using a QbD and Derringer desirability (D) approach, combining BSA loss, dilution factor and variability, we constructed an optimal working space with the edge of failure defined as D < 0.9. The design space is modelled and is confirmed to have an ACN range of 83±3% and FA content of 1±0.25%.
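The Derringer desirability combines several responses by mapping each onto a 0-1 desirability and taking their geometric mean, so a single failed response drives the overall D toward zero. A minimal sketch, with the three responses from the abstract but entirely hypothetical target/limit values and measurements:

```python
def d_smaller_is_better(y, target, upper):
    """Desirability: 1 at/below target, 0 at/above upper, linear in between."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

# Hypothetical responses for one sample-treatment setting:
d_loss = d_smaller_is_better(y=4.0, target=0.0, upper=20.0)   # BSA loss (%)
d_var  = d_smaller_is_better(y=2.0, target=0.0, upper=10.0)   # variability (%RSD)
d_dil  = d_smaller_is_better(y=3.0, target=2.0, upper=6.0)    # dilution factor

# Overall Derringer desirability: geometric mean of the individual d_i.
D = (d_loss * d_var * d_dil) ** (1 / 3)
acceptable = D >= 0.9        # the paper defines the edge of failure as D < 0.9
```

The geometric mean is the key design choice: unlike an arithmetic mean, no amount of excellence on two responses can compensate for a zero on the third.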
Lipshitz, Raanan; Cohen, Marvin S
2005-01-01
Efforts to improve decision making must appeal to some source of warrant - that is, specific criteria or models for guiding and evaluating decision-making performance. We examine and compare the warrants for two approaches to decision aids, decision training, and consulting: analytically based prescription, which obtains warrant from formal models, and empirically based prescription, which obtains warrant from descriptive models of successful performance. We argue that empirically based warrants can provide a meaningful and valid basis for prescriptive intervention without committing the naturalistic fallacy (i.e., confusing what is with what ought to be) and without the use of formal deduction from first principles. We describe points of divergence as well as convergence in the types of warrant appealed to by naturalistic decision making and decision analysis, letting each approach shed light on the other, and explore the application of empirically based prescription to cognitive engineering. Actual or potential applications of this research include the development of training programs to improve various aspects of naturalistic decision making.
General analytical approach for sound transmission loss analysis through a thick metamaterial plate
International Nuclear Information System (INIS)
Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M.
2014-01-01
We report theoretically and numerically on the sound transmission loss performance of a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that a high sound transmission loss of up to 72 dB at 2 kHz is reached with a thick metamaterial plate, while only 23 dB can be obtained for a simple homogeneous plate of the same thickness. Such a plate-type acoustic metamaterial can be a very effective solution for high-performance sound insulation and structural vibration shielding in the very low-frequency range.
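The baseline against which the metamaterial's 72 dB is judged is the ordinary panel, whose normal-incidence transmission loss follows the classical mass law. This is not the paper's plane-wave-expansion method, only the textbook limp-panel formula, with an arbitrary 10 kg/m² surface density for illustration:

```python
import math

def mass_law_tl(f_hz, surface_density, rho0=1.21, c0=343.0):
    """Normal-incidence mass-law transmission loss (dB) of a limp panel.

    TL = 10 log10(1 + (omega * m / (2 * rho0 * c0))^2), with air density rho0
    (kg/m^3) and speed of sound c0 (m/s).
    """
    omega = 2 * math.pi * f_hz
    return 10 * math.log10(1 + (omega * surface_density / (2 * rho0 * c0)) ** 2)

tl_2k = mass_law_tl(2000.0, surface_density=10.0)   # hypothetical 10 kg/m^2 panel
```

In the high-frequency limit the formula gives the familiar rules of thumb: +6 dB per doubling of either frequency or surface mass, which is why purely mass-driven insulation is so expensive at low frequencies and why resonator-based metamaterials are attractive there.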
A terahertz performance of hybrid single walled CNT based amplifier with analytical approach
Kumar, Sandeep; Song, Hanjung
2018-01-01
This work focuses on the terahertz performance of a hybrid single-walled carbon nanotube (CNT) based amplifier, proposed for the measurement of soil parameters. The proposed circuit topology provides a hybrid structure which achieves a wide impedance bandwidth of 0.33 THz, from 1.07 THz to 1.42 THz, corresponding to a fractional bandwidth of 28%. The single-walled RF CNT network realizes the proposed design and proves its ability to resonate at 1.25 THz, as confirmed by an analytical approach. Moreover, an RF microstrip transmission-line radiator is used as a compensator in the circuit topology, which achieves more than 30 dB of gain. A proper methodology is chosen to achieve stability at the circuit level and to obtain the desired optimal conditions. The fundamental approach optimizes the matched impedance condition at (50+j0) Ω and the noise variation with the impact of series resistances for the proposed hybrid circuit topology, and demonstrates the accuracy of the performance parameters at the circuit level. The chip was fabricated in a commercial 45 nm RF CMOS process, and the measured results agree well with the simulations. Additionally, power measurement analysis achieves a highest output power of 26 dBm with a power-added efficiency of 78%. The achieved minimum noise figure of 0.4-0.6 dB is an outstanding result for a circuit topology in the terahertz range. The chip area of the hybrid circuit is 0.65 mm2, and the power consumption is 9.6 mW.
Maglaveras, Nicos; Kilintzis, Vassilis; Koutkias, Vassilis; Chouvarda, Ioanna
2016-01-01
Integrated care and connected health are two fast-evolving concepts with the potential to leverage personalised health. On the one hand, the restructuring of care models and the implementation of new systems and integrated care programs providing coaching and advanced intervention possibilities enable medical decision support and personalized healthcare services. On the other hand, the connected health ecosystem builds the means to follow and support citizens via personal health systems in their everyday activities and, thus, gives rise to an unprecedented wealth of data. These approaches are leading to a deluge of complex data, as well as to new types of interactions with and among users of the healthcare ecosystem. The main challenges refer to the data layer, the information layer, and the output of information processing and analytics. In all the above-mentioned layers, the primary concern is quality, both in data and information, thus increasing the need for filtering mechanisms. Especially in the data layer, the big biodata management and analytics ecosystem is evolving; telemonitoring is a step forward for data quality leverage, with numerous challenges still left to address, partly due to the large number of micro- and nano-sensors and technologies available today, as well as the heterogeneity in the users' backgrounds and data sources. This leads to new R&D pathways concerning biomedical information processing and management, as well as the design of new intelligent decision support systems (DSS) and interventions for patients. In this paper, we illustrate these issues through exemplar research targeting chronic patients, illustrating the current status and trends in PHS within the integrated care and connected care world.
Directory of Open Access Journals (Sweden)
V. F. Chekhun
2013-09-01
New data are reported on the approximation of the experimental cytogenetic dose-effect dependence using a spline regression model, which improves the biological dosimetry of human radiological exposure. This is achieved by reducing the error in the determination of absorbed dose compared with the traditional linear and linear-quadratic models, and makes it possible to predict the behaviour of dose curves on the plateau.
Wurm, Patrick; Ulz, Manfred H.
2016-10-01
The aim of this work is to provide an improved information exchange in hierarchical atomistic-to-continuum settings by applying stochastic approximation methods. For this purpose a typical model belonging to this class is chosen and enhanced. On the macroscale of this particular two-scale model, the balance equations of continuum mechanics are solved using a nonlinear finite element formulation. The microscale, on which a canonical ensemble of statistical mechanics is simulated using molecular dynamics, replaces a classic material formulation. The constitutive behavior is computed on the microscale by computing time averages. However, these time averages are thermal noise-corrupted as the microscale may practically not be tracked for a sufficiently long period of time due to limited computational resources. This noise prevents the model from a classical convergence behavior and creates a setting that shows remarkable resemblance to iteration schemes known from stochastic approximation. This resemblance justifies the use of two averaging strategies known to improve the convergence behavior in stochastic approximation schemes under certain, fairly general, conditions. To demonstrate the effectiveness of the proposed strategies, three numerical examples are studied.
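The averaging strategies borrowed from stochastic approximation can be illustrated on a scalar Robbins-Monro iteration with noise-corrupted gradients, mimicking the thermally noisy microscale time averages. The objective, step-size schedule and noise level are all illustrative choices, not the paper's two-scale model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy fixed-point iteration: the gradient of f(x) = 0.5*(x - 3)^2 is observed
# with additive noise, preventing classical convergence of the raw iterates.
x, x_hist = 0.0, []
for k in range(2000):
    grad = (x - 3.0) + rng.standard_normal()     # noise-corrupted gradient
    x -= (1.0 / (k + 1) ** 0.7) * grad           # Robbins-Monro step size a_k
    x_hist.append(x)

x_last = x_hist[-1]                              # still jittery
x_avg = np.mean(x_hist[len(x_hist) // 2:])       # Polyak-Ruppert tail average
```

The raw final iterate fluctuates on the scale of the last step size times the noise, while the tail average suppresses the noise by averaging over many correlated iterates; this is the convergence improvement the averaging strategies provide under fairly general conditions.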
Directory of Open Access Journals (Sweden)
Christina Pfeiffer
2015-09-01
Multivariate genetic evaluation in modern dairy cattle breeding programs has become important in recent decades. The simultaneous estimation of all production and functional traits is still demanding. Different meta-models are used to overcome several constraints. The aim of this study was to conduct an approximate multivariate two-step procedure applied to de-regressed breeding values and yield deviations of five fertility traits of Austrian Pinzgau cattle, and to compare the results with routinely estimated breeding values. The approximate two-step procedure applied to de-regressed breeding values performed better than the procedure applied to yield deviations. Spearman rank correlations for all animals, sires and cows were between 0.996 and 0.999 for the procedure applied to de-regressed breeding values, and between 0.866 and 0.995 for the procedure applied to yield deviations. The results are encouraging for a move from the currently used selection index in routine genetic evaluation towards an approximate two-step procedure applied to de-regressed breeding values.
Zarghani, Maryam; Parastar, Hadi
2017-11-17
The objective of the present work is the development of joint approximate diagonalization of eigenmatrices (JADE), a member of the independent component analysis (ICA) family, for the analysis of gas chromatography-mass spectrometry (GC-MS) and comprehensive two-dimensional gas chromatography-mass spectrometry (GC×GC-MS) data, to address the incomplete separation problem that occurs during the analysis of complex sample matrices. In this regard, simulated GC-MS and GC×GC-MS data sets with different numbers of components, different degrees of overlap and noise were evaluated. In the case of simultaneous analysis of multiple samples, column-wise augmentation for GC-MS and column-wise super-augmentation for GC×GC-MS were used before JADE analysis. The performance of JADE was evaluated in terms of the statistical parameters of lack of fit (LOF), mutual information (MI) and the Amari index, as well as analytical figures of merit (AFOMs) obtained from calibration curves. In addition, the area of feasible solutions (AFS) was calculated by two different approaches, MCR-BANDs and the polygon inflation algorithm (FACPACK). Furthermore, JADE performance was compared with multivariate curve resolution-alternating least squares (MCR-ALS) and the other ICA algorithms of mean-field ICA (MFICA) and mutual information least dependent component analysis (MILCA). In all cases, JADE could successfully resolve the elution and spectral profiles in GC-MS and GC×GC-MS data with acceptable statistical and calibration parameters, and its solutions were in the AFS. To check the applicability of JADE in real cases, JADE was used for the resolution and quantification of phenanthrene and anthracene in the aromatic fraction of heavy fuel oil (HFO) analyzed by GC×GC-MS. Surprisingly, pure elution and spectral profiles of the target compounds were properly resolved in the presence of baseline and interferences using JADE. Once more, the performance of JADE was compared with MCR-ALS in the real case. On this matter, the mutual information
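The blind-unmixing idea underlying JADE can be illustrated compactly. JADE itself jointly diagonalizes fourth-order cumulant matrices after whitening; the sketch below substitutes a symmetric FastICA iteration (also driven by fourth-order statistics) because it fits in a few lines, and uses a synthetic two-component linear mixture as a stand-in for co-eluting chromatographic profiles, not real GC-MS data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for overlapped profiles: two independent sub-Gaussian
# sources mixed by an unknown matrix A.
n = 4000
S = rng.uniform(-1.0, 1.0, size=(2, n))
A = np.array([[0.8, 0.3],
              [0.2, 0.9]])
X = A @ S

# Whitening: decorrelate and normalize the observed mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Symmetric FastICA with a cubic nonlinearity (a stand-in for JADE's joint
# diagonalization; both exploit fourth-order statistics).
W, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # orthogonal initialization
for _ in range(100):
    W = ((W @ Z) ** 3) @ Z.T / n - 3 * W          # fixed-point update
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                                    # symmetric decorrelation
Y = W @ Z                                         # recovered sources (up to sign/order)
```

As in any ICA method, the sources come back only up to permutation, sign and scale, which is why resolved chromatographic profiles still need a calibration step before quantification.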
Mitigating Sports Injury Risks Using Internet of Things and Analytics Approaches.
Wilkerson, Gary B; Gupta, Ashish; Colston, Marisa A
2018-03-12
Sport injuries restrict participation, impose a substantial economic burden, and can have persisting adverse effects on health-related quality of life. The effective use of the Internet of Things (IoT), combined with analytics approaches, can improve player safety through identification of injury risk factors that can be addressed by targeted risk-reduction training activities. IoT devices can facilitate highly efficient quantification of relevant functional capabilities prior to sport participation, which could substantially advance the prevailing sport injury management paradigm. This study introduces a framework for using sensor-derived IoT data to supplement other data for objective estimation of each individual college football player's level of injury risk, an approach to injury prevention that has not been previously reported. A cohort of 45 NCAA Division I-FCS college players provided data in the form of self-ratings of persisting effects of previous injuries and a single-leg postural stability test. Instantaneous change in body mass acceleration (jerk) during the test was quantified by a smartphone accelerometer, with data wirelessly transmitted to a secure cloud server. Injuries sustained from the beginning of practice sessions until the end of the season were documented, along with the number of games played by each athlete over the course of the 13-game season. Results demonstrate a strong prediction model. Our approach may have strong relevance to the estimation of injury risk for other physically demanding activities. Clearly, there is great potential for improvement of injury prevention initiatives through identification of individual athletes who possess elevated injury risk and targeted interventions. © 2018 Society for Risk Analysis.
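The jerk metric described in this record (the time derivative of acceleration) can be computed from raw accelerometer samples by simple finite differencing. A minimal sketch; the sampling rate, axis layout, and toy signal are illustrative assumptions, not details from the paper:

```python
import numpy as np

def jerk_magnitude(accel, fs):
    """Finite-difference jerk (time derivative of acceleration).
    accel: (N, 3) array of accelerometer samples in m/s^2, fs: sampling rate in Hz.
    Returns the jerk magnitude for each of the N-1 sample intervals."""
    d = np.diff(accel, axis=0) * fs        # component-wise jerk, m/s^3
    return np.linalg.norm(d, axis=1)

# Toy check: a constant acceleration signal has zero jerk everywhere
fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
accel = np.column_stack([np.ones_like(t), np.zeros_like(t), np.zeros_like(t)])
print(jerk_magnitude(accel, fs).max())   # 0.0
```

In practice a summary statistic of this per-interval magnitude (peak or RMS over the balance test) would serve as the risk-model feature.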
Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Gunay, Nur Sibel; Wang, Jing; Sun, Elaine Y; Pradines, Joël R; Farutin, Victor; Shriver, Zachary; Kaundinya, Ganesh V; Capila, Ishan
2017-02-01
Heparan sulfate (HS), a glycosaminoglycan present on the surface of cells, has been postulated to have important roles in driving both normal and pathological physiologies. The chemical structure and sulfation pattern (domain structure) of HS is believed to determine its biological function, to vary across tissue types, and to be modified in the context of disease. Characterization of HS requires isolation and purification of cell surface HS as a complex mixture. This process may introduce additional chemical modification of the native residues. In this study, we describe an approach towards thorough characterization of bovine kidney heparan sulfate (BKHS) that utilizes a variety of orthogonal analytical techniques (e.g. NMR, IP-RPHPLC, LC-MS). These techniques are applied to characterize this mixture at various levels including composition, fragment level, and overall chain properties. The combination of these techniques in many instances provides orthogonal views into the fine structure of HS, and in other instances provides overlapping / confirmatory information from different perspectives. Specifically, this approach enables quantitative determination of natural and modified saccharide residues in the HS chains, and identifies unusual structures. Analysis of partially digested HS chains allows for a better understanding of the domain structures within this mixture, and yields specific insights into the non-reducing end and reducing end structures of the chains. This approach outlines a useful framework that can be applied to elucidate HS structure and thereby provides means to advance understanding of its biological role and potential involvement in disease progression. In addition, the techniques described here can be applied to characterization of heparin from different sources.
Exploration of Simple Analytical Approaches for Rapid Detection of Pathogenic Bacteria
Energy Technology Data Exchange (ETDEWEB)
Rahman, Salma [Iowa State Univ., Ames, IA (United States)
2005-01-01
Many of the current methods for pathogenic bacterial detection require long sample-preparation and analysis times, as well as complex instrumentation. This dissertation explores simple analytical approaches (e.g., flow cytometry and diffuse reflectance spectroscopy) that may be applied towards the ideal requirements of a microbial detection system, through method and instrumentation development, and through the creation and characterization of immunosensing platforms. This dissertation is organized into six sections. In the general Introduction section, a literature review on several of the key aspects of this work is presented. First, different approaches for detection of pathogenic bacteria are reviewed, with a comparison of the relative strengths and weaknesses of each approach. A general overview of diffuse reflectance spectroscopy is then presented. Next, the structure and function of self-assembled monolayers (SAMs) formed from organosulfur molecules at gold, and micrometer- and sub-micrometer-scale patterning of biomolecules using SAMs, are discussed. This section is followed by four research chapters, presented as separate manuscripts. Chapter 1 describes the efforts and challenges towards the creation of immunosensing platforms that exploit the flexibility and structural stability of SAMs of thiols at gold. A 1H,1H,2H,2H-perfluorodecyl-1-thiol SAM (PFDT) and dithio-bis(succinimidyl propionate) (DSP)-derived SAMs were used to construct the platform. Chapter 2 describes the characterization of the PFDT- and DSP-derived SAMs, and the architectures formed when they are coupled to antibodies as well as target bacteria. These studies used infrared reflection spectroscopy (IRS), X-ray photoelectron spectroscopy (XPS), and an electrochemical quartz crystal microbalance (EQCM). Chapter 3 presents a new, sensitive, and portable diffuse reflection based technique for the rapid identification and quantification of pathogenic bacteria. Chapter 4 reports research efforts in the
Oseev, Aleksandr; Lucklum, Ralf; Zubtsov, Mikhail; Schmidt, Marc-Peter; Mukhin, Nikolay V; Hirsch, Soeren
2017-09-23
The current work demonstrates a novel surface acoustic wave (SAW) based phononic crystal sensor approach that allows the integration of a velocimetry-based sensor concept into single chip integrated solutions, such as Lab-on-a-Chip devices. The introduced sensor platform merges advantages of ultrasonic velocimetry analytic systems and a microacoustic sensor approach. It is based on the analysis of structural resonances in a periodic composite arrangement of microfluidic channels confined within a liquid analyte. Completed theoretical and experimental investigations show the ability to utilize periodic structure localized modes for the detection of volumetric properties of liquids and prove the efficacy of the proposed sensor concept.
International Nuclear Information System (INIS)
Suarez Antola, R.
2005-01-01
It was recently proposed to apply an extension of Lyapunov's first method to the non-linear regime, known as non-linear modal analysis (NMA), to the study of space-time problems in nuclear reactor kinetics, nuclear power plant dynamics, and nuclear power plant instrumentation and control (1). The present communication shows how to apply NMA to the study of xenon spatial oscillations in large nuclear reactors. The set of non-linear modal equations derived by J. Lewins (2) for neutron flux, xenon concentration and iodine concentration is discussed, and a modified version of these equations is taken as a starting point. Using the methods of singular perturbation theory, a slow manifold is constructed in the space of mode amplitudes. This allows the reduction of the original high-dimensional dynamics to a low-dimensional one. It is shown how the amplitudes of the first mode of the neutron flux, temperature, xenon and iodine fields can have a stable steady-state value while the corresponding amplitudes of the second mode oscillate in a stable limit cycle. The extrapolated dimensions of the reactor's core are used as bifurcation parameters. Approximate analytical formulae are obtained for the critical values of these parameters (below which the onset of oscillations is produced), and for the period and amplitudes of the above-mentioned oscillations. These results are applied to the discussion of neutron flux and temperature excursions at critical locations in the reactor's core. The results of NMA can be validated against results obtained with suitable computer codes, using homogenization theory (3) to link the complex heterogeneous model of the codes with the simplified mathematical model used for NMA
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. Copyright © 2015 Elsevier Inc. All rights reserved.
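The finite-difference treatment this record describes can be illustrated with a minimal 1D sketch: an explicit scheme for the diffusion equation with an absorbing trap at one boundary, accumulating the diffusive flux into the trap as the trap count. All parameter values are illustrative assumptions; the paper's actual model is 2D and uses more careful flux approximations.

```python
import numpy as np

def trap_count_1d(D=1.0, L=10.0, u0=1.0, T=1.0, nx=200, nt=20000):
    """Explicit finite-difference solution of u_t = D u_xx on [0, L] with an
    absorbing trap at x = 0 and a reflecting wall at x = L, starting from a
    uniform population density u0. Returns the cumulative trap count
    (time-integrated diffusive flux into the trap) at time T."""
    dx, dt = L / nx, T / nt
    assert D * dt / dx**2 < 0.5, "explicit scheme stability condition"
    u = np.full(nx + 1, float(u0))
    u[0] = 0.0                              # the trap keeps density at zero
    caught = 0.0
    for _ in range(nt):
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[-1] = 2.0 * (u[-2] - u[-1])     # zero-flux (reflecting) wall
        u = u + D * dt / dx**2 * lap
        u[0] = 0.0
        caught += D * dt * u[1] / dx        # flux J = D du/dx at the trap
    return caught

c = trap_count_1d()
print(c)   # close to the semi-infinite analytic value 2*u0*sqrt(D*T/pi) ~ 1.13
```

The one-sided flux approximation at the trap boundary is exactly the kind of detail the paper argues must be handled carefully to get accurate trap counts.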
Energy Technology Data Exchange (ETDEWEB)
Morini, Filippo; Deleuze, Michael S., E-mail: michael.deleuze@uhasselt.be [Center of Molecular and Materials Modelling, Hasselt University, Agoralaan Gebouw D, B-3590 Diepenbeek (Belgium); Watanabe, Noboru; Takahashi, Masahiko [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai 980-8577 (Japan)
2015-03-07
The influence of thermally induced nuclear dynamics (molecular vibrations) in the initial electronic ground state on the valence orbital momentum profiles of furan has been theoretically investigated using two different approaches. The first of these approaches employs the principles of Born-Oppenheimer molecular dynamics, whereas the so-called harmonic analytical quantum mechanical approach resorts to an analytical decomposition of contributions arising from quantized harmonic vibrational eigenstates. In spite of their intrinsic differences, the two approaches enable consistent insights into the electron momentum distributions inferred from new measurements employing electron momentum spectroscopy and an electron impact energy of 1.2 keV. Both approaches point out in particular an appreciable influence of a few specific molecular vibrations of A1 symmetry on the 9a1 momentum profile, which can be unravelled from considerations on the symmetry characteristics of orbitals and their energy spacing.
International Nuclear Information System (INIS)
Tapilin, V.M.
1982-01-01
A scheme for calculating the electronic structure of a solid-state surface and chemisorbed molecules is discussed. The Green's function method and the MO LCAO approximation are used, which permit calculations that take into account the whole crystal, not only a fragment of it, with the accuracy customary in quantum chemistry. Results of model calculations are presented: chemisorption of a hydrogen-like atom on the (100) face of a one-band crystal model, and dispersion curves for the density of states of the nickel (100) face. (Auth.)
International Nuclear Information System (INIS)
Gawand, Hemangi Laxman; Bhattacharjee, A. K.; Roy, Kallol
2017-01-01
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques, such as the computational geometric method and least squares approximation, that can be effective in monitor design. The paper provides insights into diagnostic monitoring and demonstrates its effectiveness through attack simulations on a four-tank model, using these computation techniques for diagnosis. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance, and hence they could be a possible target of such attacks.
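The least squares approximation mentioned for log analysis can be sketched as fitting a trend line to a logged process variable and flagging entries that deviate strongly from it. The threshold, the synthetic tank-level log, and the injected spike are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def fit_trend(t, y):
    """Least squares fit of y ~ a*t + b to a logged process variable."""
    A = np.column_stack([t, np.ones_like(t)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def flag_anomalies(t, y, k=3.0):
    """Flag log entries deviating more than k standard deviations from the
    fitted trend, a simple proactive check on bulky SCADA logs."""
    a, b = fit_trend(t, y)
    resid = y - (a * t + b)
    return np.abs(resid) > k * resid.std()

# Synthetic log: a slowly rising tank level with one spoofed reading
t = np.arange(100.0)
y = 0.5 * t + 2.0 + np.random.default_rng(0).normal(0.0, 0.1, 100)
y[40] += 5.0
flags = flag_anomalies(t, y)
print(int(flags.sum()), bool(flags[40]))
```

A control aware attack that slowly drifts a setpoint would need a model-based residual rather than this plain trend check, which is why the paper also considers computational geometric methods.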
Energy Technology Data Exchange (ETDEWEB)
Gawand, Hemangi Laxman [Homi Bhabha National Institute, Computer Section, BARC, Mumbai (India); Bhattacharjee, A. K. [Reactor Control Division, BARC, Mumbai (India); Roy, Kallol [BHAVINI, Kalpakkam (India)
2017-04-15
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques, such as the computational geometric method and least squares approximation, that can be effective in monitor design. The paper provides insights into diagnostic monitoring and demonstrates its effectiveness through attack simulations on a four-tank model, using these computation techniques for diagnosis. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance, and hence they could be a possible target of such attacks.
Spatial Analytic Hierarchy Process Model for Flood Forecasting: An Integrated Approach
International Nuclear Information System (INIS)
Matori, Abd Nasir; Yusof, Khamaruzaman Wan; Hashim, Mustafa Ahmad; Lawal, Dano Umar; Balogun, Abdul-Lateef
2014-01-01
Various flood influencing factors such as rainfall, geology, slope gradient, land use, soil type, drainage density, temperature, etc. are generally considered for flood hazard assessment. However, lack of appropriate handling/integration of data from different sources is a challenge that can make any spatial forecasting difficult and inaccurate. Availability of accurate flood maps and a thorough understanding of the subsurface conditions can considerably enhance flood disaster management. This study presents an approach that attempts to provide a solution to this drawback by combining a Geographic Information System (GIS) with an Analytic Hierarchy Process (AHP) model as a spatial forecasting tool. In achieving the set objectives, spatial forecasting of flood-susceptible zones in the study area was carried out. A total of five criteria/factors believed to influence flood generation in the study area were selected. Priority weights were assigned to each criterion/factor based on Saaty's nine-point scale of preference, and the weights were further normalized through the AHP. The model was integrated into a GIS in order to produce a flood forecasting map.
Decision support for energy conservation promotion: an analytic hierarchy process approach
International Nuclear Information System (INIS)
Kablan, M.M.
2004-01-01
An effective energy conservation program in any country should encourage the different enterprises, utilities and individuals to employ energy-efficient processes, technologies, equipment, and materials. Governments use different mechanisms or policy instruments such as pricing policy (PP), regulation and legislation (RL), training and education, fiscal and financial incentives (FFI), and R and D to promote energy conservation. Effective implementation of energy conservation policies requires prioritization of the different available policy instruments. This paper presents an analytic hierarchy process (AHP) based modeling framework for the prioritization of energy conservation policy instruments. The use of AHP to support management in the prioritization of policy instruments for promoting energy conservation is illustrated using the case study of Jordan. The research provides a comprehensive framework for performing the prioritization in a scientific and systematic manner. The four most promising policy instruments for promoting energy conservation in Jordan are RL (37.4%), followed by FFI (22.2%), PP (18.0%), and training, education and qualification (14.5%). One major advantage of the AHP approach is that it breaks a large problem into smaller problems, which enables the decision-maker (DM) to concentrate better and make sounder decisions. In addition, AHP employs a consistency test that can screen out inconsistent judgements. The presented methodology might be beneficial to DMs in other countries.
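The AHP machinery used in the two records above (Saaty-scale pairwise comparisons, normalized priority weights, and a consistency test) can be sketched as follows. The comparison matrix is a hypothetical example for the four instruments, not the study's actual judgement data:

```python
import numpy as np

def ahp_weights(M):
    """Priority weights from a Saaty-scale pairwise comparison matrix via
    the principal eigenvector, plus the consistency ratio CR = CI / RI.
    CR below about 0.1 is conventionally taken as acceptably consistent."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)               # Perron (principal) eigenvalue
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                           # normalized priority weights
    ci = (vals[i].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
    return w, ci / ri

# Hypothetical judgements for RL, FFI, PP, training/education
M = [[1.0, 2.0, 2.0, 3.0],
     [0.5, 1.0, 1.0, 2.0],
     [0.5, 1.0, 1.0, 2.0],
     [1/3, 0.5, 0.5, 1.0]]
w, cr = ahp_weights(M)
print(np.round(w, 3), round(cr, 3))
```

With a reciprocal, near-consistent matrix like this one, RL receives the largest weight and the consistency ratio stays well under the 0.1 screening threshold.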
A Hybrid Approach for Reliability Analysis Based on Analytic Hierarchy Process and Bayesian Network
International Nuclear Information System (INIS)
Zubair, Muhammad
2014-01-01
By using the analytic hierarchy process (AHP) and Bayesian networks (BN), the present research examines the technical and non-technical issues of nuclear accidents. The study revealed that technical faults were one major cause of these accidents. From another point of view, it becomes clear that human factors such as dishonesty, insufficient training, and selfishness also play a key role in causing these accidents. In this study, a hybrid approach to reliability analysis based on AHP and BN for increasing nuclear power plant (NPP) safety has been developed. Using AHP, the best alternatives for improving safety, design, and operation, and for allocating budget across all technical and non-technical factors related to nuclear safety, were investigated. We use a special structure of BN based on the AHP method. The graphs of the BN and the probabilities associated with its nodes are designed to translate expert knowledge on the selection of the best alternative. The results show that improvements in regulatory authorities will decrease failure probabilities and increase safety and reliability in the industrial area.
Mobility spectrum analytical approach for the type-II Weyl semimetal Td-MoTe2
Pei, Q. L.; Luo, X.; Chen, F. C.; Lv, H. Y.; Sun, Y.; Lu, W. J.; Tong, P.; Sheng, Z. G.; Han, Y. Y.; Song, W. H.; Zhu, X. B.; Sun, Y. P.
2018-02-01
The extreme magnetoresistance (XMR) in orthorhombic W/MoTe2 arises from the combination of the perfect electron-hole (e-h) compensation effect and the unique orbital texture topology, which together have made these materials an intriguing research field in materials physics. Herein, we apply a special analytical approach, the mobility spectrum (μ-spectrum), which requires no a priori hypothesis about the carriers. Based on the interpretation of the longitudinal and transverse electric transport of Td-MoTe2, the types and numbers of carriers can be obtained. There are three observations: a large residual resistivity ratio is observed in the MoTe2 single crystal sample, which indicates that the studied crystal is of high quality; we observe three electron pockets and three hole pockets in the μ-spectrum, and the h/e ratio is much less than 1, which shows that MoTe2 is more electron-like; and, different from the separated peaks obtained in the hole-like μ-spectrum, those of the electron-like one are continuous, which may indicate the topological character of the electron pockets in Td-MoTe2. The present results may provide an important clue to understanding the mechanism of the XMR effect in Td-MoTe2.
Imaging through atmospheric turbulence for laser based C-RAM systems: an analytical approach
Buske, Ivo; Riede, Wolfgang; Zoz, Jürgen
2013-10-01
High energy laser (HEL) weapons have unique attributes which distinguish them from the limitations of kinetic energy weapons. The HEL weapon engagement process typically starts with identifying the target and selecting the aim point on the target through a high magnification telescope. One scenario for such a HEL system is the countermeasure against rocket, artillery or mortar (RAM) objects to protect ships, camps or other infrastructure from terrorist attacks. For target identification, and especially to resolve the aim point, it is essential to ensure high resolution imaging of RAM objects. Throughout the ballistic flight phase, knowledge of the expected imaging quality is important to estimate and evaluate the countermeasure system performance. Image quality is mainly influenced by unavoidable atmospheric turbulence. Analytical calculations have been performed to analyze and evaluate image quality parameters for an approaching RAM object. Kolmogorov turbulence theory was used to determine the atmospheric coherence length and the isoplanatic angle. The image acquisition distinguishes between long and short exposure times to characterize tip/tilt image shift and the impact of higher-order turbulence fluctuations. Two different observer positions are considered to show the influence of the selected sensor site. Furthermore, two different turbulence strengths are investigated to point out the effect of climate or weather conditions. It is well known that atmospheric turbulence degrades image sharpness and creates blurred images. Investigations are carried out to estimate the effectiveness of simple tip/tilt systems or low order adaptive optics for laser based C-RAM systems.
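The atmospheric coherence length mentioned in this record is the Fried parameter of Kolmogorov theory. For a horizontal path with constant turbulence strength it has a closed form; a minimal sketch, with wavelength, Cn^2 value, and path length chosen as illustrative assumptions rather than taken from the paper:

```python
import math

def fried_parameter(wavelength, cn2, path_length):
    """Plane-wave Fried parameter (atmospheric coherence length) for a
    horizontal path with constant structure constant Cn^2, assuming
    Kolmogorov turbulence: r0 = (0.423 k^2 Cn^2 L)^(-3/5)."""
    k = 2.0 * math.pi / wavelength
    return (0.423 * k**2 * cn2 * path_length) ** (-3.0 / 5.0)

# Illustrative values: 1064 nm laser, moderate turbulence
# Cn^2 = 1e-14 m^(-2/3), 2 km path
r0 = fried_parameter(1.064e-6, 1e-14, 2000.0)
print(r0)             # coherence length in metres, a few centimetres here
print(1.064e-6 / r0)  # rough seeing-limited angular resolution, rad
```

When r0 is much smaller than the telescope aperture, long-exposure resolution is turbulence-limited at roughly lambda/r0, which is the regime where tip/tilt correction or low order adaptive optics pays off.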
An analytic approach to probability tables for the unresolved resonance region
Brown, David; Kawano, Toshihiko
2017-09-01
The Unresolved Resonance Region (URR) connects the fast neutron region with the Resolved Resonance Region (RRR). The URR is problematic since resonances are not resolvable experimentally, yet the fluctuations in the neutron cross sections play a discernible and technologically important role: the URR in a typical nucleus lies in the 100 keV - 2 MeV window where the typical fission spectrum peaks. The URR also represents the transition between R-matrix theory, used to describe isolated resonances, and Hauser-Feshbach theory, which accurately describes the average cross sections. In practice, only average or systematic features of the resonances in the URR are known and are tabulated in evaluations in a nuclear data library such as ENDF/B-VII.1. Codes such as AMPX and NJOY can compute the probability distribution of the cross section in the URR under some assumptions using Monte Carlo realizations of sets of resonances. These probability distributions are stored in the so-called PURR tables. In our work, we begin to develop a scheme for computing the covariance of the cross section probability distribution analytically. Our approach offers the possibility of defining the limits of applicability of Hauser-Feshbach theory and suggests a way to calculate PURR tables directly from systematics for nuclei whose RRR is unknown, provided one makes appropriate assumptions about the shape of the cross section probability distribution.
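The Monte Carlo resonance realizations mentioned here conventionally draw individual widths from a Porter-Thomas (chi-squared) distribution about the tabulated average width. A minimal sketch of that sampling step, with illustrative numbers rather than values from any evaluation:

```python
import numpy as np

def sample_resonance_widths(n, avg_width, dof=1, seed=0):
    """Porter-Thomas sampling: resonance widths fluctuate according to a
    chi-squared distribution with `dof` degrees of freedom, rescaled so the
    mean matches the tabulated average width. This is the standard
    assumption behind Monte Carlo URR ladder generation."""
    rng = np.random.default_rng(seed)
    return avg_width * rng.chisquare(dof, n) / dof

widths = sample_resonance_widths(100_000, 0.1)
print(widths.mean())   # close to the average width, 0.1
```

Histogramming the cross sections computed from many such realizations is what yields the PURR probability tables; the paper's contribution is to obtain the distribution's moments analytically instead.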
Analytical Approaches to Improve Accuracy in Solving the Protein Topology Problem.
Al Nasr, Kamal; Yousef, Feras; Jebril, Ruba; Jones, Christopher
2018-01-23
To take advantage of recent advances in genomics and proteomics, it is critical that the three-dimensional physical structure of biological macromolecules be determined. Cryo-electron microscopy (cryo-EM) is a promising and improving method for obtaining this data; however, resolution is often not sufficient to directly determine the atomic scale structure. Despite this, information on secondary structure locations is detectable. De novo modeling is a computational approach to modeling these macromolecular structures based on cryo-EM derived data. During de novo modeling, a mapping between detected secondary structures and the underlying amino acid sequence must be identified. DP-TOSS (Dynamic Programming for determining the Topology Of Secondary Structures) is one tool that attempts to automate the creation of this mapping. By treating the correspondence between the detected structures and the structures predicted from sequence data as a constraint graph problem, DP-TOSS achieved good accuracy in its original iteration. In this paper, we propose modifications to the scoring methodology of DP-TOSS to improve its accuracy. Three scoring schemes were applied to DP-TOSS and tested: (i) a skeleton-based scoring function; (ii) a geometry-based analytical function; and (iii) a multi-well potential energy-based function. A test of 25 proteins shows that a combination of these schemes can improve the performance of DP-TOSS in solving the topology determination problem for protein macromolecules.
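The core idea of mapping detected secondary structures to sequence-predicted ones by dynamic programming can be illustrated with a toy order-preserving assignment. This is a deliberately simplified stand-in, not DP-TOSS's published constraint graph formulation or scoring functions:

```python
def topology_score(detected, predicted, skip=4.0):
    """Order-preserving assignment of detected secondary-structure lengths
    (elements traced in the density map) to sequence-predicted lengths,
    minimising total length mismatch; unmatched elements pay a fixed skip
    penalty. A toy stand-in for DP-TOSS's dynamic program."""
    n, m = len(detected), len(predicted)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == INF:
                continue
            if i < n and j < m:      # assign detected i to predicted j
                cost = dp[i][j] + abs(detected[i] - predicted[j])
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], cost)
            if i < n:                # leave a detected element unassigned
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + skip)
            if j < m:                # leave a predicted element unassigned
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j] + skip)
    return dp[n][m]

print(topology_score([12, 8, 15], [11, 9, 15]))   # 2.0
```

The paper's scoring schemes replace the length-mismatch cost here with skeleton-based, geometric, and potential energy terms, while the dynamic programming skeleton stays the same in spirit.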
Oud, Bart; van Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-03-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independence of genome sequences from experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Holger Steinmetz
2012-04-01
Schwartz' theory of human values has found widespread interest in the social sciences. A central part of the theory is that the 10 proposed basic values (i.e., achievement, power, self-direction, hedonism, stimulation, benevolence, universalism, conformity, security, and tradition) are arranged in a circular structure. The present study applies a meta-analytical structural equation modelling approach to test the circular structure. The model tested was the quasi-circumplex model, which is considered the most appropriate representation of the circular structure. Moreover, the study explores how far the circular structure varies with the samples used and the methodological characteristics of the studies. The meta-analysis comprised 318 matrices with the correlations among the 10 values, gathered from 88 studies and the European Social Survey (overall n = 251,239). To reduce heterogeneity across the matrices, cluster analysis was used to sort the matrices into eight clusters with similar correlation profiles, and the circular structure was tested in each cluster. The results showed that three clusters demonstrated a good fit with the data and an adequate match to the theoretically proposed structure. The clusters' cultural and methodological profiles indicate potential moderators of the circular structure which should be considered in future research.
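The clustering step described in this record, grouping correlation matrices by the similarity of their correlation profiles, can be sketched by vectorising each matrix's lower triangle and running a simple k-means. This is a simplified stand-in with toy 3x3 matrices, not the study's actual procedure or data:

```python
import numpy as np

def cluster_corr_matrices(mats, k=2, iters=20):
    """k-means on the vectorised lower triangles of correlation matrices,
    grouping studies with similar correlation profiles."""
    idx = np.tril_indices(mats.shape[1], k=-1)
    X = np.stack([m[idx] for m in mats])
    # deterministic farthest-point initialisation
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def corr(r, n=3):
    """Toy n x n correlation matrix with constant off-diagonal r."""
    m = np.full((n, n), r)
    np.fill_diagonal(m, 1.0)
    return m

mats = np.stack([corr(0.80), corr(0.75), corr(-0.20), corr(-0.25)])
labels = cluster_corr_matrices(mats, k=2)
print(labels)
```

Fitting the quasi-circumplex model separately within each cluster, as the study does, then reduces the heterogeneity that would otherwise blur the pooled structural test.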
Hopfer, Helene; Haar, Nina; Stockreiter, Wolfgang; Sauer, Christian; Leitner, Erich
2012-01-01
In a previous study, we identified carbonyls as highly odor-active compounds in both unprocessed and processed polypropylene (PP) with higher intensities after processing, indicating a temperature-driven forming mechanism. In the presented work, we studied whether (a) these carbonyls are the major odor drivers to the overall odor of polyolefins, (b) their formation is taking place already at moderate temperatures well below the typical processing temperatures, (c) conventional antioxidants in polyolefins can prevent or reduce their formation, and (d) whether reducing the amount of oxygen present can decrease the overall odor. One polyethylene (PE) and one PP were selected, and both stabilized and unstabilized polymer powder samples were exposed to conditions differing in oxygen concentration and aging time. The changes in the volatile fraction as well as the formation of odor-active compounds were monitored using a multidisciplinary approach by combining analytical methods based on gas chromatography (GC), multivariate data analysis, and sensory methods (GC-olfactometry and a sensory panel). Both investigated materials (PE and PP) showed similar degradation products (aldehydes, ketones, carboxylic acids, alcohols, and lactones) which increased dramatically with increasing aging time and the lack of stabilization. Oxidation products, mainly carbonyl compounds, were responsible for the odor of the investigated materials. The main odor drivers were unsaturated ketones and aldehydes with a chain length between six and nine C-atoms. Interestingly, similar odor patterns were found for both stabilized and unstabilized samples, indicating that similar formation processes take place independent of the stabilization.
Pashkova, Galina V; Aisueva, Tatyana S; Finkelshtein, Alexander L; Ivanov, Egor V; Shchetnikov, Alexander A
2016-11-01
Bromine has been recognized as a valuable indicator for paleoclimatic studies. Wavelength dispersive X-ray fluorescence (WDXRF) and total reflection X-ray fluorescence (TXRF) methods were applied to study the bromine distributions in lake sediment cores. The conventional WDXRF technique usually requires a relatively large mass of sediment sample and a set of calibration samples. Analytical approaches were developed to apply WDXRF to small sediment core samples in the absence of adequate calibration samples with a known Br content. The mass of sample to be analyzed was reduced to 200-300 mg, and an internal standard method with correction using fundamental parameters was developed for Br quantification. A TXRF technique based on the direct analysis of a solid suspension, using 20 mg of sediment sample and the internal standard method, was additionally tested. The accuracy of the WDXRF and TXRF techniques was assessed by comparative analysis of reference materials of sediments, soil and biological samples. In general, good agreement was achieved between the reference values and the measured values. The detection limits of Br were 1 mg/kg and 0.4 mg/kg for WDXRF and TXRF, respectively. The results of the Br determination obtained with the different XRF techniques were comparable to each other and were used for paleoclimatic reconstructions. Copyright © 2016 Elsevier B.V. All rights reserved.
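The internal-standard quantification described above reduces, in essence, to scaling the analyte intensity by the intensity of a spiked reference element of known concentration. A minimal sketch, assuming a known relative sensitivity factor (the function name and parameters are illustrative, not from the paper):

```python
def quantify_internal_standard(i_analyte, i_standard, c_standard, rel_sensitivity=1.0):
    """Estimate analyte concentration (e.g. mg/kg) from the intensity ratio
    to a spiked internal standard of known concentration c_standard.
    rel_sensitivity is the analyte/standard sensitivity ratio, which in the
    WDXRF variant above would be corrected via fundamental parameters."""
    return (i_analyte / i_standard) * c_standard / rel_sensitivity

# Example: analyte peak half as intense as the standard spiked at 10 mg/kg
c_br = quantify_internal_standard(500.0, 1000.0, 10.0)
```

With equal sensitivities this yields 5 mg/kg; a sensitivity ratio of 0.5 would double the estimate.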
International Nuclear Information System (INIS)
Donadille, L.; Derreumaux, S.; Mantione, J.; Robbes, I.; Trompier, F.; Amgarou, K.; Asselineau, B.; Martin, A.
2008-01-01
Full text: X-rays produced by high-energy (greater than 6 MeV) medical electron linear accelerators create secondary neutron radiation fields, mainly by photonuclear reactions inside the materials of the accelerator head, the patient and the walls of the therapy room. Numerous papers have been devoted to the study of neutron production in medical linear accelerators and the resulting decay of activation products. However, data on doses delivered to workers under treatment conditions are scarce. In France, there are more than 350 external radiotherapy facilities representing almost all types of techniques and designs. IRSN carried out a measurement campaign in order to investigate the variation of the occupational dose according to the different situations encountered. Six installations were investigated, associated with the main manufacturers (Varian, Elekta, General Electric, Siemens), for several nominal energies, conventional and IMRT techniques, and bunker designs. Measurements were carried out separately for neutron and photon radiation fields, and for radiation associated with the decay of the activation products, by means of radiometers, tissue-equivalent proportional counters and spectrometers (neutron and photon spectrometry). They were performed at the positions occupied by the workers, i.e. outside the bunker during treatments and inside between treatments. Measurements were compared to published data. In addition, semi-empirical analytical approaches recommended by international protocols were used to estimate doses inside and outside the bunkers. The results obtained by both approaches were compared and analysed. The annual occupational effective dose was estimated at about 1 mSv, including more than 50% associated with the decay of activation products and less than 10% due to direct exposure to leakage neutrons produced during treatments. (author)
Directory of Open Access Journals (Sweden)
Orhan Dengiz
2018-01-01
Full Text Available Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge of the best combination of factors that suit production has contributed to low yields. The aim of this study was to determine the most suitable areas for agricultural use. To that end, land suitability classes of the study area were determined using a multi-criteria approach, combining a linear combination technique with the analytical hierarchy process and taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and land mapping units were taken from a detailed digital soil map at a scale of 1:5,000. In addition, a GIS program was used to produce a land suitability map of the study area. This study was carried out at the Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha; 6890 ha is used for irrigated agriculture, dry farming and pasture, while 169 ha is used for non-agricultural purposes such as settlements, roads and water bodies. The average annual temperature and precipitation of the study area are 16.1 °C and 1039.5 mm, respectively. After determining the distribution of land suitability classes, it was found that 15.0% of the study area is highly (S1) or moderately (S2) suitable, while 85% is marginally suitable or unsuitable (coded S3 and N). The results of the linear combination technique were also compared with other approaches such as the Land Use Capability Classification and the Suitability Class for Agricultural Use methods.
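The linear combination step described above amounts to a weighted sum of normalized criterion scores (with the analytical hierarchy process supplying the weights), mapped onto the S1-N classes. A sketch under assumed thresholds; the cut-off values and criterion weights below are illustrative, not the study's:

```python
def suitability_class(scores, weights, thresholds=(0.8, 0.6, 0.4)):
    """Weighted linear combination of normalized criterion scores (0-1),
    classified into FAO-style suitability classes S1/S2/S3/N.
    thresholds are assumed class boundaries on the combined index."""
    total = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    s1, s2, s3 = thresholds
    if total >= s1:
        return "S1"
    if total >= s2:
        return "S2"
    if total >= s3:
        return "S3"
    return "N"

# Example: five criteria (e.g. slope, texture, depth, drainage, pH)
cls = suitability_class([0.9, 0.85, 0.95, 0.9, 0.8], [3, 2, 2, 2, 1])
```

Applied per mapping unit, such a rule reproduces the kind of S1-N map the study derives in GIS.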
International Nuclear Information System (INIS)
Beelman, R.J.
1999-01-01
A symptom approach to the analytical validation of symptom-based EOPs includes: (1) Identification of critical safety functions to the maintenance of fission product barrier integrity; (2) Identification of the symptoms which manifest an impending challenge to critical safety function maintenance; (3) Development of a symptomatic methodology to delineate bounding plant transient response modes; (4) Specification of bounding scenarios; (5) Development of a systematic calculational approach consistent with the objectives of the methodology; (6) Performance of thermal-hydraulic computer code calculations implementing the analytical methodology; (7) Interpretation of the analytical results on the basis of information available to the operator; (8) Application of the results to the validation of the proposed operator actions; (9) Production of a technical basis document justifying the proposed operator actions. (author)
Kymes, Steven M; Plotzke, Michael R; Li, Jim Z; Nichol, Michael B; Wu, Joanne; Fain, Joel
2010-07-01
Glaucoma accounts for more than 11% of all cases of blindness in the United States, but there have been few studies of economic impact. We examine incremental cost of primary open-angle glaucoma considering both visual and nonvisual medical costs over a lifetime of glaucoma. A decision analytic approach taking the payor's perspective with microsimulation estimation. We constructed a Markov model to replicate health events over the remaining lifetime of someone newly diagnosed with glaucoma. Costs of this group were compared with those estimated for a control group without glaucoma. The cost of management of glaucoma (including medications) before the onset of visual impairment was not considered. The model was populated with probability data estimated from Medicare claims data (1999 through 2005). Cost of nonocular medications and nursing home use was estimated from California Medicare claims, and all other costs were estimated from Medicare claims data. We found modest differences in the incidence of comorbid conditions and health service use between people with glaucoma and the control group. Over their expected lifetime, the cost of care for people with primary open-angle glaucoma was higher than that of people without primary open-angle glaucoma by $1688, or approximately $137 per year. Among Medicare beneficiaries, a glaucoma diagnosis was not found to be associated with significant risk of comorbidities before development of visual impairment. Further study is necessary to consider the impact of glaucoma on quality of life, as well as aspects of physical and visual function not captured in this claims-based analysis. 2010 Elsevier Inc. All rights reserved.
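The incremental-cost comparison follows the usual Markov microsimulation pattern: simulate many individuals year by year, accumulate costs, and difference the group means. A deliberately simplified sketch; the transition probability, cost figures and horizon below are invented for illustration and are not the study's estimates:

```python
import random

def mean_lifetime_cost(p_event, annual_cost, event_cost,
                       years=20, n_individuals=5000, seed=7):
    """Toy microsimulation: each simulated person accrues a base annual
    cost plus an extra cost whenever a random health event occurs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_individuals):
        for _ in range(years):
            total += annual_cost
            if rng.random() < p_event:
                total += event_cost
    return total / n_individuals

# Incremental cost = cohort with the condition minus matched controls
incremental = mean_lifetime_cost(0.10, 500, 2000) - mean_lifetime_cost(0.06, 500, 2000)
```

In the actual model the yearly loop would step through Markov health states with claims-derived transition probabilities rather than a single event probability.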
Tromp, P.C.; Kuijpers, E.; Bekker, C.; Godderis, L.; Lan, Q.; Jedynska, A.D.; Vermeulen, R.; Pronk, A.
2017-01-01
To date there is no consensus about the most appropriate analytical method for measuring carbon nanotubes (CNTs), hampering the assessment and limiting the comparison of data. The goal of this study is to develop an approach for the assessment of the level and nature of inhalable multi-wall CNTs
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
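The use of Monte Carlo methods for power estimation mentioned above can be sketched with a one-sample test: simulate data under an assumed effect many times and count how often the null is rejected. The effect size, sample size and approximate critical value below are assumptions for illustration:

```python
import math
import random
import statistics

def estimate_power(n, effect, reps=2000, seed=42):
    """Monte Carlo power for a one-sample test of mean = 0 when the true
    mean is `effect` (unit variance). Uses the large-sample critical
    value 1.96 as an approximation to the exact t quantile."""
    rng = random.Random(seed)
    crit = 1.96
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(effect, 1.0) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        if abs(m / se) > crit:
            hits += 1
    return hits / reps
```

Running the function over a grid of `n` values is exactly the sample-size planning use case the abstract describes.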
Cothran, E. K.
1982-01-01
The computer program written in support of one dimensional analytical approach to thermal modeling of Bridgman type crystal growth is presented. The program listing and flow charts are included, along with the complete thermal model. Sample problems include detailed comments on input and output to aid the first time user.
Abidin, Zainal; Handayani, Wahyu; Fattah, Mochammad
2016-01-01
Masamo, a new variety of catfish cultivated by the farmer group "Sumber Lancar" in Blimbing, Malang, is currently in high demand because a growing number of consumers eat fish to meet the body's protein needs. The increasing demand for Masamo catfish has been followed by production and marketing efforts. This study asks whether that marketing is efficient. It therefore uses an analytical approach to identify the institutions and channels of Masamo catfish marketing perf...
DEFF Research Database (Denmark)
Mewes, Julie Sascia; Elliot, Michelle L.; Lee, Kim
2017-01-01
In this paper, three qualitative researchers with professional backgrounds in social anthropology, occupational therapy, and occupational science present their methodological and theoretical standpoints and resultant analytical approaches on a single set of ethnographic data – an event occurring … Such an approach reveals similarities, differences, and the complexity that may arise when attempting to locate occupation as the central unit of analysis. The conclusion suggests that cutting through the layers of occupation necessarily provides multiple ontologies…
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven...
A Task-Analytic Approach to the Determination of Training Requirements for the Precision Descent
Smith, Nancy; Rosekind, Mark (Technical Monitor)
1996-01-01
A task-analytic approach was used to evaluate the results from an experiment comparing two training methods for the "Precision Descent," a cockpit procedure designed to complement a new, computer-based air traffic control advisory system by allowing air traffic controllers to assign precise descent trajectories to aircraft. A task model was developed for the procedure using a methodology that represents four different categories of task-related knowledge: (1) ability to determine current flight goals; (2) ability to assess the current flight situation relative to those goals; (3) operational knowledge about flight-related tasks; and (4) knowledge about task selection. This model showed what knowledge experienced pilots already possessed, and how that knowledge was supplemented by training material provided in the two training conditions. All flight crews were given a "Precision Descent Chart" that explained the procedure's clearances and compliance requirements. This information enabled pilots to establish appropriate flight goals for the descent, and to monitor their compliance with those goals. In addition to this chart, half of the crews received a "Precision Descent Bulletin" containing technique recommendations for performing procedure-related tasks. The Bulletin's recommendations supported pilots in task selection and helped clarify the procedure's compliance requirements. Eight type-rated flight crews flew eight Precision Descents in a Boeing 747-400 simulator, with four crews in each of the two training conditions. Both conditions (Chart and Chart-with-Bulletin) relied exclusively on the use of those documents to introduce the procedure. No performance feedback was provided during the experiment. Preliminary results show better procedure compliance and higher acceptability ratings from flight crews in the Chart-with-Bulletin condition. These crews performed flight-related tasks less efficiently, however, using the simpler but less efficient methods suggested
Borghesi, Fabrizio; Migani, Francesca; Andreotti, Alessandro; Baccetti, Nicola; Bianchi, Nicola; Birke, Manfred; Dinelli, Enrico
2016-02-15
Assessing trace metal pollution using feathers has long attracted the attention of ecotoxicologists as a cost-effective and non-invasive biomonitoring method. In order to interpret the concentrations in feathers considering the external contamination due to lithic residue particles, we adopted a novel geochemical approach. We analysed 58 element concentrations in feathers of wild Eurasian Greater Flamingo Phoenicopterus roseus fledglings, from 4 colonies in Western Europe (Spain, France, Sardinia, and North-eastern Italy) and one group of adults from a zoo. In addition, 53 elements were assessed in soil collected close to the nesting islets. This enabled us to compare a wide selection of metals among the colonies, highlighting environmental anomalies and tackling possible causes of misinterpretation of feather results. Most trace elements in feathers (Al, Ce, Co, Cs, Fe, Ga, Li, Mn, Nb, Pb, Rb, Ti, V, Zr, and REEs) were of external origin. Some elements could be constitutive (Cu, Zn) or significantly bioaccumulated (Hg, Se) in flamingos. For As, Cr, and to a lesser extent Pb, it seems that bioaccumulation potentially could be revealed by highly exposed birds, provided feathers are well cleaned. This comprehensive study provides a new dataset and confirms that Hg has been accumulated in feathers in all sites to some extent, with particular concern for the Sardinian colony, which should be studied further including Cr. The Spanish colony appears critical for As pollution and should be urgently investigated in depth. Feathers collected from North-eastern Italy were the hardest to clean, but our methods allowed biological interpretation of Cr and Pb. Our study highlights the importance of external contamination when analysing trace elements in feathers and advances methodological recommendations in order to reduce the presence of residual particles carrying elements of external origin. Geochemical data, when available, can represent a valuable tool for a correct
Shigayeva, Altynay; Coker, Richard J
2015-04-01
There is renewed concern over the sustainability of disease control programmes, and re-emergence of policy recommendations to integrate programmes with general health systems. However, the conceptualization of this issue has remarkably received little critical attention. Additionally, the study of programmatic sustainability presents methodological challenges. In this article, we propose a conceptual framework to support analyses of sustainability of communicable disease programmes. Through this work, we also aim to clarify a link between notions of integration and sustainability. As a part of development of the conceptual framework, we conducted a systematic literature review of peer-reviewed literature on concepts, definitions, analytical approaches and empirical studies on sustainability in health systems. Identified conceptual proposals for analysis of sustainability in health systems lack an explicit conceptualization of what a health system is. Drawing upon theoretical concepts originating in sustainability sciences and our review here, we conceptualize a communicable disease programme as a component of a health system which is viewed as a complex adaptive system. We propose five programmatic characteristics that may explain a potential for sustainability: leadership, capacity, interactions (notions of integration), flexibility/adaptability and performance. Though integration of elements of a programme with other system components is important, its role in sustainability is context specific and difficult to predict. The proposed framework might serve as a basis for further empirical evaluations in understanding complex interplay between programmes and broader health systems in the development of sustainable responses to communicable diseases. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.
A novel approach to evaluate soil heat flux calculation: An analytical review of nine methods
Gao, Zhongming; Russell, Eric S.; Missik, Justine E. C.; Huang, Maoyi; Chen, Xingyuan; Strickland, Chris E.; Clayton, Ray; Arntzen, Evan; Ma, Yulong; Liu, Heping
2017-07-01
There are no direct methods to evaluate calculated soil heat flux (SHF) at the surface (G0). Instead, validation and cross evaluation of methods for calculating G0 usually rely on the conventional calorimetric method or the degree of the surface energy balance closure. However, there is uncertainty in the calorimetric method itself, and factors apart from G0 also contribute to nonclosure of the surface energy balance. Here we used a novel approach to evaluate nine different methods for calculating SHF, including the calorimetric method and methods based on analytical solutions of the heat diffusion equation. The SHF (Gz) measured by a self-calibrating SHF plate at a depth of z = 5 cm below the surface (hereafter Gm_5cm) was deployed as a reference. Each SHF calculation method was assessed by comparing the calculated Gz at the same depth (hereafter Gc_5cm) with Gm_5cm. The calorimetric method and simple measurement method performed best in determining Gc_5cm but still underestimated Gm_5cm by 19% during the daytime. Possible causes for this underestimation include errors and uncertainties in SHF measurements and soil thermal properties, as well as the phase lag between Gc_5cm and Gm_5cm. Our results indicate that the calorimetric method achieves the most accurate SHF estimates if self-calibrating SHF plates are deployed at two depths (e.g., 5 cm and 10 cm), soil temperature and water content measurements are made in a few depths between the two plates, and soil thermal properties are accurately quantified.
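The calorimetric step referenced above combines the flux measured by a plate at depth with the heat stored in the soil layer above it. A minimal sketch of that correction; the variable names and example values are ours, not the paper's:

```python
def surface_heat_flux(g_z, dT_dt, c_v, depth):
    """Calorimetric estimate of the surface soil heat flux G0 (W m^-2):
    the plate flux g_z at depth z plus the heat-storage term of the
    layer above the plate.
    dT_dt : layer-average rate of temperature change (K s^-1)
    c_v   : volumetric heat capacity of the soil (J m^-3 K^-1)
    depth : plate depth z (m)."""
    storage = c_v * depth * dT_dt
    return g_z + storage

# Example: plate at 5 cm reading 80 W m^-2 while the layer above warms
g0 = surface_heat_flux(80.0, 1e-4, 2.0e6, 0.05)
```

In practice `dT_dt` and `c_v` come from the temperature and water-content profile measurements between the two plate depths, which is where much of the uncertainty the study discusses enters.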
Analytic approach to constructing effective local potentials in nuclear reaction theory
International Nuclear Information System (INIS)
Blokhintsev, L.D.; Safronov, A.N.; Safronov, A.A.
2004-01-01
Full text: Recently a method of constructing effective local potentials between strongly interacting composite particles was suggested [1]. The approach is based on the requirement of the proper structure of the singularities of scattering amplitudes nearest to the physical region, as well as on the methods of the inverse scattering problem. In the given work the method is generalized to the case of the presence of the long-range Coulomb interaction, which drastically changes the analytic structure of the S matrix. The effective potential is defined as the local operator which, being inserted into the Lippmann-Schwinger equation, generates the required discontinuities of partial-wave scattering amplitudes. Its strong part is written in the form V(r) = ∫_μ^∞ C(α) e^(−αr) dα, where μ is determined by the position of the dynamical singularity nearest to the physical region. For all the processes considered, these singularities correspond to pole Feynman diagrams describing the elastic transfer mechanism. The C(α) function is found as a solution of the inverse scattering problem equations, the kernels of which are determined by the discontinuities at the nearest dynamical cuts. The Coulomb interaction is treated by introducing reduced Coulomb-nuclear scattering amplitudes and, in addition, by taking into account Coulomb corrections in the three-particle intermediate states and in the vertex functions of the pole diagrams. The processes of nd, pd, p³He, nα, pα and ³Heα scattering were considered. To calculate the required discontinuities, the information on the corresponding vertex constants and binding energies was used. Effective potentials, scattering lengths and low-energy phase shifts for the processes under consideration were obtained. The work was supported by the Russian Foundation for Basic Research (grant No. 04-02-16602) and by the 'Russian Universities' program (grant No. 02.02.027)
Maurage, Pierre; Timary, Philippe de; D'Hondt, Fabien
2017-08-01
Emotional and interpersonal impairments have been largely reported in alcohol-dependence, and their role in its development and maintenance is widely established. However, earlier studies have exclusively focused on group comparisons between healthy controls and alcohol-dependent individuals, considering them as a homogeneous population. The variability of socio-emotional profiles in this disorder thus remains totally unexplored. The present study used a cluster analytic approach to explore the heterogeneity of affective and social disorders in alcohol-dependent individuals. 296 recently-detoxified alcohol-dependent patients were first compared with 246 matched healthy controls regarding self-reported emotional (i.e. alexithymia) and social (i.e. interpersonal problems) difficulties. Then, a cluster analysis was performed, focusing on the alcohol-dependent sample, to explore the presence of differential patterns of socio-emotional deficits and their links with demographic, psychopathological and alcohol-related variables. The group comparison between alcohol-dependent individuals and controls clearly confirmed that emotional and interpersonal difficulties constitute a key factor in alcohol-dependence. However, the cluster analysis identified five subgroups of alcohol-dependent individuals, presenting distinct combinations of alexithymia and interpersonal problems ranging from a total absence of reported impairment to generalized socio-emotional difficulties. Alcohol-dependent individuals should no longer be considered as constituting a unitary group regarding their affective and interpersonal difficulties, but rather as a population encompassing a wide variety of socio-emotional profiles. Future experimental studies on emotional and social variables should thus go beyond mere group comparisons to explore this heterogeneity, and prevention programs proposing an individualized evaluation and rehabilitation of these deficits should be promoted.
Using Gephi to visualize online course participation: a Social Learning Analytics approach
Directory of Open Access Journals (Sweden)
Ángel Hernández-García
2014-12-01
Social learning analytics provides tools and methods for extracting information that is useful for improving the learning process. This case study shows how instructors and course coordinators can use the tool Gephi to generate relevant information that would otherwise be difficult to gain. Analysis of empirical data from a cross-curricular course with 656 students proves the usefulness of Gephi for social learning analytics studies and demonstrates how the tool can provide relevant indicators of student activity and engagement. The study also discusses the potential of social learning analytics for improving online instruction via learning data visualization.
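A typical preprocessing step for a study like this is turning forum reply logs into a weighted directed edge list that Gephi can import. A stdlib-only sketch; the data shape and column names are assumptions (the columns follow Gephi's spreadsheet-import convention, not anything specified in the case study):

```python
import collections
import csv
import io

def interaction_edges(posts):
    """posts: iterable of (author, replied_to_author) pairs from a forum
    log; replies to nobody (None) are dropped. Returns weighted directed
    edges (source, target, weight)."""
    weights = collections.Counter(
        (src, dst) for src, dst in posts if dst is not None)
    return [(s, d, w) for (s, d), w in weights.items()]

def to_gephi_csv(edges):
    """Serialize edges with the Source/Target/Weight header that Gephi's
    CSV import recognizes."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Source", "Target", "Weight"])
    writer.writerows(edges)
    return buf.getvalue()
```

Loading the resulting CSV into Gephi then gives direct access to the activity and engagement indicators (degree, centrality, layout-based visual clusters) the study reports.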
International Nuclear Information System (INIS)
Bubert, H.; Garten, R.; Klockenkaemper, R.; Puderbach, H.
1983-01-01
Corrosion protective coatings on galvanized steel sheets have been studied by a combination of SEM, EDX, AES, ISS and SIMS. Analytical statements concerning such rough, poly-crystalline and contaminated surfaces of technical samples are quite difficult to obtain. The use of a surface-analytical multi-method approach overcomes the intrinsic limitations of the individual methods applied, thus resulting in a consistent picture of those technical surfaces. Such results can be used to examine technical faults and to optimize the technical process. (Author)
Harne, R. L.; Zhang, Chunlin; Li, Bing; Wang, K. W.
2016-07-01
Impulsive energies are abundant throughout the natural and built environments, for instance as stimulated by wind gusts, foot-steps, or vehicle-road interactions. In the interest of maximizing the sustainability of society's technological developments, one idea is to capture these high-amplitude and abrupt energies and convert them into usable electrical power such as for sensors which otherwise rely on less sustainable power supplies. In this spirit, the considerable sensitivity to impulse-type events previously uncovered for bistable oscillators has motivated recent experimental and numerical studies on the power generation performance of bistable vibration energy harvesters. To lead to an effective and efficient predictive tool and design guide, this research develops a new analytical approach to estimate the electroelastic response and power generation of a bistable energy harvester when excited by an impulse. Comparison with values determined by direct simulation of the governing equations shows that the analytically predicted net converted energies are very accurate for a wide range of impulse strengths. Extensive experimental investigations are undertaken to validate the analytical approach and it is seen that the predicted estimates of the impulsive energy conversion are in excellent agreement with the measurements, and the detailed structural dynamics are correctly reproduced. As a result, the analytical approach represents a significant leap forward in the understanding of how to effectively leverage bistable structures as energy harvesting devices and introduces new means to elucidate the transient and far-from-equilibrium dynamics of nonlinear systems more generally.
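The impulse response analyzed above can be reproduced numerically from the standard bistable (negative linear plus cubic stiffness) oscillator model, with the impulse idealized as an initial velocity imparted at a stable equilibrium. A sketch under assumed, illustrative parameter values (not those of the validated device), splitting the damping into mechanical and electrical parts:

```python
def harvested_energy(v0, k1=1.0, k3=1.0, c_mech=0.02, c_elec=0.03,
                     dt=1e-3, steps=20000):
    """Semi-implicit Euler integration of the bistable oscillator
    x'' = k1*x - k3*x**3 - (c_mech + c_elec)*x', started at the stable
    equilibrium x = +1 with impulse-imparted velocity v0. Returns the
    energy dissipated through the electrical (harvesting) damping."""
    x, v = 1.0, v0
    energy = 0.0
    for _ in range(steps):
        a = k1 * x - k3 * x**3 - (c_mech + c_elec) * v
        v += a * dt
        x += v * dt
        energy += c_elec * v * v * dt  # instantaneous electrical power * dt
    return energy
```

Sweeping `v0` and comparing the converted energies is the direct-simulation baseline against which the paper's analytical estimates are checked.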
Combined analytical and numerical approaches in Dynamic Stability analyses of engineering systems
Náprstek, Jiří
2015-03-01
Dynamic Stability is a widely studied area that has attracted many researchers from various disciplines. Although Dynamic Stability is usually associated with mechanics, theoretical physics or other natural and technical disciplines, it is also relevant to social, economic, and philosophical areas of our lives. Therefore, it is useful to occasionally highlight the general aspects of this amazing area, to present some relevant examples and to evaluate its position among the various branches of Rational Mechanics. From this perspective, the aim of this study is to present a brief review concerning the Dynamic Stability problem, its basic definitions and principles, important phenomena, research motivations and applications in engineering. The relationships with relevant systems that are prone to stability loss (encountered in other areas such as physics, other natural sciences and engineering) are also noted. The theoretical background, which is applicable to many disciplines, is presented. In this paper, the most frequently used Dynamic Stability analysis methods are presented in relation to individual dynamic systems that are widely discussed in various engineering branches. In particular, the Lyapunov function and exponent procedures, Routh-Hurwitz, Liénard, and other theorems are outlined together with demonstrations. The possibilities for analytical and numerical procedures are mentioned together with possible feedback from experimental research and testing. The strengths and shortcomings of these approaches are evaluated together with examples of their effective complementing of each other. The systems that are widely encountered in engineering are presented in the form of mathematical models. The analyses of their Dynamic Stability and post-critical behaviour are also presented. The stability limits, bifurcation points, quasi-periodic response processes and chaotic regimes are discussed. The limit cycle existence and stability are examined together with their
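Of the criteria surveyed above, the Routh-Hurwitz conditions are the easiest to mechanize; for a third-order characteristic polynomial they reduce to sign checks plus one product inequality. A sketch (not tied to any specific system in the text):

```python
def hurwitz_stable_cubic(a3, a2, a1, a0):
    """Routh-Hurwitz stability test for a3*s^3 + a2*s^2 + a1*s + a0:
    all coefficients must be positive and a2*a1 > a3*a0 for every root
    to have a negative real part (asymptotic stability)."""
    return a3 > 0 and a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0
```

For example, s³ + 6s² + 11s + 6 (roots −1, −2, −3) passes the test, while s³ + s² + s + 2 fails the product inequality even though all its coefficients are positive.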
Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.
2011-01-01
The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. For formal
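The decision rule described above (most consultations in a short bin, the highest-value minority in a long bin) can be sketched as a greedy allocation under a staff-day budget. The bin durations and the greedy rule below are our simplification of the paper's decision analysis, for illustration only:

```python
def assign_bins(scores, budget_days, short_days=1.0, long_days=7.0):
    """Greedy sketch: every project gets at least the short bin; the
    highest-scoring projects are upgraded to the long bin (which allows
    time for negotiating project modifications) while staff-days remain.
    Returns {project_index: "short" | "long"}."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    bins = {i: "short" for i in order}
    used = short_days * len(scores)
    for i in order:
        upgrade = long_days - short_days
        if used + upgrade <= budget_days:
            bins[i] = "long"
            used += upgrade
    return bins
```

Varying `budget_days` to mimic declining staff shows how the optimal share of long-bin consultations shrinks, the trade-off the WFWO analysis quantifies.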
Analytical approaches in optimization of design of electrical system of INRP Project at Tarapur
International Nuclear Information System (INIS)
Mishra, H.; Basu, Sekhar; Raman, C.V.; Kushwah, M.
2015-01-01
Integrated Nuclear Recycle Plant (INRP) will be the first integrated nuclear fuel recycle facility where spent fuel storage, reprocessing, waste management plants and waste storage will be integrated into a single entity by locating all the civil structures in a single campus. The electrical power system of the plant comprises various normal, emergency and uninterruptible power supplies, in line with the existing nuclear recycle plants and the safety guidelines for such a radiochemical facility. The power supply systems are significantly large and spread over the length and breadth of the plant area. A large number of transformers and associated switchgear and cabling systems are envisaged for development of the power system network. In order to arrive at an optimum design solution, a number of technically feasible options for the 6.6 kV and 415 V power distribution network were analysed and compared techno-economically. This paper covers the analytical approaches adopted in the design of the electrical systems of the INRP Project at Tarapur. It involves a load flow study and short circuit analysis using ETAP software, as well as manual calculation of steady-state and transient voltage dip. The load-flow study was performed to determine the steady-state operation of the electric power system. The voltage drop on each feeder, the voltage level at each bus, and the power flow in all branches and feeder circuits were calculated. It was determined whether system voltages remain within specified limits under various contingency conditions, and whether equipment such as transformers and cables are protected against overload. The load-flow study was used to identify the need for additional active power, capacitive, or inductive VAR support, or the placement of capacitors and/or reactors to maintain system voltages within specified limits. Prospective losses in each branch and total system power losses were also calculated. The short circuit study models the current that flows in the power system under
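The manual voltage-dip check mentioned above typically rests on the standard three-phase feeder voltage-drop formula. A sketch with parameter names of our choosing (the cable data in the example are illustrative, not INRP values):

```python
import math

def feeder_voltage_drop(i_a, r_ohm_per_km, x_ohm_per_km, length_km, pf):
    """Approximate line-to-line voltage drop (V) on a balanced three-phase
    feeder: dV = sqrt(3) * I * L * (R*cos(phi) + X*sin(phi)),
    where pf = cos(phi) is the lagging load power factor."""
    phi = math.acos(pf)
    return math.sqrt(3) * i_a * length_km * (
        r_ohm_per_km * math.cos(phi) + x_ohm_per_km * math.sin(phi))

# Example: 100 A over 1 km of cable with R = 0.2, X = 0.1 ohm/km at pf 0.85
dv = feeder_voltage_drop(100.0, 0.2, 0.1, 1.0, 0.85)
```

Comparing `dv` against the permissible percentage of the 415 V (or 6.6 kV) nominal voltage is the steady-state check; tools like ETAP automate the same calculation across the whole network.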
Geometrical optimization of nanostrips for surface plasmon excitation: an analytical approach.
Grosges, Thomas; Barchiesi, Dominique
2018-01-01
We give a simple tool for the optimization of the dimensions of a metallic nanostrip illuminated at a given wavelength under normal incidence, to get a maximum of the electromagnetic field amplitude in the nanostrip. We propose an analytical formula that gives the widths and heights of the series of nanostrips that produce field enhancement. The validity of the analytical formula is checked by using the finite element method. This design of a nanostrip could be useful for sensors and thermally active components.
A Review: An Approach towards the Analytical Method Development for Determination of Newer Drugs
Mishra, Kirtimaya; Balamurugan, K.; Suresh, R.
2017-01-01
In the present scenario, several new drugs have been invented for treating various diseases. Before being launched on the market, these drugs must undergo an analytical validation process. This review covers analytical techniques such as ultraviolet/visible spectrophotometry, fluorimetry, capillary electrophoresis, chromatographic methods (gas chromatography and high-performance liquid chromatography), LC-MS, GC-MS, solid-phase extraction, NMR, mass spectrometry, LC/MS/MS, LC/UV, X-ray crystallo...
DEFF Research Database (Denmark)
Ryberg, Thomas; Dirckinck-Holmfeld, Lone
2008-01-01
This paper sets out to problematize generational categories such as ‘Power Users’ or ‘New Millennium Learners’ by discussing these in the light of recent research on youth and ICT, and suggests analytic and conceptual pathways to engage in more critical and empirically founded studies of young people’s learning in technology and media-rich settings. Based on a study of a group of young ‘Power Users’, it is argued that conceptualising and analysing learning as a process of patchworking can enhance our knowledge of young people’s learning in such settings. We argue that the analytical approach gives us ways of critically investigating young people’s learning in technology and media-rich settings, and of studying whether these are processes of critical, reflexive enquiry where resources are creatively re-appropriated. With departure in an analytical example, the paper presents the proposed metaphor…
Analytical Approaches to Understanding the Role of Non-carbohydrate Components in Wood Biorefinery
Leskinen, Timo Ensio
This dissertation describes the production and analysis of wood subjected to a novel electron beam-steam explosion (EB-SE) pretreatment with the aim of evaluating its suitability for the production of bioethanol. The goals of these studies were to: 1) develop analytical methods for the investigation of depolymerization of wood components under pretreatments, 2) analyze the effects of the EB-SE pretreatment on the pretreated biomass, 3) define how lignin and extractive components affect the action of enzymes on cellulosic substrates, and 4) examine how changes in lignin structure impact its isolation and potential conversion into value-added chemicals. The first section of the work describes the development of a size-exclusion chromatography (SEC) methodology for molecular weight analysis of native and pretreated wood. The selective analysis of carbohydrates and lignin from native wood was made possible by the combination of two selective derivatization methods: ionic liquid assisted benzoylation of the carbohydrate fraction, and acetobromination of the lignin in acetic acid media. This method was then used to examine changes in softwood samples after the EB-SE pretreatment. The methodology was shown to be effective for monitoring changes in the molecular weight profiles of the pretreated wood. The second section of the work investigates synergistic effects of the EB-SE pretreatment on the molecular-level structures of wood components and the significance of these alterations in terms of enzymatic digestibility. The two pretreatment steps depolymerized cell wall components in different fashions while showing synergistic effects. Hardwood and softwood species responded differently to similar treatment conditions, which was attributed to the well-known differences in the structure of their lignin and hemicellulose fractions. The relatively crosslinked lignin in softwood appeared to limit swelling and subsequent depolymerization in comparison to hardwood
Fast, Approximate Solutions for 1D Multicomponent Gas Injection Problems
DEFF Research Database (Denmark)
Jessen, Kristian; Wang, Yun; Ermakov, Pavel
2001-01-01
This paper presents a new approach for constructing approximate analytical solutions for 1D, multicomponent gas displacement problems. The solution to the mass conservation equations governing 1D dispersion-free flow, in which components partition between two equilibrium phases, is controlled by the geometry of key tie lines. It has previously been proven that for systems with an arbitrary number of components, the key tie lines can be approximated quite accurately by a sequence of intersecting tie lines. As a result, analytical solutions can be constructed efficiently for problems with constant initial and injection compositions (Riemann problems). For fully self-sharpening systems, in which all key tie lines are connected by shocks, the analytical solutions obtained are rigorously accurate, while for systems in which some key tie lines are connected by spreading waves, the analytical solutions…
International Nuclear Information System (INIS)
Mikhailovskii, A.B.; Shirokov, M.S.; Konovalov, S.V.; Tsypin, V.S.
2005-01-01
Transport threshold models of neoclassical tearing modes in tokamaks are investigated analytically. An analysis is made of the competition between strong transverse heat transport, on the one hand, and longitudinal heat transport, longitudinal heat convection, longitudinal inertial transport, and rotational transport, on the other hand, which leads to the establishment of the perturbed temperature profile in magnetic islands. It is shown that, in all these cases, the temperature profile can be found analytically by using rigorous solutions to the heat conduction equation in the near and far regions of a chain of magnetic islands and then by matching these solutions. Analytic expressions for the temperature profile are used to calculate the contribution of the bootstrap current to the generalized Rutherford equation for the island width evolution with the aim of constructing particular transport threshold models of neoclassical tearing modes. Four transport threshold models, differing in the underlying competing mechanisms, are analyzed: collisional, convective, inertial, and rotational models. The collisional model constructed analytically is shown to coincide exactly with that calculated numerically; the reason is that the analytical temperature profile turns out to be the same as the numerical profile. The results obtained can be useful in developing the next generation of general threshold models. The first steps toward such models have already been made
International Nuclear Information System (INIS)
Liu Hongzhun; Pan Zuliang; Li Peng
2006-01-01
In this article, we derive an equality that must be admitted by the Taylor series expansion around ε = 0 of any asymptotic analytical solution of a perturbed partial differential equation (PDE) with perturbing parameter ε. By making use of this equality, we may obtain a transformation which directly maps the analytical solutions of a given unperturbed PDE to the asymptotic analytical solutions of the corresponding perturbed one. The notion of Lie-Bäcklund symmetries is introduced in order to obtain more transformations. Hence, we can directly create more transformations by virtue of known Lie-Bäcklund symmetries and recursion operators of the corresponding unperturbed equation. The perturbed Burgers equation and the perturbed Korteweg-de Vries (KdV) equation are used as examples.
Olivieri, Alejandro C; Magallanes, Jorge F
2012-08-15
Screening of relevant factors using Plackett-Burman designs is usual in analytical chemistry. It relies on the assumption that factor interactions are negligible; however, failure of recognizing such interactions may lead to incorrect results. Factor associations can be revealed by feature selection techniques such as ant colony optimization. This method has been combined with a Monte Carlo approach, developing a new algorithm for assessing both main and interaction terms when analyzing the influence of experimental factors through a Plackett-Burman design of experiments. The results for both simulated and analytically relevant experimental systems show excellent agreement with previous approaches, highlighting the importance of considering potential interactions when conducting a screening search. Copyright © 2012 Elsevier B.V. All rights reserved.
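For reference, the 12-run Plackett-Burman design underlying such screening studies can be generated from the published cyclic generator row. This sketch (factor names left implicit) also verifies the column orthogonality on which the negligible-interaction assumption rests:

```python
import numpy as np

def plackett_burman_12():
    """Build the standard 12-run Plackett-Burman design for up to 11 factors.

    Rows 1-11 are cyclic shifts of the published generator row;
    row 12 is the closing all-minus run. Main-effect columns are
    balanced and pairwise orthogonal.
    """
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, k) for k in range(11)]   # cyclic shifts of the generator
    rows.append(-np.ones(11, dtype=int))          # all-minus closing row
    return np.array(rows, dtype=int)

design = plackett_burman_12()
print(design.shape)        # (12, 11)
print(design.T @ design)   # 12 * identity: columns are orthogonal
```

Orthogonality guarantees uncorrelated main-effect estimates, but, as the abstract stresses, it says nothing about aliased interactions; that is exactly the gap the ant-colony feature selection is meant to probe.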
A Semi-Analytical Approach for the Response of Nonlinear Conservative Systems
DEFF Research Database (Denmark)
Kimiaeifar, Amin; Barari, Amin; Fooladi, M
2011-01-01
This work applies the parameter expanding method (PEM) as a powerful analytical technique in order to obtain the exact solution of nonlinear problems in classical dynamics. The Lagrange method is employed to derive the governing equations. The nonlinear governing equations are solved analytically by means of He’s parameter expanding method. It is demonstrated that one term in the series expansion is sufficient to generate a highly accurate solution, which is valid for the whole domain of the solution and system response. Comparison of the obtained solutions with numerical ones indicates that this method is an effective and convenient tool for solving these types of problems.
Two Dimensional Temperature Distributions in Plate Heat Exchangers: An Analytical Approach
Directory of Open Access Journals (Sweden)
Amir Reza Ansari Dezfoli
2015-12-01
Analytical solutions are developed to work out the two-dimensional (2D) temperature changes of flow in the passages of a plate heat exchanger in parallel-flow and counter-flow arrangements. Two different flow regimes, namely plug flow and turbulent flow, are considered. The mathematical formulation of the problems, coupled at the boundary conditions, is presented, and the solution procedure is then obtained as a special case of the two-region Sturm-Liouville problem. The results obtained for the two flow regimes are then compared with experimental results and with each other. The agreement between the analytical and experimental results is an indication of the accuracy of the solution method.
A Bayesian meta-analytic approach for safety signal detection in randomized clinical trials.
Odani, Motoi; Fukimbara, Satoru; Sato, Tosiya
2017-04-01
Meta-analyses are frequently performed on adverse event data and are primarily used for improving statistical power to detect safety signals. However, in the evaluation of drug safety for New Drug Applications, simple pooling of adverse event data from multiple clinical trials is still commonly used. We sought to propose a new Bayesian hierarchical meta-analytic approach based on consideration of a hierarchical structure of reported individual adverse event data from multiple randomized clinical trials. To develop our meta-analysis model, we extended an existing three-stage Bayesian hierarchical model by including an additional stage of the clinical trial level in the hierarchical model; this generated a four-stage Bayesian hierarchical model. We applied the proposed Bayesian meta-analysis models to published adverse event data from three premarketing randomized clinical trials of tadalafil and to a simulation study motivated by the case example to evaluate the characteristics of three alternative models. Comparison of the results from the Bayesian meta-analysis model with those from Fisher's exact test after simple pooling showed that 6 out of 10 adverse events were the same within a top 10 ranking of individual adverse events with regard to association with treatment. However, more individual adverse events were detected in the Bayesian meta-analysis model than in Fisher's exact test under the body system "Musculoskeletal and connective tissue disorders." Moreover, comparison of the overall trend of estimates between the Bayesian model and the standard approach (odds ratios after simple pooling methods) revealed that the posterior median odds ratios for the Bayesian model for most adverse events shrank toward values for no association. Based on the simulation results, the Bayesian meta-analysis model could balance the false detection rate and power to a better extent than Fisher's exact test. For example, when the threshold value of the posterior probability for
DEFF Research Database (Denmark)
Klemmensen, Charlotte Marie Bisgaard
The approach of language psychology is grounded in the persons communicating, whereas the approach of discursive psychology is grounded in social interaction. There is a lack of scientific knowledge on the social/communicative/interactional challenges of communication difficulties and brain injury in everyday life. A sense-making-in-practice approach may help form a new discourse. How may a new analytical approach be designed? May ‘communication’ be described as ‘participation abilities’, using the framework from language psychology combined with discursive psychology and the conventions of ethnomethodology? I draw on Roy Harris’ integrational linguistics approach (1998; 2009) to communication and communication abilities as I investigate how agreement on a micro-level is accomplished through participation and initiatives in interactions (Goodwin, 2003). I examine excerpts from a study I have been…
Wasser, L. A.; Gold, A. U.
2017-12-01
There is a deluge of earth systems data available to address cutting-edge science problems, yet specific skills are required to work with these data. The Earth Analytics education program, a core component of Earth Lab at the University of Colorado Boulder, is building a data-intensive program that provides training in realms including 1) interdisciplinary communication and collaboration, 2) earth science domain knowledge including geospatial science and remote sensing, and 3) reproducible, open science workflows ("earth analytics"). The earth analytics program includes an undergraduate internship, undergraduate and graduate level courses, and a professional certificate / degree program. All programs share the goal of preparing a STEM workforce for successful earth analytics driven careers. We are developing a program-wide evaluation framework that assesses the effectiveness of data-intensive instruction combined with domain science learning, to better understand and improve data-intensive teaching approaches using blends of online, in situ, asynchronous and synchronous learning. We are using targeted search engine optimization (SEO) to increase visibility and in turn program reach. Finally, our design targets longitudinal program impacts on participant career tracks over time. Here we present results from evaluation of both an interdisciplinary undergraduate / graduate level earth analytics course and an undergraduate internship. Early results suggest that a blended approach to learning and teaching, which includes both synchronous in-person teaching and active classroom hands-on learning combined with asynchronous learning in the form of online materials, leads to student success. Further, we will present our model for longitudinal tracking of participants' career focus over time, to better understand long-term program impacts. We also demonstrate the impact of SEO on online content reach and program visibility.
Assessment of Learning in Digital Interactive Social Networks: A Learning Analytics Approach
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen
2016-01-01
This paper summarizes initial field-test results from data analytics used in the work of the Assessment and Teaching of 21st Century Skills (ATC21S) project, on the "ICT Literacy--Learning in digital networks" learning progression. This project, sponsored by Cisco, Intel and Microsoft, aims to help educators around the world enable…
Behn, Maximilian; Tapken, Ulf; Puttkammer, Peter; Hagmeijer, Rob; Thouault, Nicolas
2016-01-01
The present study deals with the analytical modelling of sound transmission through turbomachinery stators. Two-dimensional cascade models are applied in combination with a newly proposed impedance model to account for the effect of flow deflection on the propagation of acoustic modes in
A Social Media Practicum: An Action-Learning Approach to Social Media Marketing and Analytics
Atwong, Catherine T.
2015-01-01
To prepare students for the rapidly evolving field of digital marketing, which requires more and more technical skills every year, a social media practicum creates a learning environment in which students can apply marketing principles and become ready for collaborative work in social media marketing and analytics. Using student newspapers as…
An analytical approach to the comparison of chemical and radiation hazards to man
International Nuclear Information System (INIS)
Leenhouts, H.P.; Chadwick, K.H.; Cebulska-Wasilewska, A.
1980-01-01
An analytical model, based on radiation biological concepts at the molecular level, is presented. It permits an analysis of the effects of mutagenic agents other than radiation and predicts that a synergistic interaction between two different mutagenic agents can occur at the molecular level. (H.K.)
Modal instability of rod fiber amplifiers: a semi-analytic approach
DEFF Research Database (Denmark)
Jørgensen, Mette Marie; Hansen, Kristian Rymann; Laurila, Marko
2013-01-01
The modal instability (MI) threshold is estimated for four rod fiber designs by combining a semi-analytic model with the finite element method. The thermal load due to the quantum defect is calculated and used to numerically determine the mode distributions on which the expression for the onset o...
Boyce, Mary C.; Singh, Kuki
2008-01-01
This paper describes a student-focused activity that promotes effective learning in analytical chemistry. Providing an environment where students were responsible for their own learning allowed them to participate at all levels from designing the problem to be addressed, planning the laboratory work to support their learning, to providing evidence…
An Innovative Hybrid 3D Analytic-Numerical Approach for System Level Modelling of PEM Fuel Cells
Directory of Open Access Journals (Sweden)
Gregor Tavčar
2013-10-01
The PEM fuel cell model presented in this paper is based on modelling species transport and coupling electrochemical reactions to species transport in an innovative way. Species transport is modelled by obtaining a 2D analytic solution for species concentration distribution in the plane perpendicular to the gas-flow and coupling consecutive 2D solutions by means of a 1D numerical gas-flow model. The 2D solution is devised on a jigsaw puzzle of multiple coupled domains, which enables the modelling of parallel straight channel fuel cells with realistic geometries. Electrochemical and other nonlinear phenomena are coupled to the species transport by a routine that uses derivative approximation with prediction-iteration. A hybrid 3D analytic-numerical fuel cell model of a laboratory test fuel cell is presented and evaluated against a professional 3D computational fluid dynamic (CFD) simulation tool. This comparative evaluation shows very good agreement between the results of the presented model and those of the CFD simulation. Furthermore, high-accuracy results are achieved at computational times short enough to be suitable for system level simulations. This computational efficiency is owed to the semi-analytic nature of its species transport modelling and to the efficient computational coupling of electrochemical kinetics and species transport.
Directory of Open Access Journals (Sweden)
Sushila
2013-09-01
In this paper, we present an efficient analytical approach based on the new homotopy perturbation sumudu transform method (HPSTM) to investigate the magnetohydrodynamic (MHD) viscous flow due to a stretching sheet. The viscous fluid is electrically conducting in the presence of a magnetic field, and the induced magnetic field is neglected for small magnetic Reynolds number. Finally, some numerical comparisons among the new HPSTM, the homotopy perturbation method and the exact solution have been made. The numerical solutions obtained by the proposed method show that the approach is easy to implement and computationally very attractive.
Analytical approach to cross-layer protocol optimization in wireless sensor networks
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
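The rejection-sampling flavour of ABC described above can be illustrated in a few lines. This is a toy sketch, not taken from the cited entry: we infer the mean of a Gaussian with known unit variance, using the sample mean as the summary statistic, so the likelihood is never evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data generated from a mean we then pretend not to know (true value 2.0).
observed = rng.normal(2.0, 1.0, size=50)
s_obs = observed.mean()                      # summary statistic of the observed data

def abc_rejection(n_draws=20000, eps=0.1):
    """Keep prior draws whose simulated summary lies within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)          # draw a candidate from the prior
        sim = rng.normal(mu, 1.0, size=50)   # simulate data under the model
        if abs(sim.mean() - s_obs) < eps:    # compare summaries, never the likelihood
            accepted.append(mu)
    return np.array(accepted)

posterior = abc_rejection()
print(len(posterior), posterior.mean())      # accepted draws approximate the posterior
```

Shrinking `eps` (and choosing richer summary statistics) trades acceptance rate for approximation quality, which is precisely the assumption-assessment issue the abstract flags.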
Kaplun, Veronika; Stepensky, David
2014-08-04
In our previous studies, we developed a nanodrug delivery system (nano-DDS) based on poly(lactic-co-glycolic acid) (PLGA) nanoparticles encapsulating an antigenic peptide and a fluorescent marker, together with a 3-stage approach for its decoration with peptide targeting residues. The objectives of this study were (a) to develop methods for quantitative analysis of the efficiency of the individual conjugation steps and (b) to determine, based on these methods, the efficiency of our 3-stage approach of nano-DDS decoration. We prepared antigenic peptide-loaded PLGA-based nano-DDSs and sequentially decorated them with specific residues using carbodiimide and Click (azide-alkyne Huisgen cycloaddition using copper(I) catalysis) reactions. The extent of cargo encapsulation and the release kinetics were analyzed using HPLC-based and colorimetric analytical methods. The efficiency of residue conjugation to the nano-DDSs was analyzed using FTIR spectroscopy and by quantifying the unreacted residues in the reaction mixture (i.e., by indirect analysis of reaction efficiencies). We revealed that copper, the catalyst of the Click reactions, formed complexes with unreacted targeting residues and interfered with the analysis of their conjugation efficiency. We used penicillamine (a chelator) to disrupt these complexes and to recover the unreacted residues. Quantitative analysis revealed that 28,800-34,000 targeting residues (corresponding to 11-13 nm² surface area per residue) had been conjugated to a single nano-DDS using our 3-stage decoration approach, which is much higher than previously reported conjugation efficiencies. We conclude that the applied analytical tools allow quantitative analysis of nano-DDSs and the efficiency of their conjugation with targeting residues. The 3-stage decoration approach resulted in dense conjugation of nano-DDSs with targeting residues. The present decoration and analytical approaches can be effectively applied to other types of delivery systems and other targeting
International Nuclear Information System (INIS)
Moraes, Pedro Gabriel B.; Leite, Michel C.A.; Barros, Ricardo C.
2013-01-01
In this work we developed software to model one-dimensional neutron transport problems in a multigroup energy formulation and to generate results in tables and graphs. The numerical method we use to solve the neutron diffusion problem is analytical, thus eliminating the truncation errors that appear in classical numerical methods, e.g., the finite difference method. This analytical numerical method increases computational efficiency, since no refined spatial discretization is necessary: for any spatial discretization grid used, the numerical result generated for a given point of the domain remains unchanged, apart from the rounding errors of finite computational arithmetic. We chose to develop the computational application on the MatLab platform for numerical computation, with a simple and easy-to-use program interface. We consider it important to model this neutron transport problem with a fixed source in the context of shielding calculations for radiation protection of the biosphere, which could be sensitive to ionizing radiation
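The contrast drawn above between analytical solutions and finite differences can be seen on the simplest fixed-source case: one-group diffusion in a slab, -D*phi'' + Sa*phi = S with phi(±a) = 0. This sketch uses hypothetical one-group cross-section values, not the multigroup data of the paper; the analytic flux is exact at any point, while the finite-difference error shrinks only as the grid is refined:

```python
import numpy as np

# Hypothetical one-group slab data (arbitrary consistent units).
D, Sa, S, a = 1.0, 0.5, 1.0, 4.0
L = np.sqrt(D / Sa)                      # diffusion length

def phi_analytic(x):
    """Closed-form flux: particular solution S/Sa minus a cosh boundary layer."""
    return (S / Sa) * (1.0 - np.cosh(x / L) / np.cosh(a / L))

def phi_fd(n):
    """Second-order central-difference solution on n interior nodes."""
    h = 2.0 * a / (n + 1)
    x = -a + h * np.arange(1, n + 1)
    A = np.diag(np.full(n, 2.0 * D / h**2 + Sa))
    A += np.diag(np.full(n - 1, -D / h**2), 1) + np.diag(np.full(n - 1, -D / h**2), -1)
    return x, np.linalg.solve(A, np.full(n, S))

for n in (20, 40, 80):
    x, phi = phi_fd(n)
    err = np.max(np.abs(phi - phi_analytic(x)))
    print(n, err)   # truncation error shrinks roughly 4x per mesh refinement
```

The analytic curve plays the role of the grid-independent reference that the abstract describes: its value at a given point never changes with the discretization.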
An analytical approach to activating demand elasticity with a demand response mechanism
International Nuclear Information System (INIS)
Clastres, Cedric; Khalfallah, Haikel
2015-01-01
The aim of this work is to demonstrate analytically the conditions under which activating the elasticity of consumer demand could benefit social welfare. We have developed an analytical equilibrium model to quantify the effect of deploying demand response on social welfare and energy trade. The novelty of this research is that it demonstrates the existence of an optimal area for the price signal in which demand response enhances social welfare. This optimal area is negatively correlated to the degree of competitiveness of generation technologies and the market size of the system. In particular, it should be noted that the value of un-served energy or energy reduction which the producers could lose from such a demand response scheme would limit its effectiveness. This constraint is even greater if energy trade between countries is limited. Finally, we have demonstrated scope for more aggressive demand response, when only considering the impact in terms of consumer surplus. (authors)
Practical approach to a procedure for judging the results of analytical verification measurements
International Nuclear Information System (INIS)
Beyrich, W.; Spannagel, G.
1979-01-01
For practical safeguards a particularly transparent procedure is described to judge analytical differences between declared and verified values based on experimental data relevant to the actual status of the measurement technique concerned. Essentially it consists of two parts: Derivation of distribution curves for the occurrence of interlaboratory differences from the results of analytical intercomparison programmes; and judging of observed differences using criteria established on the basis of these probability curves. By courtesy of the Euratom Safeguards Directorate, Luxembourg, the applicability of this judging procedure has been checked in practical data verification for safeguarding; the experience gained was encouraging and implementation of the method is intended. Its reliability might be improved further by evaluation of additional experimental data. (author)
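The two-part judging procedure can be sketched as follows. This is a toy illustration with simulated interlaboratory data and symmetric two-sided limits; the actual distribution curves and criteria of the safeguards programme are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Part 1: empirical distribution of declared-minus-verified differences (in %)
# as it would be derived from past interlaboratory comparison programmes
# (simulated here with an assumed 0.8 % spread).
intercomparison_diffs = rng.normal(0.0, 0.8, size=500)

# Part 2: turn the distribution into two-sided judging limits, then apply them
# to newly observed differences.
lo, hi = np.percentile(intercomparison_diffs, [2.5, 97.5])

def judge(difference_pct):
    """Flag a declared-vs-verified difference that falls outside the limits."""
    return "consistent" if lo <= difference_pct <= hi else "investigate"

print(lo, hi)
print(judge(0.5), judge(4.0))
```

As the abstract notes, the reliability of such limits improves as more intercomparison data accumulate, since the empirical percentiles then track the true measurement-technique spread more closely.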
International Nuclear Information System (INIS)
Gao, J.
1993-09-01
Starting from a single resonant rf cavity, the properties of disk-loaded travelling (forward or backward) wave accelerating structures are determined by rather simple analytical formulae. These include the coupling coefficient K in the dispersion relation, the group velocity v_g, the shunt impedance R, the wake potential W (longitudinal and transverse), the coupling coefficient β of the coupler cavity, and the coupler cavity axis shift δr which is introduced to compensate the asymmetry caused by the coupling aperture. (author) 12 refs., 18 figs
Analytical approach to the helium-atom ground state using correlated wavefunctions
Energy Technology Data Exchange (ETDEWEB)
Bhattacharyya, S.; Bhattacharyya, A.; Talukdar, B. [Visvabharati Univ., Santiniketan (India). Dept. of Physics; Deb, N.C. [Indian Association for the Cultivation of Science, Calcutta (India). Dept. of Theoretical Physics
1996-03-14
A realistic three-parameter correlated wavefunction is used to construct an exact analytical expression for the expectation value of the helium-atom Hamiltonian expressed in the interparticle coordinates. The parameters determined variationally are found to satisfy the orbital and correlation cusp conditions to a fair degree of accuracy and yield a value for the ground-state energy which is in good agreement with the exact result. (author).
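A much simpler cousin of the three-parameter correlated treatment shows the variational machinery at work: with an uncorrelated hydrogenic trial orbital of effective charge ζ, the textbook energy expectation is E(ζ) = ζ² - 27ζ/8 hartree, minimized at ζ = 27/16. This sketch is the standard one-parameter calculation, not the correlated wavefunction of the paper:

```python
import numpy as np

def energy(zeta):
    """<H> for helium with an uncorrelated trial orbital exp(-zeta*r), in hartree."""
    return zeta**2 - (27.0 / 8.0) * zeta

# Minimize on a fine grid (the exact optimum is zeta = 27/16 = 1.6875).
grid = np.linspace(1.0, 2.5, 15001)
best = grid[np.argmin(energy(grid))]
print(best, energy(best))   # ~1.6875, ~-2.8477 hartree
```

The one-parameter minimum of about -2.8477 hartree falls short of the exact ground-state energy of about -2.9037 hartree; closing most of that gap is exactly what interparticle-coordinate correlated wavefunctions like the one above are for.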
Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M.
2014-01-01
The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social...
Analytical methods in sphingolipidomics: Quantitative and profiling approaches in food analysis.
Canela, Núria; Herrero, Pol; Mariné, Sílvia; Nadal, Pedro; Ras, Maria Rosa; Rodríguez, Miguel Ángel; Arola, Lluís
2016-01-08
In recent years, sphingolipidomics has emerged as an interesting omic science that encompasses the study of the full sphingolipidome characterization, content, structure and activity in cells, tissues or organisms. Like other omics, it has the potential to impact biomarker discovery, drug development and systems biology knowledge. Concretely, dietary food sphingolipids have gained considerable importance due to their extensively reported bioactivity. Because of the complexity of this lipid family and their diversity among foods, powerful analytical methodologies are needed for their study. The analytical tools developed in the past have been improved with the enormous advances made in recent years in mass spectrometry (MS) and chromatography, which allow the convenient and sensitive identification and quantitation of sphingolipid classes and form the basis of current sphingolipidomics methodologies. In addition, novel hyphenated nuclear magnetic resonance (NMR) strategies, new ionization strategies, and MS imaging are outlined as promising technologies to shape the future of sphingolipid analyses. This review traces the analytical methods of sphingolipidomics in food analysis concerning sample extraction, chromatographic separation, the identification and quantification of sphingolipids by MS and their structural elucidation by NMR. Copyright © 2015 Elsevier B.V. All rights reserved.
The Usefulness of Analytical Procedures - An Empirical Approach in the Auditing Sector in Portugal
Directory of Open Access Journals (Sweden)
Carlos Pinho
2014-08-01
Full Text Available The conceptual conflict between efficiency and efficacy in financial auditing arises from the fact that resources are scarce, both in terms of the time available to carry out the audit and the quality and timeliness of the information available to the external auditor. The lower the assessed combination of inherent risk and control risk, the more efficient audits tend to be, allowing the auditor to carry out less extensive and less timely audit tests; in some cases, analytical audit procedures are therefore a good tool to support the opinions formed by the auditor. This research, by means of an empirical study of financial auditing in Portugal, aims to evaluate the extent to which analytical procedures are used during a financial audit engagement in Portugal, throughout the different phases involved in auditing. The conclusions point to the fact that, in general terms and regardless of the size of the audit company and the way in which professionals work, Portuguese auditors use analytical procedures more frequently during the planning phase than during the phases of evidence gathering and opinion formation.
Characteristics, Properties and Analytical Methods of Amoxicillin: A Review with Green Approach.
de Marco, Bianca Aparecida; Natori, Jéssica Sayuri Hisano; Fanelli, Stefany; Tótoli, Eliane Gandolpho; Salgado, Hérida Regina Nunes
2017-05-04
Bacterial infections are the second leading cause of global mortality. It is therefore extremely important to study antimicrobial agents. Amoxicillin is an antimicrobial agent that belongs to the class of penicillins; it has bactericidal activity and is widely used in the Brazilian health system. Some analytical methods for the identification and quantification of this penicillin are found in the literature; these are essential for its quality control, which ensures that the product characteristics, therapeutic efficacy and patient safety are maintained. Thus, this study presents a brief literature review on amoxicillin and the analytical methods developed for the analysis of this drug in official and scientific papers. The major analytical methods found were high-performance liquid chromatography (HPLC), ultra-high-performance liquid chromatography (U-HPLC), capillary electrophoresis, iodometry and diffuse reflectance infrared Fourier transform. It is essential to note that most of the developed methods use toxic and hazardous solvents, which makes it necessary for industries and researchers to develop environmentally friendly techniques that provide enhanced benefits to the environment and staff.
Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach
Ramezani, Zeinab; Orouji, Ali A.
2018-01-01
A 2-D analytical drain current model of a novel Dual Metal Gate Tunnel Field Effect Transistor based on a MOSFET (DMG-TFET) is presented in this paper. The proposed tunneling FET is obtained from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region, thereby converting it into a P+ source region. The potential profile is obtained by solving Poisson's equation; the electric field derived from it is used to obtain an expression for the drain current by analytically integrating the band-to-band tunneling generation rate over the tunneling region. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the model, the analytical results were compared with simulations from the SILVACO ATLAS device simulator and good agreement was found.
Directory of Open Access Journals (Sweden)
Luisa Pellegrino
2013-05-01
Full Text Available Hen egg-white lysozyme (LSZ) is currently used in the food industry to limit spoilage caused by the proliferation of lactic acid bacteria in the production of wine and beer, and to inhibit butyric acid fermentation in hard and extra hard cheeses (late blowing), caused by the outgrowth of clostridial spores. The aim of this work was to evaluate how the enzyme activity in commercial preparations correlates with the enzyme concentration and can be affected by the presence of process-related impurities. Different analytical approaches, including a turbidimetric assay, SDS-PAGE and HPLC, were used to analyse 17 commercial preparations of LSZ marketed in different countries. The HPLC method adopted by ISO allowed the true LSZ concentration to be determined with accuracy. The turbidimetric assay was the most suitable method to evaluate LSZ activity, whereas SDS-PAGE allowed the presence of other egg proteins, which are potential allergens, to be detected. The analytical results showed that the purity of commercially available enzyme preparations can vary significantly, and demonstrated the effectiveness of combining different analytical approaches in this type of control.
International Nuclear Information System (INIS)
Klinger, Carolin; Mayer, Bernhard
2016-01-01
Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs, in first approximation, only the information whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rates of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical consideration of cloud side effects which can be considered a convolution of a 1D radiative transfer result with a kernel of radius 1 grid box (5-point stencil), and which usually does not break the parallelization of a cloud resolving model. The NCA can easily be applied to any cloud resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation beyond one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to LES cloud field snapshots. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo model MYSTIC and a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates up to −150 K/d (100 m resolution) while the 1D solution shows maximum coolings of only −100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5–2 higher compared to
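The cloud-edge bookkeeping the NCA relies on can be sketched with a toy 2-D cloud mask; the grid, the mask, and the restriction to directly neighbouring columns are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy 2-D cloud mask over (x, z): 1 = cloudy grid box, 0 = clear sky.
cloud = np.zeros((6, 4), dtype=int)
cloud[1:5, 1:3] = 1

def is_cloud_edge(mask):
    """Mark cloudy boxes that have at least one clear neighbouring column.
    In the spirit of the NCA, only such boxes would receive a 3-D
    side-flux correction; the stencil uses just the box and its direct
    horizontal neighbours."""
    padded = np.pad(mask, ((1, 1), (0, 0)), constant_values=0)
    left, right = padded[:-2, :], padded[2:, :]
    return (mask == 1) & ((left == 0) | (right == 0))

edges = is_cloud_edge(cloud)
print(edges.astype(int))   # edge columns flagged, cloud interior not
```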
Pankratov, E. L.; Bulaeva, E. A.
2015-03-01
In this paper, we introduce an approach to increase integration rate of drift heterobipolar transistors. The approach is based on manufacturing of heterostructure with spatial configuration, doping of required areas of the heterostructure by diffusion or ion implantation and optimization of annealing of dopant and/or radiation defects.
Dąbrowska, Monika; Starek, Małgorzata
2014-01-01
l-Carnitine is a vitamin-like amino acid derivative, which is an essential factor in fatty acid metabolism as an acyltransferase cofactor and in energy production processes, such as the interconversions in the regulatory mechanisms of ketogenesis and thermogenesis; it is also used in the therapy of primary and secondary carnitine deficiency, and in other diseases. The determination of carnitine and acyl-carnitines can provide important information about inherited or acquired metabolic disorders, and for monitoring the biochemical effect of carnitine therapy. The endogenous carnitine pool in humans is maintained by biosynthesis and by absorption of carnitine from the diet. Carnitine has one asymmetric carbon, giving two stereoisomers, d and l, but only the l form has a positive biological effect; thus chiral recognition of l-carnitine enantiomers is extremely important in the biological, chemical and pharmaceutical sciences. In order to gain more insight into carnitine metabolism and synthesis, a sensitive analysis for the determination of the concentrations of free carnitine, carnitine esters and the carnitine precursors is required. Carnitine has been investigated in many biochemical, pharmacokinetic, metabolic and toxicokinetic studies, and thus many analytical methods have been developed and published for the determination of carnitine in foods, dietary supplements, pharmaceutical formulations, biological tissues and body fluids. The analytical procedures presented in this review have been validated in terms of basic parameters (linearity, limit of detection, limit of quantitation, sensitivity, accuracy, and precision). This article presents the impact of different analytical techniques, and provides an overview of applications that address a diverse array of pharmaceutical and biological questions and samples. Copyright © 2013 Elsevier Ltd. All rights reserved.
How much do resin-based dental materials release? A meta-analytical approach.
Van Landuyt, K L; Nawrot, Tim; Geebelen, B; De Munck, J; Snauwaert, J; Yoshihara, K; Scheers, Hans; Godderis, Lode; Hoet, P; Van Meerbeek, B
2011-08-01
Resin-based dental materials are not inert in the oral environment, and may release components, initially due to incomplete polymerization, and later due to degradation. Since there are concerns regarding potential toxicity, more precise knowledge of the actual quantity of released eluates is necessary. However, due to great variety in the analytical methodology employed in different studies and in the presentation of the results, it is still unclear to what quantities of components a patient may be exposed. The objective of this meta-analytical study was to review the literature on the short- and long-term release of components from resin-based dental materials, and to determine how much (in order of magnitude) of those components may leach out in the oral cavity. Out of an initial set of 71 studies, 22 were included. In spite of the large statistical uncertainty due to the great variety in methodology and the lack of complete information (detection limits were seldom mentioned), a meta-analytical mean for the evaluated eluates was calculated. To relate the amount of potentially released material components to the size of restorations, the mean size of standard composite restorations was estimated using a 3D graphics program. While the release of monomers was analyzed in many studies, that of additives, such as initiators, inhibitors and stabilizers, was seldom investigated. Significantly more components were found to be released in organic than in water-based media. Resin-based dental materials might account for the total burden of orally ingested bisphenol A, but they may release even higher amounts of monomers, such as HEMA, TEGDMA, BisGMA and UDMA. Compared to these monomers, similar or even higher amounts of additives may elute, even though composites generally contain only very small amounts of additives. A positive correlation was found between the total quantity of released eluates and the volume of extraction solution. There is a clear need for more accurate
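A common way to pool per-study release measurements into a single meta-analytical mean is fixed-effect inverse-variance weighting; the study means and standard errors below are invented for illustration, not taken from the cited studies.

```python
import numpy as np

# Hypothetical per-study mean eluate release (arbitrary units per
# restoration) and standard errors; illustrative numbers only.
means = np.array([1.2, 0.8, 2.5, 1.6])
se    = np.array([0.4, 0.3, 0.9, 0.5])

# Fixed-effect inverse-variance weighting: more precise studies count more.
w = 1.0 / se**2
meta_mean = np.sum(w * means) / np.sum(w)
meta_se   = np.sqrt(1.0 / np.sum(w))

print(round(meta_mean, 3), round(meta_se, 3))
```

The pooled standard error is smaller than that of any single study, which is the point of combining them despite the methodological heterogeneity the abstract cautions about.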
Contaminant ingress into multizone buildings: An analytical state-space approach
DEFF Research Database (Denmark)
Parker, Simon; Coffey, Chris; Gravesen, Jens
2014-01-01
The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state...... term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models...
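The state-space idea can be sketched for a toy two-zone building responding to a step change in exterior concentration; the flow rates, volumes, and step size below are invented, not the paper's data.

```python
import numpy as np
from scipy.linalg import expm

# Two well-mixed zones: zone 1 is ventilated from outside, zone 2 is an
# inner zone exchanging air with zone 1. All numbers are illustrative.
Qe, Q12, Q21 = 200.0, 50.0, 50.0   # airflows (m^3/h), balanced exchange
V1, V2 = 100.0, 50.0               # zone volumes (m^3)
c_ext = 1.0                        # exterior step concentration at t = 0

# dc/dt = A c + b
A = np.array([[-(Qe + Q12) / V1,  Q21 / V1],
              [  Q12 / V2,       -Q21 / V2]])
b = np.array([Qe * c_ext / V1, 0.0])

def concentrations(t, c0=np.zeros(2)):
    """Closed-form state-space solution c(t) = e^{At} c0 + (e^{At} - I) A^{-1} b."""
    eAt = expm(A * t)
    return eAt @ c0 + (eAt - np.eye(2)) @ np.linalg.solve(A, b)

print(concentrations(0.1))   # inner zone lags the ventilated zone
print(concentrations(10.0))  # both zones approach the exterior value
```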
An analytical approach of thermodynamic behavior in a gas target system on a medical cyclotron.
Jahangiri, Pouyan; Zacchia, Nicholas A; Buckley, Ken; Bénard, François; Schaffer, Paul; Martinez, D Mark; Hoehr, Cornelia
2016-01-01
An analytical model has been developed to study the thermo-mechanical behavior of gas targets used to produce medical isotopes, assuming that the system reaches steady-state. It is based on an integral analysis of the mass and energy balance of the gas-target system, the ideal gas law, and the deformation of the foil. The heat transfer coefficients for different target bodies and gases have been calculated. Excellent agreement is observed between experiments performed at TRIUMF's 13 MeV cyclotron and the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
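The steady-state balance the abstract describes can be sketched in a few lines: beam power deposited in the gas is balanced by convective loss to the cooled target body, and the ideal gas law then gives the pressure rise. The heat transfer coefficient, geometry, and operating numbers below are assumptions, not TRIUMF data.

```python
# Minimal steady-state sketch of a cyclotron gas target (illustrative
# values throughout; foil deformation is ignored for simplicity).
P_beam = 300.0   # beam power deposited in the gas (W)
h      = 250.0   # effective heat transfer coefficient (W/m^2/K), assumed
A      = 6e-3    # interior surface area of the target body (m^2), assumed
T_wall = 300.0   # cooled wall temperature (K)
p_fill = 20e5    # fill pressure at T0 (Pa)
T0     = 293.0   # fill temperature (K)

# Energy balance at steady state: P_beam = h * A * (T_gas - T_wall)
T_gas = T_wall + P_beam / (h * A)

# Ideal gas law at constant volume and fixed amount of gas.
p_operating = p_fill * T_gas / T0

print(round(T_gas, 1), "K,", round(p_operating / 1e5, 2), "bar")
```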
Analytic approach to the edge state of the Kane-Mele Model
Doh, Hyeonjin; Jeon, Gun Sang; Choi, Hyoung Joon
2014-01-01
We investigate the edge state of a two-dimensional topological insulator based on the Kane-Mele model. Using complex wave numbers of the Bloch wave function, we derive an analytical expression for the edge state localized near the edge of a semi-infinite honeycomb lattice with a straight edge. To compare edge-type effects, two types of edges are considered in this calculation: a zigzag edge and an armchair edge. The complex wave numbers and the boundary ...
An analytic approach to sunset diagrams in chiral perturbation theory: Theory and practice
Energy Technology Data Exchange (ETDEWEB)
Ananthanarayan, B.; Ghosh, Shayan [Indian Institute of Science, Centre for High Energy Physics, Karnataka (India); Bijnens, Johan [Lund University, Department of Astronomy and Theoretical Physics, Lund (Sweden); Hebbar, Aditya [Indian Institute of Science, Centre for High Energy Physics, Karnataka (India); University of Delaware, Department of Physics and Astronomy, Newark, DE (United States)
2016-12-15
We demonstrate the use of several code implementations of the Mellin-Barnes method available in the public domain to derive analytic expressions for the sunset diagrams that arise in the two-loop contribution to the pion mass and decay constant in three-flavoured chiral perturbation theory. We also provide results for all possible two mass configurations of the sunset integral, and derive a new one-dimensional integral representation for the one mass sunset integral with arbitrary external momentum. Thoroughly annotated Mathematica notebooks are provided as ancillary files in the Electronic Supplementary Material to this paper, which may serve as pedagogical supplements to the methods described in this paper. (orig.)
Sattasathuchana, Tosaporn; Murri, Riccardo; Baldridge, Kim K
2017-05-09
The implementation, optimization, and performance of a generalized analytic treatment of multidimensional Franck-Condon Factors (FCF) within the harmonic oscillator approximation and associated photoelectron spectra (PES) for N-dimensional systems, including consideration of Eckart conditions in the displacement minimization and Cartesian coordinate handedness for evaluation of the Duschinsky Effect, is carried out in this work. A new strategy for algorithmic efficiency in high-dimensional systems is introduced, and demonstrated for 3-, 15-, and 30-dimensional systems. Determination of the photoelectron spectra for H₂O⁺ (B̃²B₂), vinyl alcohol, and C₆H₆⁺ (X̃²E₁g) validates the capabilities with a high degree of accuracy with respect to experiment.
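In the simplest 1-D limit of the harmonic-oscillator treatment (equal vibrational frequencies in both electronic states, displaced minimum, no Duschinsky rotation), the FCFs reduce to the well-known Poisson progression |⟨0|n⟩|² = e⁻ˢ Sⁿ/n!, with S the Huang-Rhys factor. The value of S below is an assumption for illustration; the paper's N-dimensional treatment is far more general.

```python
import math

def fcf_0_to_n(S, n):
    """Franck-Condon factor |<0|n>|^2 for a 1-D displaced harmonic
    oscillator with equal frequencies (Poisson progression)."""
    return math.exp(-S) * S**n / math.factorial(n)

S = 1.5   # assumed Huang-Rhys factor
progression = [fcf_0_to_n(S, n) for n in range(6)]
print([round(f, 4) for f in progression])
# The factors over the complete progression sum to 1 (sum rule).
```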
A Comparison of Two Approaches for the Ruggedness Testing of an Analytical Method
International Nuclear Information System (INIS)
Maestroni, Britt
2016-01-01
As part of an initiative under the “Red Analitica de Latino America y el Caribe” (RALACA) network, the FAO/IAEA Food and Environmental Protection Laboratory validated a multi-residue method for pesticides in potato. One of the parameters to be assessed was the intra-laboratory robustness, or ruggedness. The objective of this work was to implement a worked example for RALACA laboratories to test for the robustness (ruggedness) of an analytical method. As a conclusion to this study, it is evident that there is a need for harmonization of the definition of the terms robustness/ruggedness, the limits, the methodology and the statistical treatment of the generated data. A worked example for RALACA laboratories to test for the robustness (ruggedness) of an analytical method will soon be posted on the RALACA website (www.red-ralaca.net). This study was carried out with collaborators from LVA (Austria), the University of Antwerp (Belgium), the University of Leuven (Belgium), the Universidad de la Republica (Uruguay) and Agilent Technologies.
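One common statistical treatment for ruggedness data is a Youden-Steiner test: seven method factors are varied over eight runs via a Plackett-Burman design, and each factor's effect is the difference between the mean response at its high and low settings. The design is standard; the recoveries below are invented, and the RALACA worked example may use a different scheme.

```python
import numpy as np

# Plackett-Burman design for 7 factors in 8 runs
# (+1 = nominal + delta, -1 = nominal - delta).
design = np.array([
    [ 1,  1,  1, -1,  1, -1, -1],
    [-1,  1,  1,  1, -1,  1, -1],
    [-1, -1,  1,  1,  1, -1,  1],
    [ 1, -1, -1,  1,  1,  1, -1],
    [-1,  1, -1, -1,  1,  1,  1],
    [ 1, -1,  1, -1, -1,  1,  1],
    [ 1,  1, -1,  1, -1, -1,  1],
    [-1, -1, -1, -1, -1, -1, -1],
])
recovery = np.array([98.2, 97.5, 99.1, 98.8, 97.9, 98.4, 98.0, 98.6])  # invented %

# Effect per factor = mean(high runs) - mean(low runs); each column has
# four +1 and four -1 entries, so this is a dot product divided by 4.
effects = design.T @ recovery / 4.0
print(np.round(effects, 3))
```

Effects much larger than the method's repeatability would flag the corresponding factor as one the procedure is not rugged against.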
Directory of Open Access Journals (Sweden)
Sulaimon Olanrewaju Adebiyi
2015-10-01
Full Text Available This paper describes the application of the Analytic Hierarchy Process (AHP) to unraveling customers' motivations for churning from telecommunication networks in Nigeria, by identifying, modeling and measuring customers' churn motivations across four mobile telecommunication service providers in Nigeria. AHP was used to design a hierarchical model of seven criteria for customers' churning of network, and the relative priorities of the criteria were investigated through pairwise comparison. The questionnaire was administered through convenience sampling to 480 mobile telecommunication customers and was completed and returned by 438 mobile phone subscribers in Lagos state, Nigeria, but only 408 copies were usable for the analysis in this study. The results show that six of the seven criteria have weights above 10% in their individual contributions to motivating customer churn behavior in the Nigerian telecommunication industry. The inefficient data/internet plan criterion has the highest weight, 18.81%, relative to the churn decision. Thus, AHP effectively supported modeling and analyzing subscribers' motivations toward good marketing decisions for both the individual and the organization, and helps in developing an analytic and intelligible framework for decision-making on the complex problem of customer churn in an emerging market like Nigeria.
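The AHP priority computation at the core of such a study can be sketched as follows: a pairwise comparison matrix is reduced to its principal eigenvector, and a consistency ratio checks the judgements. Three illustrative criteria and invented judgements are used here (the paper uses seven criteria).

```python
import numpy as np

# Invented pairwise comparison matrix over three illustrative churn
# criteria; entry M[i, j] says how much more important i is than j.
criteria = ["data plan", "call tariff", "network quality"]
M = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

# Priorities = principal eigenvector, normalised to sum to one.
vals, vecs = np.linalg.eig(M)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency ratio (Saaty's random index RI = 0.58 for n = 3).
n = M.shape[0]
CI = (np.max(np.real(vals)) - n) / (n - 1)
CR = CI / 0.58   # CR < 0.1 means the judgements are acceptably consistent

print(dict(zip(criteria, np.round(w, 3))), round(CR, 3))
```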
Guided Wave Based Crack Detection in the Rivet Hole Using Global Analytical with Local FEM Approach
Directory of Open Access Journals (Sweden)
Md Yeasin Bhuiyan
2016-07-01
Full Text Available In this article, ultrasonic guided wave propagation and interaction with rivet hole cracks have been formulated using a closed-form analytical solution, while the local damage interaction, scattering, and mode conversion have been obtained from finite element analysis. The rivet hole cracks (damage) in the plate structure give rise to non-axisymmetric scattering of the Lamb wave, as well as the shear horizontal (SH) wave, although the incident Lamb wave source (primary source) is axisymmetric. The damage in the plate acts as a non-axisymmetric secondary source of Lamb and SH waves. The scattering of Lamb and SH waves is captured using wave damage interaction coefficients (WDICs). Scatter cubes of complex-valued WDICs are formed that describe the 3D interaction (frequency, incident direction, and azimuth direction) of Lamb waves with the damage. The scatter cubes are fed into the exact analytical framework to produce the time-domain signal. This analysis enables us to obtain the optimum design parameters for better detection of cracks in a multiple-rivet-hole problem. The optimum parameters provide guidelines for the design of the sensor installation to obtain the most noticeable signals that represent the presence of cracks in the rivet hole.
An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems
International Nuclear Information System (INIS)
Fazli, Roohallah; Nakhkash, Mansor
2012-01-01
This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations; in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple scattering cases, and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
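The MUSIC first stage can be illustrated in a simpler setting than the paper's 2-D inverse scattering: two emitters located in angle by an 8-element uniform linear array. The pseudospectrum peaks at the true directions wherever the steering vector is orthogonal to the noise subspace; all scene parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
m, snapshots, true_deg = 8, 200, [20.0, 50.0]

def steering(theta_deg):
    # Array response for half-wavelength element spacing.
    return np.exp(1j * np.pi * np.sin(np.radians(theta_deg)) * np.arange(m))

A = np.column_stack([steering(t) for t in true_deg])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots    # sample covariance
_, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
En = vecs[:, :m - 2]              # noise subspace (2 sources assumed known)

grid = np.arange(0.0, 90.0, 0.5)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2
                   for t in grid])
# Local maxima of the MUSIC pseudospectrum mark the source directions.
peak_idx = [i for i in range(1, len(grid) - 1)
            if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
top2 = sorted(grid[i] for i in sorted(peak_idx, key=lambda i: pseudo[i])[-2:])
print(top2)
```

The paper's second stage would then filter such candidate locations to decide which correspond to actual scatterers.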
International Nuclear Information System (INIS)
Zhang, K.; Zuo, Y.
2006-01-01
Phenolic compounds are the most abundant natural antioxidants in our diet. Epidemiological studies have shown the possible preventive effects of the consumption of fruits and vegetables rich in phenolic compounds on degenerative diseases, such as cardiovascular diseases and cancers. However, there is a serious lack of fundamental knowledge on the uptake and metabolism of phenolic compounds in humans. It is clear that only those phenolic molecules absorbed by humans can exert biological effects. This review presents current knowledge on the analytical methods, antioxidant capacity measurements, and research strategies related to natural phenolic antioxidants and human health. Both GC-MS and LC-MS have proved to be very useful analytical techniques that can be employed to identify and quantitate targeted phenolic antioxidants and their metabolites in biofluids. Free radical quenching tests provide a direct measurement of antioxidant capacity but lack specificity and may oversimplify the in vivo human physiological environment. Research strategies are diverse and mainly focused on the positive health effects of antioxidants. In future studies, multiple potential bioactivities, both positive and negative, should be considered. (author)
Directory of Open Access Journals (Sweden)
Mariano Modano
2018-02-01
Full Text Available This study formulates numerical and analytical approaches to the self-equilibrium problem of novel units of tensegrity metamaterials composed of class θ = 1 tensegrity prisms. The freestanding configurations of the examined structures are determined for varying geometries, and it is shown that such configurations exhibit a large number of infinitesimal mechanisms. The latter can be effectively stabilized by applying self-equilibrated systems of internal forces induced by cable prestretching. The equilibrium equations of class θ = 1 tensegrity prisms are studied for varying values of two aspect parameters, and local solutions to the self-equilibrium problem are determined by recourse to Newton–Raphson iterations. Such a numerical approach to the form-finding problem can be easily generalized to arbitrary tensegrity systems. An analytical approach is also proposed for the class θ = 1 units analyzed in the present work. The potential of such structures for development of novel mechanical metamaterials is discussed, in the light of recent findings concerned with structural lattices alternating lumped masses and tensegrity units.
Directory of Open Access Journals (Sweden)
Klimenta Dardan O.
2017-01-01
Full Text Available The purpose of this paper is to propose a novel approach to analytical modelling of steady-state heat transfer from the exterior of totally enclosed fan-cooled induction motors. The proposed approach is based on the geometry simplification methods, energy balance equation, modified correlations for forced convection, the Stefan-Boltzmann law, air-flow velocity profiles, and turbulence factor models. To apply modified correlations for forced convection, the motor exterior is presented with surfaces of elementary 3-D shapes as well as the air-flow velocity profiles and turbulence factor models are introduced. The existing correlations for forced convection from a short horizontal cylinder and correlations for heat transfer from straight fins (as well as inter-fin surfaces in axial air-flows are modified by introducing the Prandtl number to the appropriate power. The correlations for forced convection from straight fins and inter-fin surfaces are derived from the existing ones for combined heat transfer (due to forced convection and radiation by using the forced-convection correlations for a single flat plate. Employing the proposed analytical approach, satisfactory agreement is obtained with experimental data from other studies.
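The kind of modified correlation the paper builds on can be sketched for a single surface: a flat-plate forced-convection Nusselt correlation with the Prandtl number raised to the 1/3 power, evaluated for air flowing over a fin. The geometry and flow values are illustrative assumptions, not the paper's motor data.

```python
# Air properties at roughly 40 C (illustrative values).
rho, mu, k_air, Pr = 1.127, 1.90e-5, 0.0271, 0.706

U, Lc = 6.0, 0.12   # assumed air-flow velocity (m/s) and plate length (m)

Re = rho * U * Lc / mu                 # Reynolds number
Nu = 0.664 * Re**0.5 * Pr**(1 / 3)     # laminar flat plate, average Nusselt
h = Nu * k_air / Lc                    # heat transfer coefficient (W/m^2K)
print(round(Re), round(Nu, 1), round(h, 1))
```

In the paper's approach, such single-surface correlations are modified (turbulence factors, local velocity profiles) and summed over the elementary 3-D shapes that approximate the motor exterior.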
Dubinskiy, Mark A.; Kamal, Mohammed M.; Misra, Prabhaker
1995-01-01
The availability of manned laboratory facilities in space offers wonderful opportunities and challenges in microgravity combustion science and technology. In turn, the fundamentals of microgravity combustion science can be studied via spectroscopic characterization of free radicals generated in flames. The laser-induced fluorescence (LIF) technique is a noninvasive method of considerable utility in combustion physics and chemistry, suitable not only for monitoring specific species and their kinetics but also for imaging of flames. This makes LIF one of the most important tools for microgravity combustion science. Flame characterization under microgravity conditions using LIF is expected to be more informative than other methods aimed at searching for effects like the pumping phenomenon that can be modeled via ground level experiments. A primary goal of our work was to work out an innovative approach to devising an LIF-based analytical unit suitable for in-space flame characterization. It was decided to follow two approaches in tandem: (1) use the existing laboratory (non-portable) equipment and determine the optimal set of parameters for flames that can be used as analytical criteria for flame characterization under microgravity conditions; and (2) use state-of-the-art developments in laser technology and concentrate some effort on devising a layout for the portable analytical equipment. This paper presents an up-to-date summary of the results of our experiments aimed at the creation of the portable device for combustion studies in a microgravity environment, which is based on a portable UV tunable solid-state laser for excitation of free radicals normally present in flames in detectable amounts. A systematic approach has allowed us to make a convenient choice of species under investigation, as well as the proper tunable laser system, and also enabled us to carry out LIF experiments on free radicals using a solid-state laser tunable in the UV.
Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.
2016-12-01
Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
Li, Borui; Fu, Songnian; Tang, Ming; Cheng, Yu; Wei, Huifeng; Tong, Weijun; Shum, P; Liu, Deming
2014-06-16
The mitigation of both crosstalk and its wavelength dependent sensitivity for homogeneous multicore fiber (MCF) is theoretically investigated using an analytical evaluation approach. It is found that there exists a performance trade-off between crosstalk mitigation and the suppression of its wavelength dependent sensitivity. After characterizing the fabricated homogeneous MCFs, we verify that although increasing the core pitch can mitigate the crosstalk, the wavelength dependent sensitivity is drastically degraded from 0.07 dB/nm to 0.11 dB/nm, which is harmful to dense wavelength division multiplexing (DWDM) transmission over the C + L band using MCF.
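A widely used analytical estimate of the kind invoked in such evaluations is the mean crosstalk of a bent, twisted homogeneous MCF in the phase-matching-point model, XT = 2κ²RL/(βΛ). The coupling coefficient, bend radius, and pitch below are illustrative assumptions, not the fabricated fibers' parameters; note that κ itself is wavelength dependent, which is the root of the sensitivity discussed above.

```python
import math

# Illustrative parameters for a homogeneous MCF at 1550 nm.
wavelength = 1550e-9
n_eff   = 1.444
beta    = 2 * math.pi * n_eff / wavelength   # propagation constant (1/m)
kappa   = 0.01      # inter-core coupling coefficient (1/m), assumed
R_bend  = 0.1       # bend radius (m), assumed
Lambda_ = 45e-6     # core pitch (m), assumed
L       = 100e3     # fibre length (m)

# Mean crosstalk, phase-matching-point model: XT = 2*kappa^2*R*L/(beta*Lambda)
xt = 2 * kappa**2 * R_bend * L / (beta * Lambda_)
xt_db = 10 * math.log10(xt)
print(round(xt_db, 1), "dB after 100 km")
```

Increasing the pitch Λ lowers both κ and the 1/Λ factor, which is why it mitigates crosstalk; the wavelength slope of κ, however, is what drives the dB/nm sensitivity trade-off reported above.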
The earrings of Pancas treasure: Analytical study by X-ray based techniques – A first approach
Energy Technology Data Exchange (ETDEWEB)
Tissot, I., E-mail: isabel.tissot@archeofactu.pt [Archeofactu – Rua do Cerrado das Oliveiras, No. 14, 2°Dto., 2610-035 Alfragide (Portugal); Tissot, M., E-mail: matthias.tissot@archeofactu.pt [Archeofactu – Rua do Cerrado das Oliveiras, No. 14, 2°Dto., 2610-035 Alfragide (Portugal); Museu Nacional de Arqueologia – Praça do Império, 1400-206 Lisboa (Portugal); Manso, M., E-mail: marta974@gmail.com [Centro de Física Atómica da Universidade de Lisboa, Av., Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); Alves, L.C., E-mail: luisa@cii.fc.ul.pt [IST/ITN, Univ. Técnica de Lisboa, E.N. 10, UFA-LFI, 2686-953 Sacavém (Portugal); Barreiros, M.A., E-mail: alexandra.barreiros@lneg.pt [LNEG, I.P., Estrada do Paço do Lumiar 22, 1649-038 Lisboa (Portugal); Marcelo, T., E-mail: teresa.marcelo@lneg.pt [LNEG, I.P., Estrada do Paço do Lumiar 22, 1649-038 Lisboa (Portugal); Carvalho, M.L., E-mail: lcalves@itn.pt [Centro de Física Atómica da Universidade de Lisboa, Av., Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); Corregidor, V., E-mail: vicky.corregidor@itn.pt [IST/ITN, Univ. Técnica de Lisboa, E.N. 10, UFA-LFI, 2686-953 Sacavém (Portugal); Guerra, M.F., E-mail: maria.guerra@culture.gouv.fr [Centre de Recherche et de Restauration des Musées de France and UMR8220 CNRS - 14, quai François Mitterrand, 75001 Paris (France)
2013-07-01
The development of new metallurgical technologies in the Iberian Peninsula during the Iron Age is well represented by the 10 gold earrings from the treasure of Pancas. This work presents a first approach to the analytical study of these earrings and contributes to the construction of a typological evolution of the Iberian earrings. The manufacture techniques and the alloys composition were studied with three complementary X-ray spectroscopy techniques: portable EDXRF, μ-PIXE and SEM–EDS. The results were compared with earrings from the same and previous periods.
Directory of Open Access Journals (Sweden)
Ali Soner Kilinc
2017-08-01
A Linear Wireless Sensor Network (LWSN) is a kind of wireless sensor network in which the nodes are deployed along a line. Since the sensor nodes are energy-restricted, energy efficiency is one of the most significant design issues for LWSNs, as for wireless sensor networks in general. With proper deployment, power consumption can be minimized by adjusting the distance between sensor nodes, known as the hop length. In this paper, analytical and algorithmic approaches are presented to determine the number of hops and sensor nodes that minimize power consumption in a linear wireless sensor network with equidistantly placed sensor nodes.
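The hop-length trade-off at the heart of this abstract can be sketched numerically. The radio model below (a fixed per-hop electronics cost `a` plus a path-loss term `b * d**alpha`) and all of its constants are assumptions chosen for illustration, not the paper's actual formulation.

```python
# Sketch of the hop-count trade-off for a linear network of total span D:
# more hops shorten each transmission (saving path-loss energy ~ b * d**alpha)
# but add per-hop electronics overhead (a per hop). The radio model and all
# constants are illustrative assumptions, not the paper's formulation.

def total_power(n_hops, span, a=1.0, b=1e-3, alpha=2.0):
    """Total power for n equidistant hops covering `span` metres."""
    d = span / n_hops                      # hop length
    return n_hops * (a + b * d ** alpha)   # overhead + path-loss per hop

def best_hop_count(span, a=1.0, b=1e-3, alpha=2.0, n_max=200):
    """Brute-force the integer hop count minimizing total power."""
    return min(range(1, n_max + 1),
               key=lambda n: total_power(n, span, a, b, alpha))

n_star = best_hop_count(1000.0)
# For alpha = 2 the continuous optimum is span * sqrt(b / a) = 31.6, so the
# integer optimum lands at 31 or 32 hops.
```

The brute-force search stands in for the paper's analytical and algorithmic treatments: for `alpha = 2` the continuous optimum has the closed form `span * sqrt(b / a)`, and the integer minimizer sits within one hop of it.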
Zhao, T. L.; Bao, X. J.; Guo, S. Q.
2018-02-01
Systematic calculations of α-decay half-lives are performed using three analytical formulas and two semiclassical approaches. For the three analytical formulas, the experimental α-decay half-lives and Qα values of 66 reference nuclei were used to obtain the coefficients. Only four adjustable parameters are needed to describe the α-decay half-lives of even-even, odd-A, and odd-odd nuclei. Comparison between the values calculated from ten analytical formulas and the experimental data shows that the new universal decay law (NUDL) formula is the most accurate in reproducing the experimental α-decay half-lives of superheavy nuclei (SHN). Meanwhile, the experimental α-decay half-lives of SHN are also well reproduced by the Royer formula, although it contains many parameters. The results show that the NUDL formula and the generalized liquid drop model (GLDM2) with the preformation factor taken into account give fairly equivalent results for the superheavy nuclei.
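The NUDL coefficients themselves are not given in the abstract. As a hedged illustration of how such analytical half-life formulas are calibrated against reference nuclei, the sketch below fits a generic Geiger-Nuttall-type law, log10 T½ = a·Z/√Q + b, to synthetic data by least squares; the functional form, the data, and the coefficients are all assumptions for illustration, not the NUDL formula or its fitted parameters.

```python
# Hedged illustration: calibrating a Geiger-Nuttall-type half-life formula
#   log10(T_half) = a * Z / sqrt(Q) + b
# by least squares. Form, data, and coefficients are invented for
# illustration; they are not the NUDL formula or its parameters.
from math import sqrt

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic (Z, Q [MeV], log10 T_half [s]) triples generated from a=1.6, b=-55.
data = [(Z, Q, 1.6 * Z / sqrt(Q) - 55.0)
        for Z, Q in [(100, 7.5), (104, 8.2), (108, 9.0), (112, 9.8), (116, 10.5)]]
xs = [Z / sqrt(Q) for Z, Q, _ in data]
ys = [logT for _, _, logT in data]
a, b = fit_line(xs, ys)   # recovers a = 1.6, b = -55 on this exact data
```

In practice the fit would run over measured half-lives of the 66 reference nuclei, and the quality of each candidate formula would be judged by its residuals on the superheavy region.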
A network analytical approach to the study of labour market mobility
DEFF Research Database (Denmark)
Toubøl, Jonas; Larsen, Anton Grau; Jensen, Carsten Strøby
The aim of this paper is to present a new network analytical method for the analysis of social mobility between categories such as occupations or industries. The method consists of two core components: the algorithm MONECA (Mobility Network Clustering Algorithm), and the intensity measure of Relative Risk (RR), which enables us to identify clusters of inter-mobile categories. We apply the method to data on labour market mobility in Denmark 2000-2007 and demonstrate how this new method can overcome some long-standing obstacles to the advance of labour market segmentation theory: instead of the typical theory-driven definition of labour market segments, the use of social network analysis enables a data-driven definition of the segments based on direct observation of mobility between job positions, which reveals a number of new findings.
The evolution of stable magnetic fields in stars: an analytical approach
Mestel, Leon; Moss, David
2010-07-01
The absence of a rigorous proof of the existence of dynamically stable, large-scale magnetic fields in radiative stars has been for many years a missing element in the fossil field theory for the magnetic Ap/Bp stars. Recent numerical simulations, by Braithwaite & Spruit and Braithwaite & Nordlund, have largely filled this gap, demonstrating convincingly that coherent global scale fields can survive for times of the order of the main-sequence lifetimes of A stars. These dynamically stable configurations take the form of magnetic tori, with linked poloidal and toroidal fields, that slowly rise towards the stellar surface. This paper studies a simple analytical model of such a torus, designed to elucidate the physical processes that govern its evolution. It is found that one-dimensional numerical calculations reproduce some key features of the numerical simulations, with radiative heat transfer, Archimedes' principle, Lorentz force and Ohmic decay all playing significant roles.
Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing
2015-10-01
Many complex systems generate multifractal time series that are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross-correlations. However, several important issues concerning these methods are not well understood, and most methods consider only one moment order. We study the joint multifractal analysis based on the partition function with two moment orders, which was originally invented to investigate fluid fields, and derive several important properties analytically. We apply the method numerically to binomial measures with multifractal cross-correlations and to bivariate fractional Brownian motions without multifractal cross-correlations. For binomial multifractal measures, the explicit expressions of the mass function, singularity strength, and multifractal spectrum of the cross-correlations are derived, and they agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross-correlations of index volatilities.
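The two-moment-order partition function named in the abstract is easy to demonstrate on binomial measures, for which the joint partition function has an exact closed form. The sketch below builds two aligned binomial cascades and checks the computed sum against that closed form; the cascade weights and depth are illustrative assumptions, not values from the paper.

```python
# Sketch of joint multifractal analysis via the two-order partition function
#   chi(q1, q2) = sum_boxes mu1(box)**q1 * mu2(box)**q2
# for two aligned binomial measures. For this cascade the sum factorizes
# level by level, so chi equals
#   (p1**q1 * p2**q2 + (1-p1)**q1 * (1-p2)**q2) ** L
# exactly -- a convenient check. Weights p1, p2 and depth L are illustrative.

def binomial_measure(p, levels):
    """Multiplicative binomial cascade on 2**levels dyadic boxes."""
    mu = [1.0]
    for _ in range(levels):
        mu = [m * w for m in mu for w in (p, 1.0 - p)]
    return mu

def joint_partition(mu1, mu2, q1, q2):
    """Joint partition function at the finest box scale."""
    return sum(a ** q1 * b ** q2 for a, b in zip(mu1, mu2))

p1, p2, L = 0.3, 0.4, 10
mu1, mu2 = binomial_measure(p1, L), binomial_measure(p2, L)
chi = joint_partition(mu1, mu2, q1=2.0, q2=3.0)
closed_form = (p1**2 * p2**3 + (1 - p1)**2 * (1 - p2)**3) ** L
```

The joint mass exponent then follows as the log-log slope of chi against box size across levels, which is how the paper's analytical expressions are verified numerically.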
da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.
2018-04-01
A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.
DEFF Research Database (Denmark)
Gonçalves, P:A.D.; Dias, E. J. C.; Bludov, Yu V.
2016-01-01
We study electromagnetic scattering and the subsequent plasmonic excitations in periodic grids of graphene ribbons. To address this problem, we develop an analytical method to describe the plasmon-assisted absorption of electromagnetic radiation by a periodic structure of graphene ribbons forming a diffraction grating for THz and mid-IR light. The major advantage of this method lies in its ability to accurately describe the excitation of graphene surface plasmons (GSPs) in one-dimensional (1D) graphene gratings without the use of time-consuming and computationally demanding full-wave numerical simulations. We compare the theoretical data with spectra taken from experiments, for which we observe very good agreement. These theoretical tools may therefore be applied to design new experiments and cutting-edge nanophotonic devices based on graphene plasmonics.
Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation
DEFF Research Database (Denmark)
Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.
2015-01-01
Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting needle-shaped crystals and estimating their length is presented. Since the current algorithm has a number of parameters that need to be optimized, a combination of simulation of needle crystal growth and Design of Experiments was applied to identify the optimal parameter settings for the algorithm. The algorithm was validated for its accuracy in different scenarios of simulated needle crystallization, and subsequently it was applied to study the influence of an additive on antisolvent crystallization. The developed algorithm is robust for quantifying heavily intersecting needle crystals in optical microscopy images, and has the potential …
Energy Technology Data Exchange (ETDEWEB)
Lee, Seong Kon; Kim, Jong Wook [Energy Policy Research Division, Korea Institute of Energy Research, 71-2 Jang-dong, Yuseong-gu, Daejeon 305-343 (Korea); Mogi, Gento [Department of Technology Management for Innovation, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Gim, Bong Jin [Department of Industrial Engineering, Dankook University, San 29, Anseo-dong, Cheonan-si, Chungnam 330-714 (Korea)
2008-12-15
As it is more environmentally sound and friendly than conventional energy technologies that emit carbon dioxide, hydrogen technology can play a key role in solving the problems caused by the greenhouse effect and in coping with the hydrogen economy. Numerous countries around the world, including Korea, have increasingly focused on R&D in hydrogen technology development. This paper focuses on the use of the fuzzy analytic hierarchy process (fuzzy AHP), an extension of the AHP method that uses interval values to reflect the vagueness of human thought, to assess national competitiveness in the hydrogen technology sector. The analysis based on the AHP and fuzzy AHP methods revealed that Korea ranked 6th in terms of national competitiveness in the hydrogen technology sector. (author)
Novel predictive models for metabolic syndrome risk: a "big data" analytic approach.
Steinberg, Gregory B; Church, Bruce W; McCall, Carol J; Scott, Adam B; Kalis, Brian P
2014-06-01
We applied a proprietary "big data" analytic platform, Reverse Engineering and Forward Simulation (REFS), to dimensions of metabolic syndrome extracted from a large data set compiled from Aetna's databases for 1 large national customer. Our goals were to accurately predict subsequent risk of metabolic syndrome and its various factors at both the population and individual level. The study data set included demographic, medical claim, pharmacy claim, laboratory test, and biometric screening results for 36,944 individuals. The platform reverse-engineered functional models of systems from diverse and large data sources and provided a simulation framework for insight generation. The platform interrogated data sets from the results of 2 Comprehensive Metabolic Syndrome Screenings (CMSSs) as well as complete coverage records; complete data from medical claims, pharmacy claims, and lab results for 2010 and 2011; and responses to health risk assessment questions. The platform predicted subsequent risk of metabolic syndrome, both overall and by risk factor, at the population and individual levels, with ROC/AUC varying from 0.80 to 0.88. We demonstrated that improving waist circumference and blood glucose yielded the largest benefits on subsequent risk and medical costs. We also showed that adherence to prescribed medications and, particularly, adherence to routine scheduled outpatient doctor visits reduced subsequent risk. The platform generated individualized insights using available heterogeneous data within 3 months. The accuracy and short time to insight with this type of analytic platform allowed Aetna to develop targeted cost-effective care management programs for individuals with or at risk for metabolic syndrome.
Approximate Series Solutions for Nonlinear Free Vibration of Suspended Cables
Directory of Open Access Journals (Sweden)
Yaobing Zhao
2014-01-01
This paper presents approximate series solutions for the nonlinear free vibration of suspended cables via the Lindstedt-Poincaré method and the homotopy analysis method, respectively. Firstly, taking into account the geometric nonlinearity of the suspended cable as well as the quasi-static assumption, a mathematical model is presented. Secondly, two analytical methods are introduced to obtain the approximate series solutions in the case of nonlinear free vibration. Moreover, small and large sag-to-span ratios and initial conditions are chosen to study the nonlinear dynamic responses by these two analytical methods. The numerical results indicate that the frequency-amplitude relationships obtained with the different analytical approaches exhibit some quantitative and qualitative differences in the cases of motions, mode shapes, and particular sag-to-span ratios. Finally, a detailed comparison of the differences in the displacement fields and cable axial total tensions is made.
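As a hedged illustration of the perturbation machinery named in this abstract, worked on the Duffing oscillator rather than the cable equations themselves, the Lindstedt-Poincaré method expands both the solution and the frequency in the small parameter:

```latex
% Lindstedt-Poincare sketch on the Duffing oscillator (illustrative, not the
% cable model): \ddot{x} + \omega_0^2 x + \epsilon x^3 = 0.
% Stretch time, \tau = \omega t, and expand both x and \omega in \epsilon:
\begin{align}
  x(\tau) &= x_0(\tau) + \epsilon\, x_1(\tau) + O(\epsilon^2), \\
  \omega  &= \omega_0 + \epsilon\, \omega_1 + O(\epsilon^2).
\end{align}
% Collecting orders and choosing \omega_1 to cancel the secular term
% (\propto \tau \sin \omega_0 \tau) in the x_1 equation gives, for amplitude a,
\begin{equation}
  \omega = \omega_0 + \frac{3 a^2}{8 \omega_0}\,\epsilon + O(\epsilon^2),
\end{equation}
% i.e. the frequency grows with amplitude -- a hardening frequency-amplitude
% relationship of the kind the abstract compares across the two methods.
```

The homotopy analysis method reaches comparable series by a different route, which is why the two approaches can disagree quantitatively at large sag-to-span ratios.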
Path space measures for Dirac and Schroedinger equations: Nonstandard analytical approach
International Nuclear Information System (INIS)
Nakamura, T.
1997-01-01
A nonstandard path space *-measure is constructed to justify the path integral formula for the Dirac equation in two-dimensional space-time. A standard measure as well as a standard path integral is obtained from it. We also show that, even for the Schroedinger equation, for which there is no standard measure appropriate for a path integral, there exists a nonstandard measure to define a *-path integral whose standard part agrees with the ordinary path integral as defined by a limit from the time-slice approximant. copyright 1997 American Institute of Physics
Directory of Open Access Journals (Sweden)
D. V. Lukyanenko
2016-01-01
The main objective of the paper is to present a new analytic-numerical approach to singularly perturbed reaction-diffusion-advection models with solutions containing moving interior layers (fronts). We describe some methods to generate dynamically adapted meshes for an efficient numerical solution of such problems. The approach is based on a priori information about the moving front properties provided by the asymptotic analysis. In particular, for the mesh construction we take into account a priori asymptotic evaluations of the location and speed of the moving front, and of its width and structure. Our algorithms significantly reduce the CPU time and enhance the stability of the numerical process compared with classical approaches. The article is published in the authors' wording.
Kozhevnikov, I. V.; Buzmakov, A. V.; Siewert, F.; Tiedtke, K.; Störmer, M.; Samoylova, L.; Sinn, H.
2017-05-01
A simple analytic equation is deduced to explain a new physical phenomenon detected experimentally: the growth of nano-dots (40-55 nm diameter, 8-13 nm height, 9.4 dots/μm² surface density) on a grazing-incidence mirror surface under three years of irradiation by the free-electron laser FLASH (5-45 nm wavelength, 3 degrees grazing incidence angle). The growth model is based on the assumption that the growth of nano-dots is caused by polymerization of incoming hydrocarbon molecules under the action of incident photons directly or of photoelectrons knocked out from the mirror surface. The key feature of our approach is that we take into account the radiation intensity variation near the mirror surface in an explicit form, because the polymerization probability is proportional to it. We demonstrate that this simple analytic approach explains all phenomena observed in the experiment and predicts new effects. In particular, we show that nano-dot growth depends crucially on the grazing angle of the incoming beam and on its intensity: growth is observed only within intervals of grazing angle and radiation intensity that are bounded from above and below. A decrease in the grazing angle by only 1 degree (from 3 to 2 degrees) may result in strong suppression of nano-dot growth and its total disappearance. Similarly, a decrease in the radiation intensity by several times (replacement of the free-electron laser by a synchrotron) also makes nano-dot growth disappear.
Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter
2016-01-01
The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach working in a way other than the conventional, two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on this data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model.
Fuller, Nathaniel J.; Licata, Nicholas A.
2018-05-01
Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
Directory of Open Access Journals (Sweden)
KUDRYAVTSEV Pavel Gennadievich
2015-02-01
The paper deals with the possibilities of using the quasi-homogeneous approximation for describing the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were deduced that allow the evaluation of many additive parameters of macromolecules and of the systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of a statistical polymer allows the modelling of different types of random fractals and other objects examined with the methods of fractal theory. The method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloidal solutions of silica from the point of view of statistical physics. This approach is based on the idea that a colloidal solution of silica, a silica sol, consists of an enormous number of interacting particles in constant motion. The paper is devoted to the research of an ideal system of colliding but not interacting sol particles. The behaviour of the silica sol was analysed according to the Maxwell-Boltzmann distribution, and the free path length was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. To model the kinetics of the sol-gel transition, different approaches were studied.
International Nuclear Information System (INIS)
Urrego, J.P.; Cristancho, F.
2001-01-01
Fusion-evaporation heavy-ion collisions have enabled us to explore new regions of the phase space E - I, particularly the high-spin and high-excitation-energy regions, where level densities are so high that modern detectors are unable to resolve individual gamma-ray transitions; consequently the resulting spectrum is continuous and undoubtedly contains a lot of new physics. In spite of that, very few experiments have been designed to extract conclusions about the behaviour of nuclei in the continuum, so continuum spectroscopy must resort to numerical simulations. In this sense GAMBLE, a Monte Carlo based code, is a powerful tool that with some modifications allows us to test a new method to analyze the outcome of experiments focused on the properties of phase-space regions in the nuclear continuum: the use of Energy-Ordered Spectra (EOS). Suppose that in an experiment all gamma radiation emitted by a specific nucleus in a fixed intrinsic excitation energy range is collected and the different EOS are constructed. Although it has been shown that comparisons between such EOS and Monte Carlo simulations give information about the level density and the strength function, their interpretation is not straightforward because of the large number of input values needed in a code like GAMBLE. On the other hand, if we had an analytical description of the EOS, the underlying physics would be easier to understand because one could control exactly the variables involved, and eventually simulation would be unnecessary. Promising advances in that direction come from the mathematical theory of Order Statistics (OS). In this work the modified code GAMBLE is described and some simulated EOS for 170 Hf are shown. The simulations are made with different formulations for both the level density (Fermi gas at constant and variable temperature) and the gamma strength function (GDR, single particle). Further, it is described in detail how OS are employed in the
Stahl, Cynthia; Cimorelli, Alan
2013-01-01
Because controversy, conflict, and lawsuits frequently characterize US Environmental Protection Agency (USEPA) decisions, it is important that USEPA decision makers understand how to evaluate and then make decisions that have simultaneously science-based, social, and political implications. Air quality management is one category of multidimensional decision making at USEPA. The Philadelphia, Pennsylvania metropolitan area experiences unhealthy levels of ozone, fine particulate matter, and air toxics. Many ozone precursors are precursors for particulate matter and certain air toxics. Additionally, some precursors for particulate matter are air toxics. However, air quality management practices have typically evaluated these problems separately. This approach has led to the development of independent (and potentially counterproductive) implementation strategies. This is a methods article about the necessity and feasibility of using a clumsy approach on wicked problems, using an example case study. Air quality management in Philadelphia is a wicked problem. Wicked problems are those where stakeholders define or view the problem differently, there are many different ways to describe the problem (i.e., different dimensions or levels of abstraction), no efficient or optimal solutions exist, and they are often complicated by moral, political, or professional dimensions. The USEPA has developed the multicriteria integrated resource assessment (MIRA) decision analytic approach that engages stakeholder participation through transparency, transdisciplinary learning, and the explicit use of value sets; in other words, a clumsy approach. MIRA's approach to handling technical indicators, expert judgment, and stakeholder values makes it a potentially effective method for tackling wicked environmental problems. Copyright © 2012 SETAC.
Analytical approach to chromatic correction in the final focus system of circular colliders
Directory of Open Access Journals (Sweden)
Yunhai Cai
2016-11-01
A conventional final focus system in particle accelerators is systematically analyzed. We find simple relations between the parameters of the two focusing modules in the final telescope. Using these relations, we derive the chromatic Courant-Snyder parameters for the telescope. The parameters scale approximately as (L^{*}/β_{y}^{*})δ, where L^{*} is the distance from the interaction point to the first quadrupole, β_{y}^{*} is the vertical beta function at the interaction point, and δ is the relative momentum deviation. Most importantly, we show how to compensate the chromaticity order by order in δ by a traditional correction module flanked by an asymmetric pair of harmonic multipoles. The method enables a circular Higgs collider with 2% momentum aperture and illuminates a path forward to 4% in the future.
Comparing bona fide psychotherapies of depression in adults with two meta-analytical approaches.
Directory of Open Access Journals (Sweden)
Sarah R Braun
OBJECTIVE: Despite numerous investigations, the question of whether all bona fide treatments of depression are equally efficacious in adults has not been sufficiently answered. METHOD: We applied two different meta-analytical techniques (conventional meta-analysis and mixed treatment comparisons). Overall, 53 studies with 3,965 patients, which directly compared two or more bona fide psychotherapies in a randomized trial, were included. Meta-analyses were conducted for five different types of outcome measures, and the influence of possible moderators was examined. RESULTS: Direct comparisons of cognitive behavior therapy, behavior activation therapy, psychodynamic therapy, interpersonal therapy, and supportive therapies versus all other respective treatments indicated that at the end of treatment all treatments but supportive therapies were equally efficacious, whereas there was some evidence that supportive therapies were somewhat less efficacious than all other treatments according to patient self-ratings and clinical significance. At follow-up, no significant differences were present. Age, gender, comorbid mental disorders, and length of therapy session were found to moderate efficacy: cognitive behavior therapy was superior in studies where therapy sessions lasted 90 minutes or longer, while behavior activation therapy was more efficacious when sessions lasted less than 90 minutes. Mixed treatment comparisons indicated no statistically significant differences in treatment efficacy but some interesting trends. CONCLUSIONS: This study suggests that there might be differential effects of bona fide psychotherapies, which should be examined in detail.
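A conventional random-effects meta-analysis of the kind mentioned in such comparisons can be sketched as inverse-variance pooling with a DerSimonian-Laird between-study variance. This is a hedged illustration of the general technique, not the authors' actual model, and the effect sizes below are invented, not data from the 53 depression trials.

```python
# Hedged sketch of conventional random-effects meta-analysis
# (inverse-variance pooling with the DerSimonian-Laird tau^2 estimate).
# Effect sizes and variances are invented for illustration.

def dersimonian_laird(effects, variances):
    """Pooled effect and between-study variance tau^2 (DL method)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

effects = [0.30, 0.45, 0.10, 0.52, 0.25]   # standardized mean differences
variances = [0.02, 0.03, 0.015, 0.04, 0.025]
pooled, tau2 = dersimonian_laird(effects, variances)
# pooled is a weighted average, so it lies within the range of the effects
```

Mixed treatment comparisons, the second technique in the abstract, generalize this by pooling direct and indirect evidence across a network of treatments rather than a single pairwise contrast.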
Potentially Reactive Forms of Silica in Volcanic Rocks Using Different Analytical Approaches
Esteves, Hugo; Fernandes, Isabel; Janeiro, Ana; Santos Silva, António; Pereira, Manuel; Medeiros, Sara; Nunes, João Carlos
2017-12-01
Several concrete structures show signs of deterioration resulting from internal chemical reactions, such as the alkali-silica reaction (ASR). It is well known that these swelling reactions occur in the presence of moisture, between certain silica mineral phases present in the aggregates and the alkalis of the concrete, leading to the degradation of concrete structures and consequently compromising their safety. In most cases, rehabilitation, demolition or even rebuilding of such structures is needed, and the effective costs can be very high. Volcanic rocks are commonly used as aggregates in concrete, and they are sometimes the only option due to the unavailability of other rock types. These rocks may contain different forms of silica that are deleterious to concrete, such as opal, chalcedony, cristobalite, tridymite and micro- to cryptocrystalline quartz, as well as Si-rich volcanic glass. Volcanic rocks are typically very fine-grained and their constituent minerals usually cannot be distinguished under optical microscopy, thus requiring complementary methods. The objective of this research is to find the most adequate analytical methods to identify silica phases that might be present in volcanic aggregates and cause ASR. The complementary methods used include X-Ray Diffraction (XRD), mineral acid digestion and Scanning Electron Microscopy with Energy Dispersive X-Ray Spectrometry (SEM/EDS), as well as Electron Probe Micro-Analysis (EPMA).
Directory of Open Access Journals (Sweden)
Zhi Han
2017-01-01
Full Text Available We present a novel workflow for detecting distribution patterns in cell populations based on single-cell transcriptome study. With the fast adoption of single-cell analysis, a challenge to researchers is how to effectively extract gene features to meaningfully separate the cell population. Considering that coexpressed genes are often functionally or structurally related and that the number of coexpressed modules is much smaller than the number of genes, our workflow uses gene coexpression modules as features instead of individual genes. Thus, when the coexpressed modules are summarized into eigengenes, not only can we interactively explore the distribution of cells, but we can also promptly interpret the gene features. The interactive visualization is aided by a novel application of spatial statistical analysis to the scatter plots using a clustering index parameter. This parameter helps to highlight interesting 2D patterns in the scatter plot matrix (SPLOM). We demonstrate the effectiveness of the workflow using two large single-cell studies. In the Allen Brain scRNA-seq dataset, the visual analytics suggested a new hypothesis, the involvement of glutamate metabolism in the separation of the brain cells. In a large glioblastoma study, a sample with a unique cell-migration-related signature was identified.
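The step of summarizing a coexpression module into an eigengene can be sketched as taking the first principal component of the module's standardized expression submatrix across cells. A minimal illustration, with a synthetic expression matrix and hypothetical module assignments (the actual workflow's module detection and clustering-index steps are not reproduced here):

```python
import numpy as np

def module_eigengenes(expr, modules):
    """Summarize each coexpression module by its first principal
    component (the 'eigengene') across cells.

    expr:    (n_cells, n_genes) expression matrix
    modules: dict mapping module name -> list of gene column indices
    Returns an (n_cells, n_modules) eigengene matrix.
    """
    eig = []
    for name, cols in modules.items():
        X = expr[:, cols]
        # standardize each gene so all genes weigh equally
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
        # PC1 scores of the cells: first left singular vector scaled by
        # the first singular value
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        eig.append(U[:, 0] * S[0])
    return np.column_stack(eig)

# toy example: 100 cells, two 5-gene modules driven by distinct signals
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 2))
expr = np.hstack([signal[:, [0]] @ np.ones((1, 5)),
                  signal[:, [1]] @ np.ones((1, 5))]) + 0.1 * rng.normal(size=(100, 10))
E = module_eigengenes(expr, {"m1": list(range(5)), "m2": list(range(5, 10))})
```

Each column of `E` then serves as one feature axis in the SPLOM; the sign of an eigengene is arbitrary, as with any principal component.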
Analytic approach to nonlinear hydrodynamic instabilities driven by time-dependent accelerations
Energy Technology Data Exchange (ETDEWEB)
Mikaelian, K O
2009-09-28
We extend our earlier model for Rayleigh-Taylor and Richtmyer-Meshkov instabilities to the more general class of hydrodynamic instabilities driven by a time-dependent acceleration g(t). Explicit analytic solutions for linear as well as nonlinear amplitudes are obtained for several g(t)'s by solving a Schrödinger-like equation d²η/dt² − g(t)kAη = 0, where A is the Atwood number and k is the wavenumber of the perturbation amplitude η(t). In our model a simple transformation k → k_L and A → A_L connects the linear to the nonlinear amplitudes: η^nonlinear(k, A) ≈ (1/k_L) ln η^linear(k_L, A_L). The model is found to be in very good agreement with direct numerical simulations. Bubble amplitudes for a variety of accelerations are seen to scale with s defined by s = ∫√g(t) dt, while spike amplitudes prefer scaling with the displacement Δx = ∫[∫g(t)dt]dt.
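The linear-amplitude equation above can be integrated numerically for any prescribed g(t). A minimal sketch with illustrative parameter values (the constant-g case, which can be checked against the classical Rayleigh-Taylor growth η(t) = η₀ cosh(√(gkA) t); the nonlinear mapping k → k_L, A → A_L is not reproduced here since the paper's expressions for k_L and A_L are not given in the abstract):

```python
import numpy as np
from scipy.integrate import solve_ivp

def linear_amplitude(g, k, A, eta0, t_eval):
    """Integrate d^2 eta/dt^2 = g(t)*k*A*eta from rest, eta(0) = eta0."""
    def rhs(t, y):
        eta, deta = y
        return [deta, g(t) * k * A * eta]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [eta0, 0.0],
                    t_eval=t_eval, rtol=1e-9, atol=1e-12)
    return sol.y[0]

# illustrative parameters: constant acceleration g0
g0, k, A, eta0 = 9.8, 2.0, 0.5, 1.0
t = np.linspace(0.0, 2.0, 201)
eta = linear_amplitude(lambda t: g0, k, A, eta0, t)

# compare with the exact solution eta0*cosh(gamma*t), gamma = sqrt(g0*k*A)
gamma = np.sqrt(g0 * k * A)
exact = eta0 * np.cosh(gamma * t)
```

A time-dependent drive is handled by swapping in any callable for `g`, e.g. `lambda t: g0 * np.exp(-t)` for a decaying acceleration.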
Enaldiev, V. V.; Volkov, V. A.
2018-03-01
Recent high-resolution angle-resolved photoemission spectroscopy experiments have given reason to believe that pure bismuth is a topologically nontrivial semimetal. We derive an analytic theory of surface and size-quantized states of Dirac fermions in Bi(111) films taking the new data into account. The theory relies on a new phenomenological momentum-dependent boundary condition for the effective Dirac equation. The boundary condition is described by two real parameters that are expressed as a linear combination of the Dresselhaus and Rashba interface spin-orbit interaction parameters. In semi-infinite Bi(111), near the M̄ point the surface states possess an anisotropic parabolic dispersion, with a very heavy effective mass in the Γ̄-M̄ direction (on the order of ten free-electron masses) and a light effective mass in the M̄-K̄ direction (on the order of one hundredth of the free-electron mass). In Bi(111) films with equivalent surfaces, the surface states from the top and bottom surfaces are not split. In such a symmetric film of arbitrary thickness, the bottom of the lowest quantum-confinement subband in the conduction band coincides with the bottom of the bulk conduction band at the M̄ point.