Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
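The model-averaging step described above can be sketched in a few lines: given cluster-membership probabilities from two (or more) models and an approximate measure of model fit, weight each model's probabilities by its posterior model weight. The BIC-based weights below are a common BMA surrogate and an assumption of this sketch, not necessarily the weighting used in the paper; cluster labels are also assumed to be already aligned across models.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model weights from BIC values
    (lower BIC -> higher weight)."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))   # shift by the minimum for numerical safety
    return w / w.sum()

def bma_cluster_probs(prob_list, bics):
    """Model-averaged cluster-membership probabilities.

    prob_list: one (n_subjects, n_clusters) membership-probability array
    per model; cluster labels must already be aligned across models.
    """
    return sum(w * p for w, p in zip(bma_weights(bics), prob_list))
```

In practice the label-switching problem (cluster 1 of one model corresponding to cluster 2 of another) must be resolved before averaging; the sketch assumes that alignment has been done.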
Simple and High-Accurate Schemes for Hyperbolic Conservation Laws
Directory of Open Access Journals (Sweden)
Renzhong Feng
2014-01-01
Full Text Available The paper constructs a class of simple high-accurate schemes (SHA schemes) with third order approximation accuracy in both space and time for solving linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by introducing a flux limiter in their numerical fluxes, and then extend these schemes to solve the nonlinear Burgers’ equation and the Euler equations. The numerical examples show that these schemes give high order accuracy and high resolution results. The advantages of these schemes are their simplicity and high order of accuracy.
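As a minimal illustration of the TVD mechanism the abstract refers to (a flux limiter suppressing oscillations near strong gradients), here is a limited Lax-Wendroff step for 1-D linear advection with a minmod limiter. This is a standard second-order MUSCL-type sketch, not the authors' third/fourth-order SHA construction.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def lw_tvd_step(u, c):
    """One periodic step of limited Lax-Wendroff for u_t + a u_x = 0 (a > 0),
    with Courant number c = a*dt/dx, 0 < c <= 1."""
    du_back = u - np.roll(u, 1)        # backward differences
    du_fwd = np.roll(u, -1) - u        # forward differences
    slope = minmod(du_back, du_fwd)    # limited slope per cell
    # numerical flux at i+1/2: upwind value plus limited Lax-Wendroff correction
    f = u + 0.5 * (1.0 - c) * slope
    return u - c * (f - np.roll(f, 1))
```

With `slope = du_fwd` (no limiting) this reduces to the classical Lax-Wendroff scheme; the minmod choice is what enforces the TVD property.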
High order accurate finite difference schemes based on symmetry preservation
Ozbenli, Ersin; Vedula, Prakash
2017-11-01
In this paper, we present a mathematical approach that is based on modified equations and the method of equivariant moving frames for construction of high order accurate invariant finite difference schemes that preserve Lie symmetry groups of underlying partial differential equations (PDEs). In the proposed approach, invariant (or symmetry preserving) numerical schemes with a desired (or fixed) order of accuracy are constructed from some non-invariant (base) numerical schemes. Modified forms of PDEs are used to improve the order of accuracy of existing schemes and these modified forms are obtained through addition of defect correction terms to the original forms of PDEs. These defect correction terms of modified PDEs that are noted from truncation error analysis are either completely removed from schemes or their representation is significantly simplified by considering convenient moving frames. This feature of the proposed method can especially be useful to avoid cumbersome numerical representations when high order schemes are developed from low order ones via the method of modified equations. The proposed method is demonstrated via construction of invariant numerical schemes with fixed (and higher) order of accuracy for some common linear and nonlinear problems (including the linear advection-diffusion equation in 1D and 2D, inviscid Burgers' equation, and viscous Burgers' equation) and the performance of these invariant numerical schemes is further evaluated. Our results indicate that such invariant numerical schemes obtained from existing base numerical schemes have the potential to significantly improve the quality of results not only in terms of desired higher order accuracy but also in the context of preservation of appropriate symmetry properties of underlying PDEs.
Practical Schemes for Accurate Forces in Quantum Monte Carlo.
Moroni, S; Saccani, S; Filippi, C
2014-11-11
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of theory. Algorithms to compute exact DMC forces have been proposed in the past, and one such scheme is also put forward in this work, but they remain rather impractical due to their high computational cost. As a practical route to DMC forces, we therefore revisit here an approximate method, originally developed in the context of correlated sampling and named here the Variational Drift-Diffusion (VD) approach. We thoroughly investigate its accuracy by checking the consistency between the approximate VD force and the derivative of the DMC potential energy surface for the SiH and C2 molecules, and employ a wide range of wave functions optimized in VMC to assess its robustness against the choice of trial function. We find that, for all but the poorest wave function, the discrepancy between force and energy is very small over all interatomic distances, affecting the equilibrium bond length obtained with the VD forces by less than 0.004 au. Furthermore, when the VMC forces are approximate due to the use of a partially optimized wave function, the DMC forces have smaller errors and always lead to an equilibrium distance in better agreement with the experimental value. We also show that the cost of computing the VD forces is only slightly larger than the cost of calculating the DMC energy. Therefore, the VD approximation represents a robust and efficient approach to compute accurate DMC forces, superior to their VMC counterparts.
Kristek, J.; Moczo, P.; Galis, M.
2005-12-01
Geller and Takeuchi (1995) developed optimally accurate finite-difference (FD) operators. The operators minimize the error of the numerical solution of the discretized equation of motion. The criterion for obtaining the optimally accurate operators requires that the leading term of the truncation error of the discretized homogeneous (without body-force term) equation of motion (that is, when the operand is an eigenfunction and the frequency equals an eigenfrequency) is zero. Consequently, the optimally accurate operators satisfy (up to the leading term of the truncation error) the homogeneous equation of motion. The grid dispersion of an optimally accurate FD scheme is significantly smaller than that of a standard FD scheme. A heterogeneous FD scheme cannot be anything other than an FD approximation to the heterogeneous formulation of the equation of motion (the same form of the equation for a point away from a material discontinuity and a point at the material discontinuity). If an optimally accurate FD scheme for heterogeneous media is to be obtained, the optimally accurate operators have to be applied to the heterogeneous formulation of the equation of motion. Moczo et al. (2002) found a heterogeneous formulation and developed an FD scheme based on standard staggered-grid 4th-order operators. The scheme is capable of sensing both smooth material heterogeneity and a material discontinuity at any position in a spatial grid. We present a new FD scheme that combines the optimally accurate operators of Geller and Takeuchi (1995) with the material parameterization of Moczo et al. (2002). Models of a single material discontinuity, an interior constant-velocity layer, and an interior layer with a velocity gradient were calculated with the new scheme, with a conventional-operator scheme, and analytically. Numerical results clearly isolate and demonstrate the effects of the boundary and grid dispersion. The results demonstrate significant accuracy improvement compared to previous FD schemes.
An accurate scheme by block method for third order ordinary ...
African Journals Online (AJOL)
A block linear multistep method for solving special third order initial value problems of ordinary differential equations is presented in this paper. The approach of collocation approximation is adopted in the derivation of the scheme and then the scheme is applied as simultaneous integrator to special third order initial value ...
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
Modelling of Two-Phase Flow with Second-Order Accurate Scheme
Tiselj, Iztok; Petelin, Stojan
1997-09-01
A second-order accurate scheme based on high-resolution shock-capturing methods was used with a typical two-phase flow model of the kind used in computer codes for simulation of nuclear power plant accidents. The two-fluid model, which has been taken from the computer code RELAP5, consists of six first-order partial differential equations that represent 1D mass, momentum, and energy balances for vapour and liquid. The partial differential equations are ill-posed (non-hyperbolic). The hyperbolicity required by the presented numerical scheme was obtained in the practical range of the physical parameters by a minor modification of the virtual mass term. No conservative form of the applied equations exists; therefore, instead of a Riemann solver, a more basic averaging was used for the evaluation of the Jacobian matrix. The equations were solved using nonconservative and conservative basic variables. Since the source terms are stiff, they were integrated with time steps shorter than or equal to the convection time step. The sources were treated with Strang splitting to retain the second-order accuracy of the scheme. The numerical scheme has been used for simulations of the two-phase shock tube problem and the Edwards pipe experiment. Results show the importance of the closure laws, which have a crucial impact on the accuracy of two-fluid models. Advantages of the second-order accurate schemes are evident, especially in the area of fast transients dominated by acoustic phenomena.
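The Strang splitting used here to keep second-order accuracy with stiff sources has a simple generic form: advance the source terms a half step, the convection a full step, then the sources another half step. A minimal sketch follows; the sub-solvers are placeholders, not RELAP5 operators, and each must itself be at least second-order accurate for the split step to remain second order.

```python
def strang_step(u, dt, convect, source):
    """One Strang-split step: half source, full convection, half source.

    convect(u, dt) and source(u, dt) each advance the state u by dt.
    The stiff source sub-solver may internally use smaller sub-steps.
    """
    u = source(u, 0.5 * dt)
    u = convect(u, dt)
    return source(u, 0.5 * dt)
```

When the two sub-operators commute (e.g. two linear decay terms), the split step is exact; in general it carries an O(dt³) local splitting error.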
Time accurate application of the MacCormack 2-4 scheme on massively parallel computers
Hudson, Dale A.; Long, Lyle N.
1995-01-01
Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
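For reference, the MacCormack 2-4 scheme for linear advection, in the commonly written Gottlieb-Turkel form with one-sided differences in the predictor and corrector, can be sketched as follows. This shows only the base scheme with periodic boundaries, without the Jameson-type artificial viscosity studied in the paper.

```python
import numpy as np

def maccormack24_step(u, c):
    """One periodic MacCormack 2-4 step for u_t + a u_x = 0, with
    Courant number c = a*dt/dx. Second order in time, fourth order
    in space; stable for c below about 2/3."""
    # forward one-sided difference approximating dx*u_x:
    # (-u[i+2] + 8 u[i+1] - 7 u[i]) / 6
    d_fwd = (-np.roll(u, -2) + 8.0 * np.roll(u, -1) - 7.0 * u) / 6.0
    u_pred = u - c * d_fwd                            # predictor
    # backward one-sided difference: (7 u[i] - 8 u[i-1] + u[i-2]) / 6
    d_back = (7.0 * u_pred - 8.0 * np.roll(u_pred, 1) + np.roll(u_pred, 2)) / 6.0
    return 0.5 * (u + u_pred - c * d_back)            # corrector
```

The second-derivative parts of the forward and backward differences cancel between the predictor and corrector, which is what raises the spatial accuracy to fourth order.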
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave-number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave-number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
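The role B-spline interpolation plays in sub-voxel sampling can be illustrated with SciPy's prefiltered cubic spline interpolation. This is a generic stand-in for the idea, not the paper's optimized recursive filter or Gaussian weighting: the recursive prefilter converts voxel intensities into B-spline coefficients so that the spline interpolates the data rather than merely smoothing it.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def subvoxel_sample(volume, points):
    """Sample a 3-D volume at fractional (sub-voxel) coordinates using
    cubic B-spline interpolation. map_coordinates prefilters the data
    (order=3) before evaluating the spline.

    points: iterable of (z, y, x) coordinate triples in voxel units.
    """
    pts = np.asarray(points, dtype=float).T   # shape (3, n) as SciPy expects
    return map_coordinates(volume, pts, order=3, mode='nearest')
```

Away from the volume borders, the cubic B-spline reproduces constant and linear intensity fields essentially exactly, which is the baseline any optimized filter must improve upon at higher wave numbers.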
A Fast and Accurate Scheme for Sea Ice Dynamics with a Stochastic Subgrid Model
Seinen, C.; Khouider, B.
2016-12-01
Sea ice physics is a very complex process occurring over a wide range of scales, from local melting to large scale drift. At the current grid resolution of Global Climate Models (GCMs), we are able to resolve large scale sea ice dynamics, but uncertainty remains due to subgrid physics and potential dynamic feedback, especially due to the formation of melt ponds. Recent work in atmospheric science has shown the success of Markov jump stochastic subgrid models in the representation of clouds and convection and their feedback into the large scales. There has been a push to implement these methods in other parts of the Earth system, and for the cryosphere in particular, but in order to test these methods, efficient and accurate solvers are required for the resolved large scale sea-ice dynamics. We present a second order accurate scheme, in both time and space, for the sea ice momentum equation (SIME) with a Jacobian-free Newton-Krylov (JFNK) solver. SIME is a highly nonlinear equation due to the sea ice rheology terms appearing in the stress tensor. The most commonly accepted formulation, introduced by Hibler, allows sea ice to resist significant stresses in compression but significantly less in tension. The relationship also leads to large changes in internal stresses from small changes in velocity fields. These nonlinearities have resulted in the use of implicit methods for SIME, and a JFNK solver was recently introduced and used to gain efficiency. However, the method used so far is only first order accurate in time. Here we extend the JFNK approach to a Crank-Nicolson discretization of SIME. This fully second order scheme is achieved with no increase in computational cost and will allow efficient testing and development of subgrid stochastic models of sea ice in the near future.
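The Crank-Nicolson + JFNK combination can be sketched on a toy nonlinear PDE. The equation below (a 1-D reaction-diffusion problem) is a placeholder for the sea ice momentum equation, chosen only to show the matrix-free implicit step: the Newton-Krylov solver needs only residual evaluations, never an assembled Jacobian.

```python
import numpy as np
from scipy.optimize import newton_krylov

def cn_jfnk_step(u, dt, dx):
    """One Crank-Nicolson step of u_t = u_xx - u^3 (periodic in x),
    solved matrix-free with SciPy's Jacobian-free Newton-Krylov solver.
    A toy stand-in for SIME, illustrating only the CN + JFNK structure."""
    def rhs(v):
        lap = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2
        return lap - v**3
    r0 = rhs(u)                        # explicit half of the CN average
    def residual(v):
        return v - u - 0.5 * dt * (rhs(v) + r0)
    return newton_krylov(residual, u, f_tol=1e-8)
```

Because `residual` is the only interface to the physics, swapping in a different rheology or stress tensor changes `rhs` but leaves the implicit solver untouched, which is the practical appeal of JFNK noted in the abstract.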
DEFF Research Database (Denmark)
Fasano, Andrea; Rasmussen, Henrik K.
2017-01-01
A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element...
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
Solving moving interface problems using a higher order accurate finite difference scheme
Mittal, H. V. R.; Ray, Rajendra K.
2017-07-01
A new finite difference scheme is applied to solve partial differential equations in domains with discontinuities due to the presence of time-dependent moving or deforming interfaces. This scheme is an extension of the finite difference idea developed for solving the incompressible, steady Stokes equations in discontinuous domains with fixed interfaces [1]. This new idea is applied at the irregular points at each time step, in conjunction with the Crank-Nicolson (CN) implicit scheme and a recently developed Higher Order Compact (HOC) scheme at regular points. For validation, Stefan's problem is considered with a moving interface in one dimension. In two dimensions, the heat equation is considered on a square domain with a circular interface whose radius changes continuously with time. The HOC scheme is found to produce better results, and its order of accuracy is slightly better than that of the CN scheme. However, both schemes show around second order accuracy and good agreement with the analytical solution.
Fu, Bina; Xu, Xin; Zhang, Dong H
2008-07-07
We present a hierarchical construction scheme for accurate ab initio potential energy surface generation. The scheme is based on the observation that when the molecular configuration changes, the variation in the potential energy difference between different ab initio methods is much smaller than the variation in the potential energy itself. This means that it is easier to represent the energy difference numerically to a desired accuracy. Because the computational cost of ab initio calculations increases very rapidly with accuracy, one can gain substantial savings in computational time by constructing a highly accurate potential energy surface as the sum of a less accurate surface, based on extensive ab initio data points, and an energy-difference surface between the high- and low-accuracy ab initio methods, based on far fewer data points. The new scheme was applied to construct an accurate ground-state potential energy surface for the FH2 system using the coupled-cluster method and a very large basis set. The constructed potential energy surface is found to be more accurate in describing the resonance states in the FH2 and FHD systems than the existing surfaces.
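The hierarchical idea, fitting the cheap surface densely and the smoother high-minus-low difference sparsely, can be sketched in one dimension with polynomial fits standing in for the paper's surface representation. The functions, grids, and polynomial degrees below are illustrative assumptions only.

```python
import numpy as np
from numpy.polynomial import Polynomial

def hierarchical_pes_1d(x_dense, e_low_dense, x_sparse, e_high_sparse,
                        e_low_sparse, deg_low=8, deg_diff=2):
    """Hierarchical PES construction, 1-D sketch.

    Fit the low-accuracy energies on a dense grid, fit the (smoother)
    high-minus-low energy difference on a sparse grid, and return their
    sum as a callable surrogate for the high-accuracy surface.
    """
    low = Polynomial.fit(x_dense, e_low_dense, deg_low)
    diff = Polynomial.fit(x_sparse, np.asarray(e_high_sparse) - np.asarray(e_low_sparse), deg_diff)
    return lambda x: low(x) + diff(x)
```

The saving comes from the difference surface needing far fewer expensive high-level points than a direct fit of the high-level surface would, precisely because the method-to-method difference varies slowly with geometry.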
Directory of Open Access Journals (Sweden)
C. Bommaraju
2005-01-01
Full Text Available Numerical methods are extremely useful in solving real-life problems with complex materials and geometries. However, numerical methods in the time domain suffer from artificial numerical dispersion. Standard numerical techniques which are second-order in space and time, like the conventional Finite Difference 3-point (FD3) method, the Finite-Difference Time-Domain (FDTD) method, and the Finite Integration Technique (FIT), provide estimates of the error of discretized numerical operators rather than the error of the numerical solutions computed using these operators. Here, optimally accurate time-domain FD operators which are second-order in time as well as in space are derived. Optimal accuracy means the greatest attainable accuracy for a particular type of scheme, e.g., second-order FD, for a given grid spacing. The modified operators lead to an implicit scheme. Using the first-order Born approximation, this implicit scheme is transformed into a two-step explicit scheme, namely a predictor-corrector scheme. The stability condition (maximum time step for a given spatial grid interval) for the various modified schemes is roughly equal to that for the corresponding conventional scheme. The modified FD scheme (FDM) reduces numerical dispersion by almost a factor of 40 in the 1-D case, compared to the FD3, FDTD, and FIT. The CPU time for the FDM scheme is twice that required by the FD3 method. The simulated synthetic data for a 2-D P-SV (elastodynamics) problem computed using the modified scheme are 30 times more accurate than synthetics computed using a conventional scheme, at a cost of only 3.5 times as much CPU time. The FDM is of particular interest in the modeling of large scale (spatial dimensions of a thousand wavelengths or more, or observation times long compared to the reference time step) wave propagation and scattering problems, for instance, in ultrasonic antenna and synthetic scattering data modeling for Non
Directory of Open Access Journals (Sweden)
Andrew Erwin
Full Text Available In this paper, a novel haptic feedback scheme, used for accurately positioning a 1-DOF virtual wrist prosthesis through sensory substitution, is presented. The scheme employs a three-node tactor array and discretely and selectively modulates the stimulation frequency of each tactor to relay 11 discrete haptic stimuli to the user. Able-bodied participants were able to move the virtual wrist prosthesis via a surface electromyography based controller. The participants evaluated the feedback scheme without visual or audio feedback and relied solely on the haptic feedback to correctly position the hand. The scheme was evaluated through both normal (perpendicular) and shear (lateral) stimulations applied on the forearm. Normal stimulations were applied through a prototype device previously developed by the authors, while shear stimulations were generated using a ubiquitous coin-motor vibrotactor. Trials with no feedback served as a baseline for comparing results within the study and to the literature. The results indicated that both normal and shear stimulations allowed the virtual wrist to be positioned accurately, with no significant difference between the two. Using haptic feedback was substantially better than no feedback. The results found in this study are significant since the feedback scheme allows relatively few tactors to relay rich haptic information to the user and can be learned easily despite a relatively short amount of training. Additionally, the results are important for the haptic community since they contradict the common conception in the literature that normal stimulation is inferior to shear. From an ergonomic perspective, normal stimulation has the potential to benefit upper limb amputees since it can operate at lower frequencies than shear-based vibrotactors while also generating less noise. Through further tuning of the novel haptic feedback scheme and normal stimulation device, a compact and comfortable sensory substitution
An accurate momentum advection scheme for a z-level coordinate models
Kleptsova, O.; Stelling, G.S.; Pietrzak, J.D.
2010-01-01
In this paper, we focus on a conservative momentum advection discretisation in the presence of z-layers. While in the 2D case conservation of momentum is achieved automatically for an Eulerian advection scheme, special attention is required in the multi-layer case. We show here that an artificial
Numerical Investigation of a Novel Wiring Scheme Enabling Simple and Accurate Impedance Cytometry
Directory of Open Access Journals (Sweden)
Federica Caselli
2017-09-01
Full Text Available Microfluidic impedance cytometry is a label-free approach for high-throughput analysis of particles and cells. It is based on the characterization of the dielectric properties of single particles as they flow through a microchannel with integrated electrodes. However, the measured signal depends not only on the intrinsic particle properties, but also on the particle trajectory through the measuring region, thus challenging the resolution and accuracy of the technique. In this work we show via simulation that this issue can be overcome without resorting to particle focusing, by means of a straightforward modification of the wiring scheme for the most typical and widely used microfluidic impedance chip.
Boukandou-Mombo, Charlotte; Bakrim, Hassan; Claustre, Jonathan; Margot, Joëlle; Matte, Jean-Pierre; Vidal, François
2017-11-01
We present a new numerical method for solving the time-dependent isotropic Fokker-Planck equation. We show analytically and numerically that the numerical scheme provides accurate particle and energy density conservation in practical conditions, an equilibrium solution close to the Maxwellian distribution, and the decrease of entropy with time. The slight nonconservation of particle and energy density is due only to the finite value of the upper bound of the energy grid. Additionally, the totally implicit scheme proves to provide positive solutions and to be unconditionally stable. The implicit forms of the scheme can be set up as a nonlinear tridiagonal system of equations and solved iteratively. For a uniform grid in energy with N points, the number of operations required to compute the solution at a given time is only O(N), in contrast to the totally explicit variant, which requires O(N³) operations due to the restriction on the time step. The time-centered variant is more accurate than the totally implicit one and uses an equivalent CPU time, but does not provide positive solutions for very large time steps. The results of the method are analyzed for the classical problem of an initially Gaussian distribution as well as for an initially quasi-truncated Maxwellian distribution.
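The O(N) cost per implicit step comes from solving a tridiagonal system. The standard Thomas algorithm, sketched below as a generic kernel (not the paper's specific discretization), does this with a single forward elimination and back substitution.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(N) operations.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Assumes the system is well-conditioned (e.g. diagonally dominant).
    """
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the nonlinear implicit Fokker-Planck step described in the abstract, a solve like this would sit inside an outer iteration that updates the nonlinear coefficients until convergence.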
Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Caro, Miguel A; Laurila, Tomi; Lopez-Acevedo, Olga
2016-12-28
We explore different schemes for improved accuracy of entropy calculations in aqueous liquid mixtures from molecular dynamics (MD) simulations. We build upon the two-phase thermodynamic (2PT) model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and explore new ways to obtain the partition between the gas-like and solid-like parts of the density of states, as well as the effect of the chosen ideal "combinatorial" entropy of mixing, both of which have a large impact on the results. We also propose a first-order correction to the issue of kinetic energy transfer between degrees of freedom (DoF). This problem arises when the effective temperatures of translational, rotational, and vibrational DoF are not equal, either due to poor equilibration or reduced system size/time sampling, which are typical problems for ab initio MD. The new scheme enables improved convergence of the results with respect to configurational sampling, by up to one order of magnitude, for short MD runs. To ensure a meaningful assessment, we perform MD simulations of liquid mixtures of water with several other molecules of varying sizes: methanol, acetonitrile, N, N-dimethylformamide, and n-butanol. Our analysis shows that results in excellent agreement with experiment can be obtained with little computational effort for some systems. However, the ability of the 2PT method to succeed in these calculations is strongly influenced by the choice of force field, the fluidicity (hard-sphere) formalism employed to obtain the solid/gas partition, and the assumed combinatorial entropy of mixing. We tested two popular force fields, GAFF and OPLS with SPC/E water. For the mixtures studied, the GAFF force field seems to perform as a slightly better "all-around" force field when compared to OPLS+SPC/E.
Direct Simulations of Transition and Turbulence Using High-Order Accurate Finite-Difference Schemes
Rai, Man Mohan
1997-01-01
In recent years the techniques of computational fluid dynamics (CFD) have been used to compute flows associated with geometrically complex configurations. However, success in terms of accuracy and reliability has been limited to cases where the effects of turbulence and transition could be modeled in a straightforward manner. Even in simple flows, the accurate computation of skin friction and heat transfer using existing turbulence models has proved to be a difficult task, one that has required extensive fine-tuning of the turbulence models used. In more complex flows (for example, in turbomachinery flows in which vortices and wakes impinge on airfoil surfaces causing periodic transitions from laminar to turbulent flow) the development of a model that accounts for all scales of turbulence and predicts the onset of transition may prove to be impractical. Fortunately, current trends in computing suggest that it may be possible to perform direct simulations of turbulence and transition at moderate Reynolds numbers in some complex cases in the near future. This seminar will focus on direct simulations of transition and turbulence using high-order accurate finite-difference methods. The advantage of the finite-difference approach over spectral methods is that complex geometries can be treated in a straightforward manner. Additionally, finite-difference techniques are the prevailing methods in existing application codes. In this seminar high-order-accurate finite-difference methods for the compressible and incompressible formulations of the unsteady Navier-Stokes equations and their applications to direct simulations of turbulence and transition will be presented.
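The accuracy gain that motivates high-order finite-difference methods can be illustrated with standard stencils; the sketch below compares generic second- and fourth-order central differences for a first derivative on a periodic grid, and is not the seminar's specific compressible or incompressible schemes.

```python
import numpy as np

# Hedged sketch: second- vs fourth-order central differences for u'(x) on a
# periodic grid, to illustrate why high-order stencils pay off. Generic
# textbook stencils, not the seminar's actual Navier-Stokes schemes.

def d1_second_order(u, h):
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

def d1_fourth_order(u, h):
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * h)

# Periodic test function u = sin(x), whose exact derivative is cos(x).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]
u = np.sin(x)
err2 = np.max(np.abs(d1_second_order(u, h) - np.cos(x)))
err4 = np.max(np.abs(d1_fourth_order(u, h) - np.cos(x)))
```

On this grid the fourth-order stencil is several hundred times more accurate than the second-order one, which is why direct simulations at moderate cost favour high-order discretizations.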
DEFF Research Database (Denmark)
He, Jinwei; Li, Yun Wei; Blaabjerg, Frede
2013-01-01
To address inaccurate power sharing problems in autonomous islanding microgrids, an enhanced droop control method through adaptive virtual impedance adjustment is proposed. First, a term associated with DG reactive power, imbalance power or harmonic power is added to the conventional real power-frequency droop control. The transient real power variations caused by this additional term are captured to realize DG series virtual impedance tuning. With the regulation of DG virtual impedance at fundamental positive sequence, fundamental negative sequence, and harmonic frequencies, an accurate power sharing...
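The conventional droop relations the abstract builds on can be sketched in a few lines; the gains and ratings below are invented illustrative values, and the paper's adaptive virtual-impedance term is not reproduced here.

```python
# Hedged sketch: conventional P-f / Q-V droop setpoints for a distributed
# generator (DG) in an islanded microgrid. The nominal values and droop
# gains are invented assumptions; the paper's contribution (adaptive
# virtual impedance tuning) sits on top of this basic relation.

def droop_setpoints(P, Q, f_nom=50.0, V_nom=230.0, m=1e-4, n=1e-3):
    """Return (f, V) commands from measured real power P and reactive power Q."""
    f = f_nom - m * P   # real power-frequency droop
    V = V_nom - n * Q   # reactive power-voltage droop
    return f, V

f_cmd, v_cmd = droop_setpoints(1000.0, 500.0)   # 1 kW, 500 var loading
```

Because each DG's frequency command falls with its real power output, DGs sharing a common frequency settle at proportional real-power shares; reactive-power sharing is what the virtual-impedance adjustment improves.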
An Accurate Direction Finding Scheme Using Virtual Antenna Array via Smartphones.
Wang, Xiaopu; Xiong, Yan; Huang, Wenchao
2016-10-29
With the development of localization technologies, researchers solve indoor localization problems using diverse methods and equipment. Most localization techniques require either specialized devices or fingerprints, which are inconvenient for daily use. Therefore, we propose and implement an accurate, efficient and lightweight system for indoor direction finding using common smartphones and loudspeakers. Our method is derived from a key insight: by moving a smartphone in regular patterns, we can effectively emulate the sensitivity and functionality of a uniform antenna array to estimate the angle of arrival of the target signal. Specifically, a user only needs to hold his smartphone still in front of him and then rotate his body through 360° with the smartphone at an approximately constant velocity. Our system can then provide accurate directional guidance and lead the user to their destination (ordinary loudspeakers preset in the indoor environment, transmitting high-frequency acoustic signals) after a few measurements. Major challenges in implementing our system lie not only in imitating a virtual antenna array with ordinary smartphones but also in overcoming the detection difficulties caused by the complex indoor environment. In addition, we leverage the gyroscope of the smartphone to reduce the impact of changes in a user's motion pattern on the accuracy of our system. To mitigate the multipath effect, we leverage multiple signal classification (MUSIC) to calculate the direction of the target signal, and then design and deploy our system in various indoor scenes. Extensive comparative experiments show that our system is reliable under various circumstances.
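The multiple signal classification (MUSIC) step the abstract leverages can be sketched on a plain uniform linear array; the paper emulates its array by rotating the phone, so the static 8-element, half-wavelength-spaced array below is purely an illustrative stand-in.

```python
import numpy as np

# Hedged sketch of MUSIC angle-of-arrival estimation on a uniform linear
# array (ULA). Element count, spacing, and the simulated source are invented
# assumptions; the paper's virtual array is formed by rotating a smartphone.

def music_spectrum(snapshots, n_sources, d=0.5,
                   angles=np.linspace(-90, 90, 361)):
    """snapshots: (n_elements, n_snapshots) complex received-signal matrix."""
    n_el = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                     # ascending order
    En = eigvecs[:, : n_el - n_sources]                      # noise subspace
    spectrum = []
    for theta in angles:
        a = np.exp(-2j * np.pi * d * np.arange(n_el)
                   * np.sin(np.radians(theta)))              # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.asarray(spectrum)

# Simulate one narrowband source at 30 degrees and recover its direction.
rng = np.random.default_rng(0)
n_el, n_snap, theta_true = 8, 200, 30.0
a_true = np.exp(-2j * np.pi * 0.5 * np.arange(n_el)
                * np.sin(np.radians(theta_true)))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a_true, s) + 0.01 * (rng.standard_normal((n_el, n_snap))
                                  + 1j * rng.standard_normal((n_el, n_snap)))
angles, spec = music_spectrum(X, n_sources=1)
theta_hat = angles[np.argmax(spec)]
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace, which is how MUSIC separates the direct path from multipath reflections.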
Reconciling privacy and security
Lieshout, M.J. van; Friedewald, M.; Wright, D.; Gutwirth, S.
2013-01-01
This paper considers the relationship between privacy and security and, in particular, the traditional "trade-off" paradigm. The issue is this: how, in a democracy, can one reconcile the trend towards increasing security (for example, as manifested by increasing surveillance) with the fundamental
Reconciling Evolution and Creation.
Tax, Sol
1983-01-01
Proposes a way to reconcile evolution with creationism by hypothesizing that the universe was created when the scientific evidence shows, speculating that this was when God began the series of creations described in Genesis, and assuming that God gave humans intelligence to uncover the methods by which he ordained scientific evolution. (Author/MJL)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.
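The abstract does not specify the adaptive model, but a generic starting point for spectroscopic concentration estimation is the Beer-Lambert law, where absorbance at several wavelengths is a linear mix of component concentrations that can be inverted by least squares. Every coefficient in the sketch below is invented for illustration; the paper's adaptive scheme is more elaborate than this linear inversion.

```python
import numpy as np

# Hedged sketch: estimating component concentrations from multi-wavelength
# absorbance via the Beer-Lambert law, A = E @ c (path length folded into E).
# The extinction matrix and concentrations are invented toy values, not data
# or coefficients from the paper.

E = np.array([[0.90, 0.10],   # rows: wavelengths; cols: glucose, background
              [0.40, 0.55],
              [0.15, 0.80]])
c_true = np.array([5.0, 50.0])        # invented concentrations (e.g. mmol/L)
A = E @ c_true                        # noise-free absorbance readings
c_hat, *_ = np.linalg.lstsq(E, A, rcond=None)
```

With more wavelengths than components the inversion is overdetermined, which is what makes noisy in vivo spectra tractable at all; the hard part the paper addresses is that real biological media are neither linear nor calibration-free.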
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Silva, Goncalo; Talon, Laurent; Ginzburg, Irina
2017-04-01
is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as a new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accuracy boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter on the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.
Energy Technology Data Exchange (ETDEWEB)
Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)
2017-04-15
Directory of Open Access Journals (Sweden)
Han Zou
2016-02-01
Full Text Available The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information as well. Emerging mobile devices, which are equipped with various sensors, are a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on mobile devices, based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on-board power-hungry sensors on and off smartly and automatically, optimize their performance and reduce the power consumption of mobile devices simultaneously. Moreover, it enables seamless positioning and navigation services, especially in semi-outdoor environments, which cannot easily be achieved by GPS or an indoor positioning system (IPS). We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results validate the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.
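The core IO decision the abstract describes, inferring indoor/outdoor state from beacon signal strength so power-hungry sensors can be toggled, can be sketched as a windowed RSSI average with hysteresis. BlueDetect's actual algorithm is more sophisticated, and the dBm thresholds below are invented assumptions.

```python
# Hedged sketch: indoor/outdoor (IO) detection from iBeacon RSSI readings.
# A windowed mean with hysteresis stands in for BlueDetect's algorithm;
# the enter/exit thresholds are invented, not values from the paper.

def io_state(rssi_window, prev_state, enter_dbm=-85, exit_dbm=-95):
    """Return 'indoor' or 'outdoor' given recent RSSI samples of known beacons."""
    if not rssi_window:               # no deployed beacon heard at all
        return "outdoor"
    level = sum(rssi_window) / len(rssi_window)
    if prev_state == "outdoor" and level > enter_dbm:
        return "indoor"
    if prev_state == "indoor" and level < exit_dbm:
        return "outdoor"
    return prev_state                 # hysteresis band: keep current state
```

The hysteresis band keeps the state from chattering in semi-outdoor areas, which is exactly where a raw threshold (or GPS alone) performs worst.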
Directory of Open Access Journals (Sweden)
M. A. Starasotnikau
2015-01-01
Full Text Available The paper considers a control scheme for optoelectronic devices with matrix photo-detectors, such as autocollimators, microscopes, star trackers and other filming equipment, where the control is carried out with the help of a collimator. A number of factors (structure discreteness, photo-detector noise, consistency of collimator test-object size with photo-detector pixel size, and the point scattering function of the optical components) exert an influence on control accuracy. In the context of control and alignment of optoelectronic devices, the paper studies a scheme which includes two components: a controlling component, that is a collimator, and a component to be controlled, that is a telecentric system. A mathematical model for the control scheme has been proposed with the purpose of determining the effect of the above-mentioned factors, and its mathematical implementation has been described. Through simulation, an optimal ratio has been selected for the component parameters of the optical control scheme: the point scattering function of the collimator objective and the telecentric system, collimator test-object size, and photo-detector pixel size. A collimator test-object size has been determined which gives the smallest measurement error caused by photo-detector discreteness of a controlled device. A standard deviation of the energy center of gravity of the collimator test-object caused by photo-detector noise has been determined. In order to reduce the effect of photo-detector noise, the paper proposes to set to zero any signal values smaller than a doubled discretization interval of the analog-to-digital converter.
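The noise-suppression rule above, zeroing samples below twice the ADC discretization interval before computing the energy center of gravity, can be sketched directly. The spot pattern and ADC step below are invented for illustration.

```python
import numpy as np

# Hedged sketch: center-of-gravity (centroid) spot localization on a matrix
# photo-detector, zeroing samples below twice the ADC quantization step as
# the paper proposes for suppressing photo-detector noise. The synthetic
# frame and adc_step value are invented assumptions.

def centroid_with_noise_floor(frame, adc_step):
    img = frame.astype(float).copy()
    img[img < 2 * adc_step] = 0.0          # suppress noise-level counts
    ys, xs = np.indices(img.shape)         # row (y) and column (x) indices
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total  # (x, y)

# Symmetric synthetic spot centered at column 3, row 2, plus one noise count.
frame = np.zeros((7, 7))
frame[2, 3] = 100
frame[1, 3] = frame[3, 3] = frame[2, 2] = frame[2, 4] = 50
frame[0, 0] = 1                            # noise count, below the threshold
x_c, y_c = centroid_with_noise_floor(frame, adc_step=1)
```

Without the threshold, the single stray count at the corner would bias the centroid; with it, the gravity center lands exactly on the spot.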
Reconcile: A Coreference Resolution Research Platform
Energy Technology Data Exchange (ETDEWEB)
Stoyanov, V; Cardie, C; Gilbert, N; Riloff, E; Buttler, D; Hysom, D
2009-10-29
Despite the availability of standard data sets and metrics, approaches to the problem of noun phrase coreference resolution are hard to compare empirically due to differing evaluation settings, stemming in part from the lack of comprehensive coreference resolution research platforms. In this tech report we present Reconcile, a coreference resolution research platform that aims to facilitate the implementation of new approaches to coreference resolution as well as the comparison of existing approaches. We discuss Reconcile's architecture and give results of running Reconcile on six data sets using four evaluation metrics, showing that Reconcile's performance is comparable to state-of-the-art systems in coreference resolution.
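The two-stage architecture Reconcile coordinates, a pairwise coreference classifier followed by a clustering step over its decisions, can be sketched with stand-ins: a trivial head-word-match "classifier" and single-link clustering via union-find. Reconcile's real classifiers are learned and its feature set is far richer; nothing below is its actual implementation.

```python
# Hedged sketch of the classify-then-cluster pipeline Reconcile coordinates.
# The pairwise "classifier" is a trivial string/head-word match stand-in,
# and clustering is single-link transitive closure via union-find.

def corefer(m1, m2):
    """Stand-in pairwise classifier: exact match or shared head word."""
    return (m1.lower() == m2.lower()
            or m1.split()[-1].lower() == m2.split()[-1].lower())

def cluster(mentions):
    parent = list(range(len(mentions)))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            if corefer(mentions[i], mentions[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i, m in enumerate(mentions):
        groups.setdefault(find(i), []).append(m)
    return list(groups.values())

chains = cluster(["Barack Obama", "Obama", "the senator", "Michelle Obama"])
```

Note how transitive closure over-merges here ("Obama" links both Obamas into one chain), which is one reason a platform like Reconcile implements a variety of clustering algorithms rather than a single closure step.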
Reconciling Medical Expenditure Estimates from the MEPS...
U.S. Department of Health & Human Services — Reconciling Medical Expenditure Estimates from the MEPS and NHEA, 2007, published in Volume 2, Issue 4 of the Medicare and Medicaid Research Review, provides a...
Reconciling Multiple IPsec and Firewall Policies
Aura, Tuomas; Becker, Moritz; Roe, Michael; Zieliński, Piotr
Manually configuring large firewall policies can be a hard and error-prone task. It is even harder in the case of IPsec policies that can specify IP packets not only to be accepted or discarded, but also to be cryptographically protected in various ways. However, in many cases the configuration task can be simplified by writing a set of smaller, independent policies that are then reconciled consistently. Similarly, there is often the need to reconcile policies from multiple sources into a single one. In this paper, we discuss the issues that arise in combining multiple IPsec and firewall policies and present algorithms for policy reconciliation.
Reconciling Voices in Writing an Autoethnographic Thesis
Dawn Johnston MSc; Tom Strong PhD
2008-01-01
The authors consider writing and supervising an autoethnographic thesis as a process of reconciling voices while finding one's own academic and personal voice. They draw from notions of polyphony to speak about how we negotiated with different voices (the voices of experts, research participants, personal affiliations, those used in our supervisory discussions) our way forward in the supervisory relationship, as well as in the thesis itself. They invite readers to draw their own meanings from...
Has bioscience reconciled mind and body?
Davies, Carmel; Redmond, Catherine; Toole, Sinead O; Coughlan, Barbara
2016-09-01
The aim of this discursive paper is to explore the question 'has biological science reconciled mind and body?'. This paper has been inspired by the recognition that bioscience has a historical reputation for privileging the body over the mind. The disregard for the mind (emotions and behaviour) cast bioscience within a 'mind-body problem' paradigm. It has also led to inherent limitations in its capacity to contribute to understanding the complex nature of health. This is a discursive paper. Literature from the history and sociology of science and psychoneuroimmunology (1975-2015) informs the arguments in this paper. The historical and sociological literature provides the basis for a socio-cultural debate on mind-body considerations in science since the 1970s. The psychoneuroimmunology literature draws on mind-body bioscientific theory as a way to demonstrate how science is reconciling mind and body and advancing its understanding of the interconnections between emotions, behaviour and health. Using sociological and biological evidence, this paper demonstrates how bioscience is embracing and advancing its understanding of mind-body interconnectedness. It does this by demonstrating the emotional and behavioural alterations that are caused by two common phenomena: prolonged, chronic peripheral inflammation and prolonged psychological stress. The evidence and arguments provided have global currency that advances understanding of the inter-relationship between emotions, behaviour and health. This paper shows how bioscience has reconciled mind and body. In doing so, it has advanced an understanding of science's contribution to the inter-relationship between emotions, behaviour and health. The biological evidence supporting mind-body science has relevance to clinical practice for nurses and other healthcare professions. This paper discusses how this evidence can inform and enhance clinical practice directly and through research, education and policy.
Reconciling Voices in Writing an Autoethnographic Thesis
Directory of Open Access Journals (Sweden)
Dawn Johnston MSc
2008-09-01
Full Text Available The authors consider writing and supervising an autoethnographic thesis as a process of reconciling voices while finding one's own academic and personal voice. They draw from notions of polyphony to speak about how we negotiated with different voices (the voices of experts, research participants, personal affiliations, those used in our supervisory discussions) our way forward in the supervisory relationship, as well as in the thesis itself. They invite readers to draw their own meanings from these negotiations as they can relate to supervisory relationships and the writing of academic theses.
Reconciling atmospheric temperatures in the early Archean
DEFF Research Database (Denmark)
Pope, Emily Catherine; Bird, Dennis K.; Rosing, Minik Thorleif
Average surface temperatures of Earth in the Archean remain unresolved despite decades of diverse approaches to the problem. As in the present, early Earth climates were complex systems dependent on many variables. With few constraints on such variables, climate models must be relatively simplistic...... rock record. The goal of this study is to compile and reconcile Archean geologic and geochemical features that are in some way controlled by surface temperature and/or atmospheric composition, so that at the very least paleoclimate models can be checked by physical limits. Data used to this end include...
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-01-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
Reconciling controversies about the 'global warming hiatus'.
Medhaug, Iselin; Stolpe, Martin B; Fischer, Erich M; Knutti, Reto
2017-05-03
Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the 'global warming hiatus', caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of 'hiatus' and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.
Reconciling Hierarchical and Edge Organizations: 9-11 Revisited
2014-06-01
Grant, T.J. (2006). Measuring the Potential Benefits of NCW: 9/11 as case study. In Alberts, D.S. (Ed.), Proceedings, 11th International Command [...]. Research themes mentioned: two PhD students (cultural influences; eCommerce to support CMI); offensive cyber operations (integrating kinetic and cyber ops). Research question: can the two forms be reconciled? Answer: yes, and with synergistic benefits too (19th ICCRTS, Alexandria VA, 17-19 Jun 14).
Directory of Open Access Journals (Sweden)
M. von Hobe
2013-09-01
Full Text Available The international research project RECONCILE has addressed central questions regarding polar ozone depletion, with the objective to quantify some of the most relevant yet still uncertain physical and chemical processes and thereby improve prognostic modelling capabilities to realistically predict the response of the ozone layer to climate change. This overview paper outlines the scope and the general approach of RECONCILE, and it provides a summary of observations and modelling in 2010 and 2011 that have generated an in many respects unprecedented dataset to study processes in the Arctic winter stratosphere. Principally, it summarises important outcomes of RECONCILE including (i) better constraints and enhanced consistency on the set of parameters governing catalytic ozone destruction cycles, (ii) a better understanding of the role of cold binary aerosols in heterogeneous chlorine activation, (iii) an improved scheme of polar stratospheric cloud (PSC) processes that includes heterogeneous nucleation of nitric acid trihydrate (NAT) and ice on non-volatile background aerosol leading to better model parameterisations with respect to denitrification, and (iv) long transient simulations with a chemistry-climate model (CCM) updated based on the results of RECONCILE that better reproduce past ozone trends in Antarctica and are deemed to produce more reliable predictions of future ozone trends. The process studies and the global simulations conducted in RECONCILE show that in the Arctic, ozone depletion uncertainties in the chemical and microphysical processes are now clearly smaller than the sensitivity to dynamic variability.
von Hobe, M.; Bekki, S.; Borrmann, S.; Cairo, F.; D'Amato, F.; Di Donfrancesco, G.; Dörnbrack, A.; Ebersoldt, A.; Ebert, M.; Emde, C.; Engel, I.; Ern, M.; Frey, W.; Genco, S.; Griessbach, S.; Grooß, J.-U.; Gulde, T.; Günther, G.; Hösen, E.; Hoffmann, L.; Homonnai, V.; Hoyle, C. R.; Isaksen, I. S. A.; Jackson, D. R.; Jánosi, I. M.; Jones, R. L.; Kandler, K.; Kalicinsky, C.; Keil, A.; Khaykin, S. M.; Khosrawi, F.; Kivi, R.; Kuttippurath, J.; Laube, J. C.; Lefèvre, F.; Lehmann, R.; Ludmann, S.; Luo, B. P.; Marchand, M.; Meyer, J.; Mitev, V.; Molleker, S.; Müller, R.; Oelhaf, H.; Olschewski, F.; Orsolini, Y.; Peter, T.; Pfeilsticker, K.; Piesch, C.; Pitts, M. C.; Poole, L. R.; Pope, F. D.; Ravegnani, F.; Rex, M.; Riese, M.; Röckmann, T.; Rognerud, B.; Roiger, A.; Rolf, C.; Santee, M. L.; Scheibe, M.; Schiller, C.; Schlager, H.; Siciliani de Cumis, M.; Sitnikov, N.; Søvde, O. A.; Spang, R.; Spelten, N.; Stordal, F.; Sumińska-Ebersoldt, O.; Ulanovski, A.; Ungermann, J.; Viciani, S.; Volk, C. M.; vom Scheidt, M.; von der Gathen, P.; Walker, K.; Wegner, T.; Weigel, R.; Weinbruch, S.; Wetzel, G.; Wienhold, F. G.; Wohltmann, I.; Woiwode, W.; Young, I. A. K.; Yushkov, V.; Zobrist, B.; Stroh, F.
2013-09-01
Reconciling Cadastral Records in a Dual Land Registration System ...
African Journals Online (AJOL)
... activities and the processes for recording land titles in Ghana in view of existing laws. Experiences in developing procedures for reconciling the records from the two systems are also discussed. Keywords: Cadastral survey, land records, land tenure, registration, survey regulation. Journal of Science and Technology Vol.
A Novel Iris Segmentation Scheme
Directory of Open Access Journals (Sweden)
Chen-Chung Liu
2014-01-01
Full Text Available One of the key steps in the iris recognition system is the accurate segmentation of the iris from its surrounding noise, including pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
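The circle-fitting step can be illustrated with the algebraic (Kåsa-style) least-squares fit, the family to which the Delogne-Kåsa estimator belongs: writing the circle as x² + y² = A·x + B·y + C turns the fit into a linear least-squares problem. This is a generic sketch, not the paper's exact estimator or its outlier-elimination logic.

```python
import numpy as np

# Hedged sketch of an algebraic (Kasa-style) least-squares circle fit.
# Writing the circle as x^2 + y^2 = A*x + B*y + C gives a linear system;
# center = (A/2, B/2) and radius = sqrt(C + A^2/4 + B^2/4).

def kasa_circle_fit(x, y):
    M = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
    cx, cy = A / 2.0, B / 2.0
    r = np.sqrt(C + cx**2 + cy**2)
    return cx, cy, r

# Synthetic boundary points on a circle of radius 3 centered at (1, -2).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = kasa_circle_fit(1 + 3 * np.cos(t), -2 + 3 * np.sin(t))
```

Unlike a Hough transform, this fit needs no accumulator over candidate centers and radii, which is why algebraic fits are attractive once outliers have been pruned.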
Additive operator-difference schemes splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
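Splitting with respect to spatial variables can be sketched for the 2D heat equation u_t = u_xx + u_yy: one time step is split into an x-direction diffusion sub-step followed by a y-direction sub-step (Lie splitting). The explicit sub-steps and all grid parameters below are illustrative assumptions, simpler than the unconditionally stable schemes the book constructs.

```python
import numpy as np

# Hedged sketch: Lie splitting of the 2D heat equation into two 1D explicit
# diffusion sub-steps on a periodic grid. Illustrative only; the book's
# additive schemes include implicit, unconditionally stable variants.

def step_split(u, dt, h):
    lap_x = (np.roll(u, 1, axis=0) - 2 * u + np.roll(u, -1, axis=0)) / h**2
    u = u + dt * lap_x                      # sub-step 1: x-direction diffusion
    lap_y = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / h**2
    u = u + dt * lap_y                      # sub-step 2: y-direction diffusion
    return u

n = 32
h = 1.0 / n
dt = 0.2 * h**2                             # stable for each 1D explicit sub-step
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                     # initial heat spike
total0 = u.sum()
for _ in range(50):
    u = step_split(u, dt, h)
```

Each sub-step only couples points along one coordinate direction, which is what makes the multi-dimensional problem decompose into a sequence of cheap one-dimensional solves; with periodic boundaries the total heat is conserved exactly.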
Reconciling bottom-up, top-down, and direct measurements of biogenic VOC emissions
Guenther, A.; Karl, T.; Wiedinmyer, C.; Barkley, M.; Palmer, P.; Muller, J. F.; Stavrakov, T.; Millet, D.
2007-12-01
Biogenic volatile organic compound (BVOC) emissions vary considerably on spatial scales ranging from a few meters to thousands of kilometers and temporal scales ranging from seconds to years. Accurate estimates of BVOC emissions are required for many regional air quality modeling studies and global earth system investigations. We compare results from bottom-up estimates, using the Model of Emissions of Gases and Aerosols from Nature (MEGAN), with top-down estimates, based on satellite and in-situ concentration distributions, and direct flux measurements. We describe examples of both agreement and disagreement in U.S., tropical forest and other landscapes and discuss potential explanations for differences that can exceed a factor of 2. Future measurement and modeling needs are outlined and specific activities are proposed to improve efforts to reconcile these approaches and understand the controlling processes.
Scheme Program Documentation Tools
DEFF Research Database (Denmark)
Nørmark, Kurt
2004-01-01
This paper describes and discusses two different Scheme documentation tools. The first is SchemeDoc, which is intended for documentation of the interfaces of Scheme libraries (APIs). The second is the Scheme Elucidator, which is for internal documentation of Scheme programs. Although the tools...
How self-interactions can reconcile sterile neutrinos with cosmology.
Hannestad, Steen; Hansen, Rasmus Sloth; Tram, Thomas
2014-01-24
Short baseline neutrino oscillation experiments have shown hints of the existence of additional sterile neutrinos in the eV mass range. However, such neutrinos seem incompatible with cosmology because they have too large an impact on cosmic structure formation. Here we show that new interactions in the sterile neutrino sector can prevent their production in the early Universe and reconcile short baseline oscillation experiments with cosmology.
Bad Company: Reconciling Negative Peer Effects in College Achievement
Brady, Ryan; Insler, Michael; Rahman, Ahmed
2015-01-01
Existing peer effects studies produce contradictory findings, including positive, negative, large, and small effects, despite similar contexts. We reconcile these results using U.S. Naval Academy data covering a 22-year history of the random assignment of students to peer groups. Coupled with students' limited discretion over freshman-year courses, our setting affords an opportunity to better understand peer effects in different social networks. We find negative effects at the broader "compan...
RECONCILE: a machine-learning coreference resolution system
Energy Technology Data Exchange (ETDEWEB)
2007-12-10
RECONCILE is a noun phrase coreference resolution system: it identifies noun phrases in a text document and determines which subsets refer to each real-world entity referenced in the text. The heart of the system is a combination of supervised and unsupervised machine learning systems. It uses a machine learning algorithm (chosen from an extensive suite, including Weka) for training noun phrase coreference classifier models and implements a variety of clustering algorithms to coordinate the pairwise classifications. A number of features have been implemented, including all of the features employed in Ng & Cardie [2002].
Comparison of Accuracy and Efficiency of Time-domain Schemes for Calculating Synthetic Seismograms
Mizutani, Hiromitsu; Geller, Robert J.; Takeuchi, Nozomu
2000-04-01
We conduct numerical experiments for several simple models to illustrate the advantages and disadvantages of various schemes for computing synthetic seismograms in the time domain. We consider both schemes that use the pseudo-spectral method (PSM) to compute spatial derivatives and schemes that use the finite difference method (FDM) to compute spatial derivatives. We show that schemes satisfying the criterion for optimal accuracy of Geller and Takeuchi (1995) are significantly more cost-effective than non-optimally accurate schemes of the same type. We then compare optimally accurate PSM schemes to optimally accurate FDM schemes. For homogeneous or smoothly varying heterogeneous media, PSM schemes require significantly fewer grid points per wavelength than FDM schemes, and are thus more cost-effective. In contrast, we show that FDM schemes are more cost-effective for media with sharp boundaries or steep velocity gradients. Thus FDM schemes appear preferable to PSM schemes for practical seismological applications. We analyze the solution error of various schemes and show that widely cited Lax-Wendroff PSM or FDM schemes that are frequently referred to as higher order schemes are in fact equivalent to second-order optimally accurate PSM or FDM schemes implemented as two-step (predictor-corrector) schemes. The error of solutions obtained using such schemes is thus second-order, rather than fourth-order.
Towards an "All Speed" Unstructured Upwind Scheme
Loh, Ching Y.; Jorgenson, Philip C.E.
2009-01-01
In the authors' previous studies [1], a time-accurate, upwind finite volume method (ETAU scheme) for computing compressible flows on unstructured grids was proposed. The scheme is second order accurate in space and time and yields high resolution in the presence of discontinuities. The scheme features a multidimensional limiter and multidimensional numerical dissipation. These help to stabilize the numerical process and to overcome the annoying pathological behaviors of upwind schemes. In the present paper, it will be further shown that such multidimensional treatments also lead to a nearly all-speed or Mach number insensitive upwind scheme. For flows at very high Mach number, e.g., 10, local numerical instabilities or the pathological behaviors are suppressed, while for flows at very low Mach number, e.g., 0.02, computation can be directly carried out without invoking preconditioning. For flows in different Mach number regimes, i.e., low, medium, and high Mach numbers, one only needs to adjust one or two parameters in the scheme. Several examples with low and high Mach numbers are demonstrated in this paper. Thus, the ETAU scheme is applicable to a broad spectrum of flow regimes ranging from high supersonic to low subsonic, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics).
Love, Patrick G.; Guthrie, Victoria L.
1999-01-01
Summarizes William Perry's intellectual scheme and places it in the context of the 1990's. Perry's scheme of cognitive development, though more than thirty years old, is still being used by practitioners today to enhance practice in and out of the classroom. It laid a foundation for new research to extend, challenge, and build onto the scheme.…
Sman, van der R.G.M.
2006-01-01
In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
Reconciling controversies about the ‘global warming hiatus’
Medhaug, Iselin; Stolpe, Martin B.; Fischer, Erich M.; Knutti, Reto
2017-05-01
Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the ‘global warming hiatus’, caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of ‘hiatus’ and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.
ENSEMBLE methods to reconcile disparate national long range dispersion forecasts
DEFF Research Database (Denmark)
Mikkelsen, Torben; Galmarini, S.; Bianconi, R.
2003-01-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making "ENSEMBLE" procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national
Reconciling parenting and smoking in the context of child development.
Bottorff, Joan L; Oliffe, John L; Kelly, Mary T; Johnson, Joy L; Chan, Anna
2013-08-01
In this article we explore the micro-social context of parental tobacco use in the first years of a child's life and early childhood. We conducted individual interviews with 28 mothers and fathers during the 4 years following the birth of their child. Using grounded theory methods, we identified the predominant explanatory concept in parents' accounts as the need to reconcile being a parent and smoking. Desires to become smoke-free coexisted with five types of parent-child interactions: (a) protecting the defenseless child, (b) concealing smoking and cigarettes from the mimicking child, (c) reinforcing smoking as bad with the communicative child, (d) making guilt-driven promises to the fearful child, and (e) relinquishing personal responsibility to the autonomous child. We examine the agency of the child in influencing parents' smoking practices, the importance of children's observational learning in the early years, and the reciprocal nature of parent-child interactions related to parents' smoking behavior.
Improved shock-capturing of Jameson's scheme for the Euler equations
van der Burg, J.W.; Kuerten, Johannes G.M.; Zandbergen, P.J.
1992-01-01
It is known that Jameson's scheme is a pseudo-second-order-accurate scheme for solving discrete conservation laws. The scheme contains a non-linear artificial dissipative flux which is designed to capture shocks. In this paper, it is shown that the shock-capturing of Jameson's scheme for the Euler
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
Staggered Schemes for Fluctuating Hydrodynamics
Balboa, F; Delgado-Buscalioni, R; Donev, A; Fai, T; Griffith, B; Peskin, C S
2011-01-01
We develop numerical schemes for solving the isothermal compressible and incompressible equations of fluctuating hydrodynamics on a grid with staggered momenta. We develop a second-order accurate spatial discretization of the diffusive, advective and stochastic fluxes that satisfies a discrete fluctuation-dissipation balance, and construct temporal discretizations that are at least second-order accurate in time deterministically and in a weak sense. Specifically, the methods reproduce the correct equilibrium covariances of the fluctuating fields to third (compressible) and second (incompressible) order in the time step, as we verify numerically. We apply our techniques to model recent experimental measurements of giant fluctuations in diffusively mixing fluids in a micro-gravity environment [A. Vailati et al., Nature Communications 2:290, 2011]. Numerical results for the static spectrum of non-equilibrium concentration fluctuations are in excellent agreement between the compressible and incompressible simula...
de Bont, Chris
2018-01-01
This booklet was written to share research results with farmers and practitioners in Tanzania. It gives a summary of the empirical material collected during three months of field work in the Mawala irrigation scheme (Kilimanjaro Region), and includes maps, tables and photos. It describes the history of the irrigation scheme, as well as current irrigation and farming practices. It especially focuses on the different kinds of infrastructural improvement in the scheme (by farmers and the government...
A Comparative Analysis of Schemes for Correlated Branch Prediction
Young, Cliff; Gloy, Nicolas; Smith, Michael D.
1995-01-01
Modern high-performance architectures require extremely accurate branch prediction to overcome the performance limitations of conditional branches. We present a framework that categorizes branch prediction schemes by the way in which they partition dynamic branches and by the kind of predictor that they use. The framework allows us to compare and contrast branch prediction schemes, and to analyze why they work. We use the framework to show how a static correlated branch prediction scheme incr...
Thermally-Driven Mantle Plumes Reconcile Hot-spot Observations
Davies, D.; Davies, J.
2008-12-01
Hot-spots are anomalous regions of magmatism that cannot be directly associated with plate tectonic processes (e.g. Morgan, 1972). They are widely regarded as the surface expression of upwelling mantle plumes. Hot-spots exhibit variable life-spans, magmatic productivity and fixity (e.g. Ito and van Keken, 2007). This suggests that a wide-range of upwelling structures coexist within Earth's mantle, a view supported by geochemical and seismic evidence, but, thus far, not reproduced by numerical models. Here, results from a new, global, 3-D spherical, mantle convection model are presented, which better reconcile hot-spot observations, the key modification from previous models being increased convective vigor. Model upwellings show broad-ranging dynamics; some drift slowly, while others are more mobile, displaying variable life-spans, intensities and migration velocities. Such behavior is consistent with hot-spot observations, indicating that the mantle must be simulated at the correct vigor and in the appropriate geometry to reproduce Earth-like dynamics. Thermally-driven mantle plumes can explain the principal features of hot-spot volcanism on Earth.
Reconciling laboratory and field assessments of neonicotinoid toxicity to honeybees.
Henry, Mickaël; Cerrutti, Nicolas; Aupinel, Pierrick; Decourtye, Axel; Gayrard, Mélanie; Odoux, Jean-François; Pissard, Aurélien; Rüger, Charlotte; Bretagnolle, Vincent
2015-11-22
European governments have banned the use of three common neonicotinoid pesticides due to insufficiently identified risks to bees. This policy decision is controversial given the absence of clear consistency between toxicity assessments of those substances in the laboratory and in the field. Although laboratory trials report deleterious effects in honeybees at trace levels, field surveys reveal no decrease in the performance of honeybee colonies in the vicinity of treated fields. Here we provide the missing link, showing that individual honeybees near thiamethoxam-treated fields do indeed disappear at a faster rate, but the impact of this is buffered by the colonies' demographic regulation response. Although we could ascertain the exposure pathway of thiamethoxam residues from treated flowers to honeybee dietary nectar, we uncovered an unexpected pervasive co-occurrence of similar concentrations of imidacloprid, another neonicotinoid normally restricted to non-entomophilous crops in the study country. Thus, its origin and transfer pathways through the succession of annual crops need to be elucidated to properly appraise the risks of combined neonicotinoid exposures. This study reconciles the conflicting laboratory and field toxicity assessments of neonicotinoids on honeybees and further highlights the difficulty of actually detecting non-intentional effects in the field through conventional risk assessment methods. © 2015 The Author(s).
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)
2003-11-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)
Reconciling change blindness with long-term memory for objects.
Wood, Katherine; Simons, Daniel J
2017-02-01
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
Towards Symbolic Encryption Schemes
DEFF Research Database (Denmark)
Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik
2012-01-01
, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well...
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides a suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid: namely, grid connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosures, through which the proposed adaptive protection scheme recovers faster from the fault and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through the time domain simulations carried out in the PSCAD/EMTDC software environment.
A new numerical scheme for the simulation of active magnetic regenerators
DEFF Research Database (Denmark)
Torregrosa-Jaime, B.; Engelbrecht, Kurt; Payá, J.
2014-01-01
A 1D model of a parallel-plate active magnetic regenerator (AMR) has been developed based on a new numerical scheme. With respect to the implicit scheme, the new scheme achieves accurate results, minimizes computational time and prevents numerical errors. The model has been used to check the boundary condition of heat transfer in the regenerator bed.
A two-step scheme for the advection equation with minimized dissipation and dispersion errors
Takacs, L. L.
1985-01-01
A two-step advection scheme of the Lax-Wendroff type is derived which has accuracy and phase characteristics similar to that of a third-order scheme. The scheme is exactly third-order accurate in time and space for uniform flow. The new scheme is compared with other currently used methods, and is shown to simulate well the advection of localized disturbances with steep gradients. The scheme is derived for constant flow and generalized to two-dimensional nonuniform flow.
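For reference, the classical one-step Lax-Wendroff update that schemes of this type build on fits in a few lines. This is a generic second-order sketch in Python (an illustration of the baseline, not Takacs's third-order two-step variant):

```python
import numpy as np

def lax_wendroff_step(u, nu):
    """One Lax-Wendroff update for u_t + c*u_x = 0 on a periodic grid.

    nu = c*dt/dx is the Courant number; the scheme is second-order
    accurate in space and time and stable for |nu| <= 1.
    """
    up = np.roll(u, -1)  # u_{j+1}
    um = np.roll(u, 1)   # u_{j-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

# At nu = 1 the update shifts the profile exactly one cell per step,
# so 32 steps on a 32-point grid return the initial sine wave.
u0 = np.sin(2 * np.pi * np.arange(32) / 32)
u = u0.copy()
for _ in range(32):
    u = lax_wendroff_step(u, 1.0)
```

The two-step scheme described above modifies this baseline so that, for uniform flow, the leading phase (dispersion) error of the update cancels.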
Reconciling sensory cues and varied consequences of avian repellents.
Werner, Scott J; Provenza, Frederick D
2011-02-01
We learned previously that red-winged blackbirds (Agelaius phoeniceus) use affective processes to shift flavor preference, and cognitive associations (colors) to avoid food, subsequent to avoidance conditioning. We conducted three experiments with captive red-winged blackbirds to reconcile varied consequences of treated food with conditioned sensory cues. In Experiment 1, we compared food avoidance conditioned with lithium chloride (LiCl) or naloxone hydrochloride (NHCl) to evaluate cue-consequence specificity. All blackbirds conditioned with LiCl (gastrointestinal toxin) avoided the color (red) and flavor (NaCl) of food experienced during conditioning; birds conditioned with NHCl (opioid antagonist) avoided only the color (not the flavor) of food subsequent to conditioning. In Experiment 2, we conditioned experimentally naïve blackbirds using free choice of colored (red) and flavored (NaCl) food paired with an anthraquinone- (postingestive, cathartic purgative), methiocarb- (postingestive, cholinesterase inhibitor), or methyl anthranilate-based repellent (preingestive, trigeminal irritant). Birds conditioned with the postingestive repellents avoided the color and flavor of foods experienced during conditioning; methyl anthranilate conditioned only color (not flavor) avoidance. In Experiment 3, we used a third group of blackbirds to evaluate effects of novel comparison cues (blue, citric acid) subsequent to conditioning with red and NaCl paired with anthraquinone or methiocarb. Birds conditioned with the postingestive repellents did not avoid conditioned color or flavor cues when novel comparison cues were presented during the test. Thus, blackbirds cognitively associate pre- and postingestive consequences with visual cues, and reliably integrate visual and gustatory experience with postingestive consequences to procure nutrients and avoid toxins. Published by Elsevier Inc.
DEFF Research Database (Denmark)
Juhl, Hans Jørn; Stacey, Julia
2001-01-01
It is usual practice to evaluate the success of a labelling scheme by looking at the awareness percentage, but in many cases this is not sufficient. The awareness percentage gives no indication of which consumer segments are aware of and use labelling schemes and which do not. In the spring of 2001 MAPP carried out an extensive consumer study with special emphasis on the Nordic environmentally friendly label 'the swan'. The purpose was to find out how much consumers actually know and use various labelling schemes. 869 households were contacted and asked to fill in a questionnaire; 664 households returned a completed questionnaire. There were five answering categories for each label in the questionnaire: * I have not seen the label before. * I have seen the label before but I do not know the precise contents of the labelling scheme. * I have seen the label before, I do not know
Energy Technology Data Exchange (ETDEWEB)
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes: both improving the efficiency of programs and making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
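The core idea, a table that records both finished results and currently active calls, can be sketched outside Scheme as well. Here is a hedged Python analogue (the paper's Scheme version uses continuation-passing style to suspend and resume re-entrant calls; this sketch simply detects them):

```python
import functools

def tabled(fn):
    """Minimal tabling sketch: cache completed results and track calls
    that are currently in progress, so a re-entrant call with the same
    arguments is detected instead of looping forever."""
    table, active = {}, set()

    @functools.wraps(fn)
    def wrapper(*args):
        if args in table:
            return table[args]
        if args in active:
            raise RecursionError(f"re-entrant call detected: {args}")
        active.add(args)
        try:
            table[args] = fn(*args)
        finally:
            active.discard(args)
        return table[args]
    return wrapper

@tabled
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

With the table in place, `fib(30)` touches each subproblem once, so the naive exponential recursion runs in linear time.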
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the "Relative Bioavailability Leaching Procedure" (RBALP) at pH 1.5, the same test conducted at pH 2.5, the "Ohio State University In vitro Gastrointestinal" method (OSU IVG), the "Urban Soil Bioaccessible Lead Test", the modified "Physiologically Based Extraction Test" and the "Waterfowl Physiologically Based Extraction Test." All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
Reconciling divergent trends and millennial variations in Holocene temperatures.
Marsicek, Jeremiah; Shuman, Bryan N; Bartlein, Patrick J; Shafer, Sarah L; Brewer, Simon
2018-01-31
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in 'growing degree days'-a measure of the accumulated warmth above five degrees Celsius per year-of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that climate
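The 'growing degree days' measure used in the comparison above is a simple accumulation of daily warmth over a base temperature. A minimal sketch (illustrative only, not the CCSM3 diagnostic):

```python
def growing_degree_days(daily_mean_temps_c, base=5.0):
    """Sum of daily exceedances of a base temperature (degree-days).

    A kelvin day equals a degree-Celsius day here, since only
    temperature differences are accumulated.
    """
    return sum(max(t - base, 0.0) for t in daily_mean_temps_c)

# Four days with means of 4, 5, 7 and 10 C accumulate
# 0 + 0 + 2 + 5 = 7 degree-days above the 5 C base.
gdd = growing_degree_days([4.0, 5.0, 7.0, 10.0])
```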
Groundwater recharge: Accurately representing evapotranspiration
CSIR Research Space (South Africa)
Bugan, Richard DH
2011-09-01
Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...
Energy Technology Data Exchange (ETDEWEB)
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
7 CFR 1412.64 - Inaccurate representation, misrepresentation, and scheme or device.
2010-01-01
... scheme or device. 1412.64 Section 1412.64 Agriculture Regulations of the Department of Agriculture..., misrepresentation, and scheme or device. (a) Producers must report and certify program matters accurately. Errors in... a misrepresentation or scheme or device, such person will be ineligible to receive DCP or ACRE...
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 6; Issue 2. Electronic Commerce - Payment Schemes. V Rajaraman. Series Article Volume 6 Issue 2 February 2001 pp 6-13. Permanent link: http://www.ias.ac.in/article/fulltext/reso/006/02/0006-0013 ...
Alternative health insurance schemes
DEFF Research Database (Denmark)
Keiding, Hans; Hansen, Bodil O.
2002-01-01
In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market: either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where, however, the level ... competitive insurance; this situation turns out to be at least as good as either of the alternatives...
1991-01-01
their Butterfly Scheme Reference, and to Margaret O’Connell for translating it from BBN’s text-formatting language to ours. Special thanks to Richard Stallman, Bob Chassell, and Brian Fox, all of the Free Software Foundation, for creating and maintaining the Texinfo formatting language in which this
Directory of Open Access Journals (Sweden)
Berman Jules
2005-08-01
Full Text Available Abstract Background For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. Discussion The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: (1) the morphologic separation between epithelial and non-epithelial tumors; (2) the grouping of tumors based on shared cellular functions; (3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; (4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; (5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage; and (6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical
Selectively strippable paint schemes
Stein, R.; Thumm, D.; Blackford, Roger W.
1993-03-01
In order to meet the requirements of more environmentally acceptable paint-stripping processes, many different removal methods are under evaluation. These new processes can be divided into mechanical and chemical methods. ICI has developed a paint scheme with an intermediate coat and a fluid-resistant polyurethane topcoat which can be stripped chemically in a short period of time with methylene-chloride-free and phenol-free paint strippers.
Law, James; Huby, Guro; Irving, Anne-Marie; Pringle, Ann-Marie; Conochie, Douglas; Haworth, Catherine; Burston, Amanda
2010-01-01
Background: It is widely accepted that service users should be actively involved in new service developments, but there remain issues about how best to consult with them and how to reconcile their views with those of service providers. Aims: This paper uses data from The Aphasia in Scotland study, set up by NHS Quality Improvement Scotland to…
Reconciling Leadership Paradigms: Authenticity as Practiced by American Indian School Leaders
Henderson, David; Carjuzaa, Jioanna; Ruff, William G.
2015-01-01
This phenomenological study examined the complexity American Indian K-12 school leaders face on reservations in Montana, USA The study described how these leaders have to reconcile their Westernized educational leadership training with their traditional ways of knowing, living, and leading. Three major themes emerged that enabled these leaders to…
Teacher Candidates Reconcile "The Child and the Curriculum" with "No Child Left Behind"
Samuel, Francis A.; Suh, Bernadyn
2012-01-01
What relevance does John Dewey have for students and teachers of the 21st century? Can his educational philosophy be reconciled with "No Child Left Behind" (NCLB) and its emphasis on accountability and high-stakes testing? In this article, the authors discuss Dewey's ideas about the child and the curriculum; delineate how teacher…
Allen, Michele L; Garcia-Huidobro, Diego; Bastian, Tiana; Hurtado, G Ali; Linares, Roxana; Svetaz, María Veronica
2017-06-01
Participatory research (PR) trials aim to achieve the dual, and at times competing, demands of producing an intervention and research process that address community perspectives and priorities, while establishing intervention effectiveness. To identify research and community priorities that must be reconciled in the areas of collaborative processes, study design and aim and study implementation quality in order to successfully conduct a participatory trial. We describe how this reconciliation was approached in the smoking prevention participatory trial Padres Informados/Jovenes Preparados (Informed Parents/Prepared Youth) and evaluate the success of our reconciled priorities. Data sources to evaluate success of the reconciliations included a survey of all partners regarding collaborative group processes, intervention participant recruitment and attendance and surveys of enrolled study participants assessing intervention outcomes. While we successfully achieved our reconciled collaborative processes and implementation quality goals, we did not achieve our reconciled goals in study aim and design. Due in part to the randomized wait-list control group design chosen in the reconciliation process, we were not able to demonstrate overall efficacy of the intervention or offer timely services to families in need of support. Achieving the goals of participatory trials is challenging but may yield community and research benefits. Innovative research designs are needed to better support the complex goals of participatory trials.
"You Start Feeling Old": Rock Musicians Reconciling the Dilemmas of Adulthood
Ramirez, Michael
2013-01-01
Using interview data from 38 musicians, this study examines the ways in which the transition to adulthood is complicated by aspirations to a nonstandard line of work. Musicians face a recurring set of obstacles as they move into adulthood and respond by enacting various tactics to reconcile these dilemmas. The "on-time" musicians do so by framing…
Reconciling Ourselves to Reality: Arendt, Education and the Challenge of Being at Home in the World
Biesta, Gert
2016-01-01
In this paper, I explore the educational significance of the work of Hannah Arendt through reflections on four papers that constitute this special issue. I focus on the challenge of reconciling ourselves to reality, that is, of being at home in the world. Although Arendt's idea of being at home in the world is connected to her explorations of…
England, Richard
2009-01-01
Since before the time of writers such as Plato in his "Republic" and "Timaeus"; Martianus Capella in "The Marriage of Mercury and Philology"; Boethius in "De institutione musica"; Kepler in "The Harmony of the Universe"; and many others, there have been attempts to reconcile the various disciplines in the sciences, arts, humanities, and religion…
A simplification of the unified gas kinetic scheme
Chen, Songze; Xu, Kun
2016-01-01
Unified gas kinetic scheme (UGKS) is an asymptotic preserving scheme for the kinetic equations. It is superior for transition flow simulations, and has been validated in the past years. However, compared to the well known discrete ordinate method (DOM) which is a classical numerical method solving the kinetic equations, the UGKS needs more computational resources. In this study, we propose a simplification of the unified gas kinetic scheme. It allows almost identical numerical cost as the DOM, but predicts numerical results as accurate as the UGKS. Based on the observation that the equilibrium part of the UGKS fluxes can be evaluated analytically, the equilibrium part in the UGKS flux is not necessary to be discretized in velocity space. In the simplified scheme, the numerical flux for the velocity distribution function and the numerical flux for the macroscopic conservative quantities are evaluated separately. The simplification is equivalent to a flux hybridization of the gas kinetic scheme for the Navier-S...
CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...
African Journals Online (AJOL)
CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR SOLUTION OF NONLINEAR INITIAL VALUE PROBLEMS OF FIRST ORDER ORDINARY DIFFERENTIAL EQUATIONS. ... Global Journal of Mathematical Sciences ... KEY WORDS: backward differentiation scheme, collocation, initial value problems. Global Jnl ...
Bonus Schemes and Trading Activity
Pikulina, E.S.; Renneboog, L.D.R.; Ter Horst, J.R.; Tobler, P.N.
2013-01-01
Abstract: Little is known about how different bonus schemes affect traders’ propensity to trade and which bonus schemes improve traders’ performance. We study the effects of linear versus threshold (convex) bonus schemes on traders’ behavior. Traders purchase and sell shares in an experimental stock
Bonus schemes and trading activity
Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.
2014-01-01
Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of
NNLOPS accurate associated HW production
Astill, William; Re, Emanuele; Zanderighi, Giulia
2016-01-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Efficient Scheme for Chemical Flooding Simulation
Directory of Open Access Journals (Sweden)
Braconnier Benjamin
2014-07-01
Full Text Available In this paper, we investigate an efficient implicit scheme for the numerical simulation of chemical enhanced oil recovery techniques for oil fields. For the sake of brevity, we only focus on flows with polymer to describe the physical and numerical models. In this framework, we consider a black-oil model upgraded with polymer modeling. We assume the polymer is only transported in the water phase or adsorbed on the rock following a Langmuir isotherm. The polymer reduces the water phase mobility, which can drastically change the behavior of water-oil interfaces. We then propose a fractional step technique to resolve the system implicitly. The first step is devoted to the resolution of the black-oil subsystem and the second to the polymer mass conservation. In this way, the Jacobian matrices coming from the implicit formulation have a moderate size and preserve solver efficiency. Nevertheless, the coupling between the black-oil subsystem and the polymer is not fully resolved. For efficiency and accuracy comparison, we propose an explicit scheme for the polymer for which large time steps are prohibited due to its CFL (Courant-Friedrichs-Lewy) criterion and which consequently approximates the coupling accurately. Numerical experiments with polymer are simulated: a core flood, a 5-spot reservoir with surfactant and ions, and a 3D real case. Comparisons are performed between the explicit and implicit polymer schemes. They prove that our implicit polymer scheme is efficient, robust and resolves the coupled physics accurately. The development and the simulations have been performed with the software PumaFlow [PumaFlow (2013) Reference manual, release V600, Beicip Franlab].
Accurate car plate detection via car face landmark localization
Li, Hailiang; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin; Lam, Kin-Man
2017-07-01
For intelligent vehicle surveillance systems, it is a big challenge to detect the small, blurred car plates of vehicles driving on a highway. In this paper, we present a novel two-stage detection scheme for small, blurred car plates in large surveillance images. Our proposed scheme first detects vehicles, and then locates the car plates in specific regions of the detected vehicles based on our proposed car-face landmark localization algorithm. Our scheme also addresses the high false-alarm rate associated with small, blurred car-plate detection. Experimental results show that our proposed method is accurate and able to reduce the false-alarm rate without any compromise in speed.
Relational Human Ecology: Reconciling the Boundaries of Humans and Nature
McNiel, J.; Lopes, V. L.
2010-12-01
Global change is transforming the planet at unprecedented rates. Global warming, massive species extinction, increasing land degradation, overpopulation, poverty and injustice, are all the result of human choices and non-sustainable ways of life. What do we have to do and how much do we have to change to allow a transition to a more ecologically-conscious and just society? While these questions are of central concern, they cannot be fully addressed under the current paradigm, which hinders both our collection of knowledge and derivation of solutions. This paper attempts to develop a new variant of ecological thinking based on a relational ontological/epistemological approach. This is offered as a foundation for the political initiative to strive for a more fulfilling, sustainable and just society. This new approach, theoretically conceptualized as ‘relational human ecology,’ offers a relational (holistic) framework for overcoming mechanistic thinking and exploring questions regarding the long-term attainment of sustainability. Once established, we illustrate how the relational framework offers a new holistic approach centered on participatory inquiry within the context of a community workshop. We conclude with discussing possible directions for future relational human ecological participatory research, conducted from the intersection of myriad participants (i.e. agencies, academics, and community residents), and the ways in which this will allow for the derivation of accurate and sustainable solutions for global change. Key words: relational thinking, human ecology, complex adaptive systems, participatory inquiry, sustainability
A broadcast-based key agreement scheme using set reconciliation for wireless body area networks.
Ali, Aftab; Khan, Farrukh Aslam
2014-05-01
Information and communication technologies have thrived over the last few years. Healthcare systems have also benefited from this progression. A wireless body area network (WBAN) consists of small, low-power sensors used to monitor human physiological values remotely, which enables physicians to remotely monitor the health of patients. Communication security in WBANs is essential because it involves human physiological data. Key agreement and authentication are the primary issues in the security of WBANs. To agree upon a common key, the nodes exchange information with each other using wireless communication. This information exchange process must be secure enough, or the information exchange should be minimized to a certain level, so that if an information leak occurs, it does not affect the overall system. Most of the existing solutions for this problem exchange too much information for the sake of key agreement; obtaining this information is sufficient for an attacker to reproduce the key. Set reconciliation is a technique used to reconcile two similar sets held by two different hosts with minimal communication complexity. This paper presents a broadcast-based key agreement scheme using set reconciliation for secure communication in WBANs. The proposed scheme allows the neighboring nodes to agree upon a common key with the personal server (PS), generated from the electrocardiogram (EKG) feature set of the host body. Minimal information is exchanged in a broadcast manner, and even if every node is missing a different subset, by reconciling these feature sets, the whole network will still agree upon a single common key. Because of the limited information exchange, if an attacker gets the information in any way, he/she will not be able to reproduce the key. The proposed scheme mitigates replay, selective forwarding, and denial of service attacks using a challenge-response authentication mechanism. The simulation results show that the proposed scheme has a great deal of
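As a rough illustration of the key-agreement idea above (not the paper's actual EKG-based protocol), the sketch below lets two nodes holding overlapping feature sets exchange only element digests, fetch the few elements they lack, and hash the reconciled set into a common key. The feature values, digest choice and key derivation are all invented for the example; real set reconciliation achieves even lower communication than this digest exchange.

```python
import hashlib

def derive_key(feature_set):
    """Derive a symmetric key by hashing the canonically ordered feature set."""
    material = ",".join(str(f) for f in sorted(feature_set))
    return hashlib.sha256(material.encode()).hexdigest()

def reconcile(local, remote_map):
    """Naive reconciliation: pull only the elements whose digests are unknown locally."""
    local_digests = {hashlib.sha1(str(f).encode()).hexdigest() for f in local}
    missing = [v for d, v in remote_map.items() if d not in local_digests]
    return set(local) | set(missing)

# Two nodes sample slightly different feature subsets of the same host body.
full = {101, 205, 333, 412, 577, 689, 742, 858}
node_a = full - {333}          # node A missed one feature
node_b = full - {742, 858}     # node B missed two others

# Each node broadcasts digests; the other fetches only what it lacks.
a_map = {hashlib.sha1(str(f).encode()).hexdigest(): f for f in node_a}
b_map = {hashlib.sha1(str(f).encode()).hexdigest(): f for f in node_b}
rec_a = reconcile(node_a, b_map)
rec_b = reconcile(node_b, a_map)

key_a, key_b = derive_key(rec_a), derive_key(rec_b)
assert key_a == key_b  # both nodes agree on a single common key
```

Even though each node started from a different subset, reconciling the digests leaves both with the full feature set, so the derived keys match.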
Liu, Meilin
2012-08-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
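The PE(CE)^m predictor-corrector family mentioned above can be illustrated with a minimal scalar sketch: a plain Euler predictor followed by a trapezoidal corrector applied m times. The test equation y' = -y and the step sizes are our own choices, not taken from the paper:

```python
import numpy as np

def euler(f, y, dt):
    """Forward Euler: the bare predictor, first-order accurate."""
    return y + dt * f(y)

def pece(f, y, dt, m=1):
    """PE(CE)^m: predict with Euler, then apply the trapezoidal corrector m times."""
    fy = f(y)
    yp = y + dt * fy                      # P + E
    for _ in range(m):                    # (CE)^m
        yp = y + 0.5 * dt * (fy + f(yp))
    return yp

# Integrate y' = -y from y(0) = 1 to t = 1 and compare against exp(-1).
dt, steps = 0.1, 10
y_e = y_p = 1.0
for _ in range(steps):
    y_e = euler(lambda y: -y, y_e, dt)
    y_p = pece(lambda y: -y, y_p, dt, m=2)
exact = np.exp(-1.0)
# The corrector lifts the order from one to two, so its error is much smaller.
assert abs(y_p - exact) < abs(y_e - exact)
```

The point of the abstract's coefficient tuning is precisely this trade-off: a better corrector buys larger stable time steps at the same accuracy.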
Hanks, E.M.; Hooten, M.B.; Baker, F.A.
2011-01-01
Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. ?? 2011 by the Ecological Society of America.
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
Bruni, Roberto; Corradini, Andrea; Gadducci, Fabio; Hölzl, Matthias; Lluch Lafuente, Alberto; Vandin, Andrea; Wirsing, Martin
2015-01-01
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: A system is considered to be adaptive with respect to an environment whenever the system is able to satisfy its goals irrespectively of the environment perturbations. Modeling and programming engineering activities often take a white-box perspective: A system is e...
2016-03-25
cannot accept the /Signed/ symbol in place of the actual signature. We appreciate the courtesies extended to the staff. Please direct questions to me at... basis for the graphic representation of a process map. • Process map: uses a flowchart of a process, depicting inputs, activities, and outputs... organizational charts; • civilian pay flowcharts; • SOPs for reconciling civilian pay; and • Mission Work Agreements with DCMA and MDA. We reviewed
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite-difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution, and the step size can be much larger.
Directory of Open Access Journals (Sweden)
Karena Shaw
2013-05-01
Full Text Available Shale gas proponents argue this unconventional fossil fuel offers a “bridge” towards a cleaner energy system by offsetting higher-carbon fuels such as coal. The technical feasibility of reconciling shale gas development with climate action remains contested. However, we here argue that governance challenges are both more pressing and more profound. Reconciling shale gas and climate action requires institutions capable of responding effectively to uncertainty; intervening to mandate emissions reductions and internalize costs to industry; and managing the energy system strategically towards a lower carbon future. Such policy measures prove challenging, particularly in jurisdictions that stand to benefit economically from unconventional fuels. We illustrate this dilemma through a case study of shale gas development in British Columbia, Canada, a global leader on climate policy that is nonetheless struggling to manage gas development for mitigation. The BC case is indicative of the constraints jurisdictions face both to reconcile gas development and climate action, and to manage the industry adequately to achieve social licence and minimize resistance. More broadly, the case attests to the magnitude of change required to transform our energy systems to mitigate climate change.
Small-scale classification schemes
DEFF Research Database (Denmark)
Hertzum, Morten
2004-01-01
Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. While coordination mechanisms focus on how classification schemes enable cooperation among people pursuing a common goal, boundary objects embrace the implicit consequences of classification schemes in situations involving conflicting goals. Moreover, the requirements specification focused on functional... This difference between the written requirements specification and the oral discussions at the meetings may help explain software engineers' general preference for people, rather than documents, as their information sources.
Honohan, Patrick
1987-01-01
A Ponzi scheme is an arrangement whereby a promoter offers an investment opportunity with attractive dividends, but where the only basis for the dividends is the future receipts from new investors. The first of these two notes explores some of the analytical properties of a Ponzi scheme, addressing in particular the question whether it is possible for a Ponzi scheme to exist if all the participants are rational. The second note briefly examines the collapse of the PMPA insurance company whos...
Remarks on quantum duopoly schemes
Frąckiewicz, Piotr
2016-01-01
The aim of this paper is to discuss in some detail two different quantum schemes for duopoly problems. We investigate under what conditions one of the schemes is more reasonable than the other. Using the Cournot duopoly example, we show that the current quantum schemes require a slight refinement so that they output the classical game in a particular case. Then, we show how the amendment changes the way of studying quantum games with respect to Nash equilibria. Finally, we define another scheme for the Cournot duopoly in terms of quantum computation.
DEFF Research Database (Denmark)
Rotbart, Noy Galil
times, effectively eliminating the second penalty mentioned. We continue this theoretical study in several ways. First, we dedicate a large part of the thesis to the graph family of trees, for which we provide an overview of labeling schemes supporting several important functions such as ancestry......, routing and especially adjacency. The survey is complemented by novel contributions to this study, among which are the first asymptotically optimal adjacency labeling scheme for bounded degree trees, improved bounds on ancestry labeling schemes, dynamic multifunctional labeling schemes and an experimental...
Accurate measurements in volume data
Oliván Bescós, J.; Bosma, Marco; Smit, Jaap; Mun, S.K.
2001-01-01
An algorithm for very accurate visualization of an iso-surface in a 3D medical dataset has been developed in the past few years. This technique is extended in this paper to several kinds of measurements, in which exact geometric information of a selected iso-surface is used to derive volume, length,
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
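A minimal illustration of Legendre-based fitting, using NumPy's `legfit`/`legval` routines (our choice of library; the paper's explicit scheme is not reproduced here). On the natural domain [-1, 1], a low-degree polynomial is recovered essentially exactly:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Sample a smooth test function on [-1, 1], the natural domain of Legendre polynomials.
x = np.linspace(-1.0, 1.0, 201)
y = 1.0 + 2.0 * x - 0.5 * x**2

# Least-squares fit of Legendre coefficients; degree 2 suffices for this quadratic.
coef = leg.legfit(x, y, deg=2)

# The fitted expansion reproduces the data to machine precision.
assert np.allclose(leg.legval(x, coef), y)
```

For noisy discrete data, one would choose the degree by a validation criterion rather than matching it to a known polynomial as done here.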
A digital memories based user authentication scheme with privacy preservation.
Directory of Open Access Journals (Sweden)
JunLiang Liu
Full Text Available The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key, which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.
A digital memories based user authentication scheme with privacy preservation.
Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang
2017-01-01
The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.
Hybrid flux splitting schemes for numerical resolution of two-phase flows
Energy Technology Data Exchange (ETDEWEB)
Flaatten, Tore
2003-07-01
This thesis deals with the construction of numerical schemes for approximating solutions to a hyperbolic two-phase flow model. Numerical schemes for hyperbolic models are commonly divided into two main classes: Flux Vector Splitting (FVS) schemes, which are based on scalar computations, and Flux Difference Splitting (FDS) schemes, which are based on matrix computations. FVS schemes are more efficient than FDS schemes, but FDS schemes are more accurate. The canonical FDS schemes are the approximate Riemann solvers, which are based on a local decomposition of the system into its full wave structure. In this thesis the mathematical structure of the model is exploited to construct a class of hybrid FVS/FDS schemes, denoted as Mixture Flux (MF) schemes. This approach is based on a splitting of the system into two components associated with the pressure and volume fraction variables respectively, and builds upon hybrid FVS/FDS schemes previously developed for one-phase flow models. Through analysis and numerical experiments it is demonstrated that the MF approach provides several desirable features, including (1) improved efficiency compared to standard approximate Riemann solvers, (2) robustness under stiff conditions, and (3) accuracy on linear and nonlinear phenomena. In particular it is demonstrated that the framework allows for an efficient weakly implicit implementation, focusing on an accurate resolution of slow transients relevant for the petroleum industry. (author)
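The flux-vector-splitting idea can be sketched in a scalar setting, far simpler than the two-phase model above: for linear advection with positive wave speed, splitting the flux by the sign of the wave speed reduces to the upwind scheme, which for CFL <= 1 is total variation diminishing. The grid, initial data and step counts below are invented for the demonstration:

```python
import numpy as np

def total_variation(u):
    """Total variation of a periodic grid function."""
    return np.abs(np.diff(np.append(u, u[0]))).sum()

def upwind_fvs_step(u, nu):
    """One upwind step for u_t + a u_x = 0 with a > 0; nu = a*dt/dx is the CFL number."""
    return u - nu * (u - np.roll(u, 1))   # convex combination for 0 <= nu <= 1

# Square-wave initial data on a periodic grid.
i = np.arange(100)
u = np.where((i > 30) & (i < 60), 1.0, 0.0)
tv0 = total_variation(u)

for _ in range(200):
    u = upwind_fvs_step(u, nu=0.8)

# For CFL <= 1 the upwind scheme is TVD: total variation never grows,
# and the max principle holds (no new extrema are created).
assert total_variation(u) <= tv0 + 1e-12
assert u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12
```

The efficiency claim in the abstract corresponds to the fact that this step is a handful of scalar operations per cell, whereas an approximate Riemann solver requires a local eigendecomposition.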
Shao, X.; Veldhuis, Raymond N.J.
2013-01-01
The helper data scheme utilizes a secret key to protect biometric templates. The current helper data scheme requires binary feature representations that introduce quantization error and thus reduce the capacity of biometric channels. For spectral-minutiae based fingerprint recognition systems,
CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...
African Journals Online (AJOL)
This paper describes a new nonlinear backward differentiation schemes for the numerical solution of nonlinear initial value problems of first order ordinary differential equations. The schemes are based on rational interpolation obtained from canonical polynomials. They are A-stable. The test problems show that they give ...
Directory of Open Access Journals (Sweden)
Criel Bart
2007-07-01
Full Text Available Abstract Background Despite the promotion of Community Health Insurance (CHI in Uganda in the second half of the 90's, mainly under the impetus of external aid organisations, overall membership has remained low. Today, some 30,000 persons are enrolled in about a dozen different schemes located in Central and Southern Uganda. Moreover, most of these schemes were created some 10 years ago but since then, only one or two new schemes have been launched. The dynamic of CHI has apparently come to a halt. Methods A case study evaluation was carried out on two selected CHI schemes: the Ishaka and the Save for Health Uganda (SHU schemes. The objective of this evaluation was to explore the reasons for the limited success of CHI. The evaluation involved review of the schemes' records, key informant interviews and exit polls with both insured and non-insured patients. Results Our research points to a series of not mutually exclusive explanations for this under-achievement at both the demand and the supply side of health care delivery. On the demand side, the following elements have been identified: lack of basic information on the scheme's design and operation, limited understanding of the principles underlying CHI, limited community involvement and lack of trust in the management of the schemes, and, last but not least, problems in people's ability to pay the insurance premiums. On the supply-side, we have identified the following explanations: limited interest and knowledge of health care providers and managers of CHI, and the absence of a coherent policy framework for the development of CHI. Conclusion The policy implications of this study refer to the need for the government to provide the necessary legislative, technical and regulative support to CHI development. The main policy challenge however is the need to reconcile the government of Uganda's interest in promoting CHI with the current policy of abolition of user fees in public facilities.
A new finite difference scheme adapted to the one-dimensional Schrödinger equation
Geurts, Bernardus J.
1993-01-01
We present a new discretisation scheme for the Schrödinger equation based on analytic solutions to local linearisations. The scheme generates the normalised eigenfunctions and eigenvalues simultaneously and is exact for piecewise constant potentials and effective masses. Highly accurate results can
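The record above describes a scheme that is exact for piecewise constant potentials. As a point of comparison (this is the generic textbook discretisation, not the local-linearisation scheme of the paper, and the grid size is an illustrative choice), a minimal second-order finite-difference eigenvalue solve for the infinite square well recovers the analytic ground-state energy π²/2 (ħ = m = 1):

```python
import numpy as np

# Standard second-order finite differences for -(1/2) u'' = E u on (0, 1)
# with u(0) = u(1) = 0 (infinite square well, hbar = m = 1). N is an
# illustrative choice; this is not the paper's analytic-linearisation scheme.
N = 400                              # number of interior grid points
h = 1.0 / (N + 1)

# Tridiagonal Hamiltonian matrix for -(1/2) d^2/dx^2
H = (np.diag(np.full(N, 1.0 / h**2))
     + np.diag(np.full(N - 1, -0.5 / h**2), 1)
     + np.diag(np.full(N - 1, -0.5 / h**2), -1))

E = np.linalg.eigvalsh(H)            # eigenvalues in ascending order
E_exact = np.pi**2 / 2               # analytic ground-state energy
print(E[0], E_exact)                 # close agreement
```

The second-order scheme converges to the exact eigenvalue at rate h², which is the baseline the higher-accuracy scheme in the record improves on.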
A NEW SCHEME FOR MOTOR PROTECTION COMPRESSOR TRAINS SERIES ER2
Directory of Open Access Journals (Sweden)
L. V. Dubynets
2010-01-01
Full Text Available The new scheme of protection of the engine DK 409 (DK 406) of the compressor (EK 7B) on electric trains of series ER2 is offered. The scheme provides accurate and reliable protection of the electric machine, combining protection against both long-term overload currents and short-circuit faults.
When Is Network Lasso Accurate?
Directory of Open Access Journals (Sweden)
Alexander Jung
2018-01-01
Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
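The core task in the abstract above is recovering a graph signal from a few sampled nodes. The network Lasso uses an ℓ1 total-variation penalty; the sketch below substitutes the closed-form quadratic (Laplacian) penalty as a simplified stand-in, on a hypothetical chain graph with an illustrative sampling set, just to show the sampling-and-recovery setup:

```python
import numpy as np

# Graph-signal recovery from a few samples on a chain graph. NOTE: the
# network Lasso penalizes the l1 total variation; here we use the quadratic
# Laplacian penalty x^T L x instead, which admits a closed-form solution.
# Graph, sampling set, and lam are illustrative assumptions.
n = 20
true = np.linspace(0.0, 1.0, n)          # smooth ground-truth signal
sampled = np.array([0, 5, 10, 15, 19])   # observed nodes
m = np.zeros(n)
m[sampled] = 1.0                         # sampling mask

# Chain-graph Laplacian L = D - A
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

lam = 1e-3                               # regularization weight
# minimize sum_i m_i (x_i - y_i)^2 + lam * x^T L x  =>  (M + lam*L) x = M y
x = np.linalg.solve(np.diag(m) + lam * L, m * true)
print(np.max(np.abs(x - true)))          # small: unobserved nodes interpolated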
Accurate Accident Reconstruction in VANET
Kopylova, Yuliya; Farkas, Csilla; Xu, Wenyuan
2011-01-01
Part 9: Short Papers; International audience; We propose a forensic VANET application to aid accurate accident reconstruction. Our application provides a new source of objective real-time data impossible to collect using existing methods. By leveraging inter-vehicle communications, we compile digital evidence describing events before, during, and after an accident in its entirety. In addition to sensor data and the status of major components, we provide relative positions of all vehicles involv...
DEFF Research Database (Denmark)
Christiansen, Anders Vest; Auken, Esben; Kirkegaard, Casper
2016-01-01
function itself. This methodological difference is important, as it introduces no further approximation in the physical description of the system, but only in the process of iteratively guiding the inversion algorithm towards the solution. By means of a synthetic study, we demonstrate how our new hybrid...
Numerical Investigation of a Novel Wiring Scheme Enabling Simple and Accurate Impedance Cytometry
Federica Caselli; Riccardo Reale; Nicola Antonio Nodargi; Paolo Bisegna
2017-01-01
Microfluidic impedance cytometry is a label-free approach for high-throughput analysis of particles and cells. It is based on the characterization of the dielectric properties of single particles as they flow through a microchannel with integrated electrodes. However, the measured signal depends not only on the intrinsic particle properties, but also on the particle trajectory through the measuring region, thus challenging the resolution and accuracy of the technique. In this work we show via...
Efficiency of High-Order Accurate Difference Schemes for the Korteweg-de Vries Equation
Directory of Open Access Journals (Sweden)
Kanyuta Poochinapan
2014-01-01
Full Text Available Two numerical models to obtain the solution of the KdV equation are proposed. Numerical tools, compact fourth-order and standard fourth-order finite difference techniques, are applied to the KdV equation. The fundamental conservative properties of the equation are preserved by the finite difference methods. Linear stability of the two methods is examined by Von Neumann analysis. The new methods give second- and fourth-order accuracy in time and space, respectively. The numerical experiments show that the proposed methods improve the accuracy of the solution significantly.
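The fourth-order spatial accuracy claimed above can be seen in miniature by comparing the standard fourth-order central difference for u_x against the second-order one on a periodic grid (the stencils are standard; the grid size and test function are illustrative choices, not from the paper):

```python
import numpy as np

# Fourth- vs second-order central differences for u_x on a periodic grid.
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]
u = np.sin(x)
ux_exact = np.cos(x)

# Second-order: (u_{i+1} - u_{i-1}) / (2h)
ux2 = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)

# Fourth-order: (-u_{i+2} + 8 u_{i+1} - 8 u_{i-1} + u_{i-2}) / (12h)
ux4 = (-np.roll(u, -2) + 8.0 * np.roll(u, -1)
       - 8.0 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * h)

err2 = np.max(np.abs(ux2 - ux_exact))
err4 = np.max(np.abs(ux4 - ux_exact))
print(err2, err4)   # fourth-order error is orders of magnitude smaller
```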
High-order time-accurate schemes for parabolic singular perturbation problems with convection
P.W. Hemker (Piet); G.I. Shishkin (Gregori); L.P. Shishkina
2001-01-01
We consider the first boundary value problem for a singularly perturbed parabolic PDE with convection on an interval. For the case of sufficiently smooth data, it is easy to construct a standard finite difference operator and a piecewise uniform mesh, condensing in the boundary layer,
Multiuser Switched Diversity Scheduling Schemes
Shaqfeh, Mohammad; Alouini, Mohamed-Slim
2012-01-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback sys...
Reconciling Top-Down and Bottom-Up Estimates of Oil and Gas Methane Emissions in the Barnett Shale
Hamburg, S.
2015-12-01
Top-down approaches that use aircraft, tower, or satellite-based measurements of well-mixed air to quantify regional methane emissions have typically estimated higher emissions from the natural gas supply chain when compared to bottom-up inventories. A coordinated research campaign in October 2013 used simultaneous top-down and bottom-up approaches to quantify total and fossil methane emissions in the Barnett Shale region of Texas. Research teams have published individual results including aircraft mass-balance estimates of regional emissions and a bottom-up, 25-county region spatially-resolved inventory. This work synthesizes data from the campaign to directly compare top-down and bottom-up estimates. A new analytical approach uses statistical estimators to integrate facility emission rate distributions from unbiased and targeted high emission site datasets, which more rigorously incorporates the fat-tail of skewed distributions to estimate regional emissions of well pads, compressor stations, and processing plants. The updated spatially-resolved inventory was used to estimate total and fossil methane emissions from spatial domains that match seven individual aircraft mass balance flights. Source apportionment of top-down emissions between fossil and biogenic methane was corroborated with two independent analyses of methane and ethane ratios. Reconciling top-down and bottom-up estimates of fossil methane emissions leads to more accurate assessment of natural gas supply chain emission rates and the relative contribution of high emission sites. These results increase our confidence in our understanding of the climate impacts of natural gas relative to more carbon-intensive fossil fuels and the potential effectiveness of mitigation strategies.
Optimal strategies for throwing accurately
Venkadesan, M.; Mahadevan, L.
2017-04-01
The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error.
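The error-amplification argument in the abstract above can be illustrated with a Monte-Carlo sketch: propagate launch-angle noise through the parabolic range formula R = v² sin(2θ)/g. Near 45° the derivative dR/dθ vanishes, so angle errors are amplified least there (speed, noise level, and g are illustrative values, not the paper's model, which also considers target shape and throwing style):

```python
import numpy as np

# Monte-Carlo propagation of launch-angle noise through the parabolic
# range formula. Parameters are hypothetical illustrative values.
rng = np.random.default_rng(0)
g, v = 9.81, 10.0
sigma = 0.01                       # std of launch-angle noise (radians)

def landing_spread(theta0, n=100_000):
    theta = theta0 + sigma * rng.normal(size=n)
    R = v**2 * np.sin(2.0 * theta) / g
    return R.std()

spread45 = landing_spread(np.deg2rad(45.0))
spread30 = landing_spread(np.deg2rad(30.0))
print(spread30, spread45)   # the 45-degree throw is far less sensitive
```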
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
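The quantity being estimated in the record above is D = 4π·P_max / ∫P dΩ from sampled power-pattern values. As a minimal numeric sketch (plain trapezoidal integration on an illustrative grid, not the spherical-wave-expansion formula of the paper), a Hertzian dipole with pattern sin²θ recovers the textbook directivity of exactly 1.5:

```python
import numpy as np

# Directivity from a sampled power pattern: D = 4*pi*P_max / integral(P dOmega).
# Hertzian dipole pattern sin^2(theta); grid resolution is an illustrative choice.
n_theta = 181
theta = np.linspace(0.0, np.pi, n_theta)
P = np.sin(theta)**2                      # power pattern (phi-independent)

# phi integral is trivial here (no phi dependence): factor 2*pi
row = 2.0 * np.pi * P * np.sin(theta)
# trapezoidal rule in theta
integral = np.sum(0.5 * (row[1:] + row[:-1]) * np.diff(theta))

D = 4.0 * np.pi * P.max() / integral
print(D)   # close to 1.5
```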
Coordinated renewable energy support schemes
DEFF Research Database (Denmark)
Morthorst, P.E.; Jensen, S.G.
2006-01-01
This paper illustrates the effect that can be observed when support schemes for renewable energy are regionalised. Two theoretical examples are used to explain interactive effects on, e.g., the price of power, conditions for conventional power producers, and changes in import and export of power...... RES-E support schemes already has a common liberalised power market. In this case the introduction of a common support scheme for renewable technologies will lead to more efficient sitings of renewable plants, improving economic and environmental performance of the total power system...
Numerical solution to nonlinear Tricomi equation using WENO schemes
Directory of Open Access Journals (Sweden)
Adrian Sescu
2010-09-01
Full Text Available The nonlinear Tricomi equation is a hybrid (hyperbolic-elliptic) second order partial differential equation, modelling sonic boom focusing. In this paper, the Tricomi equation is transformed into a hyperbolic system of first order equations, in conservation law form. On the upper boundary, a new mixed boundary condition for the acoustic pressure is used to avoid the inclusion of the Dirac function in the numerical solution. Weighted Essentially Non-Oscillatory (WENO) schemes are used for the spatial discretization, and the time marching is carried out using the second order accurate Runge-Kutta total-variation diminishing (TVD) scheme.
Accurate performance analysis of opportunistic decode-and-forward relaying
Tourki, Kamel
2011-07-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end outage probability for a transmission rate R. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
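The "validate closed-form outage expressions by simulation" step described above can be sketched in its simplest form: selection combining over two independent Rayleigh branches, where the branch SNRs are exponential and the outage probability has the closed form (1 − e^(−γth/γ̄))². This is a stripped-down sanity check, not the paper's decode-and-forward expressions; SNR and rate values are illustrative:

```python
import numpy as np

# Monte-Carlo check of a closed-form outage probability with selection
# combining over two independent Rayleigh-fading branches.
# gbar and R are illustrative assumptions.
rng = np.random.default_rng(1)
gbar = 10.0                        # average SNR per branch (linear scale)
R = 1.0                            # target rate (bits/s/Hz)
gth = 2.0**R - 1.0                 # SNR threshold for outage

n = 1_000_000
g1 = rng.exponential(gbar, n)      # branch SNRs are exponential under Rayleigh
g2 = rng.exponential(gbar, n)
p_mc = np.mean(np.maximum(g1, g2) < gth)

p_exact = (1.0 - np.exp(-gth / gbar))**2   # both branches below threshold
print(p_mc, p_exact)
```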
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
DEFF Research Database (Denmark)
Bruni, Roberto; Corradini, Andrea; Gadducci, Fabio
2015-01-01
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: A system is considered to be adaptive with respect...... to an environment whenever the system is able to satisfy its goals irrespectively of the environment perturbations. Modeling and programming engineering activities often take a white-box perspective: A system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending...... and possibly modify the adaptation requirements, models and programs of an autonomic system....
Graph construction using adaptive Local Hybrid Coding scheme.
Dornaika, Fadi; Kejani, Mahdi Tavassoli; Bosaghzadeh, Alireza
2017-11-01
It is well known that dense coding with local bases (via Least Square coding schemes) can lead to large quantization errors or poor performances of machine learning tasks. On the other hand, sparse coding focuses on accurate representation without taking into account data locality due to its tendency to ignore the intrinsic structure hidden among the data. Local Hybrid Coding (LHC) (Xiang et al., 2014) was recently proposed as an alternative to the sparse coding scheme that is used in Sparse Representation Classifier (SRC). The LHC blends sparsity and bases-locality criteria in a unified optimization problem. It can retain the strengths of both sparsity and locality. Thus, the hybrid codes would have some advantages over both dense and sparse codes. This paper introduces a data-driven graph construction method that exploits and extends the LHC scheme. In particular, we propose a new coding scheme coined Adaptive Local Hybrid Coding (ALHC). The main contributions are as follows. First, the proposed coding scheme adaptively selects the local and non-local bases of LHC using data similarities provided by Locality-constrained Linear code. Second, the proposed ALHC exploits local similarities in its solution. Third, we use the proposed coding scheme for graph construction. For the task of graph-based label propagation, we demonstrate high classification performance of the proposed graph method on four benchmark face datasets: Extended Yale, PF01, PIE, and FERET. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computing homogeneous models with finite volume upwind schemes
Energy Technology Data Exchange (ETDEWEB)
Herard, J.M. [Electricite de France (EDF/DRD/DMFTT), 78 - Chatou (France); Universite de Provence, Centre de Mathematiques et d' Informatique, L.A.T.P. - UMR CNRS 6632, 13 - Marseille (France); Kokh, S. [CEA Saclay, Dir. de l' Energie Nucleaire (DEN/SFME), 91 - Gif sur Yvette (France)
2003-07-01
We provide in this paper some algorithms to deal with homogeneous equilibrium models on any type of mesh. These schemes rely on the exact Godunov scheme or on some rough Godunov schemes. We first recall some basic ideas which were recently introduced in the 1D framework. We then turn to the 2D framework, and define a class of first order schemes which improve the accuracy on coarse unstructured 2D meshes. We have also tried to underline that current computing power does not allow sufficiently accurate computations within reasonable CPU time in a few situations including sharp contact discontinuities; this is due, in some sense, to a loss of pressure consistency between the PDE and numerical schemes in conservative form. We have also indicated that some specific remedies available in the literature are not adequate for nuclear safety problems, both in a one dimensional framework (owing to EOS), and in the two dimensional framework (due to differences between mesh interfaces and wave fronts). Comments pertaining to other hybrid schemes are then made. The last part of the paper will focus on computations of three dimensional gas-liquid flows including sharp interfaces. (authors)
Symmetry preserving compact schemes for numerical solution of PDEs
Ozbenli, Ersin; Vedula, Prakash
2017-11-01
In this study, a new approach for construction of invariant, high order accurate compact finite difference schemes that preserve Lie symmetry groups of underlying partial differential equations (PDEs) is presented. It is well known that compact numerical schemes based on Padé approximants achieve high order accuracy with a relatively small number of stencil points and are found to have good spectral-like resolution. Considering applicable Lie symmetry groups (such as translation, scaling, rotation, and projection groups) of underlying PDEs, invariant compact schemes are developed based on the use of equivariant moving frames and extended group transformations. This work represents an extension of the authors' recent work on construction of invariant, high-order, non-compact, finite difference schemes based on the method of modified equations. Performance of the proposed symmetry preserving compact schemes is evaluated via consideration of some canonical PDEs like the linear advection-diffusion equation, inviscid Burgers' equation, and viscous Burgers' equation. Effects on accuracy due to the choice of subgroups used in construction of these schemes will be discussed. Generalization of the proposed framework to multidimensional problems and non-orthogonal grids will also be presented.
Good governance for pension schemes
Thornton, Paul
2011-01-01
Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.
Chen, Rangfu
A computational methodology has been developed in the first part of the thesis for the simulation of acoustic radiation, propagation and reflection. The developed methodology is high order accurate, uses fewer grid points per wavelength compared to standard high order accurate numerical methods, and automatically damps out spurious short waves. Furthermore, the methodology can be applied to acoustic problems in the presence of objects with curved geometries. To achieve these results, high order accurate optimized upwind schemes, which are applied to discretize spatial derivatives on interior grid points, have been developed. High order accurate optimized one-sided biased schemes, which are only applied to discretize the spatial derivatives on grid points near computational boundaries, have also been constructed. The developed schemes are combined with a time difference scheme to fully discretize acoustic field equations in multiple dimensions in arbitrary curvilinear coordinates. Numerical boundary conditions are investigated and intuitively illustrated. Applications of the developed methodology to a sequence of one-dimensional and multi-dimensional acoustic problems are performed. The numerical results have validated the developed methodology and demonstrated advantages of the methodology over the central-difference Dispersion-Relation-Preserving method. Numerical results have also shown that the optimized upwind schemes minimize not only the dissipation error but also the dispersion error, while retaining numerical stability. The second part of the thesis deals with a fully conservative Chimera methodology. The fully conservative Chimera was originally developed based on a finite volume approach. A finite difference scheme is shown to be identical to a finite volume scheme with proper definition of control volumes and metrics. The fully conservative Chimera has been successfully extended to finite difference schemes for viscous flows, including turbulence models.
Breeding schemes in reindeer husbandry
Directory of Open Access Journals (Sweden)
Lars Rönnegård
2003-04-01
Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G), and the influence of selection on the inbreeding effective population size (Ne), for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...
The Accurate Particle Tracer Code
Wang, Yulei; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...
An assessment of the differential quadrature time integration scheme for nonlinear dynamic equations
Liu, Jian; Wang, Xinwei
2008-07-01
In 1996, Xie [An assessment of time integration schemes for non-linear dynamic equations, Journal of Sound and Vibration 192(1) (1996) 321-331] presented an assessment of seven existing and commonly used time integration schemes for nonlinear dynamic equations. In this work, the differential quadrature (DQ) time integration scheme proposed by Fung in 2001 is assessed following the same procedures as Xie's. It is shown that accurate numerical results can be obtained by the DQ method using much larger time steps than the commonly used time integration schemes allow. Based on the results reported herein, some conclusions are drawn.
Baldwin, A; Mills, J; Birks, M; Budden, L
2017-12-01
Role modelling by experienced nurses, including nurse academics, is a key factor in the process of preparing undergraduate nursing students for practice, and may contribute to longevity in the workforce. A grounded theory study was undertaken to investigate the phenomenon of nurse academics' role modelling for undergraduate students. The study sought to answer the research question: how do nurse academics role model positive professional behaviours for undergraduate students? The aims of this study were to: theorise a process of nurse academic role modelling for undergraduate students; describe the elements that support positive role modelling by nurse academics; and explain the factors that influence the implementation of academic role modelling. The study sample included five second year nursing students and sixteen nurse academics from Australia and the United Kingdom. Data was collected from observation, focus groups and individual interviews. This study found that in order for nurse academics to role model professional behaviours for nursing students, they must reconcile their own professional identity. This paper introduces the theory of reconciling professional identity and discusses the three categories that comprise the theory, creating a context for learning, creating a context for authentic rehearsal and mirroring identity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bates, Alice P; Kennedy, Rodney A
2015-01-01
We propose a sampling scheme on the sphere and develop a corresponding spherical harmonic transform (SHT) for the accurate reconstruction of the diffusion signal in diffusion magnetic resonance imaging (dMRI). By exploiting the antipodal symmetry, we design a sampling scheme that requires the optimal number of samples on the sphere, equal to the degrees of freedom required to represent the antipodally symmetric band-limited diffusion signal in the spectral (spherical harmonic) domain. Compared with existing sampling schemes on the sphere that allow for the accurate reconstruction of the diffusion signal, the proposed sampling scheme reduces the number of samples required by a factor of two or more. We analyse the numerical accuracy of the proposed SHT and show through experiments that the proposed sampling allows for the accurate and rotationally invariant computation of the SHT to near machine precision accuracy.
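The "factor of two" saving described above comes from antipodal symmetry: a general band-limited signal on the sphere needs (L+1)² spherical-harmonic coefficients, while an antipodally symmetric one (as in dMRI) is supported only on even degrees. A tiny sketch counts both (L = 8 is an illustrative band-limit, not a value from the paper):

```python
# Spectral degrees of freedom on the sphere for band-limit L:
# all degrees vs. even degrees only (antipodally symmetric signals).
L = 8
total = (L + 1)**2                                # all (l, m) with l <= L
even_only = sum(2 * l + 1 for l in range(0, L + 1, 2))  # even l only
print(total, even_only)   # 81 45
```

The even-degree count approaches half the total as L grows, which is the degrees-of-freedom argument behind the reduced sampling requirement.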
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate river flow, flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic type partial differential equations (PDEs), hence can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended to a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic type PDEs. The simulation results indicate that the 3rd order KP scheme shows better stability than the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme gives a more accurate solution.
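The "local speed of discontinuity propagation" idea at the heart of the KP scheme can be illustrated with its simplest relative, the first-order Rusanov (local Lax-Friedrichs) flux, applied to Burgers' equation rather than the full Saint-Venant system. Grid, CFL number, and end time are illustrative choices; the actual KP scheme adds piecewise-linear (2nd order) or WENO/CWENO (3rd order) reconstruction on top of this flux:

```python
import numpy as np

# First-order local-speed (Rusanov) flux for Burgers' equation
#   u_t + (u^2/2)_x = 0   on a periodic grid.
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

def flux(u):
    return 0.5 * u**2

t, t_end, cfl = 0.0, 1.5, 0.5
while t < t_end:
    dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
    ul, ur = u, np.roll(u, -1)                   # states at interface i+1/2
    a = np.maximum(np.abs(ul), np.abs(ur))       # local propagation speed
    F = 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)
    u = u - dt / dx * (F - np.roll(F, 1))        # conservative update
    t += dt

tv = np.sum(np.abs(np.diff(u))) + abs(u[0] - u[-1])
print(tv)   # total variation stays bounded by that of the initial sine
```

Even after the shock forms (t > 1), the scheme remains non-oscillatory because the numerical dissipation is scaled by the local wave speed.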
Implicit Flux Limiting Schemes for Petroleum Reservoir Simulation
Blunt, Martin; Rubin, Barry
1992-09-01
Explicit total variation diminishing (TVD) numerical methods have been used in the past to give convergent, high order accurate solutions to hyperbolic conservation equations, such as those governing flow in oil reservoirs. To ensure stability there is a restriction on the size of time step that can be used. Many petroleum reservoir simulation problems have regions of fast flow away from sharp fronts, which means that this time step limitation makes explicit schemes less efficient than the best implicit methods. This work extends the theory of TVD schemes to both fully implicit and partially implicit methods. We use our theoretical results to construct schemes which are stable even for very large time steps. We show how to construct an adaptively implicit scheme which is nearly fully implicit in regions of fast flow, but which may be explicit at sharp fronts which are moving more slowly. In general these schemes are only first-order accurate in time overall, but locally may achieve second-order time accuracy. Results, presented for a one-dimensional Buckley-Leverett problem, demonstrate that these methods are more accurate than conventional implicit algorithms and more efficient than fully explicit methods, for which smaller time steps must be used. The theory is also extended to embrace mixed hyperbolic/parabolic (black oil) systems and example solutions to a radial flow equation are presented. In this case the time step is not limited by the high flow speeds at a small radius, as would be the case for an explicit solution. Moreover, the shock front is resolved more sharply than for a fully implicit method.
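The explicit time-step restriction the abstract above contrasts with implicit methods is easy to exhibit on the one-dimensional Buckley-Leverett problem itself. A minimal explicit first-order upwind sketch (grid, mobility ratio, CFL factor, and end time are illustrative choices; the paper's schemes are higher-order TVD and adaptively implicit):

```python
import numpy as np

# Explicit first-order upwind for the 1D Buckley-Leverett equation
#   S_t + f(S)_x = 0,  f(S) = S^2 / (S^2 + (1-S)^2)   (unit mobility ratio)
N = 200
dx = 1.0 / N
S = np.zeros(N)
S[0] = 1.0                      # water injected at the left boundary

def f(S):
    return S**2 / (S**2 + (1.0 - S)**2)

dfdS_max = 2.0                  # max of f'(S) for this flux (at S = 0.5)
dt = 0.4 * dx / dfdS_max        # explicit (CFL-limited) time step

t, t_end = 0.0, 0.3
while t < t_end:
    F = f(S)
    S[1:] = S[1:] - dt / dx * (F[1:] - F[:-1])   # upwind (flow to the right)
    S[0] = 1.0                                   # inflow boundary condition
    t += dt
print(S.min(), S.max())   # saturation stays within [0, 1]
```

The fixed dt here is dictated by the fastest wave in the domain, which is exactly the cost that the implicit and adaptively implicit TVD schemes of the paper avoid in fast-flow regions.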
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
Multiuser switched diversity scheduling schemes
Shaqfeh, Mohammad
2012-09-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched-diversity systems and compare it with the rate region of full-feedback multiuser diversity systems. We also propose a novel proportional-fair multiuser switched-diversity scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full-feedback systems in Rayleigh fading conditions. © 2012 IEEE.
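The threshold-based probing described above is easy to mimic in a toy Monte Carlo comparison. Everything here is an illustrative assumption, not from the paper: unit-mean exponential SNRs (Rayleigh fading), 8 users, an arbitrary threshold, and log2(1+SNR) as the rate.

```python
# Toy comparison of full-feedback vs switched (threshold-based) scheduling:
# the switched scheduler probes users in a fixed order and serves the first
# whose channel exceeds a threshold, saving feedback at a small rate loss.
import math
import random

random.seed(1)

def rayleigh_snr(mean=1.0):
    # exponential SNR corresponds to Rayleigh-fading amplitude
    return random.expovariate(1.0 / mean)

def run(n_users=8, threshold=1.5, slots=20000):
    rate_full = rate_sw = feedback_sw = 0.0
    for _ in range(slots):
        snrs = [rayleigh_snr() for _ in range(n_users)]
        rate_full += math.log2(1 + max(snrs))        # full feedback: pick the best
        for k, s in enumerate(snrs, start=1):        # switched: probe in order
            if s >= threshold or k == n_users:       # accept, or settle for the last
                rate_sw += math.log2(1 + s)
                feedback_sw += k                     # k feedback messages used
                break
    return rate_full / slots, rate_sw / slots, feedback_sw / slots

full, sw, fb = run()
```

Per slot the full-feedback scheduler serves the strongest user, so its average rate upper-bounds the switched scheme, while the switched scheme's average feedback count stays well below the user count.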
Implicit time accurate simulation of unsteady flow
van Buuren, R.; Kuerten, Johannes G.M.; Geurts, Bernardus J.
1998-01-01
In this paper we study the properties of an implicit time integration method for the simulation of unsteady shock boundary layer interaction flow. Using an explicit second-order Runge-Kutta scheme we determine a reference solution for the implicit second-order Crank-Nicolson scheme. This A-stable
An Approximate Optimal Maximum Range Guidance Scheme for Subsonic Unpowered Gliding Vehicles
Directory of Open Access Journals (Sweden)
Dao-Chi Zhang
2015-01-01
Full Text Available This study investigates the maximum gliding range problem for subsonic unpowered gliding vehicles and proposes an approximate optimal maximum-range guidance scheme. First, the gliding flight path angle corresponding to constant dynamic pressure is derived. The lift-to-drag ratio (L/D) is then proven to be inversely proportional to the dynamic pressure. On this basis, a method for calculating an optimal dynamic pressure (ODP) profile with maximum L/D throughout the flight is presented. A guidance scheme for tracking the ODP profile, which uses the flight path angle as the control variable, is then designed. The maximum ranges of the unpowered gliding vehicle obtained by the proposed guidance scheme and by a pseudospectral method are compared. Results show that the guidance scheme provides an accurate approximation of the optimal results, with errors of less than 2%. The proposed guidance scheme is easy to implement and, compared with numerical schemes, is not influenced by wind.
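The shallow-glide geometry behind the maximum-range argument can be checked in a few lines. Assuming a constant lift-to-drag ratio K, the equilibrium glide satisfies tan(γ) = 1/K, so ground range ≈ K × altitude loss. This is the textbook approximation, not the paper's ODP guidance law, and the numbers are illustrative.

```python
import math

# Shallow-glide range for a constant lift-to-drag ratio K = L/D:
# glide angle gamma below the horizon obeys tan(gamma) = 1/K,
# so an altitude loss dh covers a ground distance dh / tan(gamma) = K * dh.

def glide_range(lift_to_drag, altitude_loss_m):
    gamma = math.atan(1.0 / lift_to_drag)      # equilibrium glide angle
    return altitude_loss_m / math.tan(gamma)   # horizontal distance covered

# A subsonic glider with L/D = 10 trading 1000 m of altitude:
r = glide_range(10.0, 1000.0)
```

This also makes the paper's central claim concrete: maximizing range amounts to flying the dynamic-pressure profile that maximizes L/D.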
Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Dai, William W [Los Alamos National Laboratory; Woodward, Paul R [Los Alamos National Laboratory
2009-01-01
An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple, and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantages of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.
Space-Time Transformation in Flux-form Semi-Lagrangian Schemes
Directory of Open Access Journals (Sweden)
Peter C. Chu; Chenwu Fan
2010-01-01
Full Text Available With a finite volume approach, a flux-form semi-Lagrangian scheme with space-time transformation (TFSL) was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Unlike existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
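The semi-Lagrangian family's freedom from the Courant number limit, highlighted above, can be seen even in the simplest interpolation-based variant. This is a first-order sketch of the idea, not the flux-form TFSL scheme itself; grid size and Courant number are illustrative.

```python
# Minimal semi-Lagrangian advection step for u_t + a*u_x = 0 on a periodic
# grid: each new value is interpolated at the departure point x_i - a*dt,
# so the step remains stable even when the Courant number exceeds 1.

def semi_lagrangian_step(u, courant):
    n = len(u)
    out = []
    for i in range(n):
        x = (i - courant) % n          # departure point, in grid units
        j = int(x)                     # lower neighboring grid point
        w = x - j                      # linear interpolation weight
        out.append((1 - w) * u[j] + w * u[(j + 1) % n])
    return out

n = 64
u0 = [1.0 if 16 <= i < 32 else 0.0 for i in range(n)]
u = semi_lagrangian_step(u0, 3.0)      # Courant number 3: still stable
```

With an integer Courant number the departure points coincide with grid points and the step is an exact shift; non-integer values introduce interpolation error, which higher-order flux-form variants such as TFSL reduce.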
Modulation Schemes for Wireless Access
Directory of Open Access Journals (Sweden)
F. Vejrazka
2000-12-01
Full Text Available Four modulation schemes, namely minimum shift keying (MSK), Gaussian minimum shift keying (GMSK), multiamplitude minimum shift keying (MAMSK) and π/4 differential quadrature phase shift keying (π/4-QPSK), are described and their applicability to wireless access is discussed in the paper. Low complexity receiver structures based on differential detection are analysed to estimate the performance of the modulation schemes in additive Gaussian noise and the Rayleigh and Rice envelope fast fading channel. The bandwidth efficiency is calculated to evaluate the modulation schemes. The results show that the MAMSK scheme gives the greatest bandwidth efficiency, but its performance in the Rayleigh channel is rather poor. In contrast, the MSK scheme is less bandwidth efficient, but it is more resistant to Rayleigh fading. The performance of the π/4-QPSK signal is considerably improved by appropriate prefiltering.
Effective and Accurate Colormap Selection
Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.
2016-12-01
Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
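Perceptual uniformity, the key property above, can be stated operationally: equal jumps along the colormap should produce equal jumps in perceived lightness. The toy check below uses Rec. 709 luma as a crude stand-in for perceptual lightness (production colormap tools use CIELAB or CAM02 spaces), and both colormaps are illustrative, not the authors' 18 maps.

```python
# A colormap is (approximately) perceptually uniform if equal steps along it
# produce equal lightness steps. Luma is only a rough proxy for lightness,
# but it is enough to separate a uniform gray ramp from a jet-like map.

def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights

def lightness_steps(cmap):
    ls = [luma(c) for c in cmap]
    return [ls[i + 1] - ls[i] for i in range(len(ls) - 1)]

gray = [(i / 4, i / 4, i / 4) for i in range(5)]            # uniform ramp
jetish = [(0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]  # jet-like

gray_steps = lightness_steps(gray)
jet_steps = lightness_steps(jetish)
uniform = max(gray_steps) - min(gray_steps)      # ~0: equal lightness steps
nonuniform = max(jet_steps) - min(jet_steps)     # large: lightness zig-zags
```

The jet-like map's lightness rises and falls along the ramp, which is exactly the mechanism by which such maps fabricate apparent gradients in the data.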
Electrical Injection Schemes for Nanolasers
DEFF Research Database (Denmark)
Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten
2014-01-01
Three electrical injection schemes based on recently demonstrated electrically pumped photonic crystal nanolasers have been numerically investigated: 1) a vertical p-i-n junction through a post structure; 2) a lateral p-i-n junction with a homostructure; and 3) a lateral p-i-n junction....... For this analysis, the properties of different schemes, i.e., electrical resistance, threshold voltage, threshold current, and internal efficiency as energy requirements for optical interconnects are compared and the physics behind the differences is discussed....
High order boron transport scheme in TRAC-BF1
Energy Technology Data Exchange (ETDEWEB)
Barrachina, Teresa; Jambrina, Ana; Miro, Rafael; Verdu, Gumersindo, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es [Universidade Politecnica de Valencia (UPV), Valencia, (Spain). Institute for the Industrial, Radiophysical and Environmental Safety; Soler, Amparo, E-mail: asoler@iberdrola.es [SEA Propulsion S.L., Madrid (Spain); Concejal, Alberto, E-mail: acbe@iberdrola.es [Iberdrola Ingenieria y Construcion S.A.U, Madrid (Spain)
2013-07-01
In boiling water reactors (BWR), unlike pressurized water reactors (PWR), in which reactivity control is accomplished through movement of the control rods and boron dilution, the importance of boron transport lies in maintaining core integrity during ATWS-type severe accidents, in which a boron injection is required under certain circumstances. This is the reason for implementing boron transport models in thermal-hydraulic codes such as TRAC-BF1, RELAP5 and TRACE, bringing an improvement in the accuracy of the simulations. The TRAC-BF1 code provides a best-estimate analysis capability for the full range of postulated accidents in boiling water reactor systems and related facilities. The boron transport model implemented in the TRAC-BF1 code is based on a first-order accurate upwind difference scheme, and there is a need to review and improve this model. Four numerical schemes that solve the boron transport model have been analyzed and compared with the analytical solution of the Burgers equation. The studied numerical schemes are: first-order upwind, second-order Godunov, a second-order modified Godunov with an added physical diffusion term, and a third-order QUICKEST scheme using the ULTIMATE universal limiter (UL). The modified Godunov scheme has been implemented in the TRAC-BF1 source code. The results using these new schemes are presented in this paper. (author)
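The benchmark setup described above, testing transport schemes against the Burgers equation, can be sketched in miniature: a first-order upwind (equivalently, Godunov for non-negative data) discretization of Burgers' equation reproduces the Rankine-Hugoniot shock speed (uL + uR)/2. This is a generic illustration with made-up grid parameters, not the TRAC-BF1 implementation.

```python
# First-order upwind (Godunov for u >= 0) for Burgers' equation
# u_t + (u^2/2)_x = 0, with a 1 -> 0 shock. Left boundary held at u = 1.

def burgers_upwind(u, dt_dx, steps):
    u = list(u)
    for _ in range(steps):
        f = [0.5 * x * x for x in u]     # flux f(u) = u^2/2
        u = [u[0]] + [u[i] - dt_dx * (f[i] - f[i - 1])
                      for i in range(1, len(u))]
    return u

n = 100
u0 = [1.0 if i < 20 else 0.0 for i in range(n)]   # shock starts at x = 20
dx, dt, steps = 1.0, 0.5, 80                       # CFL = dt/dx * max|u| = 0.5
u = burgers_upwind(u0, dt / dx, steps)

# Rankine-Hugoniot speed s = (1 + 0)/2 = 0.5, so after t = 40
# the front should sit near x = 20 + 0.5 * 40 = 40.
front = next(i for i in range(n) if u[i] < 0.5)
```

The conservative form is what fixes the shock speed; the first-order scheme smears the front over a few cells, which is precisely the numerical diffusion that motivated the higher-order Godunov and QUICKEST variants in the abstract.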
Deference or Interrogation? Contrasting Models for Reconciling Religion, Gender and Equality
Directory of Open Access Journals (Sweden)
Moira Dustin
2012-01-01
Full Text Available Abstract Since the late 1990s, the extension of the equality framework in the United Kingdom has been accompanied by the recognition of religion within that framework and new measures to address religious discrimination. This development has been contested, with many arguing that religion is substantively different to other discrimination grounds and that increased protection against religious discrimination may undermine equality for other marginalized groups – in particular, women and lesbian, gay, bisexual and transgender (LGBT people. This paper considers these concerns from the perspective of minoritized women in the UK. It analyses two theoretical approaches to reconciling religious claims with gender equality – one based on privileging, the other based on challenging religious claims – before considering which, if either, reflects experiences in the UK in recent years and what this means for gender equality.
DEFF Research Database (Denmark)
Flavio, Hugo; Ferreira, P.; Formigo, N.
2017-01-01
Agriculture is widespread across the EU and has caused considerable impacts on freshwater ecosystems. To revert the degradation caused to streams and rivers, research and restoration efforts have been developed to recover ecosystem functions and services, with the European Water Framework Directive...... (WFD) playing a significant role in strengthening the progress. Analysing recent peer-reviewed European literature (2009–2016), this review explores 1) the conflicts and difficulties faced when restoring agriculturally impacted streams, 2) the aspects relevant to effectively reconcile agricultural land......-reviewed literature. The first WFD cycle ended in 2015 without reaching the goal of good ecological status in many European water-bodies. Addressing limitations reported in recent papers, including difficulties in stakeholder integration and importance of small headwater streams, is crucial. Analysing recent...
Lee, Soomi; Duvander, Ann-Zofie; Zarit, Steven H
2016-01-01
South Korea has extremely low rates of fertility and labor force participation by women during their childbearing years, whereas Sweden has high rates for both. Variations in family policy models may explain differences in fertility and women's employment between the two countries. Drawing upon literature that examines the effects of family policies on fertility and women's employment, this paper compares childcare support for very young children and parental leave policies in South Korea and Sweden. Thereafter, we discuss the importance of providing stronger support for dual-earner rather than single-earner families to reconcile the two objectives of increasing fertility and women's workforce participation. Specifically, it is critical to: (a) enhance the quantity and quality of childcare services for very young children, (b) achieve gender equality in parental leave policies, and (c) reduce gaps in the accessibility and utilization of family benefits by working parents from different social classes.
The Effect(s) of Teen Pregnancy: Reconciling Theory, Methods, and Findings.
Diaz, Christina J; Fiel, Jeremy E
2016-02-01
Although teenage mothers have lower educational attainment and earnings than women who delay fertility, causal interpretations of this relationship remain controversial. Scholars argue that there are reasons to predict negative, trivial, or even positive effects, and different methodological approaches provide some support for each perspective. We reconcile this ongoing debate by drawing on two heuristics: (1) each methodological strategy emphasizes different women in estimation procedures, and (2) the effects of teenage fertility likely vary in the population. Analyses of the Child and Young Adult Cohorts of the National Longitudinal Survey of Youth (N = 3,661) confirm that teen pregnancy has negative effects on most women's attainment and earnings. More striking, however, is that effects on college completion and early earnings vary considerably and are most pronounced among those least likely to experience an early pregnancy. Further analyses suggest that teen pregnancy is particularly harmful for those with the brightest socioeconomic prospects and who are least prepared for the transition to motherhood.
Wilk, Szymon; Michalowski, Martin; Michalowski, Wojtek; Hing, Marisela Mainegra; Farion, Ken
2011-01-01
This paper describes a new methodological approach to reconciling adverse and contradictory activities (called points of contention) occurring when a patient is managed according to two or more concurrently used clinical practice guidelines (CPGs). The need to address these inconsistencies occurs when a patient with more than one disease, each of which is a comorbid condition, has to be managed according to different treatment regimens. We propose an automatic procedure that constructs a mathematical guideline model using the Constraint Logic Programming (CLP) methodology, uses this model to identify and mitigate encountered points of contention, and revises the considered CPGs accordingly. The proposed procedure is used as an alerting mechanism and coupled with a guideline execution engine warns the physician about potential problems with the concurrent application of two or more guidelines. We illustrate the operation of our procedure in a clinical scenario describing simultaneous use of CPGs for duodenal ulcer and transient ischemic attack. PMID:22195153
Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies.
Semeria, Magali; Tannier, Eric; Guéguen, Laurent
2015-01-01
Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely the three of them. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence.
Reconciling Tensions: Needing Formal and Family/Friend Care but Feeling like a Burden.
Barken, Rachel
2017-03-01
Within a neoliberal policy context that shifts responsibility for health and well-being from the state to families and individuals, Canadian home care strategies tend to present family members as "partners in care". Drawing on an interpretive grounded theory study that involved 34 qualitative interviews, this article examines older people's experiences at the intersections of formal home care and family/friend care arrangements, against the backdrop of policies that emphasize partnerships with family. The core concept derived from the interviews was reconciling tensions between care needs and concerns about burdening others, in the context of available home and community care. Four processes are identified, which illustrate how access to financial and social resources may lead to opportunities and constraints in experiences of care. Findings underscore the emotional and practical challenges that older people may encounter vis-à-vis policy discourses that encourage family responsibility for care. Implications for policy and practice are discussed.
Knapp, Alan K; Ciais, Philippe; Smith, Melinda D
2017-04-01
SUMMARY: Precipitation (PPT) is a primary climatic determinant of plant growth and aboveground net primary production (ANPP) over much of the globe. Thus, PPT-ANPP relationships are important both ecologically and to land-atmosphere models that couple terrestrial vegetation to the global carbon cycle. Empirical PPT-ANPP relationships derived from long-term site-based data are almost always portrayed as linear, but recent evidence has accumulated that is inconsistent with an underlying linear relationship. We review, and then reconcile, these inconsistencies with a nonlinear model that incorporates observed asymmetries in PPT-ANPP relationships. Although data are currently lacking for parameterization, this new model highlights research needs that, when met, will improve our understanding of carbon cycle dynamics, as well as forecasts of ecosystem responses to climate change. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Evaluation of numerical schemes for the analysis of sound generation by blade-gust interaction
Scott, J. N.; Hariharan, S. I.; Mankbadi, R.
1995-01-01
In this investigation three different numerical algorithms have been utilized to compute the flow about a flat plate in the presence of a transverse gust described by a sinusoidal disturbance. The three schemes include the MacCormack explicit finite difference scheme, which is second order accurate in both time and space; the Gottlieb and Turkel modification of MacCormack's scheme, which is fourth order accurate in space and second order accurate in time (referred to as the 2-4 scheme); and a two-step scheme developed by Bayliss et al., which has second order temporal accuracy and sixth order spatial accuracy (a 2-6 scheme). The flow field results are obtained with these schemes by using the same code, with the only difference being the implementation of the respective solution algorithms. The problem is set up so that the sinusoidal disturbance is imposed at the surface of the flat plate as a surface boundary condition; thus the problem is treated as a scattering problem. The computed results include the time average of the acoustic pressure squared along grid lines five points away from the boundaries; the pressure distribution throughout the computational domain is monitored at various times. The numerical results are compared with an exact solution obtained by Atassi, Dusey, and Davis.
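Of the three algorithms compared above, the baseline MacCormack scheme is compact enough to sketch. For linear advection it coincides with the Lax-Wendroff scheme, second order in space and time. This is the textbook form on a periodic toy problem, not the authors' gust-scattering code.

```python
import math

# MacCormack predictor-corrector for u_t + a*u_x = 0 on a periodic grid:
# forward difference in the predictor, backward difference in the corrector.

def maccormack(u, cfl, steps):
    n = len(u)
    for _ in range(steps):
        # predictor step (forward difference)
        up = [u[i] - cfl * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # corrector step (backward difference on predicted values)
        u = [0.5 * (u[i] + up[i] - cfl * (up[i] - up[i - 1])) for i in range(n)]
    return u

n = 64
u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
# 64 steps at cfl = 0.5 advect the wave by half the domain, i.e. u -> -u0
u = maccormack(u0, 0.5, 64)
err = max(abs(u[i] + u0[i]) for i in range(n))   # small for this smooth wave
```

For the smooth sine wave the second-order scheme keeps phase and amplitude errors tiny; the 2-4 and 2-6 variants in the comparison push the spatial accuracy higher still, which matters for long-range acoustic propagation.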
Czech, Brian
2008-12-01
The conflict between economic growth and biodiversity conservation is understood in portions of academia and sometimes acknowledged in political circles. Nevertheless, there is not a unified response. In political and policy circles, the environmental Kuznets curve (EKC) is posited to solve the conflict between economic growth and environmental protection. In academia, however, the EKC has been deemed fallacious in macroeconomic scenarios and largely irrelevant to biodiversity. A more compelling response to the conflict is that it may be resolved with technological progress. Herein I review the conflict between economic growth and biodiversity conservation in the absence of technological progress, explore the prospects for technological progress to reconcile that conflict, and provide linguistic suggestions for describing the relationships among economic growth, technological progress, and biodiversity conservation. The conflict between economic growth and biodiversity conservation is based on the first two laws of thermodynamics and principles of ecology such as trophic levels and competitive exclusion. In this biophysical context, the human economy grows at the competitive exclusion of nonhuman species in the aggregate. Reconciling the conflict via technological progress has not occurred and is infeasible because of the tight linkage between technological progress and economic growth at current levels of technology. Surplus production in existing economic sectors is required for conducting the research and development necessary for bringing new technologies to market. Technological regimes also reflect macroeconomic goals, and if the goal is economic growth, reconciliatory technologies are less likely to be developed. As the economy grows, the loss of biodiversity may be partly mitigated with end-use innovation that increases technical efficiency, but this type of technological progress requires policies that are unlikely if the conflict between economic growth
New true fourth-order accurate scalar beam propagation methods for both TE and TM polarization
Stoffer, Remco; Bollerman, P.A.A.J.; Hoekstra, Hugo; van Groesen, Embrecht W.C.; van Beckum, F.P.H.
1999-01-01
New 2D scalar beam propagation methods for both TE and TM polarization are presented. Both second- and fourth-order accurate schemes, in the lateral stepsize, are shown. The methods use uniform discretization and can handle arbitrary positions of interfaces between materials with different
Wildemeersch, Danny, Ed.; Finger, Matthias, Ed.; Jansen, Theo, Ed.
In this book, 16 authors from Europe, Africa, and the United States reflect on the transformations that are currently taking place in the field of adult and continuing education. The 12 chapters are "Reconciling the Irreconcilable? Adult and Continuing Education Between Personal Development, Corporate Concerns, and Public Responsibility"…
Anderton, Cindy L.
2010-01-01
LGB individuals seek out counseling at higher rates than their straight counterparts and they tend to present for counseling with concerns that are unique and different from heterosexuals, such as difficulty reconciling one's sexual orientation with one's own religious beliefs. Yet counselors and counselors-in-training indicate that they have…
Bozkurt, Gulay
2017-01-01
This article examines the literature associated with social constructivism. It discusses whether social constructivism succeeds in reconciling individual cognition with social teaching and learning practices. After reviewing the meaning of individual cognition and social constructivism, two views--Piaget and Vygotsky's--accounting for learning…
Distance labeling schemes for trees
DEFF Research Database (Denmark)
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben
2016-01-01
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoill...
Distance labeling schemes for trees
DEFF Research Database (Denmark)
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben
2016-01-01
(log(n)) bits for constant ε> 0. (1 + ε)-stretch labeling schemes with polylogarithmic label size have previously been established for doubling dimension graphs by Talwar [Talwar, STOC, 2004]. In addition, we present matching upper and lower bounds for distance labeling for caterpillars, showing that labels...
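The contract of a distance labeling scheme, computing the distance from the two labels alone, can be demonstrated with a deliberately naive construction: label every node with its root path. This costs O(n log n)-bit labels in the worst case, far from the bounds discussed in the abstracts above, but the decoder's structure is the same.

```python
# Naive distance labeling for trees: the label of a node is the sequence of
# nodes on its root path. The decoder recovers the distance from two labels
# alone: d(u, v) = depth(u) + depth(v) - 2 * depth(LCA), where the LCA depth
# is the length of the labels' common prefix minus one.

def make_labels(tree, root):
    """tree: dict mapping node -> list of children."""
    labels, stack = {}, [(root, (root,))]
    while stack:
        v, path = stack.pop()
        labels[v] = path
        for c in tree.get(v, []):
            stack.append((c, path + (c,)))
    return labels

def distance(la, lb):
    k = 0
    while k < min(len(la), len(lb)) and la[k] == lb[k]:
        k += 1                       # k = length of common prefix
    return (len(la) - k) + (len(lb) - k)

tree = {0: [1, 2], 1: [3, 4], 2: [5]}
labels = make_labels(tree, 0)
d = distance(labels[3], labels[5])   # path 3 -> 1 -> 0 -> 2 -> 5 has 4 edges
```

The research surveyed above is about shrinking these labels toward the information-theoretic minimum (and trading exactness for (1 + ε)-stretch) while keeping exactly this label-only decoding property.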
Homogenization scheme for acoustic metamaterials
Yang, Min
2014-02-26
We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From the Green's theorem applied to the exterior domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments and the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, reducing the residuals by 10 orders of magnitude in 10-15 iterations. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.
Moore, William S.
1989-01-01
Presents initial data from 725 college and university students on the construct validity of the Learning Environmental Preferences (LEP) instrument. Findings suggest that the LEP accurately measures the cognitive portion of the Perry scheme of intellectual development. (Author/TE)
38 CFR 4.46 - Accurate measurement.
2010-07-01
38 CFR, Pensions, Bonuses, and Veterans' Relief, Schedule for Rating Disabilities, The Musculoskeletal System, § 4.46 Accurate measurement: accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to...
Secure Dynamic access control scheme of PHR in cloud computing.
Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching
2012-12-01
With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely applied. A new style of medical information exchange system, the personal health record (PHR), is gradually being developed. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media, under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records, which is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud, and a secure protection scheme is required to encrypt the medical records of each patient stored on the Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while retaining flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using a Lagrange interpolation polynomial to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for large numbers of users. Moreover, the scheme dynamically supports multiple users in Cloud computing environments while preserving personal privacy, and grants legal authorities access to PHRs. From security and effectiveness analyses, the proposed PHR access
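The Lagrange-interpolation machinery mentioned above echoes the classic threshold secret-sharing construction: a secret is the constant term of a random polynomial over a prime field, and any t shares recover it by interpolating at x = 0. The sketch below shows only that generic construction; the paper's PHR protocol builds additional access-control machinery on top of it, and the prime and parameters here are illustrative.

```python
import random

# (t, n) threshold secret sharing via Lagrange interpolation over a prime
# field: the secret is f(0) for a random polynomial f of degree t - 1;
# any t of the n shares (x, f(x)) determine f(0), fewer reveal nothing.

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def split(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # Lagrange basis evaluated at 0
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
    return s

shares = split(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

In an access-control setting the shares play the role of user keys: any authorized quorum can reconstruct the decryption secret, which is one way Lagrange interpolation supports multi-user access.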
Liquid propellant rocket engine combustion simulation with a time-accurate CFD method
Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.
1993-01-01
Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.
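The TVD flux limiting mentioned above can be illustrated with a minmod-limited MUSCL step for linear advection: the limiter suppresses the over- and undershoots that an unlimited second-order scheme produces at discontinuities. This is an illustrative sketch on a toy problem, not the FDNS implementation or its 3rd-order limiter.

```python
# Minmod-limited MUSCL step for u_t + a*u_x = 0 (a = 1, periodic grid).
# The limited slope keeps the scheme TVD for 0 <= cfl <= 1, so a step
# profile is advected without creating new extrema.

def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_step(u, cfl):
    n = len(u)
    # limited cell slopes
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # upwind (left) face states, advanced by half a step via the (1 - cfl) factor
    ul = [u[i] + 0.5 * (1 - cfl) * s[i] for i in range(n)]
    # conservative update with flux differences
    return [u[i] - cfl * (ul[i] - ul[i - 1]) for i in range(n)]

n = 64
u0 = [1.0 if 16 <= i < 32 else 0.0 for i in range(n)]
u = u0
for _ in range(40):
    u = muscl_step(u, 0.5)
```

After 40 steps the square wave stays within its initial bounds (no spurious oscillation) and total mass is conserved exactly by the flux form, the two properties the abstract's TVD scheme is designed to guarantee near shocks and contact discontinuities.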
A Split-Step Scheme for the Incompressible Navier-Stokes
Energy Technology Data Exchange (ETDEWEB)
Henshaw, W; Petersson, N A
2001-06-12
We describe a split-step finite-difference scheme for solving the incompressible Navier-Stokes equations on composite overlapping grids. The split-step approach decouples the solution of the velocity variables from the solution of the pressure. The scheme is based on the velocity-pressure formulation and uses a method of lines approach so that a variety of implicit or explicit time stepping schemes can be used once the equations have been discretized in space. We have implemented both second-order and fourth-order accurate spatial approximations that can be used with implicit or explicit time stepping methods. We describe how to choose appropriate boundary conditions to make the scheme accurate and stable. A divergence damping term is added to the pressure equation to keep the numerical dilatation small. Several numerical examples are presented.
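The velocity-pressure decoupling at the heart of a split-step scheme can be sketched on a periodic grid with a spectral pressure solve (a deliberately simplified stand-in for the paper's composite-overlapping-grid finite differences): an intermediate velocity is projected onto divergence-free fields by solving a Poisson equation for a pressure-like variable.

```python
# Pressure-projection step on a 2-D periodic grid using FFTs.
# Grid size and the sample velocity field are illustrative assumptions.
import numpy as np

n = 64
freq = np.fft.fftfreq(n, d=1.0 / n)          # integer mode numbers
kx, ky = np.meshgrid(2 * np.pi * freq, 2 * np.pi * freq, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                               # guard the mean mode

def divergence(u, v):
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    return np.fft.ifft2(1j * kx * uh + 1j * ky * vh).real

def project(u, v):
    """Split step: subtract grad(phi) with lap(phi) = div(u, v),
    leaving a (discretely) divergence-free velocity field."""
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * uh + 1j * ky * vh
    phi_hat = -div_hat / k2                  # solve the Poisson equation
    return (np.fft.ifft2(uh - 1j * kx * phi_hat).real,
            np.fft.ifft2(vh - 1j * ky * phi_hat).real)

x = np.linspace(0, 1, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)   # arbitrary field
v = np.cos(2 * np.pi * (X + 2 * Y))
u0, v0 = project(u, v)
assert np.max(np.abs(divergence(u0, v0))) < 1e-10   # divergence removed
```

In the paper's setting the Poisson solve is a finite-difference pressure equation with a divergence damping term; the spectral solve here just makes the decoupling of velocity and pressure easy to see.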
Support Schemes and Ownership Structures
DEFF Research Database (Denmark)
Ropenus, Stephanie; Schröder, Sascha Thorsten; Costa, Ana
In recent years, fuel cell based micro-combined heat and power (mCHP) has received increasing attention due to its potential contribution to energy savings, efficiency gains, customer proximity and flexibility in operation and capacity size. The FC4Home project assesses technical and economic aspects … an introduction to the policy context for mCHP. Section 1 describes the rationale for the promotion of mCHP by explaining its potential contribution to European energy policy goals. Section 2 addresses the policy context at the supranational European level by outlining relevant EU Directives on support schemes for promoting combined heat and power and energy from renewable sources. These Directives are to be implemented at the national level by the Member States. Section 3 conceptually presents the spectrum of national support schemes, ranging from investment support to market-based operational support. The choice …
Network Regulation and Support Schemes
DEFF Research Database (Denmark)
Ropenus, Stephanie; Schröder, Sascha Thorsten; Jacobsen, Henrik
2009-01-01
At present, there exists no explicit European policy framework on distributed generation. Various Directives encompass distributed generation; inherently, their implementation is at the discretion of the Member States. The latter have adopted different kinds of support schemes, ranging from feed-in tariffs to market-based quota systems, and network regulation approaches, comprising rate-of-return and incentive regulation. National regulation and the vertical structure of the electricity sector shape the incentives of market agents, notably of distributed generators and network operators. This article seeks to investigate the interactions between the policy dimensions of support schemes and network regulation and how they affect the deployment of distributed generation. Firstly, a conceptual analysis examines how the incentives of the different market agents are affected. In particular …
Subranging scheme for SQUID sensors
Penanen, Konstantin I. (Inventor)
2008-01-01
A readout scheme for measuring the output from a SQUID-based sensor array using an improved subranging architecture that includes multiple resolution channels (such as a coarse resolution channel and a fine resolution channel). The scheme employs a flux sensing circuit with a sensing coil connected in series to multiple input coils, each input coil being coupled to a corresponding SQUID detection circuit having a high-resolution SQUID device with independent linearizing feedback. A two-resolution configuration (coarse and fine) is illustrated with a primary SQUID detection circuit for generating a fine readout, and a secondary SQUID detection circuit for generating a coarse readout, both having feedback current coupled to the respective SQUID devices via feedback/modulation coils. The primary and secondary SQUID detection circuits operate and derive feedback independently. Thus, the SQUID devices may be monitored independently of each other (and read simultaneously) to dramatically increase slew rates and dynamic range.
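The subranging idea (a coarse channel capturing the large-scale signal, a fine channel digitizing the residual) can be sketched numerically; the step sizes below are illustrative assumptions, not the patent's circuit parameters.

```python
# Conceptual sketch of subranging readout: the coarse channel
# quantizes the input coarsely, the fine channel digitizes only the
# residual, and the combination recovers the input at fine resolution
# over the coarse channel's full range. Step sizes are assumptions.
COARSE_STEP = 1.0        # coarse channel resolution (assumed)
FINE_STEP = 1.0 / 1024   # fine channel resolution (assumed)

def read(signal):
    coarse = round(signal / COARSE_STEP)       # coarse quantization
    residual = signal - coarse * COARSE_STEP   # seen by the fine channel
    fine = round(residual / FINE_STEP)         # fine quantization
    return coarse * COARSE_STEP + fine * FINE_STEP

x = 3.14159
assert abs(read(x) - x) <= FINE_STEP / 2
```

Because each channel only has to track its own (small) range, the combined readout achieves a dynamic range that neither channel could deliver alone, which is the point of the multi-resolution architecture.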
Ulku, Huseyin Arda
2012-09-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
Supporting Argumentation Schemes in Argumentative Dialogue Games
Directory of Open Access Journals (Sweden)
Wells Simon
2014-03-01
Full Text Available This paper reports preliminary work into the exploitation of argumentation schemes within dialogue games. We identify a property of dialogue games that we call "scheme awareness" that captures the relationship between dialogue game systems and argumentation schemes. Scheme awareness is used to examine the ways in which existing dialogue games utilise argumentation schemes and, consequently, the degree to which a dialogue game can be used to construct argument structures. The aim is to develop a set of guidelines for dialogue game design, which feed into a set of Dialogue Game Description Language (DGDL) extensions, in turn enabling dialogue games to better exploit argumentation schemes.
An Arbitrated Quantum Signature Scheme without Entanglement
Li, Hui-Ran; Luo, Ming-Xing; Peng, Dai-Yuan; Wang, Xiao-Jun
2017-09-01
Several quantum signature schemes have recently been proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature, as one nontrivial scheme, has attracted great interest because of its usefulness and efficiency. Unfortunately, previous schemes cannot resist Trojan horse and DoS attacks, and lack unforgeability and non-repudiation. In this paper, we propose an improved arbitrated quantum signature to address these security issues with an honest arbitrator. Our scheme uses qubit states rather than entangled states. More importantly, the qubit scheme achieves unforgeability and non-repudiation. Our scheme is also secure against other known quantum attacks.
A biometric signcryption scheme without bilinear pairing
Wang, Mingwen; Ren, Zhiyuan; Cai, Jun; Zheng, Wentao
2013-03-01
How to apply the entropy of biometrics to encryption and remote authentication schemes, so as to simplify key management, is an active research area. Utilizing Dodis's fuzzy extractor method and Liu's original signcryption scheme, a biometric identity based signcryption scheme is proposed in this paper. The proposed scheme is more efficient than most previously proposed biometric signcryption schemes because it needs neither bilinear pairing computations nor modular exponentiation computations, both of which are highly time consuming. The analysis results show that, under the CDH and DL hard problem assumptions, the proposed scheme offers confidentiality and unforgeability simultaneously.
Cambridge community Optometry Glaucoma Scheme.
Keenan, Jonathan; Shahid, Humma; Bourne, Rupert R; White, Andrew J; Martin, Keith R
2015-04-01
With a higher life expectancy, there is an increased demand for hospital glaucoma services in the United Kingdom. The Cambridge community Optometry Glaucoma Scheme (COGS) was initiated in 2010, whereby new referrals for suspected glaucoma are evaluated by community optometrists with a special interest in glaucoma, with virtual electronic review and validation by a consultant ophthalmologist with a special interest in glaucoma. 1733 patients were evaluated by this scheme between 2010 and 2013. Clinical assessment is performed by the optometrist at a remote site. Goldmann applanation tonometry, pachymetry, monoscopic colour optic disc photographs and automated Humphrey visual field testing are performed. A clinical decision is made as to whether a patient has glaucoma or is a suspect, and the patient is referred on or discharged as a false-positive referral. The clinical findings, optic disc photographs and visual field test results are transmitted electronically for virtual review by a consultant ophthalmologist. The main outcome measure was the number of false-positive referrals from initial referral into the scheme. Of the patients, 46.6% were discharged at assessment and a further 5.7% were discharged following virtual review. Of the patients initially discharged, 2.8% were recalled following virtual review. Following assessment at the hospital, a further 10.5% were discharged after a single visit. The COGS community-based glaucoma screening programme is a safe and effective way of evaluating glaucoma referrals in the community and reducing false-positive referrals for glaucoma into the hospital system. © 2014 Royal Australian and New Zealand College of Ophthalmologists.
A stable and accurate partitioned algorithm for conjugate heat transfer
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS
Free will: A case study in reconciling phenomenological philosophy with reductionist sciences.
Hong, Felix T
2015-12-01
Phenomenology aspires to philosophical analysis of humans' subjective experience while it strives to avoid pitfalls of subjectivity. The first step towards naturalizing phenomenology - making phenomenology scientific - is to reconcile phenomenology with modern physics, on the one hand, and with modern cellular and molecular neuroscience, on the other hand. In this paper, free will is chosen for a case study to demonstrate the feasibility. Special attention is paid to maintain analysis with mathematical precision, if possible, and to evade the inherent deceptive power of natural language. Laplace's determinism is re-evaluated along with the concept of microscopic reversibility. A simple and transparent version of proof demonstrates that microscopic reversibility is irreconcilably incompatible with macroscopic irreversibility, contrary to Boltzmann's claim. But the verdict also exalts Boltzmann's statistical mechanics to the new height of a genuine paradigm shift, thus cutting the umbilical cord linking it to Newtonian mechanics. Laplace's absolute determinism must then be replaced with a weaker form of causality called quasi-determinism. Biological indeterminism is also affirmed with numerous lines of evidence. The strongest evidence is furnished by ion channel fluctuations, which obey an indeterministic stochastic phenomenological law. Furthermore, quantum indeterminacy is shown to be relevant in biology, contrary to the opinion of Erwin Schrödinger. In reconciling phenomenology of free will with modern sciences, three issues - alternativism, intelligibility and origination - of free will must be accounted for. Alternativism and intelligibility can readily be accounted for by quasi-determinism. In order to account for origination of free will, the concept of downward causation must be invoked. However, unlike what is commonly believed, there is no evidence that downward causation can influence, shield off, or overpower low-level physical forces already known to
Chai, Zhenhua; Zhao, T S
2014-07-01
In this paper, we propose a local nonequilibrium scheme for computing the flux of the convection-diffusion equation with a source term in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). Both the Chapman-Enskog analysis and the numerical results show that, at the diffusive scaling, the present nonequilibrium scheme has a second-order convergence rate in space. A comparison between the nonequilibrium scheme and the conventional second-order central-difference scheme indicates that, although both schemes have a second-order convergence rate in space, the present nonequilibrium scheme is more accurate than the central-difference scheme. In addition, the flux computation rendered by the present scheme also preserves the parallel computation feature of the LBM, making the scheme more efficient than conventional finite-difference schemes in the study of large-scale problems. Finally, a comparison between the single-relaxation-time model and the MRT model is also conducted, and the results show that the MRT model is more accurate than the single-relaxation-time model, both in solving the convection-diffusion equation and in computing the flux.
Versypt, Ashlee N Ford; Braatz, Richard D
2014-12-04
Two finite difference discretization schemes for approximating the spatial derivatives in the diffusion equation in spherical coordinates with variable diffusivity are presented and analyzed. The numerical solutions obtained by the discretization schemes are compared for five cases of the functional form for the variable diffusivity: (I) constant diffusivity, (II) temporally-dependent diffusivity, (III) spatially-dependent diffusivity, (IV) concentration-dependent diffusivity, and (V) implicitly-defined, temporally- and spatially-dependent diffusivity. Although the schemes have similar agreement to known analytical or semi-analytical solutions in the first four cases, in the fifth case for the variable diffusivity, one scheme produces a stable, physically reasonable solution, while the other diverges. We recommend the adoption of the more accurate and stable of these finite difference discretization schemes to numerically approximate the spatial derivatives of the diffusion equation in spherical coordinates for any functional form of variable diffusivity, especially cases where the diffusivity is a function of position.
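For case (I), constant diffusivity, a conservative finite-difference discretization of the spherically symmetric diffusion equation can be sketched as follows. The grid, domain, and boundary values are illustrative assumptions, not taken from the paper; the check against the analytical steady state mirrors the kind of comparison the abstract describes.

```python
# Conservative finite-difference scheme for the spherically symmetric
# diffusion equation  c_t = (1/r^2) d/dr (r^2 D dc/dr)  with constant D,
# on a shell r in [1, 2] with Dirichlet boundaries (all values assumed).
import numpy as np

nr, D = 101, 1.0
r = np.linspace(1.0, 2.0, nr)
dr = r[1] - r[0]
c = np.zeros(nr)
c[0], c[-1] = 1.0, 0.0          # Dirichlet boundary concentrations
rh = 0.5 * (r[:-1] + r[1:])     # half-grid radii for conservative fluxes

dt = 0.2 * dr**2 / D            # within the explicit stability limit
for _ in range(50000):          # march to (near) steady state
    flux = rh**2 * D * np.diff(c) / dr          # r^2 D dc/dr at faces
    c[1:-1] += dt * np.diff(flux) / (r[1:-1]**2 * dr)

# steady state of the spherical problem is c(r) = A + B/r
exact = 2.0 / r - 1.0           # satisfies c(1) = 1, c(2) = 0
assert np.max(np.abs(c - exact)) < 1e-3
```

Writing the flux at half-grid points with the factor `rh**2` is what makes the scheme conservative; a naive expansion of the derivative would not preserve the discrete flux balance as cleanly.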
Novel Two-Scale Discretization Schemes for Lagrangian Hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Vassilevski, P
2008-05-29
In this report we propose novel higher order conservative schemes of discontinuous Galerkin (or DG) type for the equations of gas dynamics in Lagrangian coordinates suitable for general unstructured finite element meshes. The novelty of our approach is in the formulation of two-scale non-oscillatory function recovery procedures utilizing integral moments of the quantities of interest (pressure and velocity). The integral moments are computed on a primary mesh (cells or zones) which defines our original scale that governs the accuracy of the schemes. In the non-oscillatory smooth function recovery procedures, we introduce a finer mesh which defines the second scale. Mathematically, the recovery can be formulated as nonlinear energy functional minimization subject to equality and nonlinear inequality constraints. The schemes are highly accurate due to both the embedded (local) mesh refinement features as well as the ability to utilize higher order integral moments. The new DG schemes seem to offer an alternative to currently used artificial viscosity techniques and limiters since the two-scale recovery procedures aim at resolving these issues. We report on some preliminary tests for the lowest order case, and outline some possible future research directions.
METAPHORIC MECHANISMS IN IMAGE SCHEME DEVELOPMENT
National Research Council Canada - National Science Library
Pankratova, S.A
2017-01-01
… In the summary, the author underscores that both ways of image scheme development are of importance to cognitive science; both heuristic epiphora and image-based diaphora play a significant role in the explication of image scheme development.
The Ubiquity of Smooth Hilbert Schemes
Staal, Andrew P.
2017-01-01
We investigate the geography of Hilbert schemes that parametrize closed subschemes of projective space with a specified Hilbert polynomial. We classify Hilbert schemes with unique Borel-fixed points via combinatorial expressions for their Hilbert polynomials. We realize the set of all nonempty Hilbert schemes as a probability space and prove that Hilbert schemes are irreducible and nonsingular with probability greater than $0.5$.
MIRD radionuclide data and decay schemes
Eckerman, Keith F
2007-01-01
For all physicians, scientists, and physicists working in the nuclear medicine field, the updated edition of MIRD: Radionuclide Data and Decay Schemes is an essential sourcebook for radiation dosimetry and understanding the properties of radionuclides. Contents include decay schemes listed by atomic number, radioactive decay processes, serial decay schemes, and decay schemes and decay tables. This essential reference for nuclear medicine physicians, scientists and physicists also includes a CD with tabulations of the radionuclide data necessary for dosimetry calculations.
An Integrative Approach to Accurate Vehicle Logo Detection
Directory of Open Access Journals (Sweden)
Hao Pan
2013-01-01
Full Text Available Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, namely vehicle region detection and small RoI segmentation, rapidly focus the search on a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge of the logos' position relative to license plates, which can be accurately localized in frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.
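The two-stage cascade structure described above can be sketched generically; the stand-in stage predicates below are hypothetical placeholders, not the paper's Adaboost and SVM stages.

```python
# Generic two-stage cascade: a cheap first stage rejects most
# candidates, and a more expensive second stage confirms the
# survivors. The toy predicates below are illustrative only.
def cascade(candidates, stage1, stage2):
    """Keep only candidates accepted by both stages, in order."""
    return [c for c in candidates if stage1(c) and stage2(c)]

# toy example: "detections" are the numbers in [10, 20)
hits = cascade(range(100), lambda c: c >= 10, lambda c: c < 20)
assert hits == list(range(10, 20))
```

The efficiency win comes from short-circuiting: the second stage only runs on candidates that survive the first, so its cost is paid on a small fraction of the input.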
Robust second-order scheme for multi-phase flow computations
Shahbazi, Khosro
2017-06-01
A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters that ensure the maximum principle or positivity of the various field variables, including the density, pressure, and the order parameters identifying each phase. Besides the Euler equations of gas dynamics, the two-phase flow model considered includes advection equations for the two parameters of the stiffened-gas equation of state characterizing each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure and order parameters reconstructed using a high-order WENO scheme. The proof of positivity preservation and accuracy is given, and the convergence and robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock wave interaction with a cluster of tightly packed air or helium bubbles placed in a body of liquid water is also demonstrated. The superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction is also shown. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter incurs approximately 50% more inter-processor message communications than the corresponding scheme without the limiter, but only about 10% higher total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
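The Zhang-Shu style limiter referenced above admits a compact sketch: reconstructed values in a cell are scaled linearly toward the cell average so that a positivity floor holds without changing the average (a one-dimensional illustration under assumed sample values, not the paper's multiphase implementation).

```python
# Positivity limiter in the spirit of Zhang & Shu: scale the
# reconstruction samples in a cell about the (positive) cell average
# so all samples stay above a small floor, preserving the average.
import numpy as np

def limit(values, avg, floor=1e-13):
    """Linearly scale `values` toward `avg` until min >= floor."""
    vmin = values.min()
    if vmin >= floor:
        return values
    theta = (avg - floor) / (avg - vmin)   # in [0, 1] when avg > floor
    return avg + theta * (values - avg)

vals = np.array([-0.2, 0.5, 1.5])          # an oscillatory reconstruction
avg = vals.mean()
lim = limit(vals, avg)
assert lim.min() >= 0.0                    # positivity enforced
assert abs(lim.mean() - avg) < 1e-12       # cell average preserved
```

Because the scaling is about the cell average, conservation is untouched, and it can be shown (as in the cited work) that the limiter does not degrade the order of accuracy in smooth regions.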
Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.
Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C
2006-12-08
Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.
Lauvaux, T.; Ogle, S. M.; Davis, K. J.; Williams, C. A.; Breidt, J.; Feng, S.; Butler, M. P.
2016-12-01
Monitoring, Reporting and Verification (MRV) have been associated with inventory-based products for the monitoring and reporting of carbon sources and sinks to the United Nations Framework Convention on Climate Change (UNFCCC), while verification frameworks are anticipated to incorporate atmospheric top-down methodologies. In recent years, a few studies have presented inter-comparison exercises between the two approaches, exercises that remain the closest accomplishment to a complete independent verification system. However, gaps between atmospheric and inventory methods require new methodologies and data fusion approaches, not yet fully explored, to merge and reconcile them. Atmospheric GHG concentration data provide an independent source of information about emissions that can be used to verify the inventory. We present different strategies to propagate error structures from the inventories into the atmospheric assimilation frameworks, and discuss the implications for the relative independence of atmospheric and inventory estimates. In addition, current inventory methods, with their internal complexity, particularly for predicting emissions from forestry and agricultural systems, could utilize top-down approaches in a data fusion technique to optimize the parameters used in the inventories. We demonstrate here that the link between carbon stocks and surface exchanges can be made through inventory-based mechanistic approaches and atmospheric verification if future MRV systems address the needs for new data fusion approaches.
Carlisle, Nancy B; Woodman, Geoffrey F
2013-10-01
Maintaining a representation in working memory has been proposed to be sufficient for the execution of top-down attentional control. Two recent electrophysiological studies that recorded event-related potentials (ERPs) during similar paradigms have tested this proposal, but have reported contradictory findings. The goal of the present study was to reconcile these previous reports. To this end, we used the stimuli from one study (Kumar, Soto, & Humphreys, 2009) combined with the task manipulations from the other (Carlisle & Woodman, 2011b). We found that when an item matching a working memory representation was presented in a visual search array, we could use ERPs to quantify the size of the covert attention effect. When the working memory matches were consistently task-irrelevant, we observed a weak attentional bias to these items. However, when the same item indicated the location of the search target, we found that the covert attention effect was approximately four times larger. This shows that simply maintaining a representation in working memory is not equivalent to having a top-down attentional set for that item. Our findings indicate that high-level goals mediate the relationship between the contents of working memory and perceptual attention.
Vargas, R.; Baldocchi, D. D.; Bahn, M.; Hanson, P. J.; Hosman, K.; Kulmala, L.; Pumpanen, J.; Yang, B.
2010-12-01
The temporal correlation between canopy photosynthesis and soil respiration (SR) is a current debate as different methods report lags for this relationship that range from hours to several days. We explore the temporal correlation between these fluxes during the growing season at four study sites, including three forests from different climates and a grassland. We used continuous (hourly average) data and applied time series analysis (wavelet coherence analysis) to identify significant temporal correlations and quantify time-lags between canopy photosynthesis and SR. Results show the existence of multi-temporal correlations at time-periods that varied between 1- and 16-days during the growing seasons at all sites. These results reconcile previous observations done by different methods. The temporal correlation was strongest at the 1-day time-period at all study sites demonstrating the strong influence of diel canopy photosynthesis on SR during the growing season. However, this temporal correlation was not uniform throughout the growing season, and was weakened when variation in soil temperature and soil CO2 diffusivity on SR were taken into account. We conclude that a comprehensive SR theory should include canopy photosynthesis, but must consider the multi-temporal influence of canopy photosynthesis, soil CO2 diffusion and soil temperature on SR.
Reconciling results of LSND, MiniBooNE and other experiments with soft decoherence
Farzan, Yasaman; Smirnov, Alexei Yu
2008-01-01
We propose an explanation of the LSND signal via quantum-decoherence of the mass states, which leads to damping of the interference terms in the oscillation probabilities. The decoherence parameters as well as their energy dependence are chosen in such a way that the damping affects only oscillations with the large (atmospheric) $\\Delta m^2$ and rapidly decreases with the neutrino energy. This allows us to reconcile the positive LSND signal with MiniBooNE and other null-result experiments. The standard explanations of solar, atmospheric, KamLAND and MINOS data are not affected. No new particles, and in particular, no sterile neutrinos are needed. The LSND signal is controlled by the 1-3 mixing angle $\\theta_{13}$ and, depending on the degree of damping, yields $0.0014 < \\sin^2\\theta_{13} < 0.034$ at $3\\sigma$. The scenario can be tested at upcoming $\\theta_{13}$ searches: while the comparison of near and far detector measurements at reactors should lead to a null-result a positive signal for $\\theta_{13...
Last Glacial Maximum CO2 and δ13C successfully reconciled
Bouttes, N.; Paillard, D.; Roche, D. M.; Brovkin, V.; Bopp, L.
2011-01-01
During the Last Glacial Maximum (LGM, ˜21,000 years ago) the cold climate was strongly tied to low atmospheric CO2 concentration (˜190 ppm). Although it is generally assumed that this low CO2 was due to an expansion of the oceanic carbon reservoir, simulating the glacial level has remained a challenge, especially with the additional δ13C constraint. Indeed, the LGM carbon cycle was also characterized by a modern-like δ13C in the atmosphere and a higher surface-to-deep Atlantic δ13C gradient indicating probable changes in the thermohaline circulation. Here we show with a model of intermediate complexity that adding three oceanic mechanisms: brine-induced stratification, stratification-dependent diffusion and iron fertilization to the standard glacial simulation (which includes sea level drop, temperature change, carbonate compensation and terrestrial carbon release) decreases CO2 down to the glacial value of ˜190 ppm and simultaneously matches glacial atmospheric and oceanic δ13C inferred from proxy data. LGM CO2 and δ13C can at last be successfully reconciled.
Munoz-Jaramillo, Andres; Nandy, D.; Martens, P. C. H.; Yeates, A. R.
2011-05-01
The emergence of tilted bipolar active regions and the dispersal of their flux, mediated via processes such as diffusion, differential rotation and meridional circulation is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed α-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on active region eruption, within the framework of an axisymmetric dynamo model. We demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models. Finally, we show how this new formulation paves the way for applications, which were not possible before, like understanding the nature of the extended minimum of sunspot cycle 23 and direct assimilation of active region data. This work is funded by NASA Living With a Star Grant NNX08AW53G to Montana State University/Harvard-Smithsonian Center for Astrophysics and the Government of India's Ramanujan Fellowship.
Reconcilability of Socio-Economic Development and Environmental Conservation in Sub-Saharan Africa
Rudi, Lisa-Marie; Azadi, Hossein; Witlox, Frank
2012-04-01
Are the achievements of sustainable development and the improvement of environmental standards mutually exclusive in the 21st century? Is there a possibility to combine the two? This study investigates the supposed mutual exclusiveness of the two policy areas and asks whether and how they can be combined, with reference to Sub-Saharan Africa (SSA). After describing the historical, geographical, and climatic backgrounds of SSA, negative effects of global warming and of local environmentally harmful practices are discussed. Subsequently, the appropriate development measures for the region are elaborated in order to understand their compatibility with improving the environment. It is concluded that, to change the dependency on agriculture, the economy needs to be restructured towards technologies. Furthermore, it is found that there is a direct link between global warming and economic efficiency. Theories which imply that some regions are simply 'too poor to be green' are investigated and rebutted by another theory, which states that it is indeed possible to industrialize in an environmentally friendly way. It follows that environmental and development measures are interconnected, equally important and can be reconciled. The paper finally concludes that the threat posed by global warming and previously practised environmentally harmful local measures may be so pressing that a 'develop first and clean up later' approach would be too tragic to pursue.
Jabot, Françoise; Turgeon, Jean; Carbonnel, Léopold
2011-08-01
For more than a decade now, evaluation has developed considerably in France, thanks in particular to the Société Française de l'Évaluation, whose charter sets out a number of principles designed to guide the work of evaluators. This article examines how the evaluation process surrounding a regional public health plan (referred to as PRSP)--itself a new instrument for regional planning in France--accords with one of these principles, which specifies that evaluation must be framed according to "a three-fold logic involving public management, democracy and scientific debate." Our analysis shows that while this evaluation was driven primarily by managerial concerns (i.e., assessing the capacity of the plan to structure health policy in a region), it also provided an opportunity for restoring dialogue with a range of actors by opening up a space of cooperation and consultation. In addition, in order to ensure the legitimacy of the evaluation's conclusions, the knowledge produced by the evaluators had to rest on an irreproachable methodology. This example illustrates how evaluation, in the French tradition, is a process that strives to reconcile the viewpoints and expectations of managers, scientists and the general public; it is also a process that brings out lines of tension and areas of complementarity between these three logics. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ooms, Gorik; Forman, Lisa; Williams, Owain D; Hill, Peter S
2014-12-18
The heads of the Global Fund and the GAVI Alliance have recently promoted the idea of an international tiered pricing framework for medicines, despite objections from civil society groups who fear that this would reduce the leeway for compulsory licenses and generic competition. This paper explores the extent to which an international tiered pricing framework and the present leeway for compulsory licensing can be reconciled, using the perspective of the right to health as defined in international human rights law. We explore the practical feasibility of an international tiered pricing and compulsory licensing framework governed by the World Health Organization. We use two simple benchmarks to compare the relative affordability of medicines for governments - average income and burden of disease - to illustrate how voluntary tiered pricing practice fails to make medicines affordable enough for low and middle income countries (if compared with the financial burden of the same medicines for high income countries), and when and where international compulsory licenses should be issued in order to allow governments to comply with their obligations to realize the right to health. An international tiered pricing and compulsory licensing framework based on average income and burden of disease could ease the tension between governments' human rights obligation to provide medicines and governments' trade obligation to comply with the Agreement on Trade-Related Aspects of Intellectual Property Rights.
Reconciling the Reynolds number dependence of scalar roughness length and laminar resistance
Li, Dan; Rigden, Angela; Salvucci, Guido; Liu, Heping
2017-04-01
The scalar roughness length and laminar resistance are necessary for computing scalar fluxes in numerical simulations and experimental studies. Their dependence on flow properties such as the Reynolds number remains controversial. In particular, two important power laws ("1/4" and "1/2"), both having strong theoretical foundations, have been widely used in various parameterizations and models. Building on a previously proposed phenomenological model for interactions between the viscous sublayer and the turbulent flow, it is shown here that the two scaling laws can be reconciled. The 1/4 power law corresponds to the situation where the vertical diffusion is balanced by the temporal change or advection due to a constant velocity in the viscous sublayer, while the 1/2 scaling corresponds to the situation where the vertical diffusion is balanced by the advection due to a linear velocity profile in the viscous sublayer. In addition, the recently proposed "1" power law scaling is also recovered, which corresponds to the situation where molecular diffusion dominates the scalar budget in the viscous sublayer. The formulation proposed here provides a unified framework for understanding the onset of these different scaling laws and offers a new perspective on how to evaluate them experimentally.
Non-Bunch–Davis initial state reconciles chaotic models with BICEP and Planck
Directory of Open Access Journals (Sweden)
Amjad Ashoorioon
2014-10-01
Full Text Available The BICEP2 experiment has announced a signal for primordial gravity waves with tensor-to-scalar ratio r = 0.2^{+0.07}_{-0.05} [1]. There are two ways to reconcile this result with the latest Planck experiment [2]. One is to assume a considerable tilt of r, T_r, with a positive sign, T_r = d ln r / d ln k ≳ 0.57^{+0.29}_{-0.27}, corresponding to a blue tilt for the tensor modes of order n_T ≃ 0.53^{+0.29}_{-0.27}, assuming the Planck best-fit value for the tilt of the scalar power spectrum, n_S. The other possibility is to assume a negative running of the scalar spectral index, dn_S/d ln k ≃ −0.02, which pushes the upper bound on r from 0.11 up to 0.26 in the Planck analysis assuming the existence of a tensor spectrum. Simple slow-roll models fail to provide such large values of T_r or negative runnings of n_S [1]. In this note we show that a non-Bunch–Davies initial state for perturbations can provide a match between large-field chaotic models (like m²φ²) and the latest Planck [3] and BICEP2 results by accommodating either the blue tilt of r or the large negative running of n_S.
Towards reconciling national emission inventories for methane with the global budget
Energy Technology Data Exchange (ETDEWEB)
Keith R. Lassey; Elizabeth A. Scheehle; Dina Kruger [NIWA, Wellington (New Zealand)
2005-06-15
The global methane source to the atmosphere from human activities is determined only to within about 20%, and its evolution over recent decades is essentially undetermined. IPCC Assessment Reports adjudge the contemporary global anthropogenic source to be in the range 300-450 Tg year⁻¹ based on a 'bottom-up' inventory that aggregates the best literature-based estimates for identified sources. This source strength is compatible with, but not tightly constrained by, a 'top-down' analysis in which a global source of 495-700 Tg year⁻¹ is inferred from a global sink estimated at 460-660 Tg year⁻¹. While such an inventory could in principle be compiled from national emission inventories reported to the United Nations Framework Convention on Climate Change, such inventories can be incomplete or based on sparse data. As an approach towards reconciling aggregated national inventories with top-down assessments, the authors apply a simple model of budget evolution that incorporates constraints from carbon isotope information. Our analysis suggests that aggregated inventories probably understate anthropogenic emissions, especially isotopically heavy emissions such as from fossil sources or biomass combustion. This could be due either to emission underestimation from recognized sources or to significant unrecognized sources, both of which should be remedied with ongoing inventory improvement.
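The carbon-isotope constraint described above can be illustrated with a minimal two-end-member mass balance: given a total source and its flux-weighted δ¹³C signature, the fossil/microbial split follows from a 2×2 linear solve. All numbers below are assumed, illustrative values, not those of the study.

```python
import numpy as np

# Hypothetical illustrative numbers (not the paper's values):
S_total = 550.0      # Tg CH4/yr, assumed total source
d_source = -53.0     # per mil, assumed flux-weighted delta-13C of the total source
d_fossil = -40.0     # per mil, typical fossil/combustion end-member
d_bio = -60.0        # per mil, typical microbial end-member

# Two-end-member mass balance:
#   S_f + S_b                 = S_total
#   S_f*d_fossil + S_b*d_bio  = S_total * d_source
A = np.array([[1.0, 1.0], [d_fossil, d_bio]])
b = np.array([S_total, S_total * d_source])
S_f, S_b = np.linalg.solve(A, b)
```

An inventory that reports a smaller fossil-type share than `S_f` while matching `S_total` is immediately inconsistent with the isotope data; this is the sense in which isotopes flag understated isotopically heavy emissions.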
Energy Technology Data Exchange (ETDEWEB)
Nalewajko, Krzysztof; Begelman, Mitchell C. [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309 (United States); Sikora, Marek, E-mail: knalew@stanford.edu [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw (Poland)
2014-11-20
Estimates of magnetic field strength in relativistic jets of active galactic nuclei, obtained by measuring the frequency-dependent radio core location, imply that the total magnetic fluxes in those jets are consistent with the predictions of the magnetically arrested disk (MAD) scenario of jet formation. On the other hand, the magnetic field strength determines the luminosity of the synchrotron radiation, which forms the low-energy bump of the observed blazar spectral energy distribution (SED). The SEDs of the most powerful blazars are strongly dominated by the high-energy bump, which is most likely due to the external radiation Compton mechanism. This high Compton dominance may be difficult to reconcile with the MAD scenario, unless (1) the geometry of external radiation sources (broad-line region, hot-dust torus) is quasi-spherical rather than flat, or (2) most gamma-ray radiation is produced in jet regions of low magnetization, e.g., in magnetic reconnection layers or in fast jet spines.
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Thien Khoi V. [Department of Environmental Sciences, Rutgers University, New Brunswick New Jersey USA; Ghate, Virendra P. [Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Carlton, Annmarie G. [Department of Chemistry, University of California, Irvine California USA
2016-11-22
Summertime aerosol optical thickness (AOT) over the Southeast U.S. is sharply enhanced over wintertime values. This seasonal pattern is unique and of particular interest because temperatures there have not warmed over the past 100 years. Patterns in surface fine particle mass are inconsistent with satellite-reported AOT. In this work, we attempt to reconcile the spatial and temporal distribution of AOT over the U.S. with particle mass measurements at the surface by examining trends in aerosol liquid water (ALW), a particle constituent that scatters radiation affecting the satellite AOT, but is removed in mass measurements at routine surface monitoring sites. We employ the thermodynamic model ISORROPIAv2.1 to estimate ALW mass concentrations at IMPROVE sites using measured ion mass concentrations and NARR meteorological data. Our findings suggest ALW provides a plausible explanation for the geographical and seasonal patterns in AOT and can reconcile previously noted discrepancies with surface mass measurements.
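ISORROPIA performs a full inorganic thermodynamic equilibrium; a much cruder single-parameter (κ-Köhler) sketch nonetheless shows why ALW grows steeply with relative humidity and can dominate humid-season light scattering. All parameter values below are assumptions for illustration, not outputs of the study.

```python
def aerosol_liquid_water(dry_mass_ugm3, kappa, rh_percent,
                         rho_dry=1.77, rho_water=1.0):
    """Equilibrium liquid water mass (ug/m3) from kappa-Koehler theory,
    neglecting curvature: V_w = V_dry * kappa * a_w / (1 - a_w)."""
    aw = rh_percent / 100.0            # water activity ~ RH below saturation
    v_dry = dry_mass_ugm3 / rho_dry    # dry particle volume
    return rho_water * v_dry * kappa * aw / (1.0 - aw)

# Hypothetical sulfate-like aerosol: 10 ug/m3 dry mass, kappa = 0.5
summer = aerosol_liquid_water(10.0, 0.5, 90.0)   # humid summer column
winter = aerosol_liquid_water(10.0, 0.5, 50.0)   # drier winter column
```

The a_w/(1 − a_w) factor makes water uptake nonlinear in RH, so the same dry mass carries roughly an order of magnitude more water at 90% RH than at 50% RH, which is invisible to filter-based mass measurements but not to the satellite.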
THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES
African Journals Online (AJOL)
PUBLICATIONS1
and Wait (SW), Selective Repeat (SR), Stutter (ST) and Go-Back-N (GBN) (Lin and Costello, 2003). Combinations of these schemes lead to mixed-mode schemes which include the SR-GBN, SR-ST1 and SR-ST2. In the mixed-mode schemes, when a prescribed number of failures occur in the SR mode, the GBN or ST ...
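The steady-state throughput efficiencies of the three base schemes have standard closed forms under the usual idealized analysis (independent block errors); a sketch, with P the block error probability and N the round-trip delay measured in block durations:

```python
def throughput_sw(P, N):
    """Stop-and-Wait: one block per round trip, kept if error-free."""
    return (1.0 - P) / N

def throughput_gbn(P, N):
    """Go-Back-N: a whole window of N blocks is retransmitted per error."""
    return (1.0 - P) / (1.0 - P + N * P)

def throughput_sr(P):
    """Selective Repeat: only erroneous blocks are retransmitted."""
    return 1.0 - P
```

For any P in (0, 1) and N > 1, SR ≥ GBN ≥ SW; the mixed-mode SR-GBN and SR-ST schemes discussed above interpolate between these bounds while keeping receiver buffering manageable.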
Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme
Energy Technology Data Exchange (ETDEWEB)
Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
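The core of the CIP idea, advecting both the profile and its first derivative and rebuilding a cubic polynomial inside the upwind cell, can be sketched in one dimension for a constant positive velocity on a periodic grid. This is a simplified scalar analogue of the scheme in the abstract, not the Differential Algebraic variant itself; grid and time-step values are illustrative.

```python
import numpy as np

nx, L, u = 100, 1.0, 1.0
dx, dt, steps = L / nx, 0.004, 250           # CFL = 0.4; u*dt*steps = one period
x = dx * np.arange(nx)

f0 = np.exp(-((x - 0.5) / 0.05) ** 2)        # initial profile
f = f0.copy()
g = -2.0 * (x - 0.5) / 0.05 ** 2 * f0        # exact initial derivative

D, xi = -dx, -u * dt                         # upwind cell width, departure offset
for _ in range(steps):
    fup, gup = np.roll(f, 1), np.roll(g, 1)  # periodic upwind neighbours (u > 0)
    # cubic F(X) = a X^3 + b X^2 + g_i X + f_i matching f, g at both cell ends
    a = (g + gup) / D ** 2 + 2.0 * (f - fup) / D ** 3
    b = 3.0 * (fup - f) / D ** 2 - (2.0 * g + gup) / D
    f, g = ((a * xi + b) * xi + g) * xi + f, (3.0 * a * xi + 2.0 * b) * xi + g

err = float(np.max(np.abs(f - f0)))          # profile returns after one period
```

Because the derivative is transported alongside the function, the reconstruction stays sharp where linear or parabolic interpolation would diffuse the profile; the same mechanism carries over to the phase-space advection of the distribution function described above.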
Accurate Numerical Method for Pricing Two-Asset American Put Options
Directory of Open Access Journals (Sweden)
Xianbin Wu
2013-01-01
Full Text Available We develop an accurate finite difference scheme for pricing two-asset American put options. We use the central difference method for space derivatives and the implicit Euler method for the time derivative. Under certain mesh step size limitations, the matrix associated with the discrete operator is an M-matrix, which ensures that the solutions are oscillation-free. We apply the maximum principle to the discrete linear complementarity problem in two mesh sets and derive the error estimates. It is shown that the scheme is second-order convergent with respect to the spatial variables. Numerical results support the theoretical results.
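The M-matrix property that underlies the oscillation-free claim can be checked mechanically: non-positive off-diagonals, positive diagonal, and (here) strictly positive row sums. A one-asset sketch follows; the two-asset case adds cross-derivative terms but the sign tests are the same, and all parameter values are illustrative.

```python
import numpy as np

def implicit_euler_bs_bands(sigma, r, s_max, n, dt):
    """Bands (lower, diag, upper) of I + dt*L_h for the Black-Scholes
    operator L V = -0.5*sigma^2 S^2 V'' - r S V' + r V, central differences."""
    h = s_max / n
    S = h * np.arange(1, n)                       # interior nodes
    lower = dt * (-0.5 * sigma**2 * S**2 / h**2 + r * S / (2 * h))
    upper = dt * (-0.5 * sigma**2 * S**2 / h**2 - r * S / (2 * h))
    diag = 1.0 + dt * (sigma**2 * S**2 / h**2 + r)
    return lower, diag, upper

def is_m_matrix(lower, diag, upper):
    # sign pattern plus positive row sums (strict diagonal dominance here)
    return bool(np.all(lower <= 0) and np.all(upper <= 0)
                and np.all(diag > 0) and np.all(lower + diag + upper > 0))

ok = is_m_matrix(*implicit_euler_bs_bands(0.30, 0.05, 100.0, 100, 0.01))
bad = is_m_matrix(*implicit_euler_bs_bands(0.05, 0.10, 100.0, 100, 0.01))
```

In the second call the drift term dominates diffusion near S = 0 and the lower band turns positive, which is exactly the kind of mesh-step limitation the abstract refers to; refining the mesh or upwinding the first-derivative term restores the sign pattern.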
The Original Management Incentive Schemes
Richard T. Holden
2005-01-01
During the 1990s, the structure of pay for top corporate executives shifted markedly as the use of stock options greatly expanded. By the early 2000s, as the dot-com boom ended and the Nasdaq stock index melted down, these modern executive incentive schemes were being sharply questioned on many grounds—for encouraging excessive risk-taking and a short-run orientation, for being an overly costly and inefficient method of providing incentives, and even for tempting managers of firms like Enron,...
Adaptive Optics Metrics & QC Scheme
Girard, Julien H.
2017-09-01
"There are many Adaptive Optics (AO) fed instruments on Paranal and more to come. To monitor their performance and assess the quality of the scientific data, we have developed a scheme and a set of tools and metrics adapted to each flavour of AO and each data product. Our decision to repeat observations or not depends heavily on this immediate quality control "zero" (QC0). Monitoring of atmospheric parameters can also help predict performance. At the end of the chain, the user must be able to find the data that correspond to his/her needs. In particular, we address the special case of SPHERE."
The mathematics of Ponzi schemes
Artzrouni, Marc
2009-01-01
A first order linear differential equation is used to describe the dynamics of an investment fund that promises more than it can deliver, also known as a Ponzi scheme. The model is based on a promised, unrealistic interest rate; on the actual, realized nominal interest rate; on the rate at which new deposits are accumulated and on the withdrawal rate. Conditions on these parameters are given for the fund to be solvent or to collapse. The model is fitted to data available on Charles...
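A stylized, discretized variant of such a model makes the solvency condition concrete. The parameter names, values, and the Euler time-stepping below are assumptions for illustration, not Artzrouni's exact formulation.

```python
def collapse_time(r_actual, r_promised, inflow, withdraw_frac,
                  b0=100.0, t_max=50.0, dt=0.01):
    """Euler-integrate fund assets B and promised liabilities L;
    return the time at which assets first go negative, else None."""
    B = L = b0
    t = 0.0
    while t < t_max:
        out = withdraw_frac * L                    # withdrawals honour promises
        B += dt * (r_actual * B + inflow - out)    # what the fund really earns
        L += dt * (r_promised * L + inflow - out)  # what it owes on paper
        t += dt
        if B < 0:
            return t
    return None

# promised 50%/yr against an actual 3%/yr: liabilities compound away from assets
ponzi = collapse_time(r_actual=0.03, r_promised=0.50,
                      inflow=10.0, withdraw_frac=0.05)
# promises matching realized returns: assets track liabilities and survive
honest = collapse_time(r_actual=0.03, r_promised=0.03,
                       inflow=10.0, withdraw_frac=0.05)
```

The qualitative conclusion mirrors the paper's: when the promised rate exceeds the realized rate by enough relative to the deposit and withdrawal rates, liabilities grow exponentially faster than assets and the fund collapses in finite time.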
Energy Technology Data Exchange (ETDEWEB)
Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)
2016-06-08
In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid, using dual cells as an intermediate step of the update so as to avoid solving the Riemann problems arising at the cell interfaces. We then apply the numerical scheme and solve a classical drift-flux problem. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the potential of the proposed scheme.
von Hobe, M.; Röckmann, T.|info:eu-repo/dai/nl/304838233; Stroh, F.; et al., [No Value
2013-01-01
The international research project RECONCILE has addressed central questions regarding polar ozone depletion, with the objective to quantify some of the most relevant yet still uncertain physical and chemical processes and thereby improve prognostic modelling capabilities to realistically predict
REMINDER: Saved Leave Scheme (SLS)
2003-01-01
Transfer of leave to saved leave accounts Under the provisions of the voluntary saved leave scheme (SLS), a maximum total of 10 days'* annual and compensatory leave (excluding saved leave accumulated in accordance with the provisions of Administrative Circular No 22B) can be transferred to the saved leave account at the end of the leave year (30 September). We remind you that unused leave of all those taking part in the saved leave scheme at the closure of the leave year accounts is transferred automatically to the saved leave account on that date. Therefore, staff members have no administrative steps to take. In addition, the transfer, which eliminates the risk of omitting to request leave transfers and rules out calculation errors in transfer requests, will be clearly shown in the list of leave transactions that can be consulted in EDH from October 2003 onwards. Furthermore, this automatic leave transfer optimizes staff members' chances of benefiting from a saved leave bonus provided that they ar...
Delivery of a national home safety equipment scheme in England: a survey of local scheme leaders.
Mulvaney, C A; Watson, M C; Hamilton, T; Errington, G
2013-11-01
Unintentional home injuries sustained by preschool children are a major cause of morbidity in the UK. Home safety equipment schemes may reduce home injury rates. In 2009, the Royal Society for the Prevention of Accidents was appointed as central coordinator of a two-year, £18m national home safety equipment scheme in England. This paper reports the findings from a national survey of all scheme leaders responsible for local scheme delivery. A questionnaire mailed to all local scheme leaders sought details of how the schemes were operated locally; barriers and facilitators to scheme implementation; evaluation of the local scheme and its sustainability. A response rate of 73% was achieved. Health visitors and family support workers played a key role in both the identification of eligible families and performing home safety checks. The majority of local scheme leaders (94.6%) reported that they thought their local scheme had been successful in including those families considered 'harder to engage'. Many scheme leaders (72.4%) reported that they had evaluated the provision of safety equipment in their scheme and over half (56.6%) stated that they would not be able to continue the scheme once funding ceased. Local schemes need support to effectively evaluate their scheme and to seek sustainability funding to ensure the future of the scheme. There remains a lack of evidence of whether the provision of home safety equipment reduces injuries in preschool children.
Channel Aggregation Schemes for Cognitive Radio Networks
Lee, Jongheon; So, Jaewoo
This paper proposes three channel aggregation schemes for cognitive radio networks: a constant channel aggregation scheme, a probability-distribution-based variable channel aggregation scheme, and a residual-channel-based variable channel aggregation scheme. A cognitive radio network can achieve a wide bandwidth if unused channels in the primary networks are aggregated. Channel aggregation schemes involve either constant or variable channel aggregation. In this paper, a Markov chain is used to develop an analytical model of the channel aggregation schemes, and the performance of the model is evaluated in terms of the average sojourn time, the average throughput, the forced termination probability, and the blocking probability. Simulation results show that channel aggregation schemes can reduce the average sojourn time of cognitive users by increasing the occupation rate of unused channels in a primary network.
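For the constant-aggregation case, the Markov chain reduces to a birth-death chain: if each cognitive user always aggregates k of the N idle primary channels, at most floor(N/k) users fit and the blocking probability takes an Erlang-B form. This is a deliberately simplified sketch of the paper's more general model (no primary-user dynamics, Poisson arrivals, exponential service).

```python
from math import factorial

def blocking_probability(n_channels, k, offered_load):
    """Erlang-B blocking for cognitive users that each aggregate k of
    n_channels idle channels; offered_load is in Erlangs (lambda/mu)."""
    c = n_channels // k                      # simultaneous users supported
    a = offered_load
    denom = sum(a**j / factorial(j) for j in range(c + 1))
    return (a**c / factorial(c)) / denom
```

Evaluating this for increasing k shows the basic trade-off the paper quantifies: wider aggregation raises per-user bandwidth (shorter sojourn time) but shrinks the number of effective servers and so raises blocking.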
Hybrid scheme for Brownian semistationary processes
DEFF Research Database (Denmark)
Bennedsen, Mikkel; Lunde, Asger; Pakkanen, Mikko S.
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme, and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments.
A New Grünwald-Letnikov Derivative Derived from a Second-Order Scheme
B. A. Jacobs
2015-01-01
A novel derivation of a second-order accurate Grünwald-Letnikov-type approximation to the fractional derivative of a function is presented. This scheme is shown to be second-order accurate under certain modifications to account for poor accuracy in approximating the asymptotic behavior near the lower limit of differentiation. Some example functions are chosen and numerical results are presented to illustrate the efficacy of this new method over some other popular choices ...
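For context, the classical first-order Grünwald-Letnikov approximation that the second-order scheme above improves upon can be written down directly: the weights w_k = (-1)^k C(α, k) follow the recurrence w_k = w_{k-1}(1 − (α+1)/k). The sketch below checks it against a known closed form; it is the base scheme only, without the paper's second-order correction.

```python
import math

def gl_weights(alpha, n):
    """Gruenwald-Letnikov weights w_k = (-1)^k * C(alpha, k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, alpha, t, h):
    """First-order GL fractional derivative of f at t, lower limit 0."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

# D^{1/2} of t^2 at t = 1 has the exact value Gamma(3)/Gamma(2.5)
approx = gl_derivative(lambda t: t * t, 0.5, 1.0, 1e-3)
exact = math.gamma(3.0) / math.gamma(2.5)
```

The O(h) error of this base scheme, and its degradation when f does not vanish smoothly at the lower limit, is precisely what motivates the modified second-order construction in the abstract.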
In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli
Directory of Open Access Journals (Sweden)
Matthew Almond Sochor
2014-07-01
Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA, and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor-DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
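The kind of thermodynamic link between repressor, operator, and effector used in the study can be sketched with a minimal two-state occupancy model. The dissociation constants and Hill coefficient below are placeholders for illustration, not the paper's fitted values, and the effector response is reduced to a simple inactivation term.

```python
def relative_expression(r_total, kd_op, iptg, kd_iptg, n_hill=2.0):
    """Normalized transcription = probability the operator is free.
    IPTG is assumed to inactivate repressor with a Hill-type response;
    all concentrations in molar."""
    active_repressor = r_total / (1.0 + (iptg / kd_iptg) ** n_hill)
    occupancy = active_repressor / (active_repressor + kd_op)
    return 1.0 - occupancy

# placeholder values: 100 nM repressor, 0.1 nM operator Kd, 10 uM IPTG Kd
repressed = relative_expression(1e-7, 1e-10, iptg=0.0, kd_iptg=1e-5)
induced = relative_expression(1e-7, 1e-10, iptg=1e-3, kd_iptg=1e-5)
```

The paper's point can be read off this structure: the predicted expression curve is sharply sensitive to kd_op, so an in vitro affinity that is off by an order of magnitude cannot reproduce in vivo induction data, which is how the weaker repressor-DNA affinity was diagnosed.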
Closed-Loop Autofocus Scheme for Scanning Electron Microscope
Directory of Open Access Journals (Sweden)
Cui Le
2015-01-01
Full Text Available In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal (in-focus) position of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated on a tungsten-gun SEM under various experimental conditions, such as varying raster scan speed and magnification, in real time. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
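The gradient-based sharpness score at the heart of such a closed loop can be sketched with NumPy. The box blur below is a crude stand-in for the SEM's defocus point-spread function (an assumption for the demo); any defocus should lower the score, which the control loop then maximizes by adjusting the focal setting.

```python
import numpy as np

def sharpness(img):
    """Focus score: mean squared intensity gradient (Tenengrad-like)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def box_blur(img, k=3):
    """Crude (2k+1) x (2k+1) box blur with edge padding (mock defocus)."""
    p = np.pad(img.astype(float), k, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * k + 1):          # average of all shifted copies
        for dx in range(2 * k + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * k + 1) ** 2

rng = np.random.default_rng(0)
scene = rng.random((64, 64))             # stand-in for an SEM frame
in_focus = sharpness(scene)
defocused = sharpness(box_blur(scene))
```

A closed-loop autofocus then amounts to an iterative search (e.g., hill climbing) over the focus actuator that re-images the scene and keeps the setting with the highest `sharpness` score.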
Modified unified kinetic scheme for all flow regimes.
Liu, Sha; Zhong, Chengwen
2012-06-01
A modified unified kinetic scheme for the prediction of fluid flow behaviors in all flow regimes is described. The time evolution of macrovariables at the cell interface is calculated with the idea that both free transport and collision mechanisms should be considered. The time evolution of macrovariables is obtained through the conservation constraints. The time evolution of the local Maxwellian distribution is obtained directly through the one-to-one mapping from the evolution of macrovariables. These improvements provide more physical realism in the flow behavior and more accurate numerical results in all flow regimes, especially in the complex transition regime. In addition, the improvements introduce no extra computational complexity.
Tanaka, Satoshi; Yoshikawa, Kohji; Minoshima, Takashi; Yoshida, Naoki
2017-11-01
We develop new numerical schemes for Vlasov–Poisson equations with high-order accuracy. Our methods are based on a spatially monotonicity-preserving (MP) scheme and are modified suitably so that the positivity of the distribution function is also preserved. We adopt an efficient semi-Lagrangian time integration scheme that is more accurate and computationally less expensive than the three-stage TVD Runge–Kutta integration. We apply our spatially fifth- and seventh-order schemes to a suite of simulations of collisionless self-gravitating systems and electrostatic plasma simulations, including linear and nonlinear Landau damping in one dimension and Vlasov–Poisson simulations in a six-dimensional phase space. The high-order schemes achieve a significantly improved accuracy in comparison with the third-order positive-flux-conserved scheme adopted in our previous study. With the semi-Lagrangian time integration, the computational cost of our high-order schemes does not significantly increase, but remains roughly the same as that of the third-order scheme. Vlasov–Poisson simulations on 128³ × 128³ mesh grids have been successfully performed on a massively parallel computer.
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
2018-01-01
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.
Accurate registration of temporal CT images for pulmonary nodules detection
Yan, Jichao; Jiang, Luan; Li, Qiang
2017-02-01
Interpretation of temporal CT images could help the radiologists to detect some subtle interval changes in the sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied in the segmented lung region to obtain global coarse registration images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration images. Thirdly, Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% cases could obtain accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (i.e. blood vessels) and retain the abnormalities (i.e. pulmonary nodules). This would be useful for the screening of lung cancer in our future study.
Lemaire, Gilles; Gastal, François; Franzluebbers, Alan; Chabbi, Abad
2015-11-01
A need to increase agricultural production across the world to ensure continued food security appears to be at odds with the urgency to reduce the negative environmental impacts of intensive agriculture. Around the world, intensification has been associated with massive simplification and uniformity at all levels of organization, i.e., field, farm, landscape, and region. Therefore, we postulate that negative environmental impacts of modern agriculture are due more to production simplification than to inherent characteristics of agricultural productivity. Thus by enhancing diversity within agricultural systems, it should be possible to reconcile high quantity and quality of food production with environmental quality. Intensification of livestock and cropping systems separately within different specialized regions inevitably leads to unacceptable environmental impacts because of the overly uniform land use system in intensive cereal areas and excessive N-P loads in intensive animal areas. The capacity of grassland ecosystems to couple C and N cycles through microbial-soil-plant interactions as a way for mitigating the environmental impacts of intensive arable cropping system was analyzed in different management options: grazing, cutting, and ley duration, in order to minimize trade-offs between production and the environment. We suggest that integrated crop-livestock systems are an appropriate strategy to enhance diversity. Sod-based rotations can temporally and spatially capture the benefits of leys for minimizing environmental impacts, while still maintaining periods and areas of intensive cropping. Long-term experimental results illustrate the potential of such systems to sequester C in soil and to reduce and control N emissions to the atmosphere and hydrosphere.
Davies, Althea L; White, Rehema M
2012-12-15
The challenges of integrated, adaptive and ecosystem management are leading government agencies to adopt participatory modes of engagement. Collaborative governance is a form of participation in which stakeholders co-produce goals and strategies and share responsibilities and resources. We assess the potential and challenges of collaborative governance as a mechanism to provide an integrated, ecosystem approach to natural resource management, using red deer in Scotland as a case study. Collaborative Deer Management Groups offer a well-established example of a 'bridging organisation', intended to reduce costs and facilitate decision making and learning across institutions and scales. We examine who initiates collaborative processes and why, what roles different actors adopt and how these factors influence the outcomes, particularly at a time of changing values, management and legislative priorities. Our findings demonstrate the need for careful consideration of where and how shared responsibility might be best implemented and sustained as state agencies often remain key to the process, despite the partnership intention. Differing interpretations between agencies and landowners of the degree of autonomy and division of responsibilities involved in 'collaboration' can create tension, while the diversity of landowner priorities brings additional challenges for defining shared goals in red deer management and in other cases. Effective maintenance depends on appropriate role allocation and adoption of responsibilities, definition of convergent values and goals, and establishing communication and trust in institutional networks. Options that may help private stakeholders offset the costs of accepting responsibility for delivering public benefits need to be explicitly addressed to build capacity and support adaptation. This study indicates that collaborative governance has the potential to help reconcile statutory obligations with stakeholder empowerment. The potential of
Directory of Open Access Journals (Sweden)
Andreas Reuß
2016-05-01
Full Text Available Traditional participating life insurance contracts with year-to-year (cliquet-style) guarantees have come under pressure in the current situation of low interest rates and volatile capital markets, in particular when priced in a market-consistent valuation framework. In addition, such guarantees lead to rather high capital requirements under risk-based solvency frameworks such as Solvency II or the Swiss Solvency Test (SST). Therefore, insurers in several countries have developed new forms of participating products with alternative (typically weaker and/or lower) guarantees that are less risky for the insurer. In a previous paper, it has been shown that such alternative product designs can lead to higher capital efficiency, i.e., higher and more stable profits and reduced capital requirements. As a result, the financial risk for the insurer is significantly reduced while the main guarantee features perceived and requested by the policyholder are preserved. Based on these findings, this paper now combines the insurer’s and the policyholder’s perspective by analyzing product versions that compensate policyholders for the less valuable guarantees. We particularly identify combinations of asset allocation and profit participation rate for the different product designs that lead to an identical expected profit for the insurer (and identical risk-neutral value for the policyholder), but differ with respect to the insurer’s risk and solvency capital requirements as well as with respect to the real-world return distribution for the policyholder. We show that alternative products can be designed in a way that the insurer’s expected profitability remains unchanged, the insurer’s risk and hence capital requirement is substantially reduced and the policyholder’s expected return is increased. This illustrates that such products might be able to reconcile insurers’ and policyholders’ interests and serve as an alternative to the rather risky
Receiving post-conflict affiliation from the enemy's friend reconciles former opponents.
Directory of Open Access Journals (Sweden)
Roman M Wittig
Full Text Available The adaptive function of bystander initiated post-conflict affiliation (also: consolation & appeasement) has been debated for 30 years. Three influential hypotheses compete for the most likely explanation but have not previously been tested with a single data set. The consolation hypothesis argues that bystander affiliation calms the victim and reduces their stress levels. The self-protection hypothesis proposes that a bystander offers affiliation to either opponent to protect himself from redirected aggression by this individual. The relationship-repair hypothesis suggests a bystander can substitute for a friend to reconcile the friend with the friend's former opponent. Here, we contrasted all three hypotheses and tested their predictions with data on wild chimpanzees (Pan troglodytes verus) of the Taï National Park, Côte d'Ivoire. We examined the first and second post-conflict interactions with respect to both the dyadic and triadic relationships between the bystander and the two opponents. Results showed that female bystanders offered affiliation to their aggressor friends and the victims of their friends, while male bystanders offered affiliation to their victim friends and the aggressors of their friends. For both sexes, bystander affiliation resulted in a subsequent interaction pattern that is expected for direct reconciliation. Bystander affiliation offered to the opponent's friend was more likely to lead to affiliation among opponents in their subsequent interaction. Also, tolerance levels among former opponents were reset to normal levels. In conclusion, this study provides strong evidence for the relationship-repair hypothesis, moderate evidence for the consolation hypothesis and no evidence for the self-protection hypothesis. Furthermore, that bystanders can repair a relationship on behalf of their friend indicates that recipient chimpanzees are aware of the relationships between others, even when they are not kin. This presents a
Schneider, David P.; Deser, Clara
2017-09-01
Recent work suggests that natural variability has played a significant role in the increase of Antarctic sea ice extent during 1979-2013. The ice extent has responded strongly to atmospheric circulation changes, including a deepened Amundsen Sea Low (ASL), which in part has been driven by tropical variability. Nonetheless, this increase has occurred in the context of externally forced climate change, and it has been difficult to reconcile observed and modeled Antarctic sea ice trends. To understand observed-model disparities, this work defines the internally driven and radiatively forced patterns of Antarctic sea ice change and exposes potential model biases using results from two sets of historical experiments of a coupled climate model compared with observations. One ensemble is constrained only by external factors such as greenhouse gases and stratospheric ozone, while the other explicitly accounts for the influence of tropical variability by specifying observed SST anomalies in the eastern tropical Pacific. The latter experiment reproduces the deepening of the ASL, which drives an increase in regional ice extent due to enhanced ice motion and sea surface cooling. However, the overall sea ice trend in every ensemble member of both experiments is characterized by ice loss and is dominated by the forced pattern, as given by the ensemble-mean of the first experiment. This pervasive ice loss is associated with a strong warming of the ocean mixed layer, suggesting that the ocean model does not locally store or export anomalous heat efficiently enough to maintain a surface environment conducive to sea ice expansion. The pervasive upper-ocean warming, not seen in observations, likely reflects ocean mean-state biases.
Reconciling Longitudinal Naive T-Cell and TREC Dynamics during HIV-1 Infection.
Directory of Open Access Journals (Sweden)
Julia Drylewicz
Full Text Available Naive T cells in untreated HIV-1 infected individuals have a reduced T-cell receptor excision circle (TREC) content. Previous mathematical models have suggested that this is due to increased naive T-cell division. It remains unclear, however, how reduced naive TREC contents can be reconciled with a gradual loss of naive T cells in HIV-1 infection. We performed longitudinal analyses in humans before and after HIV-1 seroconversion, and used a mathematical model to investigate which processes could explain the observed changes in naive T-cell numbers and TRECs during untreated HIV-1 disease progression. Both CD4+ and CD8+ naive T-cell TREC contents declined biphasically, with a rapid loss during the first year and a much slower loss during the chronic phase of infection. While naive CD8+ T-cell numbers hardly changed during follow-up, naive CD4+ T-cell counts continually declined. We show that a fine balance between increased T-cell division and loss in the peripheral naive T-cell pool can explain the observed short- and long-term changes in TRECs and naive T-cell numbers, especially if T-cell turnover during the acute phase is more increased than during the chronic phase of infection. Loss of thymic output, on the other hand, does not help to explain the biphasic loss of TRECs in HIV infection. The observed longitudinal changes in TRECs and naive T-cell numbers in HIV-infected individuals are most likely explained by a tight balance between increased T-cell division and death, suggesting that these changes are intrinsically linked in HIV infection.
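The balance described above can be illustrated with a toy two-compartment model (all rates and pool sizes below are illustrative assumptions, not the paper's fitted values): the naive pool N receives thymic output θ, divides at rate p and dies at rate d, while the total TREC count T is fed only by thymic output, because division copies cells but not TRECs.

```python
def simulate(p, d, theta=10.0, n0=1000.0, t0=None, days=2000, dt=0.1):
    """Euler-integrate dN/dt = theta + (p - d)*N and dT/dt = theta - d*T,
    where N is the naive pool size and T its total TREC count."""
    N = n0
    T = theta / d if t0 is None else t0   # default: TRECs at steady state
    for _ in range(int(days / dt)):
        N, T = N + dt * (theta + (p - d) * N), T + dt * (theta - d * T)
    return N, T / N                        # pool size, average TREC content per cell

# Healthy turnover: slow division/death, thymic output balances a small net loss.
n_h, trec_h = simulate(p=0.001, d=0.011)
# Strongly increased division AND death, started from the healthy TREC pool.
n_i, trec_i = simulate(p=0.05, d=0.07, t0=10.0 / 0.011)
```

With these (hypothetical) rates, increased turnover roughly halves the cell count but cuts the average TREC content per cell by far more, reproducing the qualitative point that TREC dilution does not require a loss of thymic output.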
Directory of Open Access Journals (Sweden)
A. V. Mikhailov
2006-10-01
Full Text Available The ionospheric F2-layer parameter long-term trends are considered from the geomagnetic control concept and the greenhouse hypothesis points of view. It is stressed that long-term geomagnetic activity variations are crucial for ionosphere long-term trends, as they determine the basic natural pattern of foF2 and hmF2 long-term variations. The geomagnetic activity effects should be removed from the analyzed data to obtain real trends in ionospheric parameters, but this is not usually done. Thermospheric cooling alone, which is accepted as an explanation for the neutral density decrease, cannot be reconciled with the negative foF2 trends revealed for the same period. A more pronounced decrease of the O/N2 ratio is required, which is not provided by empirical thermospheric models. Thermospheric cooling practically cannot be seen in foF2 trends, due to a weak NmF2 dependence on neutral temperature; therefore, foF2 trends are mainly controlled by geomagnetic activity long-term variations. Long-term hmF2 variations are also controlled by geomagnetic activity variations, as both parameters, NmF2 and hmF2, are related by the F2-layer formation mechanism. But hmF2 is very sensitive to neutral temperature changes, so the strongly damped hmF2 long-term variations observed at Slough after 1972 may be considered a direct manifestation of the thermosphere cooling. Previously revealed negative hmF2 trends in western Europe, where magnetic declination D<0, and positive trends at the eastern stations (D>0) can be related to a westward thermospheric wind whose role has been enhanced due to a competition between the thermosphere cooling (CO2 increase) and its heating under increasing geomagnetic activity after the end of the 1960s.
Scheme of thinking quantum systems
Yukalov, V. I.; Sornette, D.
2009-11-01
A general approach describing quantum decision procedures is developed. The approach can be applied to quantum information processing, quantum computing, creation of artificial quantum intelligence, as well as to analyzing decision processes of human decision makers. Our basic point is to consider an active quantum system possessing its own strategic state. Processing information by such a system is analogous to the cognitive processes associated with decision making by humans. The algebra of probability operators, associated with the possible options available to the decision maker, plays the role of the algebra of observables in quantum theory of measurements. A scheme is advanced for a practical realization of decision procedures by thinking quantum systems. Such thinking quantum systems can be realized by using spin lattices, systems of magnetic molecules, cold atoms trapped in optical lattices, ensembles of quantum dots, or multilevel atomic systems interacting with electromagnetic field.
Fragment separator momentum compression schemes
Energy Technology Data Exchange (ETDEWEB)
Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)
2011-07-21
We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.
Electrical injection schemes for nanolasers
DEFF Research Database (Denmark)
Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten
2013-01-01
The performance of injection schemes among recently demonstrated electrically pumped photonic crystal nanolasers has been investigated numerically. The computation has been carried out at room temperature using commercial semiconductor simulation software. For the simulations two electrical … of 3 InGaAsP QWs on an InP substrate has been chosen for the modeling. In the simulations the main focus is on the electrical and optical properties of the nanolasers, i.e. electrical resistance, threshold voltage, threshold current and wallplug efficiency. In the current flow evaluation the lowest threshold current has been achieved with the lateral electrical injection through the BH, while the lowest resistance has been obtained from the current post structure, even though this model shows a higher current threshold because of the lack of carrier confinement. Final scope of the simulations …
Accurate test limits under prescribed consumer risk
Albers, Willem/Wim; Arts, G.R.J.; Kallenberg, W.C.M.
1997-01-01
Measurement errors occurring during inspection of manufactured parts force producers to replace specification limits by slightly more strict test limits. Here accurate test limits are presented which maximize the yield while limiting the fraction of defectives reaching the consumer.
A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.
Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D
2014-02-01
In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants.
Which quantum theory must be reconciled with gravity? (And what does it mean for black holes?)
Lake, Matthew J
2016-01-01
We consider the nature of quantum properties in non-relativistic quantum mechanics (QM) and relativistic QFTs, and examine the connection between formal quantization schemes and intuitive notions of wave-particle duality. Based on the map between classical Poisson brackets and their associated commutators, such schemes give rise to quantum states obeying canonical dispersion relations, obtained by substituting the de Broglie relations into the relevant (classical) energy-momentum relation. In canonical QM, this yields a dispersion relation involving $\\hbar$ but not $c$, whereas the canonical relativistic dispersion relation involves both. Extending this logic to the canonical quantization of the gravitational field gives rise to loop quantum gravity, and a map between classical variables containing $G$ and $c$, and associated commutators involving $\\hbar$. This naturally defines a "wave-gravity duality", suggesting that a quantum wave packet describing {\\it self-gravitating matter} obeys a dispersion relation...
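The canonical substitution described above can be written out explicitly. Inserting the de Broglie relations $E = \hbar\omega$ and $p = \hbar k$ into each classical energy-momentum relation gives:

```latex
% Non-relativistic (canonical QM): E = p^2/2m  =>  involves \hbar but not c
\omega(k) = \frac{\hbar k^2}{2m}

% Relativistic: E^2 = p^2 c^2 + m^2 c^4  =>  involves both \hbar and c
\omega^2(k) = c^2 k^2 + \frac{m^2 c^4}{\hbar^2}
```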
Pötz, Walter
2017-11-01
A single-cone finite-difference lattice scheme is developed for the (2+1)-dimensional Dirac equation in the presence of general electromagnetic textures. The latter is represented on a (2+1)-dimensional staggered grid using a second-order-accurate finite difference scheme. A Peierls-Schwinger substitution to the wave function is used to introduce the electromagnetic (vector) potential into the Dirac equation. Thereby, the single-cone energy dispersion and gauge invariance are carried over from the continuum to the lattice formulation. Conservation laws and stability properties of the formal scheme are identified by comparison with the scheme for zero vector potential. The placement of magnetization terms is inferred from consistency with the one for the vector potential. Based on this formal scheme, several numerical schemes are proposed and tested. Elementary examples for single-fermion transport in the presence of in-plane magnetization are given, using material parameters typical for topological insulator surfaces.
An Adaptive Semi-Implicit Scheme for Simulations of Unsteady Viscous Compressible Flows
Steinthorsson, Erlendur; Modiano, David; Crutchfield, William Y.; Bell, John B.; Colella, Phillip
1995-01-01
A numerical scheme for simulation of unsteady, viscous, compressible flows is considered. The scheme employs an explicit discretization of the inviscid terms of the Navier-Stokes equations and an implicit discretization of the viscous terms. The discretization is second order accurate in both space and time. Under appropriate assumptions, the implicit system of equations can be decoupled into two linear systems of reduced rank. These are solved efficiently using a Gauss-Seidel method with multigrid convergence acceleration. When coupled with a solution-adaptive mesh refinement technique, the hybrid explicit-implicit scheme provides an effective methodology for accurate simulations of unsteady viscous flows. The methodology is demonstrated for both body-fitted structured grids and for rectangular (Cartesian) grids.
On Optimal Designs of Some Censoring Schemes
Directory of Open Access Journals (Sweden)
Dr. Adnan Mohammad Awad
2016-03-01
Full Text Available The main objective of this paper is to explore the suitability of some entropy-information measures for introducing a new optimality criterion for censoring and to apply it to some censoring schemes from some underlying life-time models. In addition, the paper investigates four related issues, namely: the effect of the parameter of the parent distribution on the optimal scheme; the equivalence of schemes based on Shannon and Awad sup-entropy measures; the conjecture that the optimal scheme is a one-stage scheme; and a conjecture by Cramer and Bagh (2011) about Shannon minimum and maximum schemes when the parent distribution is reflected power. Guidelines for designing an optimal censoring plan are reported together with theoretical and numerical results and illustrations.
Improved Load Shedding Scheme considering Distributed Generation
DEFF Research Database (Denmark)
Das, Kaushik; Nitsas, Antonios; Altin, Müfit
2017-01-01
With high penetration of distributed generation (DG), conventional under-frequency load shedding (UFLS) faces many challenges and may not perform as expected. This article proposes new UFLS schemes designed to overcome the shortcomings of traditional load shedding schemes. These schemes utilize directional relays, power flow through feeders, and wind and PV measurements to optimally select the feeders to be disconnected during load shedding, such that DG disconnection is minimized while the required amount of consumption is disconnected. The different UFLS schemes are compared in terms…
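The selection idea can be sketched as a greedy ranking (a hypothetical simplification, not the paper's optimization: feeder names, loads and DG values below are invented): shed feeders carrying the least DG first, until the disconnected net consumption covers the required relief.

```python
def select_feeders(feeders, required_mw):
    """Greedy pick: shed feeders with the lowest DG fraction first, until the
    net relief (load minus embedded DG) covers the required amount."""
    # feeders: list of (name, load_mw, dg_mw) with load_mw > 0
    ranked = sorted(feeders, key=lambda f: f[2] / f[1])   # DG fraction, ascending
    shed, relief = [], 0.0
    for name, load, dg in ranked:
        if relief >= required_mw:
            break
        shed.append(name)
        relief += load - dg         # disconnecting the feeder also loses its DG
    return shed, relief

feeders = [("A", 10.0, 6.0), ("B", 8.0, 0.5), ("C", 12.0, 3.0), ("D", 5.0, 0.0)]
shed, relief = select_feeders(feeders, required_mw=12.0)
```

The paper's schemes additionally use directional relay information and measured power flows; this sketch only captures the objective of minimizing DG disconnection.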
Resonance ionization scheme development for europium
Chrysalidis, K; Fedosseev, V N; Marsh, B A; Naubereit, P; Rothe, S; Seiffert, C; Kron, T; Wendt, K
2017-01-01
Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.
SIGNCRYPTION BASED ON DIFFERENT DIGITAL SIGNATURE SCHEMES
Adrian Atanasiu; Laura Savu
2012-01-01
This article presents two new signcryption schemes. The first is based on the Schnorr digital signature algorithm and the second uses the proxy signature scheme introduced by Mambo. Schnorr signcryption has been implemented in a program, and the steps of the algorithm, the results, and some examples are provided. Mambo's proxy signature is adapted for the Shortened Digital Signature Standard, becoming part of a new proxy signcryption scheme.
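The Schnorr building block can be sketched as follows. The group parameters are deliberately tiny toy values for illustration (a 23-element group is hopelessly insecure); real deployments use large prime-order groups and a fresh random nonce k per message.

```python
import hashlib

# Toy Schnorr group: p = 2q + 1 with q prime, g generates the order-q subgroup.
P, Q, G = 23, 11, 2

def H(r: int, msg: bytes) -> int:
    """Hash the commitment r together with the message, reduced mod Q."""
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def sign(x: int, msg: bytes, k: int):
    """Schnorr signature (e, s) under secret key x; k must be fresh per message."""
    r = pow(G, k, P)                 # commitment g^k
    e = H(r, msg)
    s = (k + x * e) % Q
    return e, s

def verify(y: int, msg: bytes, sig) -> bool:
    """Check against public key y = g^x: g^s * y^(-e) reconstructs r."""
    e, s = sig
    r = (pow(G, s, P) * pow(y, -e, P)) % P
    return H(r, msg) == e
```

Signcryption then combines such a signature with encryption in one pass; the sketch above covers only the signature half.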
Optics design for CEPC double ring scheme
Wang, Yiwei; Su, Feng; Bai, Sha; Yu, Chenghui; Gao, Jie
2017-12-01
The CEPC is a future Circular Electron and Positron Collider proposed by China, mainly to study the Higgs boson. Its baseline design is a double ring scheme and an alternative design is a partial double ring scheme. This paper presents the optics design for the main ring of the double ring scheme. The CEPC will also work as a W and Z factory. Compatible optics designs for the W and Z modes are presented as well.
Wavelet Denoising within the Lifting Scheme Framework
Directory of Open Access Journals (Sweden)
M. P. Paskaš
2012-11-01
Full Text Available In this paper, we consider an example of the lifting scheme and present the results of the simple lifting scheme implementation using lazy transform. The paper is tutorial-oriented. The results are obtained by testing several common test signals for the signal denoising problem and using different threshold values. The lifting scheme represents an effective and flexible tool that can be used for introducing signal dependence into the problem by improving the wavelet properties.
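The split/predict/update steps of the lifting scheme can be made concrete with the Haar case (a minimal sketch; the paper's examples use other lifting filters): the lazy transform splits the signal into even and odd samples, the predict step stores how each odd sample deviates from its even neighbour, and the update step corrects the evens so that they carry the pairwise means.

```python
def haar_lift(signal):
    """One level of the Haar lifting scheme: lazy split, predict, update."""
    evens, odds = signal[0::2], signal[1::2]               # lazy wavelet transform
    details = [o - e for e, o in zip(evens, odds)]         # predict: odd ~ even
    approx = [e + d / 2 for e, d in zip(evens, details)]   # update: keep the means
    return approx, details

def haar_unlift(approx, details):
    """Invert the lifting steps in reverse order: perfect reconstruction."""
    evens = [a - d / 2 for a, d in zip(approx, details)]
    odds = [e + d for e, d in zip(evens, details)]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out
```

Denoising then amounts to thresholding the detail coefficients before inverting; because every lifting step is trivially invertible, reconstruction is exact whatever filters are used.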
Application of the GRP scheme to open channel flow equations
Birman, A.; Falcovitz, J.
2007-03-01
The GRP (generalized Riemann problem) scheme, originally conceived for gasdynamics, is reformulated for the numerical integration of the shallow water equations in channels of rectangular cross-section, variable width and bed profile, including a friction model for the fluid-channel shear stress. This scheme is a second-order analytic extension of the first-order Godunov scheme, based on time-derivatives of flow variables at cell-interfaces resulting from piecewise-linear data reconstruction in cells. The second-order time-integration is based on solutions to generalized Riemann problems at cell-interfaces, thus accounting for the full governing equations, including source terms. The source term due to variable bed elevation is treated in a well-balanced way so that quiescent flow is exactly replicated; this is done by adopting the Surface Gradient Method (SGM). Several problems of steady or unsteady open channel flow are considered, including the terms corresponding to variable channel width and bed elevation, as well as to shear stress at the fluid-channel interface (using the Manning friction model). In all these examples remarkable agreement is obtained between the numerical integration and the exact or accurate solutions.
Mixed ultrasoft/norm-conserved pseudopotential scheme
DEFF Research Database (Denmark)
Stokbro, Kurt
1996-01-01
A variant of the Vanderbilt ultrasoft pseudopotential scheme, where the norm conservation is released for only one or a few angular channels, is presented. Within this scheme some difficulties of the truly ultrasoft pseudopotentials are overcome without sacrificing the pseudopotential softness: (i) ghost states are easily avoided without including semicore shells; (ii) the ultrasoft pseudo-charge-augmentation functions can be made softer; (iii) the number of nonlocal operators is reduced. The scheme will be most useful for transition metals, and the feasibility and accuracy of the scheme…
Central schemes for open-channel flow
Gottardi, Guido; Venutelli, Maurizio
2003-03-01
The resolution of the Saint-Venant equations for modelling shock phenomena in open-channel flow by using the second-order central schemes of Nessyahu and Tadmor (NT) and Kurganov and Tadmor (KT) is presented. The performances of the two schemes that we have extended to the non-homogeneous case and that of the classical first-order Lax-Friedrichs (LF) scheme in predicting dam-break and hydraulic jumps in rectangular open channels are investigated on the basis of different numerical and physical conditions. The efficiency and robustness of the schemes are tested by comparing model results with analytical or experimental solutions.
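The first-order Lax-Friedrichs baseline used for comparison can be sketched for a dam-break in a frictionless, horizontal rectangular channel (grid, depths and CFL number below are arbitrary illustrative choices):

```python
import numpy as np

g = 9.81
N, dx = 200, 0.01                                  # 2 m channel, dam at the midpoint
h = np.where(np.arange(N) < N // 2, 2.0, 1.0)      # depth (m): 2 upstream, 1 downstream
q = np.zeros(N)                                    # discharge h*u, fluid at rest

def flux(h, q):
    """Saint-Venant fluxes for a rectangular channel: (hu, hu^2 + g h^2 / 2)."""
    return q, q**2 / h + 0.5 * g * h**2

for _ in range(100):                               # stop before waves reach the ends
    dt = 0.4 * dx / (np.abs(q / h) + np.sqrt(g * h)).max()   # CFL condition
    H = np.pad(h, 1, mode="edge")                  # transmissive ghost cells
    Q = np.pad(q, 1, mode="edge")
    F1, F2 = flux(H, Q)
    h = 0.5 * (H[:-2] + H[2:]) - dt / (2 * dx) * (F1[2:] - F1[:-2])
    q = 0.5 * (Q[:-2] + Q[2:]) - dt / (2 * dx) * (F2[2:] - F2[:-2])
```

The LF averaging makes the scheme robust but very diffusive across the bore and the rarefaction; the NT and KT central schemes in the paper sharpen exactly this behaviour at second order.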
Numerical study of read scheme in one-selector one-resistor crossbar array
Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin
2015-12-01
A comprehensive numerical circuit analysis of read schemes of a one-selector one-resistance-change-memory (1S1R) crossbar array is carried out. Three schemes, the ground, V/2, and V/3 schemes, are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
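The bias maps of the V/2 and V/3 schemes can be sketched under strong simplifying assumptions (ideal drivers, zero line resistance, linear unselected cells of resistance r_off; the paper's iteration additionally captures line-resistance effects, which this sketch omits):

```python
import numpy as np

def half_select_power(M, N, Vread, r_off, scheme="V/2"):
    """Static sneak-path power in an idealized M x N 1S1R crossbar.

    In the V/2 scheme only cells sharing the selected word or bit line are
    half-biased; in the V/3 scheme every unselected cell sees |V|/3.
    The selected cell at (0, 0) is excluded: its current is the read signal.
    """
    V = np.zeros((M, N))
    if scheme == "V/2":
        V[0, :] = Vread / 2.0          # selected word line, unselected bit lines
        V[:, 0] = Vread / 2.0          # unselected word lines, selected bit line
    else:                              # "V/3": sign differs per cell, power does not
        V[:, :] = Vread / 3.0
    V[0, 0] = 0.0
    return np.sum(V**2) / r_off
```

For a 32 x 32 array this reproduces the familiar trade-off: V/3 lowers the disturb voltage on half-selected cells but biases every cell in the array, so its static power exceeds that of the V/2 scheme.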
Haggard, M P; Wood, E J; Carroll, S
1984-08-01
for medical assessment and treatment of pathology would still require admittance methods at some stage, whatever the drawbacks may be in their sole use as a screen. The two-test sequential approach reconciles the pathology perspective and the disability perspective.
Reconciling Weak Interior Mixing and Abyssal Recipes via Concentrated Boundary Mixing
Yang, X.; Miller, M. D.; Tziperman, E.
2016-12-01
…O(10⁻⁴) m² s⁻¹. Therefore our result can reconcile the interior mid-depth temperature stratification both with Munk's theory, which would require high diapycnal diffusivity, and with observations that the interior diapycnal diffusivity is only O(10⁻⁵) m² s⁻¹.
Agricultural ammonia emissions in China: reconciling bottom-up and top-down estimates
Zhang, Lin; Chen, Youfan; Zhao, Yuanhong; Henze, Daven K.; Zhu, Liye; Song, Yu; Paulot, Fabien; Liu, Xuejun; Pan, Yuepeng; Lin, Yi; Huang, Binxiang
2018-01-01
Current estimates of agricultural ammonia (NH3) emissions in China differ by more than a factor of 2, hindering our understanding of their environmental consequences. Here we apply both bottom-up statistical and top-down inversion methods to quantify NH3 emissions from agriculture in China for the year 2008. We first assimilate satellite observations of NH3 column concentration from the Tropospheric Emission Spectrometer (TES) using the GEOS-Chem adjoint model to optimize Chinese anthropogenic NH3 emissions at the 1/2° × 2/3° horizontal resolution for March-October 2008. Optimized emissions show a strong summer peak, with emissions about 50 % higher in summer than spring and fall, which is underestimated in current bottom-up NH3 emission estimates. To reconcile the latter with the top-down results, we revisit the processes of agricultural NH3 emissions and develop an improved bottom-up inventory of Chinese NH3 emissions from fertilizer application and livestock waste at the 1/2° × 2/3° resolution. Our bottom-up emission inventory includes more detailed information on crop-specific fertilizer application practices and better accounts for meteorological modulation of NH3 emission factors in China. We find that annual anthropogenic NH3 emissions are 11.7 Tg for 2008, with 5.05 Tg from fertilizer application and 5.31 Tg from livestock waste. The two sources together account for 88 % of total anthropogenic NH3 emissions in China. Our bottom-up emission estimates also show a distinct seasonality peaking in summer, consistent with top-down results from the satellite-based inversion. Further evaluations using surface network measurements show that the model driven by our bottom-up emissions reproduces the observed spatial and seasonal variations of NH3 gas concentrations and ammonium (NH4+) wet deposition fluxes over China well, providing additional credibility to the improvements we have made to our agricultural NH3 emission inventory.
The challenge of reconciling development objectives in the context of demographic change
Directory of Open Access Journals (Sweden)
John Provo
2011-04-01
Full Text Available This paper considers whether the US Appalachian Regional Commission (ARC) Asset-Based Development Initiative (ABDI) reconciles economic development objectives in communities experiencing demographic change. Through a case study approach utilizing key informant interviews in Southwest Virginia communities and a review of ARC-funded projects, the authors consider two main questions. Did community leadership change or adapt to the program? Were new projects demonstrably different in objectives, content, or outcomes from past projects? Economic and demographic similarities between Alpine and Appalachian communities, particularly in the role of in-migrants, suggest that this study's findings will be relevant for other mountain regions and could contribute to a conversation among international scholars of mountain development.
Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A
2012-01-01
Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts
Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion
Cui, Xia; Yuan, Guang-wei; Shen, Zhi-jun
2016-05-01
Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis on time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in one and two dimensions. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are consequently acquired. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property is shown for these schemes and, furthermore, for the fully discrete schemes. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence that theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion.
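The asymmetric second-order boundary approximations mentioned above can be illustrated with the standard one-sided three-point difference formula; this is a generic sketch of the idea, not the authors' flux-limiter discretization:

```python
import numpy as np

def one_sided_derivative(f, x0, h):
    """Asymmetric (one-sided) second-order approximation of f'(x0) using
    x0, x0+h, x0+2h, the standard choice at a boundary where no point to
    the left of x0 is available."""
    return (-3.0 * f(x0) + 4.0 * f(x0 + h) - f(x0 + 2.0 * h)) / (2.0 * h)

# Convergence check on f = sin: halving h should shrink the error ~4x
f, x0 = np.sin, 0.3
errors = [abs(one_sided_derivative(f, x0, h) - np.cos(x0))
          for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print([round(r, 2) for r in ratios])  # both ratios close to 4
```

The error ratio near 4 under halving of h is the signature of second-order accuracy, despite the stencil using no points on the far side of the boundary.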
SPI Based Meteorological Drought Assessment over a Humid Basin: Effects of Processing Schemes
Directory of Open Access Journals (Sweden)
Han Zhou
2016-08-01
Full Text Available Meteorological drought monitoring is important for drought early warning and disaster prevention. Regional meteorological drought can be evaluated and analyzed with the standardized precipitation index (SPI). Two main processing schemes are frequently adopted: (1) the mean of SPI values calculated from precipitation at individual stations (SPI-mean); and (2) SPI calculated from all-station averaged precipitation (precipitation-mean). It remains unclear whether the two processing schemes make a difference in drought assessment, which is of significance for reliable drought monitoring. Taking the Poyang Lake Basin, with monthly precipitation recorded by 13 national stations for 1957–2014, as a case study, this paper examined the two processing schemes. The precipitation mean and SPI mean were each calculated with the Thiessen polygon weighting approach. Our results showed that the two SPI series constructed from the two schemes had similar features and monitoring trends for regional meteorological droughts. Both SPI series had a significantly positive correlation (p < 0.005) with the number of precipitation stations. The precipitation-mean scheme reduced the extent of precipitation extremes and made the precipitation data more clustered to a certain extent; it made low-precipitation values deviate farther from the precipitation-mean series when little precipitation occurred across the region, which could change the assigned drought levels. Alternatively, the SPI-mean scheme accurately highlighted the extremes, especially those with a wide spatial distribution over the region. Therefore, for regional meteorological drought monitoring, the SPI-mean scheme is recommended for its more suitable assessment of historical droughts.
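The difference between the two processing schemes is purely one of operation order: standardize then average, versus average then standardize. A minimal sketch (using a plain z-score as a stand-in for the gamma-distribution-based SPI, with made-up station weights and synthetic precipitation) shows that the two orders give correlated but non-identical series:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly precipitation at 3 stations over 20 years, with
# made-up Thiessen polygon weights (all values are illustrative).
weights = np.array([0.5, 0.3, 0.2])
precip = rng.gamma(shape=2.0, scale=40.0, size=(240, 3))

def standardize(x):
    """Plain z-score, used here as a stand-in for the gamma-based SPI."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Scheme 1 (SPI-mean): standardize each station, then weighted-average
index_mean = standardize(precip) @ weights
# Scheme 2 (precipitation-mean): weighted-average first, then standardize
precip_mean_index = standardize(precip @ weights)

# The two series are strongly correlated but not identical: the order of
# averaging and standardization matters, which is the crux of the comparison.
r = np.corrcoef(index_mean, precip_mean_index)[0, 1]
print(round(float(r), 3))
```

The residual disagreement between the two series grows when station variances differ, which is exactly the situation in which the choice of scheme can change assigned drought levels.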
Jaffé, Rodolfo; Prous, Xavier; Zampaulo, Robson; Giannini, Tereza C; Imperatriz-Fonseca, Vera L; Maurity, Clóvis; Oliveira, Guilherme; Brandi, Iuri V; Siqueira, José O
2016-01-01
Caves pose significant challenges for mining projects, since they harbor many endemic and threatened species, and must therefore be protected. Recent discussions between academia, environmental protection agencies, and industry partners, have highlighted problems with the current Brazilian legislation for the protection of caves. While the licensing process is long, complex and cumbersome, the criteria used to assign caves into conservation relevance categories are often subjective, with relevance being mainly determined by the presence of obligate cave dwellers (troglobites) and their presumed rarity. However, the rarity of these troglobitic species is questionable, as most remain unidentified to the species level and their habitats and distribution ranges are poorly known. Using data from 844 iron caves retrieved from different speleology reports for the Carajás region (South-Eastern Amazon, Brazil), one of the world's largest deposits of high-grade iron ore, we assess the influence of different cave characteristics on four biodiversity proxies (species richness, presence of troglobites, presence of rare troglobites, and presence of resident bat populations). We then examine how the current relevance classification scheme ranks caves with different biodiversity indicators. Large caves were found to be important reservoirs of biodiversity, so they should be prioritized in conservation programs. Our results also reveal spatial autocorrelation in all the biodiversity proxies assessed, indicating that iron caves should be treated as components of a cave network immersed in the karst landscape. Finally, we show that by prioritizing the conservation of rare troglobites, the current relevance classification scheme is undermining overall cave biodiversity and leaving ecologically important caves unprotected. We argue that conservation efforts should target subterranean habitats as a whole and propose an alternative relevance ranking scheme, which could help simplify the
Engwirda, Darren; Marshall, John
2016-01-01
The development of a set of high-order accurate finite-volume formulations for evaluation of the pressure gradient force in layered ocean models is described. A pair of new schemes are presented, both based on an integration of the contact pressure force about the perimeter of an associated momentum control-volume. The two proposed methods differ in their choice of control-volume geometries. High-order accurate numerical integration techniques are employed in both schemes to account for non-linearities in the underlying equation-of-state definitions and thermodynamic profiles, and an associated vertical interpolation and quadrature scheme is described in detail. Numerical experiments are used to confirm the consistency of the two formulations, and it is demonstrated that the new methods maintain hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer-wise geometry, non-linear equation-of-state definitions and non-uniform vertical stratification profiles. Additionally, one...
Energy Technology Data Exchange (ETDEWEB)
Thompson, Kelly Glen [Texas A & M Univ., College Station, TX (United States)
2000-11-01
In this work, we develop a new spatial discretization scheme that may be used to numerically solve the neutron transport equation. This new discretization extends the family of corner balance spatial discretizations to include spatial grids of arbitrary polyhedra. This scheme enforces balance on subcell volumes called corners. It produces a lower triangular matrix for sweeping, is algebraically linear, is non-negative in a source-free absorber, and produces a robust and accurate solution in thick diffusive regions. Using an asymptotic analysis, we design the scheme so that in thick diffusive regions it will attain the same solution as an accurate polyhedral diffusion discretization. We then refine the approximations in the scheme to reduce numerical diffusion in vacuums, and we attempt to capture a second order truncation error. After we develop this Upstream Corner Balance Linear (UCBL) discretization we analyze its characteristics in several limits. We complete a full diffusion limit analysis showing that we capture the desired diffusion discretization in optically thick and highly scattering media. We review the upstream and linear properties of our discretization and then demonstrate that our scheme captures strictly non-negative solutions in source-free purely absorbing media. We then demonstrate the minimization of numerical diffusion of a beam and then demonstrate that the scheme is, in general, first order accurate. We also note that for slab-like problems our method actually behaves like a second-order method over a range of cell thicknesses that are of practical interest. We also discuss why our scheme is first order accurate for truly 3D problems and suggest changes in the algorithm that should make it a second-order accurate scheme. Finally, we demonstrate 3D UCBL's performance on several very different test problems. We show good performance in diffusive and streaming problems. We analyze truncation error in a 3D problem and demonstrate robustness
Nonstandard finite difference schemes for differential equations
Directory of Open Access Journals (Sweden)
Mohammad Mehdizadeh Khalsaraei
2014-12-01
Full Text Available In this paper, the renormalization of the denominator of the discrete derivative and nonlocal approximation of nonlinear terms are used in the design of nonstandard finite difference schemes (NSFDs). Numerical examples confirming the efficiency of the schemes for some differential equations are provided. In order to illustrate the accuracy of the new NSFDs, the numerical results are compared with those of standard methods.
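A classic example of the nonstandard-denominator idea is Mickens' exact scheme for the linear decay equation, where replacing the step size h by phi(h) = (1 - exp(-lambda*h))/lambda removes the discretization error entirely (an illustrative textbook sketch, not a scheme from the paper itself):

```python
import math

def nsfd_decay(y0, lam, h, steps):
    """Nonstandard finite difference scheme for dy/dt = -lam*y.
    The discrete-derivative denominator h is replaced by
    phi(h) = (1 - exp(-lam*h))/lam, which makes the forward-difference
    update exact for this linear equation (Mickens' construction)."""
    phi = (1.0 - math.exp(-lam * h)) / lam
    y = y0
    for _ in range(steps):
        y = y - lam * phi * y  # forward difference with nonstandard denominator
    return y

lam, h, steps = 2.0, 0.5, 10
approx = nsfd_decay(1.0, lam, h, steps)
exact = math.exp(-lam * h * steps)
print(abs(approx - exact))  # agrees with the exact solution to machine precision
```

Since 1 - lam*phi = exp(-lam*h), each step reproduces the exact decay factor, which is why the scheme stays accurate even for step sizes where standard forward Euler would be unstable.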
Privacy Preserving Mapping Schemes Supporting Comparison
Tang, Qiang
2010-01-01
To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. A PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >
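The comparison property can be illustrated with a toy strictly increasing pseudorandom mapping. Note this is only a didactic sketch under made-up parameters; it does not satisfy the formal security notions of an actual PPM or order-preserving encryption scheme:

```python
import hashlib
import random

def build_order_preserving_map(domain_size, key, max_gap=1000):
    """Toy order-preserving mapping: each plaintext i in [0, domain_size)
    gets a strictly increasing pseudorandom image, so a party holding only
    the images can still evaluate <, =, > between the underlying data items.
    NOTE: a didactic sketch only; real PPM/OPE constructions come with
    formal security definitions that this toy does not meet."""
    rng = random.Random(int(hashlib.sha256(key).hexdigest(), 16))
    image, images = 0, []
    for _ in range(domain_size):
        image += rng.randint(1, max_gap)  # strictly positive increments
        images.append(image)
    return images

ppm = build_order_preserving_map(100, b"hypothetical-key")
# Comparisons on images agree with comparisons on the plaintexts:
print((ppm[17] < ppm[42]) == (17 < 42), (ppm[90] > ppm[89]) == (90 > 89))
```

Because the increments are strictly positive, the mapping is monotone by construction, which is the minimal property any comparison-supporting mapping must provide.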
Sampling scheme optimization from hyperspectral data
Debba, P.
2006-01-01
This thesis presents statistical sampling scheme optimization for geo-environmental purposes on the basis of hyperspectral data. It integrates derived products of the hyperspectral remote sensing data into individual sampling schemes. Five different issues are dealt with. First, the optimized
Consolidation of the health insurance scheme
Association du personnel
2009-01-01
In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).
A hierarchical classification scheme of psoriasis images
DEFF Research Database (Denmark)
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...
Mobile Machine for E-payment Scheme
Sattar J Aboud
2010-01-01
In this article, an e-payment scheme using a mobile machine is considered. The requirements for a mobile machine in e-payment are presented, particularly when the merchant employs accounts for payments and when using advertising. The proposed scheme uses a public key infrastructure and certificates to authenticate the purchaser and the merchant and to secure the communication between them.
THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES
African Journals Online (AJOL)
...information transmitted in digital communication systems. Such schemes ... Department of Electronics and Telecommunications Engineering, University of Dar es Salaam. ... Kundaeli, H. N. (2013). Throughput-Delay Analysis of the SR-ST-GBN ARQ Scheme. Mediterranean Journal of Electronics and Communication, (2): 165-176.
Sengupta, Arkajyoti; Raghavachari, Krishnan
2014-10-14
Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, a systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium-sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals with diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium-sized radicals for which the accuracy of the available heats of formation is less well known.
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency
Accurate automatic profile monitoring. Genaue automatische Profilkontrolle
Energy Technology Data Exchange (ETDEWEB)
Sacher, F. (Amberg Messtechnik AG (Germany))
1994-06-09
It is almost inconceivable that the present tunnelling methods will not employ modern surveying and monitoring technologies. Accurate, automatic profile monitoring is an aid to optimization of construction work in technical, financial and scheduling respects. These aspects are explained in more detail on the basis of a description of use, various practical examples and a cost analysis. (orig.)
Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.
2015-01-01
The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental and computational study allowed us to determine accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739
Directory of Open Access Journals (Sweden)
S. Szopa
2005-01-01
Full Text Available The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper, Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes. The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350,000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species, and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
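The second reduction method, grouping primary species of similar reactivity into surrogates, can be sketched as a simple binning of OH rate constants. The values and the grouping rule below are illustrative assumptions, not those of the actual reference mechanism:

```python
import collections
import math

# Hypothetical OH rate constants (cm^3 molecule^-1 s^-1): illustrative
# values only, not those used in the paper's reference mechanism.
k_oh = {
    "ethane": 2.5e-13, "propane": 1.1e-12, "n-butane": 2.4e-12,
    "ethene": 8.5e-12, "propene": 2.6e-11, "isoprene": 1.0e-10,
}

def group_into_surrogates(rates):
    """Lump primary species whose OH rate constants fall in the same
    order-of-magnitude bin into one surrogate species, a crude version
    of the reactivity-based grouping described above."""
    bins = collections.defaultdict(list)
    for name, k in rates.items():
        bins[round(math.log10(k))].append(name)
    return {f"SURR_{i}": members
            for i, (_, members) in enumerate(sorted(bins.items()))}

surrogates = group_into_surrogates(k_oh)
print(surrogates)  # the six species collapse into fewer surrogates
```

Real lumping schemes weight surrogates by emission rates and adjust stoichiometry so the reduced mechanism conserves reacted carbon; the sketch shows only the clustering step.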
1981-03-25
AD-A098 635. JAYCOR, Alexandria, VA. Development of a PBL Parameterization Scheme for the Tropical Cyclone Model (final report, 9/07/79 - 9/08/80; S. A. Chang and C. Agritellis). ... In a one-dimensional, multi-layer PBL model, the GST parameterization yields accurate moisture fluxes, but slightly overestimates the momentum flux and
Liu, Jianjun; Zhang, Feimin; Pu, Zhaoxia
2017-04-01
Accurate forecasting of the intensity changes of hurricanes is an important yet challenging problem in numerical weather prediction. The rapid intensification of Hurricane Katrina (2005) before its landfall in the southern US is studied with the Advanced Research version of the WRF (Weather Research and Forecasting) model. The sensitivity of numerical simulations to two popular planetary boundary layer (PBL) schemes, the Mellor-Yamada-Janjic (MYJ) and the Yonsei University (YSU) schemes, is investigated. It is found that, compared with the YSU simulation, the simulation with the MYJ scheme produces better track and intensity evolution, better vortex structure, and more accurate landfall time and location. Large discrepancies (e.g., over 10 hPa in simulated minimum sea level pressure) are found between the two simulations during the rapid intensification period. Further diagnosis indicates that stronger surface fluxes and vertical mixing in the PBL from the simulation with the MYJ scheme lead to enhanced air-sea interaction, which helps generate more realistic simulations of the rapid intensification process. Overall, the results from this study suggest that improved representation of surface fluxes and vertical mixing in the PBL is essential for accurate prediction of hurricane intensity changes.
Reconciling Observations of the Yellowstone Hotspot with the Standard Plume Model
Ihinger, P. D.; Watkins, J. M.; Johnson, B. R.
2004-12-01
predominantly calc-alkaline (associated with ancient slab-derived fluids within the sub-continental lithosphere) to predominantly tholeiitic (with distinctive OIB signatures). This transition has been attributed to the eventual foundering of the shallow slab with replacement by `asthenosphere'. Here, we document that magmas with OIB affinity are observed throughout the Cenozoic in the NAC, often before a documented `transition'. We show that these magmas are primarily binary mixtures of two well-known mantle plume components EMI and FOZO. In our model, we propose that the Yellowstone starting plumehead impinged beneath the subducting Farallon Plate at 80 Ma and spread laterally while continuing to ascend. Magmas with OIB affinity erupted only after penetration of the plume through the cold, rigid Farallon slab. In this way, the CRFB, at only 10% of the eruptive volume of typical flood basalt provinces, represent partial melting of only a fraction of the original Yellowstone starting plumehead. Evidence of additional leakage of the plume is found in the Chilcotin flood basalts in BC, the Crescent Terrane volcanics in the Pacific Northwest, and kimberlites, diatremes, and widespread basaltic flows found throughout the NAC. Collectively, the magmatic features that seem to oppose the plume hypothesis can be reconciled by considering a broader context for the origin of the Yellowstone hotspot. Indeed, the `anomalous' geologic activity observed within the NAC is anticipated by the standard plume model; the frequency of hotspots observed on Earth demands that some starting plumeheads will encounter destructive plate margins and generate significant uplift, deformation, and magmatism within a broad region of the overriding lithosphere(s).
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-07
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right-hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
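The explicit PE(CE)^m idea, predict once and then alternate corrector updates with right-hand-side evaluations so the nonlinearity never requires a Newton solve, can be sketched on a simple nonlinear ODE. This is a generic illustration of the time-marching pattern, not the TDVIE discretization itself:

```python
def pece_m(f, y0, t0, t1, n, m=2):
    """Explicit PE(CE)^m time marching for y' = f(t, y): Predict with
    forward Euler, then m rounds of Correct-Evaluate with the trapezoidal
    rule. The (possibly nonlinear) f enters only as a right-hand-side
    function evaluation, so no Newton-like nonlinear solver is needed."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        fn = f(t, y)
        yp = y + h * fn                          # P: predict
        for _ in range(m):                       # (CE)^m: correct, re-evaluate
            yp = y + 0.5 * h * (fn + f(t + h, yp))
        t, y = t + h, yp
    return y

# Nonlinear test problem y' = -y^3, y(0) = 1; exact y(t) = 1/sqrt(1 + 2t)
approx = pece_m(lambda t, y: -y**3, 1.0, 0.0, 1.0, 200)
exact = (1.0 + 2.0) ** -0.5
print(abs(approx - exact))  # small second-order discretization error
```

The same pattern appears in the MOT context: the predictor supplies a provisional flux density, and each corrector pass re-evaluates the nonlinear constitutive relation as a plain function call.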
Boss, Alan P.; Myhill, Elizabeth A.
1992-01-01
Two related numerical schemes for calculating the 3D collapse of protostellar clouds are defined, developed, and checked on a wide variety of test problems in spherical symmetry and multiple dimensions. One scheme is first-order accurate in time (code S), and the other second-order accurate in time (code ST). Through convergence testing, the codes are shown to be second-order accurate in spatial differences. Compared with the previous 3D code, the combination of reduced numerical dissipation through second-order accuracy and of removing the systematic bias toward central concentrations implies that the tendency for fragmentation into binary or multiple protostars should increase. A reinvestigation of fragmentation as a mechanism for forming binary stars is expected to yield an even more favorable evaluation.
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Directory of Open Access Journals (Sweden)
Ku David N
2010-07-01
% and 116% was demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high Peclet number, convection-dominated flow conditions. Conclusion: Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order or QUICK discretisation schemes should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in resultant species concentration.
Optimal Face-Iris Multimodal Fusion Scheme
Directory of Open Access Journals (Sweden)
Omid Sharifi
2016-06-01
Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
Ponzi scheme diffusion in complex networks
Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue
2017-08-01
Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both a higher average degree and a power-law degree distribution reduce the spreading critical threshold and speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening situation of fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
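The PID dynamics described above can be sketched as a mean-field simulation. The paper's exact equations are not reproduced here; the rates beta (recruitment) and gamma (divestment), the interest rate r, and the stake size are all illustrative assumptions.

```python
# Mean-field sketch of a potential-investor-divestor (PID) style diffusion:
# potential investors P are recruited by contact with investors I, investors
# cash out into divestors D, and the operator's fund flux is fresh stakes in
# minus principal-plus-interest out. All parameters are illustrative.
def simulate_pid(beta=0.4, gamma=0.1, r=0.05, stake=1.0, steps=800, dt=0.1):
    P, I, D = 0.99, 0.01, 0.0        # population fractions
    flux = []
    for _ in range(steps):
        new_inv = beta * P * I * dt  # contagion-style recruitment
        new_div = gamma * I * dt     # investors divesting
        net = new_inv * stake - new_div * stake * (1.0 + r)
        P, I, D = P - new_inv, I + new_inv - new_div, D + new_div
        flux.append(net)
    return flux

flux = simulate_pid()
# the zero-crossing of the fund flux marks the turn toward collapse
crossing = next((i for i in range(1, len(flux))
                 if flux[i - 1] > 0 and flux[i] <= 0), None)
```

Early in the run the inflow of fresh stakes dominates; once the pool of potential investors is depleted, the flux crosses zero and the fund can only shrink.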
Secure Electronic Cash Scheme with Anonymity Revocation
Directory of Open Access Journals (Sweden)
Baoyuan Kang
2016-01-01
Full Text Available In a popular electronic cash scheme, there are three participants: the bank, the customer, and the merchant. First, a customer opens an account in a bank. Then, he withdraws an e-cash from his account and pays it to a merchant. After checking the electronic cash’s validity, the merchant accepts it and deposits it to the bank. There are a number of requirements for an electronic cash scheme, such as anonymity, unforgeability, unreusability, divisibility, transferability, and portability. The anonymity property of electronic cash schemes can ensure the privacy of payers. However, this anonymity property is easily abused by criminals. In 2011, Chen et al. proposed a novel electronic cash system with trustee-based anonymity revocation from pairing. On demand, the trustee can disclose the identity associated with an e-cash. But in this paper we point out that Chen et al.’s scheme is subject to some drawbacks. To contribute to secure electronic cash schemes, we propose a new offline electronic cash scheme with anonymity revocation. We also provide formal security proofs of the unlinkability and unforgeability. Furthermore, the proposed scheme ensures the property of avoiding merchant frauds.
Deitmar schemes, graphs and zeta functions
Mérida-Angulo, Manuel; Thas, Koen
2017-07-01
In Thas (2014) it was explained how one can naturally associate a Deitmar scheme (which is a scheme defined over the field with one element, F1) to a so-called 'loose graph' (which is a generalization of a graph). Several properties of the Deitmar scheme can be proven easily from the combinatorics of the (loose) graph, and known realizations of objects over F1 such as combinatorial F1-projective and F1-affine spaces exactly depict the loose graph which corresponds to the associated Deitmar scheme. In this paper, we first modify the construction of loc. cit., and show that Deitmar schemes which are defined by finite trees (with possible end points) are 'defined over F1' in Kurokawa's sense; we then derive a precise formula for the Kurokawa zeta function for such schemes (and so also for the counting polynomial of all associated Fq-schemes). As a corollary, we find a zeta function for all such trees which contains information such as the number of inner points and the spectrum of degrees, and which is thus very different from Ihara's zeta function (which is trivial in this case). Using a process called 'surgery,' we show that one can determine the zeta function of a general loose graph and its associated {Deitmar, Grothendieck}-schemes in a number of steps, eventually reducing the calculation essentially to trees. We study a number of classes of examples of loose graphs, and introduce the Grothendieck ring of F1-schemes along the way in order to perform the calculations. Finally, we include a computer program for performing more tedious calculations, and compare the new zeta function to Ihara's zeta function for graphs in a number of examples.
Accurate Recovery of H i Velocity Dispersion from Radio Interferometers
Energy Technology Data Exchange (ETDEWEB)
Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)
2017-05-01
Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratios. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
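A global dispersion can be estimated from a line profile as its intensity-weighted second moment. A minimal sketch on a synthetic Gaussian 21 cm profile follows; the channel grid and sigma_true are assumed values, and real H i profiles additionally require the noise and beam handling the abstract discusses.

```python
import numpy as np

# Synthetic 21 cm line profile: a Gaussian of known dispersion sampled on
# a velocity grid. Channel width and sigma_true are assumed values.
vel = np.linspace(-100.0, 100.0, 401)           # km/s channel centres
sigma_true = 8.0
profile = np.exp(-0.5 * ((vel - 5.0) / sigma_true) ** 2)

mom1 = np.sum(profile * vel) / np.sum(profile)  # centroid (first moment)
mom2 = np.sqrt(np.sum(profile * (vel - mom1) ** 2) / np.sum(profile))
```

On clean data the second moment recovers sigma_true essentially exactly; the systematic effects studied in the paper (dirty-beam shape, cleaning depth) bias exactly this estimator.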
Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging
Desmal, Abdulla
2016-03-01
Electromagnetic imaging is the problem of determining material properties from scattered fields measured away from the domain under investigation. Solving this inverse problem is a challenging task because (i) it is ill-posed due to the presence of (smoothing) integral operators used in the representation of scattered fields in terms of material properties, and scattered fields are obtained at a finite set of points through noisy measurements; and (ii) it is nonlinear simply due to the fact that scattered fields are nonlinear functions of the material properties. The work described in this thesis tackles the ill-posedness of the electromagnetic imaging problem using sparsity-based regularization techniques, which assume that the scatterer(s) occupy only a small fraction of the investigation domain. More specifically, four novel imaging methods are formulated and implemented. (i) Sparsity-regularized Born iterative method iteratively linearizes the nonlinear inverse scattering problem and each linear problem is regularized using an improved iterative shrinkage algorithm enforcing the sparsity constraint. (ii) Sparsity-regularized nonlinear inexact Newton method calls for the solution of a linear system involving the Fréchet derivative matrix of the forward scattering operator at every iteration step. For faster convergence, the solution of this matrix system is regularized under the sparsity constraint and preconditioned by leveling the matrix singular values. (iii) Sparsity-regularized nonlinear Tikhonov method directly solves the nonlinear minimization problem using Landweber iterations, where a thresholding function is applied at every iteration step to enforce the sparsity constraint. (iv) This last scheme is accelerated using a projected steepest descent method when it is applied to three-dimensional investigation domains. Projection replaces the thresholding operation and enforces the sparsity constraint. Numerical experiments, which are carried out using
Which Quantum Theory Must be Reconciled with Gravity? (And What Does it Mean for Black Holes?)
Directory of Open Access Journals (Sweden)
Matthew J. Lake
2016-10-01
Full Text Available We consider the nature of quantum properties in non-relativistic quantum mechanics (QM) and relativistic quantum field theories, and examine the connection between formal quantization schemes and intuitive notions of wave-particle duality. Based on the map between classical Poisson brackets and their associated commutators, such schemes give rise to quantum states obeying canonical dispersion relations, obtained by substituting the de Broglie relations into the relevant (classical) energy-momentum relation. In canonical QM, this yields a dispersion relation involving ℏ but not c, whereas the canonical relativistic dispersion relation involves both. Extending this logic to the canonical quantization of the gravitational field gives rise to loop quantum gravity, and a map between classical variables containing G and c, and associated commutators involving ℏ. This naturally defines a “wave-gravity duality”, suggesting that a quantum wave packet describing self-gravitating matter obeys a dispersion relation involving G, c and ℏ. We propose an Ansatz for this relation, which is valid in the semi-Newtonian regime of both QM and general relativity. In this limit, space and time are absolute, but imposing v_max = c allows us to recover the standard expressions for the Compton wavelength λ_C and the Schwarzschild radius r_S within the same ontological framework. The new dispersion relation is based on “extended” de Broglie relations, which remain valid for slow-moving bodies of any mass m. These reduce to canonical form for m ≪ m_P, yielding λ_C from the standard uncertainty principle, whereas, for m ≫ m_P, we obtain r_S as the natural radius of a self-gravitating quantum object. Thus, the extended de Broglie theory naturally gives rise to a unified description of black holes and fundamental particles in the semi-Newtonian regime.
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include…
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
Algebraic K-theory of generalized schemes
DEFF Research Database (Denmark)
Anevski, Stella Victoria Desiree
Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov theoretical objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes Sp\ec ZN, appearing in the construction of \Spec Z.
Galilean invariant resummation schemes of cosmological perturbations
Peloso, Marco; Pietroni, Massimo
2017-01-01
Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so-called Time-Flow, or TRG, equations.
Finite-volume scheme for anisotropic diffusion
Energy Technology Data Exchange (ETDEWEB)
Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, The Netherlands (Netherlands); Koren, Barry [Eindhoven University of Technology (Netherlands); Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, The Netherlands (Netherlands)
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
Cognitive radio networks dynamic resource allocation schemes
Wang, Shaowei
2014-01-01
This SpringerBrief presents a survey of dynamic resource allocation schemes in Cognitive Radio (CR) Systems, focusing on the spectral-efficiency and energy-efficiency in wireless networks. It also introduces a variety of dynamic resource allocation schemes for CR networks and provides a concise introduction of the landscape of CR technology. The author covers in detail the dynamic resource allocation problem for the motivations and challenges in CR systems. The Spectral- and Energy-Efficient resource allocation schemes are comprehensively investigated, including new insights into the trade-off
Graph state-based quantum authentication scheme
Liao, Longxia; Peng, Xiaoqi; Shi, Jinjing; Guo, Ying
2017-04-01
Inspired by the special properties of the graph state, a quantum authentication scheme implemented using the graph state is proposed in this paper. Two entities are involved: a reliable party, Trent, acting as the verifier, and Alice as the prover. Trent is responsible for registering Alice in the beginning and confirming Alice in the end. The proposed scheme is simple in structure and convenient to realize in a realistic physical system due to the use of the graph state in a one-way quantum channel. In addition, the security of the scheme is extensively analyzed; accordingly, it can resist general individual attack strategies.
Autonomous Droop Scheme With Reduced Generation Cost
DEFF Research Database (Denmark)
Nutkani, Inam Ullah; Loh, Poh Chiang; Wang, Peng
2014-01-01
… This objective might, however, not suit microgrids well since DGs are usually of different types, unlike synchronous generators. Other factors like cost, efficiency, and emission penalty of each DG at different loading must be considered since they contribute directly to the total generation cost (TGC) of the microgrid. To reduce this TGC without relying on fast communication links, an autonomous droop scheme is proposed here, whose resulting power sharing is decided by the individual DG generation costs. Comparing it with the traditional scheme, the proposed scheme retains its simplicity and it is hence more…
2009-09-01
(Chapra and Canale, 2006). That is, we set ∫_Ω R w dΩ = 0 (3.2), where w is the test function. If w and the numerical solution u_h were in an infinite… the flux. While this is more accurate than an upwind scheme, it is well known that central schemes tend to be unstable (Chapra and Canale, 2006) for… High resolution methods for multidimensional advection-diffusion problems in free-surface hydrodynamics. Ocean Modelling, 10(1-2):137–151. Chapra
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S. Zibaei
2016-12-01
Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We will discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors in the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
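A Mickens-type NSFD discretization can be sketched on the classical predator-prey Lotka-Volterra system. The paper treats a fractional-order competitive model; the integer-order system, denominator choices, and parameters below are simplifying assumptions. The point of the sketch: unlike forward Euler, the NSFD update preserves positivity at any step size.

```python
# Positivity-preserving NSFD sketch for the predator-prey system
#   x' = x*(a - b*y),   y' = y*(-c + d*x)
# using Mickens-type denominator replacements: loss terms are treated
# implicitly through the denominator, so iterates stay positive.
def nsfd(x, y, h, a=1.0, b=1.0, c=1.0, d=1.0, steps=500):
    xs, ys = [x], [y]
    for _ in range(steps):
        x = (xs[-1] + h * a * xs[-1]) / (1.0 + h * b * ys[-1])
        y = (ys[-1] + h * d * x * ys[-1]) / (1.0 + h * c)
        xs.append(x)
        ys.append(y)
    return xs, ys

def euler(x, y, h, a=1.0, b=1.0, c=1.0, d=1.0, steps=20):
    xs, ys = [x], [y]
    for _ in range(steps):
        x = xs[-1] + h * xs[-1] * (a - b * ys[-1])
        y = ys[-1] + h * ys[-1] * (-c + d * xs[-1])
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = nsfd(0.5, 2.0, 0.5)    # coarse step: iterates stay positive
ex, ey = euler(0.5, 2.0, 0.5)   # same coarse step: Euler goes negative
```

With h = 0.5 the forward Euler iterates oscillate with growing amplitude and cross zero, while every NSFD denominator and numerator stays positive by construction.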
An efficient numerical scheme for the simulation of parallel-plate active magnetic regenerators
DEFF Research Database (Denmark)
Torregrosa-Jaime, Bárbara; Corberán, José M.; Payá, Jorge
2015-01-01
A one-dimensional model of a parallel-plate active magnetic regenerator (AMR) is presented in this work. The model is based on an efficient numerical scheme which has been developed after analysing the heat transfer mechanisms in the regenerator bed. The new finite difference scheme optimally combines explicit and implicit techniques in order to solve the one-dimensional conjugate heat transfer problem in an accurate and fast manner while ensuring energy conservation. The present model has been thoroughly validated against passive regenerator cases with an analytical solution. Compared…
Implicit time-accurate simulation of viscous flow
van Buuren, R.; Kuerten, Johannes G.M.; Geurts, Bernardus J.
2001-01-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is…
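The stability gap between the two schemes can be sketched on the scalar stiff test equation y' = λy, an assumed stand-in for the stiff flow problem: with a step outside the explicit stability region, Crank-Nicolson decays while an explicit two-stage Runge-Kutta (Heun) scheme blows up.

```python
import math

# Stability sketch: implicit Crank-Nicolson vs explicit RK2 (Heun)
# on y' = lam*y. The step dt is chosen outside the explicit stability
# limit (|1 + z + z^2/2| > 1 for z = lam*dt = -5). Values illustrative.
lam, dt, steps = -50.0, 0.1, 60

y_cn, y_rk = 1.0, 1.0
for _ in range(steps):
    # Crank-Nicolson: y_{n+1} = y_n + (dt/2)*lam*(y_n + y_{n+1})
    y_cn = y_cn * (1 + 0.5 * dt * lam) / (1 - 0.5 * dt * lam)
    # Heun's explicit second-order Runge-Kutta
    k1 = lam * y_rk
    k2 = lam * (y_rk + dt * k1)
    y_rk = y_rk + 0.5 * dt * (k1 + k2)
```

The A-stable Crank-Nicolson iterate decays toward the exact solution's zero limit, while the explicit iterate grows by a factor |1 + z + z²/2| = 8.5 per step.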
Highly Accurate Prediction of Jobs Runtime Classes
Anat Reiner-Benaim; Anna Grabarnick; Edi Shmueli
2016-01-01
Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
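On a single runtime feature, a depth-1 CART tree reduces to choosing the split that minimizes Gini impurity, which is enough to sketch the short/long separation on synthetic two-Gaussian runtimes. The means, spreads, and class balance are illustrative assumptions, not the paper's workload.

```python
import random

random.seed(0)
# Synthetic runtimes: a mixture of two overlapping Gaussians
# (short jobs, class 0, vs long jobs, class 1).
data = [(random.gauss(10, 3), 0) for _ in range(300)] + \
       [(random.gauss(60, 15), 1) for _ in range(300)]

def gini_split(data, thr):
    """Weighted Gini impurity of the two sides of a candidate split."""
    left = [c for v, c in data if v <= thr]
    right = [c for v, c in data if v > thr]
    def gini(part):
        if not part:
            return 0.0
        p = sum(part) / len(part)
        return 2.0 * p * (1.0 - p)
    n = len(data)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# depth-1 CART: pick the observed runtime with minimal split impurity
best_thr = min((v for v, _ in data), key=lambda t: gini_split(data, t))
acc = sum((v > best_thr) == bool(c) for v, c in data) / len(data)
```

Because the mixture components overlap only in their tails here, the learned threshold lands between the two means and classifies nearly all jobs correctly.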
Lugovsky, A. Yu.; Popov, Yu. P.
2015-08-01
The Roe-Einfeldt-Osher scheme is considered, which has the third order of accuracy. Its advantages over the first-order accurate Roe scheme are demonstrated, and its choice for the simulation of accretion disk flows is justified. The Roe-Einfeldt-Osher scheme is shown to be efficient as applied to the simulation of real-world problems on parallel computers. Results of simulation of flows in accretion disks in two and three dimensions are presented. Limited capabilities of two-dimensional disk models are noted.
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. Copyright © 2016. Published by Elsevier Inc.
Accurate Calculation of Electric Fields Inside Enzymes.
Wang, X; He, X; Zhang, J Z H
2016-01-01
The specific electric field generated by a protease at its active site is considered as an important source of the catalytic power. Accurate calculation of the electric field at the active site of an enzyme has both fundamental and practical importance. Measuring site-specific changes of electric field at internal sites of proteins due to, e.g., mutation, has been realized by using molecular probes with CO or CN groups in the context of the vibrational Stark effect. However, theoretical prediction of change in electric field inside a protein based on a conventional force field, such as AMBER or OPLS, is often inadequate. For such calculation, a quantum chemical approach or a quantum-based polarizable or polarized force field is highly preferable. Compared with the result from a conventional force field, significant improvement is found in predicting experimentally measured mutation-induced electric field change using quantum-based methods, indicating that quantum effects such as polarization play an important role in accurate description of electric field inside proteins. In comparison, the best theoretical prediction comes from fully quantum mechanical calculation in which both polarization and inter-residue charge transfer effects are included for accurate prediction of electrostatics in proteins. © 2016 Elsevier Inc. All rights reserved.
Password authentication scheme based on the quadratic residue problem
Ali, Muhammad Helmi; Ismail, Eddie Shahril
2017-04-01
In this paper, we propose a new password-authentication scheme based on the quadratic residue problem with the following advantages: the scheme does not require a verification file, and it can withstand replay attacks and resist guessing and impersonation attacks. We next discuss the advantages of our designated scheme over other schemes in terms of security and efficiency.
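The scheme itself is not specified in the abstract. As background, the classic Fiat-Shamir-style identification round built on the quadratic residue problem can be sketched as follows; the tiny modulus and secret are illustrative only, offer no security, and this is not the paper's exact protocol.

```python
import random

random.seed(1)
# Toy quadratic-residue identification round (Fiat-Shamir style).
# A real scheme uses a large RSA-type modulus whose factorisation is secret.
n = 101 * 103                 # n = p*q
s = 1234                      # prover's secret (e.g. derived from a password)
v = pow(s, 2, n)              # public verification value, a QR mod n

def prove_round():
    r = random.randrange(2, n)        # fresh commitment randomness
    x = pow(r, 2, n)                  # commitment sent to the verifier
    e = random.randrange(2)           # verifier's challenge bit
    y = (r * pow(s, e, n)) % n        # prover's response
    return pow(y, 2, n) == (x * pow(v, e, n)) % n   # verifier's check

ok = all(prove_round() for _ in range(20))
```

The check works because y² ≡ r² s^{2e} ≡ x·v^e (mod n); an impostor who cannot compute square roots mod n succeeds in a round with probability only 1/2, so repeated rounds give security without storing a password verification file in the clear.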
A survey of Strong Convergent Schemes for the Simulation of ...
African Journals Online (AJOL)
We considered strong convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor expansion, which is the main tool used for the derivation of strong convergent schemes; the Euler-Maruyama, Milstein scheme, stochastic multistep schemes, and implicit and explicit schemes were …
Speeding up Monte Carlo molecular simulation by a non-conservative early rejection scheme
Kadoura, Ahmad Salim
2015-04-23
Monte Carlo (MC) molecular simulation describes fluid systems with rich information, and it is capable of predicting many fluid properties of engineering interest. In general, it is more accurate and representative than equations of state. On the other hand, it requires much more computational effort and simulation time. For that purpose, several techniques have been developed in order to speed up MC molecular simulations while preserving their precision. In particular, early rejection schemes are capable of reducing computational cost by reaching the rejection decision for the undesired MC trials at an earlier stage in comparison to the conventional scheme. In a recent work, we have introduced a ‘conservative’ early rejection scheme as a method to accelerate MC simulations while producing exactly the same results as the conventional algorithm. In this paper, we introduce a ‘non-conservative’ early rejection scheme, which is much faster than the conservative scheme, yet it preserves the precision of the method. The proposed scheme is tested for systems of structureless Lennard-Jones particles in both canonical and NVT-Gibbs ensembles. Numerical experiments were conducted at several thermodynamic conditions for different numbers of particles. Results show that at certain thermodynamic conditions, the non-conservative method is capable of doubling the speed of the MC molecular simulations in both canonical and NVT-Gibbs ensembles. © 2015 Taylor & Francis
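The early-rejection idea can be sketched for single-particle Metropolis moves: pre-draw the acceptance random number, turn it into an energy threshold, and abort the pair-energy loop once rejection is certain. The sketch below implements the conservative variant referenced above, made exact by using the pair-potential minimum −ε as a lower bound on every unevaluated pair; the paper's non-conservative variant drops that slack for extra speed. The 1D setup and all parameters are illustrative assumptions.

```python
import math
import random

random.seed(2)
EPS, SIG, BETA = 1.0, 1.0, 1.0   # LJ well depth/size, inverse temperature

def pair_u(r):
    sr6 = (SIG / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)   # minimum value is -EPS

def run(pos, trials=200):
    conv_evals = cons_evals = 0
    agree = True
    n = len(pos)
    for _ in range(trials):
        i = random.randrange(n)
        new_x = pos[i] + random.uniform(-0.5, 0.5)
        others = [p for j, p in enumerate(pos) if j != i]
        u_old = sum(pair_u(abs(pos[i] - p) + 1e-9) for p in others)
        # pre-drawn acceptance threshold: accept iff dU < theta
        theta = -math.log(random.random()) / BETA
        # conventional Metropolis: evaluate every pair
        u_new = sum(pair_u(abs(new_x - p) + 1e-9) for p in others)
        conv_evals += len(others)
        acc_full = (u_new - u_old) < theta
        # conservative early rejection: every unevaluated pair is >= -EPS
        partial, k, acc_early = 0.0, 0, True
        for p in others:
            partial += pair_u(abs(new_x - p) + 1e-9)
            k += 1
            cons_evals += 1
            if partial - (len(others) - k) * EPS - u_old > theta:
                acc_early = False      # rejection already certain
                break
        else:
            acc_early = (partial - u_old) < theta
        agree = agree and (acc_full == acc_early)
    return agree, conv_evals, cons_evals

pos = [random.uniform(0.0, 10.0) for _ in range(20)]
agree, conv_evals, cons_evals = run(pos)
```

The conservative bound guarantees identical accept/reject decisions with fewer pair evaluations; dropping the −(remaining)·EPS slack aborts even earlier at the cost of occasionally rejecting a move the full evaluation would accept.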
On the modelling of compressible inviscid flow problems using AUSM schemes
Directory of Open Access Journals (Sweden)
Hajžman M.
2007-11-01
Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first-order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine numerical convective fluxes, and pressure splitting is used for the evaluation of numerical pressure fluxes. Both versions of the AUSM scheme are applied to test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
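The Mach-number and pressure splittings named above can be illustrated with the standard first-order Liou-Steffen split functions; the specific polynomial forms below are the textbook ones, assumed rather than quoted from the paper. A basic consistency requirement is that the split parts recombine to M and to 1.

```python
# First-order Liou-Steffen (AUSM) split functions: Mach-number splitting
# for the convective flux and pressure splitting for the pressure flux.
# Consistency: M+ + M- = M and P+ + P- = 1 for every Mach number.
def mach_split(M):
    if abs(M) <= 1.0:
        return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
    return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))

def pressure_split(M):
    if abs(M) <= 1.0:
        return (0.25 * (M + 1.0) ** 2 * (2.0 - M),
                0.25 * (M - 1.0) ** 2 * (2.0 + M))
    return 0.5 * (M + abs(M)) / M, 0.5 * (M - abs(M)) / M

checks = []
for M in (-2.5, -1.0, -0.3, 0.0, 0.4, 1.0, 3.0):
    mp, mm = mach_split(M)
    pp, pm = pressure_split(M)
    checks.append(abs(mp + mm - M) < 1e-12 and abs(pp + pm - 1.0) < 1e-12)
```

In a full solver, the interface Mach number M+ (left) + M- (right) selects the upwind convective state, while the split pressures weight the left and right cell pressures.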
Directory of Open Access Journals (Sweden)
Muhammad
2017-01-01
Full Text Available We review harvested energy prediction schemes to be used in wireless sensor networks and explore the relative merits of landmark solutions. We propose enhancements to the well-known Profile-Energy (Pro-Energy) model, the so-called Improved Profile-Energy (IPro-Energy), and compare its performance with the Accurate Solar Irradiance Prediction Model (ASIM), Pro-Energy, and Weather Conditioned Moving Average (WCMA). The performance metrics considered are the prediction accuracy and the execution time, which measures the implementation complexity. In addition, the effectiveness of the considered models, when integrated in an energy management scheme, is also investigated in terms of the achieved throughput and the energy consumption. Both solar irradiance and wind power datasets are used for the evaluation study. Our results indicate that the proposed IPro-Energy scheme outperforms the other candidate models in terms of prediction accuracy by up to 78% for short-term predictions and 50% for medium-term prediction horizons. For long-term predictions, its prediction accuracy is comparable to the Pro-Energy model but outperforms the other models by up to 64%. In addition, the IPro-Energy scheme is able to achieve the highest throughput when integrated in the developed energy management scheme. Finally, the ASIM scheme reports the smallest implementation complexity.
Implicit - symplectic partitioned (IMSP) Runge-Kutta schemes for predator-prey dynamics
Diele, F.; Marangi, C.; Ragni, S.
2012-09-01
In the study of the effects of habitat fragmentation on biodiversity, spatial processes are of great interest, since both the size of the domains and their heterogeneity largely affect the dynamics of species. As a preliminary study of the effects of habitat fragmentation on the wolf-wild boar pair populating the Italian "Alta Murgia" Natura 2000 site, an area of interest for the FP7 project BIOSOS (BIOdiversity multi-SOurce Monitoring System: from Space TO Species), we consider spatially explicit models described by reaction-diffusion partial differential equations. We propose numerical methods based on partitioned Runge-Kutta schemes which use an implicit scheme for the stiff diffusive term and a partitioned symplectic scheme for the reaction function. We are motivated by classical results on the Lotka-Volterra model described by ordinary differential equations, to which the spatially explicit model reduces as the diffusion coefficients tend to zero: for its accurate solution, symplectic schemes must be used for optimal long-run preservation of the invariant of the dynamics. Moreover, for models based on logistic growth and a Holling type II functional predator response, we verify the better performance of our schemes, compared with classical implicit-explicit (IMEX) schemes, on chaotic dynamics reported in the literature.
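The reason symplectic schemes are favored for the reaction part can be seen on the plain Lotka-Volterra ODE, which becomes a separable Hamiltonian system in log variables. The sketch below uses a first-order partitioned (symplectic) Euler step with illustrative parameters; it is not the paper's IMSP Runge-Kutta scheme, only a demonstration that the invariant stays bounded over long runs:

```python
import math

def lv_symplectic_euler(u0, v0, a, b, c, d, h, steps):
    """Partitioned (symplectic) Euler for the Lotka-Volterra system
    u' = u(a - b v), v' = v(c u - d), integrated in the log variables
    p = ln u, q = ln v, where the system is a separable Hamiltonian with
    invariant H = c e^p - d p + b e^q - a q."""
    p, q = math.log(u0), math.log(v0)
    traj = [(u0, v0)]
    for _ in range(steps):
        p = p + h * (a - b * math.exp(q))   # explicit in p, frozen q
        q = q + h * (c * math.exp(p) - d)   # uses the freshly updated p
        traj.append((math.exp(p), math.exp(q)))
    return traj

def lv_invariant(u, v, a, b, c, d):
    # Conserved quantity of the continuous Lotka-Volterra dynamics
    return c * u - d * math.log(u) + b * v - a * math.log(v)
```

Over thousands of steps the invariant oscillates within an O(h) band instead of drifting, which is the long-run behavior a non-symplectic explicit Euler step loses.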
Designing optimal sampling schemes for field visits
CSIR Research Space (South Africa)
Debba, Pravesh
2008-10-01
Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...
A Climate Classification Scheme for Habitable Worlds
Byrne, J. F.
2017-11-01
This presentation will include an exploration of the internal/external forcings and variability associated with climate using Earth as a reference model in addition to a classification scheme consisting of five categories.
Secure Wake-Up Scheme for WBANs
Liu, Jing-Wei; Ameen, Moshaddique Al; Kwak, Kyung-Sup
Network lifetime, and hence device lifetime, is one of the fundamental metrics in wireless body area networks (WBAN). To prolong it, especially for implanted sensors, each node must conserve its energy as much as possible. While a variety of wake-up/sleep mechanisms have been proposed, the wake-up radio potentially serves as a vehicle for introducing vulnerabilities and attacks into a WBAN, eventually resulting in its malfunction. In this paper, we propose a novel secure wake-up scheme, in which a wake-up authentication code (WAC) is employed to ensure that a BAN Node (BN) is woken up by the correct BAN Network Controller (BNC) rather than by unintended users or malicious attackers. The scheme is implemented using a two-radio architecture. We show that our scheme provides higher security while consuming less energy than existing schemes.
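The abstract does not specify how the WAC is constructed; one plausible sketch is a truncated HMAC over a fresh nonce, so that only a BNC holding the shared key can produce a valid wake-up code. The key, nonce handling, and 4-byte truncation below are assumptions for illustration only:

```python
import hashlib
import hmac

def make_wac(key, nonce):
    # Hypothetical wake-up authentication code: an HMAC-SHA256 over a
    # fresh nonce, truncated to keep the wake-up frame short
    return hmac.new(key, nonce, hashlib.sha256).digest()[:4]

def verify_wac(key, nonce, wac):
    # Constant-time comparison so the check itself leaks no timing info
    return hmac.compare_digest(make_wac(key, nonce), wac)
```

A node would only leave its sleep state when `verify_wac` succeeds, rejecting forged or replayed wake-up frames (replay protection requires the nonce to be fresh, e.g. a counter).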
Renormalization scheme dependence with renormalization group summation
McKeon, D. G. C.
2015-08-01
We consider all perturbative radiative corrections to the total e+e- annihilation cross section Re+e-, showing how the renormalization group (RG) equation associated with the radiatively induced mass scale μ can be used to sum the logarithmic contributions in two ways. First of all, one can sum leading-log, next-to-leading-log, etc., contributions to Re+e- using in turn the one-loop, two-loop, etc., contributions to the RG function β. A second summation shows how all logarithmic corrections to Re+e- can be expressed entirely in terms of the log-independent contributions when one employs the full β-function. Next, using Stevenson's characterization of any choice of renormalization scheme by the use of the contributions to the β-function arising beyond two-loop order, we examine the RG scheme dependence in Re+e- when using the second way of summing logarithms. The renormalization scheme invariants that arise are then related to the renormalization scheme invariants found by Stevenson. We next consider two choices of the renormalization scheme, one which can be used to express Re+e- solely in terms of two powers of a running coupling, and the second which can be used to express Re+e- as an infinite series in the two-loop running coupling (i.e., a Lambert W-function). In both cases, Re+e- is expressed solely in terms of renormalization scheme invariant parameters that are to be computed by a perturbative evaluation of Re+e-. We then establish how in general the coupling constant arising in one renormalization scheme can be expressed as a power series of the coupling arising in any other scheme. We then establish how, by using a different renormalization mass scale at each order of perturbation theory, all renormalization scheme dependence can be absorbed into these mass scales when one uses the second way of summing logarithmic corrections to Re+e-. We then employ the approach to renormalization scheme dependency that we have applied to Re+e- to a RG summed
Asynchronous Communication Scheme For Hypercube Computer
Madan, Herb S.
1988-01-01
Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at corners of 10-dimensional cube. Each node contains two Motorola 68020 processors along with Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.
Navigators’ Behavior in Traffic Separation Schemes
Directory of Open Access Journals (Sweden)
Zbigniew Pietrzykowski
2015-03-01
Full Text Available One of the areas of decision support in the navigational ship conduct process is the Traffic Separation Scheme (TSS). TSSs are established in areas with high traffic density, often near the shore and in port approaches. The main purpose of these schemes is to improve maritime safety by channeling vessel traffic into streams. Traffic regulations, as well as ships' behavior in real conditions in selected TSSs, have been analyzed in order to develop decision support algorithms.
Readout scheme of the upgraded ALICE TPC
Appelshaeuser, Harald; Ivanov, Marian; Lippmann, Christian; Wiechula, Jens
2016-01-01
In this document, we present the updated readout scheme for the ALICE TPC Upgrade. Two major design changes are implemented with respect to the concept presented in the TPC Upgrade Technical Design Report: the SAMPA front-end ASIC will be used in direct readout mode, and the ADC sampling frequency will be reduced from 10 to 5 MHz. The main results from simulations and a description of the new readout scheme are outlined.
Dynamic Restarting Schemes for Eigenvalue Problems
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng; Simon, Horst D.
1999-03-10
In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent at improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.
A Scheme for Evaluating Feral Horse Management Strategies
Directory of Open Access Journals (Sweden)
L. L. Eberhardt
2012-01-01
Full Text Available Context. Feral horses are an increasing problem in many countries and are popular with the public, making management difficult. Aims. To develop a scheme useful in planning management strategies. Methods. A model is developed and applied to four different feral horse herds, three of which have been counted quite accurately over the years. Key Results. The selected model has been tested on a variety of data sets, with emphasis on the four sets of feral horse data. An alternative, nonparametric model is used to check the selected parametric approach. Conclusions. A density-dependent response was observed in all four herds, even though only eight observations were available in each case. Consistency of the model fits suggests that small starting herds can be used to test various management techniques. Implications. Management methods can be tested on actual, confined populations.
An optimal performance control scheme for a 3D crane
Maghsoudi, Mohammad Javad; Mohamed, Z.; Husain, A. R.; Tokhi, M. O.
2016-01-01
This paper presents an optimal performance control scheme for a three-dimensional (3D) crane system, including a Zero Vibration shaper, which considers two control objectives concurrently. The control objectives are fast and accurate positioning of the trolley and minimum sway of the payload. A complete mathematical model of a lab-scaled 3D crane is simulated in Simulink. With a specific cost function, the proposed controller is designed to meet both control objectives, similar to a skilled operator. Simulation and experimental studies on a 3D crane show that the proposed controller performs better than a sequentially tuned PID-PID anti-swing controller. The controller provides a better position response with satisfactory payload sway in both the rail and trolley directions. Experiments with different payloads and cable lengths show that the proposed controller is robust to changes in payload.
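A Zero Vibration shaper itself is simple to state: two impulses sized and spaced so that the residual vibration of a mode with natural frequency ω_n and damping ratio ζ cancels. The sketch below uses the standard ZV formulas, not the paper's full optimal controller:

```python
import math

def zv_shaper(omega_n, zeta):
    """Impulse amplitudes and times for a Zero Vibration (ZV) input shaper
    designed for a mode with natural frequency omega_n and damping zeta."""
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    t2 = math.pi / (omega_n * math.sqrt(1.0 - zeta ** 2))  # half damped period
    return [(1.0 / (1.0 + K), 0.0), (K / (1.0 + K), t2)]

def shape_command(command, dt, shaper):
    """Convolve a sampled command signal with the shaper impulses."""
    out = [0.0] * len(command)
    for amp, t in shaper:
        shift = int(round(t / dt))
        for i in range(len(command)):
            if i - shift >= 0:
                out[i] += amp * command[i - shift]
    return out
```

The impulse amplitudes sum to one, so the shaped command reaches the same final value as the original; the price is a delay of half the damped vibration period.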
Accelerating convergence for backward Euler and trapezoid time discretization schemes
Directory of Open Access Journals (Sweden)
Osman Raşit Işık
2014-12-01
Full Text Available In this study, we introduce two algorithms to numerically solve any initial value problem (IVP). These algorithms depend on a time relaxation model (TRM), which is obtained by adding a time relaxation term to the IVP. Discretizing the TRM with the backward Euler (BE) method gives the first algorithm. Similarly, the second algorithm follows by using the trapezoid (TR) time-stepping scheme. Under some conditions, the first algorithm increases the order of convergence from one to two, and the second increases the order from two to three. Thus, more accurate results can be obtained. To verify the accuracy of the methods, they are applied to some numerical examples. Numerical results agree with the theoretical results.
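The two baseline time-stepping schemes can be sketched for a scalar IVP y' = f(t, y). The time-relaxation modification analyzed in the study is not reproduced here, only plain BE (first order) and TR (second order) with a simple Newton iteration for the implicit update:

```python
def backward_euler(f, dfdy, y0, t0, t1, n):
    """Backward Euler for y' = f(t, y): y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}),
    solving the implicit update with a few Newton iterations
    (dfdy is the Jacobian df/dy)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        z = y  # initial Newton guess
        for _ in range(20):
            g = z - y - h * f(t_next, z)
            z = z - g / (1.0 - h * dfdy(t_next, z))
        t, y = t_next, z
    return y

def trapezoid(f, dfdy, y0, t0, t1, n):
    """Trapezoidal rule: y_{k+1} = y_k + h/2 (f(t_k, y_k) + f(t_{k+1}, y_{k+1}))."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        fk = f(t, y)
        z = y
        for _ in range(20):
            g = z - y - 0.5 * h * (fk + f(t_next, z))
            z = z - g / (1.0 - 0.5 * h * dfdy(t_next, z))
        t, y = t_next, z
    return y
```

On y' = -y, the trapezoid result is markedly closer to e^{-t} than backward Euler at the same step size, consistent with their first- and second-order convergence.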
Accurate measurement of unsteady state fluid temperature
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located at its center. The fluid temperature was determined from measurements taken on the axis of the solid cylindrical housing using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrates that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
Climate Models have Accurately Predicted Global Warming
Nuccitelli, D. A.
2016-12-01
Climate model projections of global temperature changes over the past five decades have proven remarkably accurate, and yet the myth that climate models are inaccurate or unreliable has formed the basis of many arguments denying anthropogenic global warming and the risks it poses to the climate system. Here we compare average global temperature predictions made by both mainstream climate scientists using climate models, and by contrarians using less physically-based methods. We also explore the basis of the myth by examining specific arguments against climate model accuracy and their common characteristics of science denial.
CPSFS: A Credible Personalized Spam Filtering Scheme by Crowdsourcing
Directory of Open Access Journals (Sweden)
Xin Liu
2017-01-01
Full Text Available Email spam consumes substantial network resources and threatens many systems because of its unwanted or malicious content. This paper proposes a novel and comprehensive scheme, the Credible Personalized Spam Filtering Scheme (CPSFS), which classifies spam into two categories, complete-spam and semispam, and filters both kinds. Complete-spam is spam for all users; semispam is email identified as spam by some users and as regular email by others. Most existing spam filters target complete-spam but ignore semispam. In CPSFS, Bayesian filtering is deployed at email servers to identify complete-spam, while semispam is identified at the client side by crowdsourcing. An email client can distinguish junk from legitimate emails according to spam reports from credible contacts with similar interests. Social trust and interest similarity between users and their contacts are calculated so that spam reports are more accurately targeted to similar users. The experimental results show that the proposed CPSFS improves the accuracy of distinguishing spam from legitimate emails compared with a Bayesian filter alone.
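The server-side Bayesian filtering component can be sketched as a standard naive Bayes classifier with Laplace smoothing. This illustrates the generic technique, not the paper's exact CPSFS implementation:

```python
import math

def train_counts(emails):
    """emails: list of (tokens, is_spam). Returns per-class document counts
    of each token plus the class totals."""
    spam, ham = {}, {}
    n_spam = n_ham = 0
    for tokens, is_spam in emails:
        n_spam += is_spam
        n_ham += not is_spam
        target = spam if is_spam else ham
        for tok in set(tokens):
            target[tok] = target.get(tok, 0) + 1
    return spam, ham, n_spam, n_ham

def spam_probability(tokens, spam, ham, n_spam, n_ham):
    """Naive Bayes with Laplace smoothing; returns P(spam | tokens)."""
    log_odds = math.log((n_spam + 1) / (n_ham + 1))  # smoothed prior odds
    for tok in set(tokens):
        p_s = (spam.get(tok, 0) + 1) / (n_spam + 2)
        p_h = (ham.get(tok, 0) + 1) / (n_ham + 2)
        log_odds += math.log(p_s / p_h)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

In CPSFS-like designs this per-token model handles complete-spam at the server, while personalized semispam decisions are overlaid from trusted contacts' reports.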
Ferreira, Iuri E P; Zocchi, Silvio S; Baron, Daniel
2017-11-01
Reliable fertilizer recommendations depend on the correctness of the crop production models fitted to the data, but crop models are generally built empirically, neglecting important physiological aspects of the response to fertilizers, or they are based on laws of plant mineral nutrition seen by many authors as conflicting theories: Liebig's Law of the Minimum and Mitscherlich's Law of Diminishing Returns. We developed a new approach to modelling the crop response to fertilizers that reconciles these laws. In this study, Liebig's Law is applied at the cellular level to explain plant production and, as a result, crop models compatible with the Law of Diminishing Returns are derived. Some classical crop models appear here as special cases of our methodology, and a new interpretation of Mitscherlich's Law is also provided. Copyright © 2017 Elsevier Inc. All rights reserved.
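The two laws being reconciled can be stated compactly in code; the functional forms below are the textbook versions (the asymptote A, rate c, soil-supply offset b, and the linear-plateau slopes are illustrative parameters, not the paper's fitted model):

```python
import math

def mitscherlich(x, A, c, b=0.0):
    """Mitscherlich's Law of Diminishing Returns: yield approaches the
    asymptote A with ever-smaller marginal response to nutrient dose x
    (b represents nutrient already supplied by the soil)."""
    return A * (1.0 - math.exp(-c * (x + b)))

def liebig(doses, slopes, A):
    """Liebig's Law of the Minimum: yield is capped by the scarcest
    nutrient, modeled as a linear-plateau response per nutrient."""
    return min(A, *(s * d for s, d in zip(slopes, doses)))
```

Mitscherlich's curve is smooth and saturating, while Liebig's is piecewise linear and governed by a single limiting factor; the paper's contribution is a derivation in which the first emerges from applying the second at the cellular level.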
Accurate outage analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-04-01
In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme in which the selected relay chooses to cooperate only if the source-destination channel is of unacceptable quality. We first derive the exact statistics of the received signal-to-noise ratio (SNR) over each hop with co-located relays, in terms of the probability density function (PDF). The PDFs are then used to determine a very accurate closed-form expression for the outage probability at a transmission rate R. Furthermore, we perform an asymptotic analysis and deduce the diversity order of the scheme. We validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
Five challenges to reconcile agricultural land use and forest ecosystem services in Southeast Asia.
Carrasco, L R; Papworth, S K; Reed, J; Symes, W S; Ickowitz, A; Clements, T; Peh, K S-H; Sunderland, T
2016-10-01
Southeast Asia possesses the highest rates of tropical deforestation globally and exceptional levels of species richness and endemism. Many countries in the region are also recognized for their food insecurity and poverty, making the reconciliation of agricultural production and forest conservation a particular priority. This reconciliation requires recognition of the trade-offs between competing land-use values and the subsequent incorporation of this information into policy making. To date, such reconciliation has been relatively unsuccessful across much of Southeast Asia. We propose an ecosystem services (ES) value-internalization framework that identifies the key challenges to such reconciliation. These challenges include a lack of accessible ES valuation techniques; limited knowledge of the links between forests, food security, and human well-being; weak demand and political will for the integration of ES in economic activities and environmental regulation; a disconnect between decision makers and ES valuation; and a lack of transparent discussion platforms where stakeholders can work toward consensus on negotiated land-use management decisions. Key research priorities to overcome these challenges are developing easy-to-use ES valuation techniques; quantifying links between forests and well-being that go beyond economic values; understanding factors that prevent the incorporation of ES into markets, regulations, and environmental certification schemes; understanding how to integrate ES valuation into policy-making processes; and determining how to reduce corruption and power plays in land-use planning processes. © 2016 Society for Conservation Biology.
Leider, Jonathon P; Coronado, Fatima; Beck, Angela J; Harper, Elizabeth
2018-01-02
The purpose of this study is to reconcile public health workforce supply and demand data to understand whether the expected influx of public health graduates can meet turnover events. Four large public health workforce data sources were analyzed to establish measures of workforce demand, voluntary separations, and workforce employees likely to retire at state and local health departments. Data were collected in 2014-2016 and analyzed in 2016 and 2017. Potential workforce supply (i.e., candidates with formal public health training) was assessed by analyzing data on public health graduates. Supply and demand data were reconciled to identify potential gaps in the public health workforce. At the state and local level, approximately 197,000 staff are employed in health departments, down more than 50,000 from 2008. In total, at least 65,000 staff will leave their organizations during fiscal years 2016-2020, with as many as 100,000 leaving if all planned retirements occur by 2020. During 2000-2015, more than 223,000 people received a formal public health degree at some level. More than 25,000 students will receive a public health degree at some level in each year through 2020. Demand for public health staff could possibly be met by the influx of graduates from schools and programs of public health. However, substantial implications exist for the transfer of institutional knowledge and the ability to recruit and retain the best staff to sufficiently meet demand. Copyright © 2017 American Journal of Preventive Medicine. All rights reserved.
Hamm, Laura M; Giuffre, Anthony J; Han, Nizhou; Tao, Jinhui; Wang, Debin; De Yoreo, James J; Dove, Patricia M
2014-01-28
The physical basis for how macromolecules regulate the onset of mineral formation in calcifying tissues is not well established. A popular conceptual model assumes the organic matrix provides a stereochemical match during cooperative organization of solute ions. In contrast, another uses simple binding assays to identify good promoters of nucleation. Here, we reconcile these two views and provide a mechanistic explanation for template-directed nucleation by correlating heterogeneous nucleation barriers with crystal-substrate-binding free energies. We first measure the kinetics of calcite nucleation onto model substrates that present different functional group chemistries (carboxyl, thiol, phosphate, and hydroxyl) and conformations (C11 and C16 chain lengths). We find rates are substrate-specific and obey predictions of classical nucleation theory at supersaturations that extend above the solubility of amorphous calcium carbonate. Analysis of the kinetic data shows the thermodynamic barrier to nucleation is reduced by minimizing the interfacial free energy of the system, γ. We then use dynamic force spectroscopy to independently measure calcite-substrate-binding free energies, ΔGb. Moreover, we show that within the classical theory of nucleation, γ and ΔGb should be linearly related. The results bear out this prediction and demonstrate that low-energy barriers to nucleation correlate with strong crystal-substrate binding. This relationship is general to all functional group chemistries and conformations. These findings provide a physical model that reconciles the long-standing concept of templated nucleation through stereochemical matching with the conventional wisdom that good binders are good nucleators. The alternative perspectives become internally consistent when viewed through the lens of crystal-substrate binding.
Energy Technology Data Exchange (ETDEWEB)
Smedley-Stevenson, Richard P., E-mail: richard.smedley-stevenson@awe.co.uk [AWE PLC, Aldermaston, Reading, Berkshire, RG7 4PR (United Kingdom); Department of Earth Science and Engineering, Imperial College London, SW7 2AZ (United Kingdom); McClarren, Ryan G., E-mail: rmcclarren@ne.tamu.edu [Department of Nuclear Engineering, Texas A & M University, College Station, TX 77843-3133 (United States)
2015-04-01
This paper attempts to unify the asymptotic diffusion limit analysis of thermal radiation transport schemes, for a linear-discontinuous representation of the material temperature reconstructed from cell centred temperature unknowns, in a process known as ‘source tilting’. The asymptotic limits of both Monte Carlo (continuous in space) and deterministic approaches (based on linear-discontinuous finite elements) for solving the transport equation are investigated in slab geometry. The resulting discrete diffusion equations are found to have nonphysical terms that are proportional to any cell-edge discontinuity in the temperature representation. Based on this analysis it is possible to design accurate schemes for representing the material temperature, for coupling thermal radiation transport codes to a cell centred representation of internal energy favoured by ALE (arbitrary Lagrange–Eulerian) hydrodynamics schemes.
Combining image-processing and image compression schemes
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.
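Pyramid coding of the kind referred to above is typically a Laplacian pyramid: each level stores the detail lost by downsampling, so coarse levels can be transmitted first and progressively refined. A 1-D sketch with nearest-neighbor resampling (a simplification of the Gaussian-filtered pyramids used in practice):

```python
def down(x):
    # Keep every other sample
    return x[0::2]

def up(x, n):
    # Nearest-neighbor upsampling, truncated to length n
    out = []
    for v in x:
        out += [v, v]
    return out[:n]

def build_pyramid(signal, levels):
    """Laplacian pyramid: detail bands plus the coarsest approximation."""
    pyr = []
    g = list(signal)
    for _ in range(levels):
        coarse = down(g)
        pyr.append([a - b for a, b in zip(g, up(coarse, len(g)))])
        g = coarse
    pyr.append(g)  # coarsest approximation last
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add back each detail band."""
    g = pyr[-1]
    for detail in reversed(pyr[:-1]):
        g = [a + b for a, b in zip(up(g, len(detail)), detail)]
    return g
```

Because the detail bands record exactly what resampling discards, reconstruction is lossless; compression comes from quantizing the (mostly small) detail coefficients, and enhancement can be applied to individual bands before reconstruction.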
Vogl, Matthias
2014-04-01
The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separate national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost-category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view of operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost-category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules limit the managerial relevance and transparency of the capital costing scheme. The scheme creates the technical premises for a change from dual financing by insurers (operating costs) and the state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to resolve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A New Grünwald-Letnikov Derivative Derived from a Second-Order Scheme
Directory of Open Access Journals (Sweden)
B. A. Jacobs
2015-01-01
Full Text Available A novel derivation of a second-order accurate Grünwald-Letnikov-type approximation to the fractional derivative of a function is presented. This scheme is shown to be second-order accurate under certain modifications to account for poor accuracy in approximating the asymptotic behavior near the lower limit of differentiation. Some example functions are chosen and numerical results are presented to illustrate the efficacy of this new method over some other popular choices for discretizing fractional derivatives.
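For contrast with the second-order scheme derived in the paper, the classical first-order Grünwald-Letnikov approximation is easy to state: its weights are signed binomial coefficients generated by a simple recurrence. The sketch below implements that baseline scheme, not the paper's shifted second-order variant:

```python
def gl_weights(alpha, n):
    """First-order Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k),
    computed by the recurrence w_k = w_{k-1} (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_samples, alpha, h):
    """Approximate the order-alpha fractional derivative at the last of a
    set of equally spaced samples: h^{-alpha} * sum_k w_k f(t - k h)."""
    n = len(f_samples) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_samples[n - k] for k in range(n + 1)) / h ** alpha
```

For alpha = 1 the weights collapse to [1, -1, 0, ...], recovering the ordinary backward difference; for alpha = 0.5 and f(t) = t, the approximation converges (at first order in h) to the exact value 2*sqrt(t/pi).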
Directory of Open Access Journals (Sweden)
Lilja Jóhannesdóttir
2017-03-01
Full Text Available Intensified agricultural practices have driven biodiversity loss throughout the world, and although many actions aimed at halting and reversing these declines have been developed, their effectiveness depends greatly on the willingness of stakeholders to take part in conservation management. Knowledge of the willingness and capacity of landowners to engage with conservation can therefore be key to designing successful management strategies on agricultural land. In Iceland, agriculture is currently at a relatively low intensity but is very likely to expand in the near future. At the same time, Iceland supports internationally important breeding populations of many ground-nesting birds that could be seriously impacted by further expansion of agricultural activities. To understand the views of Icelandic farmers toward bird conservation, given the current potential for agricultural expansion, 62 farms across Iceland were visited and farmers were interviewed using a structured questionnaire in which respondents indicated the likelihood of a series of future actions. Most farmers intend to increase the area of cultivated land in the near future, and despite considering having rich birdlife on their land to be very important, most also report that they are unlikely to specifically consider bird conservation in their management, even if financial compensation were available. However, as no agri-environment schemes are currently in place in Iceland, this concept is highly unfamiliar to Icelandic farmers. Nearly all respondents were unwilling, and thought it would be impossible, to delay harvest, but many were willing to consider sparing important patches of land and/or maintaining existing pools within fields (a key habitat feature for breeding waders). Farmers' views on the importance of having rich birdlife on their land and their willingness to participate in bird conservation provide a potential platform for the codesign of conservation management with landowners.
HR Department
2007-01-01
As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department's website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme ...
Financial incentive schemes in primary care
Directory of Open Access Journals (Sweden)
Gillam S
2015-09-01
Full Text Available Stephen Gillam Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, UK Abstract: Pay-for-performance (P4P) schemes have become increasingly common in primary care, and this article reviews their impact. It is based primarily on existing systematic reviews. The evidence suggests that P4P schemes can change health professionals' behavior and improve recorded disease management for those clinical processes that are incentivized. P4P may narrow inequalities in performance between deprived and nondeprived areas. However, such schemes have unintended consequences. Whether P4P improves the patient experience, the outcomes of care, or population health is less clear. These practical uncertainties mirror the ethical concerns of many clinicians that a reductionist approach to managing markers of chronic disease runs counter to the humanitarian values of family practice. The variation in P4P schemes between countries reflects different historical and organizational contexts. With so much uncertainty regarding the effects of P4P, policy makers are well advised to proceed carefully with the implementation of such schemes until and unless clearer evidence of their cost–benefit emerges. Keywords: financial incentives, pay for performance, quality improvement, primary care
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
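One of the reconstruction techniques mentioned above, slope-limited polynomial reconstruction, can be illustrated with a minimal sketch. The snippet below implements the classic minmod-limited piecewise-linear (MUSCL-type) reconstruction of interface states from cell averages; it is a generic textbook variant, not the specific high-order reconstructions used in the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: returns the argument of smaller magnitude when
    the signs agree, and zero otherwise (flattens slopes at extrema)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Piecewise-linear reconstruction of interface states from cell
    averages u[0..n-1]; returns, for each interior cell, the state at
    its right interface (u_left) and at its left interface (u_right)."""
    du = np.diff(u)                       # one-sided differences
    slope = minmod(du[:-1], du[1:])       # limited slope in cells 1..n-2
    u_left = u[1:-1] + 0.5 * slope        # state at interface i+1/2
    u_right = u[1:-1] - 0.5 * slope       # state at interface i-1/2
    return u_left, u_right
```

Because the limited slope vanishes at local extrema, the reconstructed states stay within the range of neighboring cell averages, which is what prevents spurious oscillations near discontinuities.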
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.
2013-07-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.
Liu, Xuezhe; Lin, Zhong; Wang, Ruili
2017-07-01
A cell-centered finite volume scheme to solve diffusion equations on nonmatched meshes which result from the hydrodynamics calculation with slide line treatment is presented. The sliding meshes near the interface are handled as arbitrary polygons together with the internal ones and the hanging nodes can be considered naturally as the vertices of the polygon. A robust and accurate interpolation method based on Taylor expansion is proposed to eliminate the node unknowns including the ones at the hanging nodes.
Lomba, Angela; Alves, Paulo; Jongman, Rob H G; McCracken, David I
2015-03-01
Agriculture constitutes a dominant land cover worldwide, and rural landscapes under extensive farming practices are acknowledged for their high biodiversity levels. The High Nature Value farmland (HNVf) concept has been highlighted in EU environmental and rural policies due to its inherent potential to help characterize and direct financial support to European landscapes where high nature and/or conservation value depends on the continuation of specific low-intensity farming systems. Assessing the extent of HNV farmland by necessity relies on the availability of both ecological and farming-systems data, and the difficulties associated with making such assessments have been widely described across Europe. A spatially explicit framework of data collection, building out from local administrative units, has recently been suggested as a means of addressing such difficulties. This manuscript tests the relevance of the proposed approach, describes the spatially explicit framework in a case study area in northern Portugal, and discusses the potential of the approach to help better inform the implementation of conservation and rural development policies. Synthesis and applications: The potential of a novel approach (combining land use/cover, farming and environmental data) to provide more accurate and efficient mapping and monitoring of HNV farmlands is tested at the local level in northern Portugal. The approach is considered to constitute a step forward toward a more precise targeting of landscapes for agri-environment schemes, as it allowed a more accurate discrimination of areas within the case study landscape that have a higher value for nature conservation.
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.
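The point about abundance statistics that estimate a meaningful community parameter can be made concrete with a small sketch. The function below converts raw per-taxon read counts into genome-size-corrected relative abundances (approximating the fraction of cells rather than the fraction of reads); the function name and interface are illustrative, not the authors' pipeline.

```python
def relative_abundance(read_counts, genome_sizes):
    """Genome-size-corrected relative abundances from raw read counts.
    Both arguments are dicts keyed by taxon name. Illustrative sketch:
    reads scale with genome length, so counts are divided by genome
    size to approximate per-cell coverage before normalizing."""
    coverage = {t: read_counts[t] / genome_sizes[t] for t in read_counts}
    total = sum(coverage.values())
    return {t: c / total for t, c in coverage.items()}
```

Without the genome-size correction, a taxon with a genome twice as long would appear twice as abundant at equal cell counts, which is exactly the kind of non-comparable statistic the review warns against.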
Accurate renormalization group analyses in neutrino sector
Energy Technology Data Exchange (ETDEWEB)
Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)
2014-08-15
We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider the decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects enter at the standard model scale and are independent of high-energy physics, our method can in principle be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass-squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.
Fast and accurate multicomponent transport property evaluation
Energy Technology Data Exchange (ETDEWEB)
Ern, A. [CERMICS-ENPC, Le Grand (France)]|[Yale Univ., New Haven, CT (United States); Giovangigli, V. [CMAP-CNRS, Palaiseau (France)
1995-08-01
We investigate iterative methods for solving linear systems arising from the kinetic theory and providing transport coefficients of dilute polyatomic gas mixtures. These linear systems are obtained in their naturally constrained, singular, and symmetric form, using the formalism of Waldmann and Truebenbacher. The transport coefficients associated with the systems obtained by Monchick, Yun, and Mason are also recovered, if two misprints are corrected in the work of these authors. Using the recent theory of Ern and Giovangigli, all the transport coefficients are expressed as convergent series. By truncating these series, new, accurate, approximate expressions are obtained for all the transport coefficients. Finally, the computational efficiency of the present transport algorithms in multicomponent flow applications is illustrated with several numerical experiments. 38 refs., 12 tabs.
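The idea of obtaining accurate approximate coefficients by truncating a convergent series can be illustrated with a generic analogue: approximating the solution of a well-conditioned linear system by a truncated Neumann series. This is a sketch of the mathematical device, not the Ern–Giovangigli transport algorithms themselves.

```python
import numpy as np

def neumann_solve(A, b, n_terms):
    """Approximate x = A^{-1} b by truncating the Neumann series
    x = sum_k (I - A)^k b, which converges when the spectral radius
    of (I - A) is below one. Few terms already give an accurate
    approximation when that radius is small."""
    x = np.zeros_like(b)
    term = b.copy()
    M = np.eye(len(b)) - A
    for _ in range(n_terms):
        x = x + term
        term = M @ term        # next series term (I - A)^{k+1} b
    return x
```

Truncating after a fixed number of terms trades a controlled, rapidly shrinking error for a closed-form expression, which is the same trade-off exploited for the transport coefficients.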
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Kumar, Vivek; Raghurama Rao, S. V.
2008-04-01
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally
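The importance of conservation form, stressed above, can be seen in a minimal sketch. The snippet below advances Burgers' equation with the conservative Lax-Friedrichs scheme, a generic conservative scheme chosen for brevity (it is not Mickens' NSFDM): because the update is a difference of interface fluxes, the discrete total of u is preserved exactly on a periodic grid, which is what guarantees shocks propagate at the correct speed.

```python
import numpy as np

def lax_friedrichs_burgers(u, dt, dx, steps):
    """Advance Burgers' equation u_t + (u^2/2)_x = 0 with the
    conservative Lax-Friedrichs scheme on a periodic grid."""
    for _ in range(steps):
        f = 0.5 * u**2                          # physical flux
        up, um = np.roll(u, -1), np.roll(u, 1)  # periodic neighbors
        fp, fm = np.roll(f, -1), np.roll(f, 1)
        # Interface fluxes with Lax-Friedrichs numerical dissipation
        flux_r = 0.5 * (f + fp) - 0.5 * (dx / dt) * (up - u)
        flux_l = 0.5 * (fm + f) - 0.5 * (dx / dt) * (u - um)
        u = u - (dt / dx) * (flux_r - flux_l)   # flux-difference update
    return u
```

Since the update telescopes over the periodic grid, the sum of the cell values is conserved to round-off, unlike a non-conservative discretization of u*u_x, which can place shocks at wrong locations.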
METAPHORIC MECHANISMS IN IMAGE SCHEME DEVELOPMENT
Directory of Open Access Journals (Sweden)
Pankratova, S.A.
2017-06-01
Full Text Available Problems of knowledge representation by means of images remain cognitively significant and invariably modern. The article deals with the heuristic image potential of the literary sphere as a donor of meanings, aiding the metaphoric scheme development of another modern sphere, cinematography. The key factor here is the differentiation between two fundamentally different metaphor types: heuristic epiphora and image diaphora. The author offers a unique methodology for counting quantitative parameters of heuristic potential, which opens the possibility of modeling the mechanisms of metaphoric meaning development. In the summary the author underscores that both ways of image scheme development are of importance to cognitive science: both heuristic epiphora and image-based diaphora play a significant role in the explication of image scheme development.
IPCT: A scheme for mobile authentication
Directory of Open Access Journals (Sweden)
Vishnu Shankar
2016-09-01
Full Text Available Mobile phones are becoming a part of everyone's life as their computational power and storage rise and their cost comes down. Most mobile phone users have a lot of private data which they want to protect from others (La Polla et al., 2013). This means the user must be properly authenticated before accessing mobile resources. Normally a user is authenticated using text passwords, PINs, face recognition, patterns, etc. All of these methods are in use, but they have shortcomings. In this paper we survey various existing methods of mobile authentication and propose our improved mobile authentication scheme, IPCT. We compare our Image Pass Code with tapping scheme with existing techniques and show that our scheme is better than the existing techniques.
Development of an explicit non-staggered scheme for solving three-dimensional Maxwell's equations
Sheu, Tony W. H.; Chung, Y. W.; Li, J. H.; Wang, Y. C.
2016-10-01
An explicit finite-difference scheme for solving the three-dimensional Maxwell's equations in non-staggered grids is presented. We aspire to obtain time-dependent solutions of the Faraday's and Ampère's equations and predict the electric and magnetic fields within the discrete zero-divergence context (or Gauss's law). The local conservation laws in Maxwell's equations are numerically preserved using the explicit second-order accurate symplectic partitioned Runge-Kutta temporal scheme. Following the method of lines, the spatial derivative terms in the semi-discretized Faraday's and Ampère's equations are approximated theoretically to obtain a highly accurate numerical phase velocity. The proposed fourth-order accurate space-centered finite difference scheme minimizes the discrepancy between the exact and numerical phase velocities. This minimization process considerably reduces the dispersion and anisotropy errors normally associated with finite difference time-domain methods. The computational efficiency of getting the same level of accuracy at less computing time and the ability of preserving the symplectic property have been numerically demonstrated through several test problems.
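The dispersion-error idea behind minimizing the gap between exact and numerical phase velocities can be sketched with modified wavenumbers. For a Fourier mode with scaled wavenumber theta = k*h, a central difference of the first derivative effectively replaces theta by a "modified wavenumber"; the closer it tracks theta, the smaller the phase-velocity error. The formulas below are the standard second- and fourth-order central stencils, not the paper's optimized operator.

```python
import numpy as np

def modified_wavenumber(theta, order):
    """Modified wavenumber (k~ * h) of central first-derivative
    approximations at scaled wavenumber theta = k*h; the exact
    (spectral) operator would return theta itself."""
    if order == 2:
        # (f[i+1] - f[i-1]) / (2h) applied to exp(i*k*x)
        return np.sin(theta)
    if order == 4:
        # (-f[i+2] + 8f[i+1] - 8f[i-1] + f[i-2]) / (12h)
        return (8.0 * np.sin(theta) - np.sin(2.0 * theta)) / 6.0
    raise ValueError("only orders 2 and 4 implemented")
```

At moderate wavenumbers the fourth-order stencil tracks the exact phase far more closely, which is why higher-order space-centered schemes reduce the dispersion and anisotropy errors of standard FDTD methods.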
Tradable white certificate schemes : what can we learn from tradable green certificate schemes?
Oikonomou, Vlasis; Mundaca, Luis
In this paper, we analyze the experiences gained from tradable green certificate (TGC) schemes and extract some policy lessons that can lead to a successful design of a market-based approach for energy efficiency improvement, also known as tradable white certificate schemes. We use tradable green
A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes
DEFF Research Database (Denmark)
Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis
2009-01-01
array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
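The EnKF building block of the hybrid scheme can be sketched in a few lines. The snippet below implements a stochastic (perturbed-observation) EnKF analysis step for a single scalar observation; it is a minimal sketch of the filter family, without the OI hybridization or the dual parameter estimation described above, and the function signature is illustrative.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis for one scalar observation.
    ensemble: (n_members, n_state); H: (n_state,) linear observation
    operator; obs_var: observation error variance."""
    n = ensemble.shape[0]
    Hx = ensemble @ H                          # ensemble in obs space
    x_mean, hx_mean = ensemble.mean(0), Hx.mean()
    Xp = ensemble - x_mean                     # state perturbations
    Hp = Hx - hx_mean                          # obs-space perturbations
    pxy = Xp.T @ Hp / (n - 1)                  # state-obs covariance
    pyy = Hp @ Hp / (n - 1) + obs_var          # innovation variance
    K = pxy / pyy                              # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + np.outer(y_pert - Hx, K)
```

With small ensembles, the sample covariances pxy and pyy are noisy; that under-sampling is precisely the limitation the paper's hybrid EnKF-OI formulation is designed to mitigate.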
Carbon trading: Current schemes and future developments
Energy Technology Data Exchange (ETDEWEB)
Perdan, Slobodan, E-mail: slobodan.perdan@manchester.ac.uk [School of Chemical Engineering and Analytical Science, Room F30, The Mill, University of Manchester, Sackville Street, Manchester M13 9PL (United Kingdom); Azapagic, Adisa [School of Chemical Engineering and Analytical Science, Room F30, The Mill, University of Manchester, Sackville Street, Manchester M13 9PL (United Kingdom)
2011-10-15
This paper looks at the greenhouse gas (GHG) emissions trading schemes and examines the prospects of carbon trading. The first part of the paper gives an overview of several mandatory GHG trading schemes around the world. The second part focuses on the future trends in carbon trading. It argues that the emergence of new schemes, a gradual enlargement of the current ones, and willingness to link existing and planned schemes seem to point towards geographical, temporal and sectoral expansion of emissions trading. However, such expansion would need to overcome some considerable technical and non-technical obstacles. Linking of the current and emerging trading schemes requires not only considerable technical fixes and harmonisation of different trading systems, but also necessitates clear regulatory and policy signals, continuing political support and a more stable economic environment. Currently, the latter factors are missing. The global economic turmoil and its repercussions for the carbon market, a lack of the international deal on climate change defining the Post-Kyoto commitments, and unfavourable policy shifts in some countries, cast serious doubts on the expansion of emissions trading and indicate that carbon trading enters an uncertain period. - Highlights: > The paper provides an extensive overview of mandatory emissions trading schemes around the world. > Geographical, temporal and sectoral expansion of emissions trading are identified as future trends. > The expansion requires considerable technical fixes and harmonisation of different trading systems. > Clear policy signals, political support and a stable economic environment are needed for the expansion. > A lack of the post-Kyoto commitments and unfavourable policy shifts indicate an uncertain future for carbon trading.
Clocking Scheme for Switched-Capacitor Circuits
DEFF Research Database (Denmark)
Steensgaard-Madsen, Jesper
1998-01-01
A novel clocking scheme for switched-capacitor (SC) circuits is presented. It can enhance the understanding of SC circuits and the errors caused by MOSFET (MOS) switches. Charge errors, and techniques to make SC circuits less sensitive to them, are discussed.
An Elaborate Secure Quantum Voting Scheme
Zhang, Jia-Lei; Xie, Shu-Cui; Zhang, Jian-Zhong
2017-10-01
An elaborate secure quantum voting scheme is presented in this paper. It is based on quantum proxy blind signature. The eligible voter's voting information can be transmitted to the tallyman Bob with the help of the scrutineer Charlie. Charlie's supervision in the whole voting process can make the protocol satisfy fairness and un-repeatability so as to avoid Bob's dishonest behaviour. Our scheme uses the physical characteristics of quantum mechanics to achieve voting, counting and immediate supervision. In addition, the program also uses quantum key distribution protocol and quantum one-time pad to guarantee its unconditional security.
A COMPLETE SCHEME FOR A MUON COLLIDER.
Energy Technology Data Exchange (ETDEWEB)
PALMER,R.B.; BERG, J.S.; FERNOW, R.C.; GALLARDO, J.C.; KIRK, H.G.; ALEXAHIN, Y.; NEUFFER, D.; KAHN, S.A.; SUMMERS, D.
2007-09-01
A complete scheme for the production, cooling, acceleration, and collider ring for a 1.5 TeV center-of-mass muon collider is presented, together with parameters for two higher energy machines. The scheme starts with the front end of a proposed neutrino factory that yields bunch trains of both muon signs. Six-dimensional cooling in long-period helical lattices reduces the longitudinal emittance until it becomes possible to merge the trains into single bunches, one of each sign. Further cooling in all dimensions is applied to the single bunches in further helical lattices. Final transverse cooling to the required parameters is achieved in 50 T solenoids.
Autonomous droop scheme with reduced generation cost
DEFF Research Database (Denmark)
Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede
2013-01-01
Droop schemes have been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For a standalone microgrid, where a centralized management system is not viable, proportional power sharing based on droop might not suit well since... DGs are usually of different types, unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG, since all these factors directly or indirectly contribute to the Total Generation Cost (TGC...
Security problem on arbitrated quantum signature schemes
Energy Technology Data Exchange (ETDEWEB)
Choi, Jeong Woon [Emerging Technology R and D Center, SK Telecom, Kyunggi 463-784 (Korea, Republic of); Chang, Ku-Young; Hong, Dowon [Cryptography Research Team, Electronics and Telecommunications Research Institute, Daejeon 305-700 (Korea, Republic of)
2011-12-15
Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.
Droop Scheme With Consideration of Operating Costs
DEFF Research Database (Denmark)
Nutkani, I. U.; Loh, Poh Chiang; Blaabjerg, Frede
2014-01-01
Although many droop schemes have been proposed for distributed generator (DG) control in a microgrid, they mainly focus on achieving proportional power sharing based on the DG kVA ratings. Other factors like generation costs, efficiencies, and emission penalties at different loads have not been considered even though they are different for different types of DGs. This letter thus proposes an alternative droop scheme, which can better distinguish the different operating characteristics and objectives of the DGs grouped together in a weighted droop expression. The power sharing arrived at in the steady...
Jah, M.; Mallik, V.
There are many Resident Space Objects (RSOs) in the Geostationary Earth Orbit (GEO) regime, both operational and debris. The primary non-gravitational force acting on these RSOs is Solar Radiation Pressure (SRP), which is sensitive to the RSO’s area-to-mass ratio. Sparse observation data and mismodelling of non-gravitational forces have constrained the state of practice in tracking and characterizing RSOs. Accurate identification, characterization, tracking, and motion prediction of RSOs is a high priority research issue, as it shall aid in assessing collision probabilities in the GEO regime, and orbital safety writ large. Previous work in characterizing RSOs has taken a preliminary step in exploiting fused astrometric and photometric data to estimate the RSO mass, shape, attitude, and size. This works, in theory, since angles data are sensitive to the SRP albedo-area-to-mass ratio, and photometric data are sensitive to shape, attitude, and observed albedo-area. By fusing these two data types, mass and albedo-area both become observable parameters and can be estimated as independent quantities. However, previous work in mass and albedo-area estimation has not quantified and assessed the fundamental physical link between SRP albedo-area and observed albedo-area. The observed albedo-area is always a function of the SRP albedo-area along the line of sight of the observer. This is the physical relationship that this current research exploits.
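The sensitivity of SRP to the area-to-mass ratio can be sketched with the simple cannonball model, in which the SRP acceleration magnitude at 1 AU is a = C_r (Φ/c)(A/m). The parameter names and the cannonball simplification below are illustrative; the cited work estimates area-to-mass from fused astrometric and photometric data rather than assuming it.

```python
def srp_acceleration(area_to_mass, reflectivity=1.3, flux=1361.0, c=2.998e8):
    """SRP acceleration magnitude (m/s^2) in the cannonball model:
    a = C_r * (Phi/c) * (A/m), with solar flux Phi in W/m^2 at 1 AU,
    reflectivity coefficient C_r, and area-to-mass ratio A/m in m^2/kg."""
    return reflectivity * (flux / c) * area_to_mass
```

Because the acceleration is linear in A/m, two RSOs with the same mass but different effective areas drift apart under SRP, which is why the area-to-mass ratio is such a discriminating parameter for tracking and characterization.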
Duru, Kenneth
2014-12-01
© 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.
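The summation-by-parts (SBP) structure underlying the energy estimates can be shown with the simplest member of the family. The snippet below builds the standard second-order diagonal-norm SBP first-derivative operator D = H⁻¹Q and is lower order than the schemes in the paper, but it exhibits the same defining property Q + Qᵀ = diag(-1, 0, ..., 0, 1), which mimics integration by parts and enables provable time stability.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order diagonal-norm SBP first-derivative operator
    D = H^{-1} Q on n points with spacing h. Returns (D, H, Q)."""
    Hd = np.full(n, h)
    Hd[0] = Hd[-1] = 0.5 * h                  # diagonal norm (quadrature) H
    Q = np.zeros((n, n))
    for i in range(1, n - 1):                 # interior: central difference
        Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
    Q[0, 0], Q[0, 1] = -0.5, 0.5              # one-sided boundary closures
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    return np.diag(1.0 / Hd) @ Q, np.diag(Hd), Q
```

Multiplying the semi-discrete equations by uᵀH and using Q + Qᵀ = B reduces the discrete energy rate to boundary terms only, which is exactly the mechanism the penalty (SAT) terms at the layer interfaces then control.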
Improved nonequilibrium viscous shock-layer scheme for hypersonic blunt-body flowfields
Bhutta, Bilal A.; Lewis, Clark H.
1992-01-01
The nonequilibrium viscous shock-layer (VSL) solution scheme is revisited to improve its solution accuracy in the stagnation region and also to minimize and control errors in the conservation of elemental mass. The stagnation-point solution is improved by using a second-order expansion for the normal velocity, and the elemental mass conservation is improved by directly imposing the element conservation equations as solution constraints. These modifications are such that the general structure and computational efficiency of the nonequilibrium VSL scheme is not affected. This revised nonequilibrium VSL scheme is used to study the Mach 20 flow over a 7-deg sphere-cone vehicle under 0- and 20-deg angle-of-attack conditions. Comparisons are made with the corresponding predictions of Navier-Stokes and parabolized Navier-Stokes solution schemes. The results of these tests show that the nonequilibrium blunt-body VSL scheme is indeed an accurate, fast, and extremely efficient means for generating the blunt-body flowfield over spherical nose tips at zero-to-large angles of attack.
Recent improvements in the nonequilibrium VSL scheme for hypersonic blunt-body flows
Bhutta, Bilal A.; Lewis, Clark H.
1991-01-01
The nonequilibrium viscous shock-layer (VSL) solution scheme is revisited to improve its solution accuracy in the stagnation point region and also to minimize and control the errors in the conservation of elemental mass. The stagnation-point solution is improved by using a second-order expansion for the normal velocity and the elemental mass conservation is improved by directly imposing the element conservation equations as solution constraints. These modifications are such that the general structure and computational efficiency of the nonequilibrium VSL scheme is not affected. This revised nonequilibrium VSL scheme is used to study the Mach 20 flow over a 7-deg sphere-cone vehicle under zero and 20-deg angle-of-attack conditions. Comparisons are made with the corresponding predictions of Navier-Stokes and Parabolized Navier-Stokes solution schemes. The results of these tests show that the nonequilibrium blunt-body VSL scheme is indeed an accurate, fast and extremely efficient means for generating the blunt-body flowfield over spherical nosetips at small-to-large angles of attack.
Energy Technology Data Exchange (ETDEWEB)
Baxevanou, Catherine A.; Vlachos, Nicolas S.
2004-07-01
This paper is a comparative study of combining turbulence models and interpolation schemes to calculate turbulent flow around a NACA0012 airfoil before and after separation. The calculations were carried out using the code CAFFA of Peric, which was appropriately modified to include more numerical schemes and turbulence models. This code solves the Navier-Stokes equations for 2D incompressible flow, using finite volumes and structured, collocated, curvilinear, body fitted grids. Seven differencing schemes were investigated: central, upwind, hybrid, QUICK, Harten-Yee upwind TVD with five limiters, Roe-Sweby upwind TVD with three limiters, and Davis-Yee symmetric TVD with three limiters. Turbulence effects were incorporated using four turbulence models: standard {kappa}-{epsilon}, {kappa}-{omega} high Re with wall functions, {kappa}-{omega} high Re with integration up to the wall, and the {kappa}-{omega} low Re model. A parametric study showed that best results are obtained: a) for the {kappa}-{epsilon} model, when using the Harten-Yee upwind TVD scheme for the velocities and the upwind interpolation for the turbulence properties {kappa} and {epsilon}, and b) for the {kappa}-{omega} models, when using the Harten-Yee upwind TVD scheme with different limiters for the velocities and the turbulence quantities {kappa} and {omega}. The turbulence models that integrate up to the wall are more accurate when separation appears, while those using wall functions converge much faster. (Author)
A survey of Strong Convergent Schemes for the Simulation of ...
African Journals Online (AJOL)
PROF. OLIVER OSUAGWA
2014-12-01
We considered strong convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor's expansion, which is the main tool used for the derivation of strong convergent schemes; the Euler-Maruyama, Milstein scheme, stochastic multistep schemes, Implicit ...
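The two lowest-order strong schemes surveyed above differ by a single term from the stochastic Taylor expansion. A sketch on geometric Brownian motion, whose exact solution makes the pathwise (strong) error directly measurable; the drift, volatility, and step count are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x, dt, dw, a, b):
    """One Euler-Maruyama step for dX = a(X)dt + b(X)dW (strong order 0.5)."""
    return x + a(x) * dt + b(x) * dw

def milstein(x, dt, dw, a, b, db):
    """One Milstein step: adds the 0.5*b*b'*(dW^2 - dt) correction term
    from the stochastic Taylor expansion (strong order 1.0)."""
    return x + a(x) * dt + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - dt)

# Geometric Brownian motion dX = mu*X dt + sigma*X dW has the exact solution
# X_T = X_0 * exp((mu - sigma^2/2)*T + sigma*W_T), so errors are measurable.
mu, sigma, x0, T, n = 0.05, 0.4, 1.0, 1.0, 2**10
dt = T / n
a = lambda x: mu * x
b = lambda x: sigma * x
db = lambda x: sigma          # b'(x) for b(x) = sigma*x

dw = rng.normal(0.0, np.sqrt(dt), size=n)   # shared Brownian increments
x_em, x_mil = x0, x0
for k in range(n):
    x_em = euler_maruyama(x_em, dt, dw[k], a, b)
    x_mil = milstein(x_mil, dt, dw[k], a, b, db)
x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dw.sum())
print(abs(x_em - x_exact), abs(x_mil - x_exact))
```

Driving both schemes with the same Brownian increments is what makes the comparison a strong (pathwise) one, as opposed to comparing distributions.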
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system, which often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy, which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
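The central idea of exponential treatment (integrating the stiff part exactly through the matrix exponential, so the step size is not limited by stiffness) can be sketched on a scalar model problem. The exponential Euler method below is a first-order member of this family, not the paper's integrator, and the test problem is an illustrative construction:

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, with the z -> 0 limit handled."""
    return np.expm1(z) / z if z != 0.0 else 1.0

def exponential_euler(x, t, h, lam, f):
    """One exponential Euler step for x' = lam*x + f(t): the stiff linear
    part is propagated exactly via e^{lam*h}; the step is exact when f is
    constant over the step."""
    return np.exp(lam * h) * x + h * phi1(lam * h) * f(t)

# Stiff model problem: x' = lam*(x - cos t) - sin t, exact solution x(t) = cos t.
lam = -1000.0
f = lambda t: -lam * np.cos(t) - np.sin(t)   # so that x' = lam*x + f(t)
h, T = 0.1, 2.0        # step size far above the explicit stability limit 2/|lam|
x, t = 1.0, 0.0
while t < T - 1e-12:
    x = exponential_euler(x, t, h, lam, f)
    t += h
print(x, np.cos(t))  # tracks the exact solution despite the enormous step
```

An explicit Euler step at this step size would amplify errors by |1 + lam*h| = 99 per step and diverge immediately; the exponential integrator remains stable and accurate, which is the sensitivity-to-stiffness point the abstract makes.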
Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.
Ganyecz, Ádám; Kállay, Mihály; Csontos, József
2017-02-09
An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy, perturbative quadruple excitations and scalar relativistic corrections were indispensable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental and our values, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied, this study delivers the best heat of formation as well as entropy data.
Accurate lineshape spectroscopy and the Boltzmann constant.
Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N
2015-10-14
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
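The Voigt profile whose breakdown is reported above is the convolution of a Gaussian (Doppler broadening) with a Lorentzian (lifetime/pressure broadening), and is conventionally evaluated through the Faddeeva function. A minimal sketch with illustrative widths, not the paper's Cs parameters:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z) = e^{-z^2} erfc(-iz)

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian of standard deviation sigma convolved with a
    Lorentzian of half-width gamma, via V(x) = Re[w(z)] / (sigma*sqrt(2*pi))
    with z = (x + i*gamma) / (sigma*sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-50.0, 50.0, 20001)
v = voigt(x, sigma=1.0, gamma=0.5)
area = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(x))   # trapezoidal integral
print(area)  # close to 1: the profile is area-normalised
```

The slowly decaying Lorentzian wings mean the truncated integral falls slightly short of 1; fitting real spectra at shot-noise precision is exactly where such sub-percent lineshape details, and the breakdown reported above, become measurable.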
Accurate Emission Line Diagnostics at High Redshift
Jones, Tucker
2017-08-01
How do the physical conditions of high redshift galaxies differ from those seen locally? Spectroscopic surveys have invested hundreds of nights of 8- and 10-meter telescope time as well as hundreds of Hubble orbits to study evolution in the galaxy population at redshifts z ≈ 0.5-4 using rest-frame optical strong emission line diagnostics. These surveys reveal evolution in the gas excitation with redshift but the physical cause is not yet understood. Consequently there are large systematic errors in derived quantities such as metallicity. We have used direct measurements of gas density, temperature, and metallicity in a unique sample at z=0.8 to determine reliable diagnostics for high redshift galaxies. Our measurements suggest that offsets in emission line ratios at high redshift are primarily caused by high N/O abundance ratios. However, our ground-based data cannot rule out other interpretations. Spatially resolved Hubble grism spectra are needed to distinguish between the remaining plausible causes such as active nuclei, shocks, diffuse ionized gas emission, and HII regions with escaping ionizing flux. Identifying the physical origin of evolving excitation will allow us to build the necessary foundation for accurate measurements of metallicity and other properties of high redshift galaxies. Only then can we exploit the wealth of data from current surveys and near-future JWST spectroscopy to understand how galaxies evolve over time.
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
Role of eruption season in reconciling model and proxy responses to tropical volcanism.
Stevenson, Samantha; Fasullo, John T; Otto-Bliesner, Bette L; Tomas, Robert A; Gao, Chaochao
2017-02-21
The response of the El Niño/Southern Oscillation (ENSO) to tropical volcanic eruptions has important worldwide implications, but remains poorly constrained. Paleoclimate records suggest an "El Niño-like" warming 1 year following major eruptions [Adams JB, Mann ME, Ammann CM (2003) Nature 426:274-278] and "La Niña-like" cooling within the eruption year [Li J, et al. (2013) Nat Clim Chang 3:822-826]. However, climate models currently cannot capture all these responses. Many eruption characteristics are poorly constrained, which may contribute to uncertainties in model solutions-for example, the season of eruption occurrence is often unknown and assigned arbitrarily. Here we isolate the effect of eruption season using experiments with the Community Earth System Model (CESM), varying the starting month of two large tropical eruptions. The eruption-year atmospheric circulation response is strongly seasonally dependent, with effects on European winter warming, the Intertropical Convergence Zone, and the southeast Asian monsoon. This creates substantial variations in eruption-year hydroclimate patterns, which do sometimes exhibit La Niña-like features as in the proxy record. However, eruption-year equatorial Pacific cooling is not driven by La Niña dynamics, but strictly by transient radiative cooling. In contrast, equatorial warming the following year occurs for all starting months and operates dynamically like El Niño. Proxy reconstructions confirm these results: eruption-year cooling is insignificant, whereas warming in the following year is more robust. This implies that accounting for the event season may be necessary to describe the initial response to volcanic eruptions and that climate models may be more accurately simulating volcanic influences than previously thought.
Creating Culturally Sustainable Agri-Environmental Schemes
Burton, Rob J. F.; Paragahawewa, Upananda Herath
2011-01-01
Evidence is emerging from across Europe that contemporary agri-environmental schemes are having only limited, if any, influence on farmers' long-term attitudes towards the environment. In this theoretical paper we argue that these approaches are not "culturally sustainable," i.e. the actions are not becoming embedded within farming…
Pay-what-you-want pricing schemes
DEFF Research Database (Denmark)
Kahsay, Goytom Abraha; Samahita, Margaret
Pay-What-You-Want (PWYW) pricing schemes are becoming increasingly popular in a wide range of industries. We develop a model incorporating self-image into the buyer's utility function and introduce heterogeneity in consumption utility and image-sensitivity, which generates different purchase...
Enhancing Cooperative Loan Scheme Through Automated Loan ...
African Journals Online (AJOL)
Enhancing Cooperative Loan Scheme Through Automated Loan Management System. ... Financial transactions through manual system of operation are prone to errors and unimagined complexities, making it so difficult a task maintaining all entries of users account, search records of activities, handle loan deduction errors ...
Geometrical and frequential watermarking scheme using similarities
Bas, Patrick; Chassery, Jean-Marc; Davoine, Franck
1999-04-01
Watermarking schemes are more and more robust to classical degradations. The NEC system developed by Cox, using both original and marked images, can detect the mark with a JPEG compression ratio of 30. Nevertheless a very simple geometric attack done by the program Stirmark can remove the watermark. Most of the present watermarking schemes only map a mark on the image without geometric reference and therefore are not robust to geometric transformation. We present a scheme based on the modification of a collage map (issued from a fractal code used in fractal compression). We add a mark introducing similarities in the image. The embedding of the mark is done by selecting points of interest supporting blocks on which similarities are hidden. This selection is done by the Harris-Stephens detector. The similarity is embedded locally to be robust to cropping. Contrary to many schemes, the reference mark used for the detection comes from the marked image and thus undergoes geometrical distortions. The detection of the mark is done by searching interest blocks and their similarities. It does not use the original image and the robustness is guaranteed by a key. Our first results show that the similarities-based watermarking is quite robust to geometric transformation such as translations, rotations and cropping.
The Partners in Flight species prioritization scheme
William C. Hunter; Michael F. Carter; David N. Pashley; Keith Barker
1993-01-01
The prioritization scheme identifies those birds at any locality on several geographic scales most in need of conservation action. Further, it suggests some of those actions that ought to be taken. Ranking criteria used to set priorities for Neotropical migratory landbirds measure characteristics of species that make them vulnerable to local and global extinction....
Shanghai : Developing a Green Electricity Scheme
Heijndermans, Enno; Berrah, Noureddine; Crowdis, Mark D.
2006-01-01
This report documents the experience of developing a green electricity scheme in Shanghai, China. It is intended to be a resource when replicating this effort in another city or country. The study consists of two parts. In Part 1, the general characteristics of both the framework for green electricity products and the market for green electricity products are presented. It also presents a ...
EXPERIMENTAL EVALUATION OF LIDAR DATA VISUALIZATION SCHEMES
Directory of Open Access Journals (Sweden)
S. Ghosh
2012-07-01
Full Text Available LiDAR (Light Detection and Ranging) has attained the status of an industry-standard method of data collection for gathering three-dimensional topographic information. Datasets captured through LiDAR are dense, redundant and perceivable from multiple directions, unlike other geospatial datasets collected through conventional methods. This three-dimensional information has triggered an interest in the scientific community to develop methods for visualizing LiDAR datasets and value-added products. Elementary schemes of visualization use point clouds with intensity or colour, and triangulation- or tetrahedralization-based terrain models draped with texture. Newer methods use feature extraction, either through classification or segmentation. In this paper, the authors have conducted a visualization experience survey in which 60 participants responded to a questionnaire. The questionnaire poses six different questions on the qualities of feature and depth perception for 12 visualization schemes. The answers to these questions are obtained on a scale of 1 to 10. Results are analysed using the non-parametric Friedman test, with post-hoc analysis to rank the visualization schemes by the ratings received, and the rankings are finally confirmed through the Page trend test. Results show that a heuristic-based visualization scheme, developed by Ghosh and Lohani (2011), performs the best in terms of feature and depth perception.
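The statistical procedure described above (Friedman's rank test across repeated ratings, followed by ranking) can be sketched with scipy. The ratings below are fabricated stand-ins for the survey's 1-10 responses, with three schemes and six participants instead of twelve and sixty:

```python
from scipy.stats import friedmanchisquare

# Hypothetical ratings (1-10 scale) of three visualization schemes by six
# participants -- illustrative numbers, not the study's data.
scheme_a = [8, 7, 9, 8, 7, 9]
scheme_b = [5, 6, 5, 6, 5, 6]
scheme_c = [3, 4, 2, 3, 4, 3]

# Friedman's test ranks the schemes within each participant (each participant
# is a block) and asks whether the mean ranks differ more than chance allows.
stat, p = friedmanchisquare(scheme_a, scheme_b, scheme_c)
print(stat, p)  # every participant ranks a > b > c here, so p is small
```

Because the test operates on within-participant ranks rather than raw scores, it is insensitive to individual rating scales, which is why it suits a 1-10 perception survey.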
The data cyclotron query processing scheme
R.A. Goncalves (Romulo); M.L. Kersten (Martin)
2010-01-01
Distributed database systems exploit static workload characteristics to steer data fragmentation and data allocation schemes. However, the grand challenge of distributed query processing is to come up with a self-organizing architecture, which exploits all resources to manage the hot
Value constraints in the CLP scheme
M.H. van Emden
1996-01-01
This paper addresses the question of how to incorporate constraint propagation into logic programming. A likely candidate is the CLP scheme, which allows one to exploit algorithmic opportunities while staying within logic programming semantics. CLP(ℛ) is an example: it combines
Benefit Reentitlement Conditions in Unemployment Insurance Schemes
DEFF Research Database (Denmark)
Andersen, Torben M.; Christoffersen, Mark Strøm; Svarer, Michael
Unemployment insurance schemes include conditions on past employment history as part of the eligibility conditions. This aspect is often neglected in the literature which primarily focuses on benefit levels and benefit duration. In a search-matching framework we show that benefit duration and emp...
External quality assessment schemes for toxicology.
Wilson, John
2002-08-14
A variety of external quality assurance (EQA) schemes monitor quantitative performance for routine biochemical analysis of agents such as paracetamol, salicylate, ethanol and carboxyhaemoglobin. Their usefulness for toxicologists can be lessened where the concentrations monitored do not extend fully into the toxic range or where the matrix is synthetic, of animal origin or serum as opposed to whole human blood. A scheme for quantitative determinations of a wider range of toxicological analytes such as opioids, benzodiazepines and tricyclics in human blood has been piloted by the United Kingdom National External Quality Assessment Scheme (UKNEQAS). Specialist schemes are available for drugs of abuse testing in urine and for hair analysis. Whilst these programmes provide much useful information on the performance of analytical techniques, they fail to monitor the integrated processes that are needed in investigation of toxicological cases. In practice, both qualitative and quantitative tests are used in combination with case information to guide the evaluation of the samples and to develop an interpretation of the analytical findings that is used to provide clinical or forensic advice. EQA programs that combine the analytical and interpretative aspects of case studies are available from EQA providers such as UKNEQAS and the Dutch KKGT program (Stichting Kwaliteitsbewaking Klinische Geneesmiddelanalyse en Toxicologie).
Labeling schemes for bounded degree graphs
DEFF Research Database (Denmark)
Adjiashvili, David; Rotbart, Noy Galil
2014-01-01
graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...
Voluntary Certification Schemes and Legal Minimum Standards
Herzfeld, T.; Jongeneel, R.A.
2012-01-01
EU farmers face increasing requests to comply with legal as well as private agribusiness and retail standards. Both requests potentially raise farmer’s administrative burden. This paper discusses the potential synergies between cross-compliance and third-party certification schemes. In selected
Traffic calming schemes : opportunities and implementation strategies.
Schagen, I.N.L.G. van (ed.)
2003-01-01
Commissioned by the Swedish National Road Authority, this report aims to provide a concise overview of knowledge of and experiences with traffic calming schemes in urban areas, both on a technical level and on a policy level. Traffic calming refers to a combination of network planning and
A Presuppositional Approach to Conceptual Schemes | Wang ...
African Journals Online (AJOL)
; for they have been focused too much on the truth-conditional notions of meaning/concepts and translation/interpretation in Tarski's style. It is exactly due to such a Quinean interpretation of the notion of conceptual schemes that the very notion ...
Observations and comments on the classification schemes ...
African Journals Online (AJOL)
Reviews of the classification schemes proposed by various geologists are basically similar. However, general discrepancies, inconsistencies and contradictions in the stratigraphic positions of some of the rock units have been observed, as well as terminologies to describe rock units which are inconsistent with stratigraphic ...
Heim, Lale; Schaal, Susanne
2014-01-01
As a consequence of the 1994 Rwandan genocide, prevalences of mental disorders are elevated in Rwanda. More knowledge about determinants of mental stress can help to improve mental health services and treatment in the east-central African country. The present study aimed to investigate actual rates of mental stress (posttraumatic stress disorder, syndromal depression and syndromal anxiety) in Rwanda and to examine whether gender, persecution during the genocide, readiness to reconcile, as well as importance given to religiosity and quality of religiosity, are predictors of mental stress. The study comprised a community sample of N = 200 Rwandans from Rwanda's capital Kigali, who experienced the Rwandan genocide. By conducting structured interviews, ten local Master level psychologists assessed types of potentially traumatic lifetime events, symptoms of posttraumatic stress disorder (PTSD), depression and anxiety, readiness to reconcile, and religiosity. Applying non-recursive structural equation modeling (SEM), the associations between gender, persecution, readiness to reconcile, religiosity and mental stress were investigated. Respondents had experienced an average of 11.38 types of potentially traumatic lifetime events. Of the total sample, 11% met diagnostic criteria for PTSD, 19% presented with syndromal depression and 23% with syndromal anxiety. Female sex, persecution and readiness to reconcile were significant predictors of mental stress. A twofold association was found between centrality of religion (which captures the importance given to religiosity) and mental stress, showing that higher mental stress provokes a higher centrality and that higher centrality reduces mental stress. The variables positive and negative religious functioning (which determine the quality of religiosity) had, respectively, an indirect negative and positive effect on mental stress. Study results provide evidence that rates of mental stress are still elevated in Rwanda and that
Energy decomposition scheme based on the generalized Kohn-Sham scheme.
Su, Peifeng; Jiang, Zhen; Chen, Zuochang; Wu, Wei
2014-04-03
In this paper, a new energy decomposition analysis scheme based on the generalized Kohn-Sham (GKS) scheme and the localized molecular orbital energy decomposition analysis (LMO-EDA) scheme, named GKS-EDA, is proposed. The GKS-EDA scheme is applicable to a much wider range of DFT functionals than LMO-EDA. In the GKS-EDA scheme, the exchange, repulsion, and polarization terms are determined by DFT orbitals; the correlation term is defined as the difference of the GKS correlation energy from monomers to supermolecule. Using the new definition, the GKS-EDA scheme avoids the error of LMO-EDA that arises from the separate treatment of the EX and EC functionals. The scheme can perform analysis both in the gas and in the condensed phases with most of the popular DFT functionals, including LDA, GGA, meta-GGA, hybrid GGA/meta-GGA, double hybrid, range-separated (long-range correction), and dispersion correction. With the GKS-EDA scheme, DFT functionals are assessed for hydrogen bonding, vdW interaction, symmetric radical cation, charge-transfer, and metal-ligand interaction.
LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica
Caprio, M. A.
2005-09-01
LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summary: Title of program: LevelScheme. Catalogue identifier: ADVZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ. Operating systems: any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux. Programming language used: Mathematica 4. Number of bytes in distributed program, including test and documentation: 3 051 807. Distribution format: tar.gz. Nature of problem: creation of level scheme diagrams; creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution: a set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.
Kuster, Daniel J.; Liu, Chengyu; Fang, Zheng; Ponder, Jay W.; Marshall, Garland R.
2015-01-01
Theoretical and experimental evidence for non-linear hydrogen bonds in protein helices is ubiquitous. In particular, amide three-centered hydrogen bonds are common features of helices in high-resolution crystal structures of proteins. These high-resolution structures (1.0 to 1.5 Å nominal crystallographic resolution) position backbone atoms without significant bias from modeling constraints and identify Φ = -62°, ψ = -43° as the consensus backbone torsional angles of protein helices. These torsional angles preserve the atomic positions of α-β carbons of the classic Pauling α-helix while allowing the amide carbonyls to form bifurcated hydrogen bonds as first suggested by Némethy et al. in 1967. Molecular dynamics simulations of a capped 12-residue oligoalanine in water with AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications), a second-generation force field that includes multipole electrostatics and polarizability, reproduces the experimentally observed high-resolution helical conformation and correctly reorients the amide-bond carbonyls into bifurcated hydrogen bonds. This simple modification of backbone torsional angles reconciles experimental and theoretical views to provide a unified view of amide three-centered hydrogen bonds as crucial components of protein helices. The reason why they have been overlooked by structural biologists depends on the small crankshaft-like changes in orientation of the amide bond that allows maintenance of the overall helical parameters (helix pitch (p) and residues per turn (n)). The Pauling 3.6(13) α-helix fits the high-resolution experimental data with the minor exception of the amide-carbonyl electron density, but the previously associated backbone torsional angles (Φ, Ψ) needed slight modification to be reconciled with three-atom centered H-bonds and multipole electrostatics. Thus, a new standard helix, the 3.6(13/10)-, Némethy- or N-helix, is proposed. Due to the use of constraints from monopole
Electricity storage using a thermal storage scheme
White, Alexander
2015-01-01
The increasing use of renewable energy technologies for electricity generation, many of which have an unpredictably intermittent nature, will inevitably lead to a greater demand for large-scale electricity storage schemes. For example, the expanding fraction of electricity produced by wind turbines will require either backup or storage capacity to cover extended periods of wind lull. This paper describes a recently proposed storage scheme, referred to here as Pumped Thermal Storage (PTS), and which is based on "sensible heat" storage in large thermal reservoirs. During the charging phase, the system effectively operates as a high temperature-ratio heat pump, extracting heat from a cold reservoir and delivering heat to a hot one. In the discharge phase the processes are reversed and it operates as a heat engine. The round-trip efficiency is limited only by process irreversibilities (as opposed to Second Law limitations on the coefficient of performance and the thermal efficiency of the heat pump and heat engine respectively). PTS is currently being developed in both France and England. In both cases, the schemes operate on the Joule-Brayton (gas turbine) cycle, using argon as the working fluid. However, the French scheme proposes the use of turbomachinery for compression and expansion, whereas for that being developed in England reciprocating devices are proposed. The current paper focuses on the impact of the various process irreversibilities on the thermodynamic round-trip efficiency of the scheme. Consideration is given to compression and expansion losses and pressure losses (in pipe-work, valves and thermal reservoirs); heat transfer related irreversibility in the thermal reservoirs is discussed but not included in the analysis. Results are presented demonstrating how the various loss parameters and operating conditions influence the overall performance.
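The round-trip bookkeeping behind the scheme can be sketched in the reversible limit: a Carnot heat pump's coefficient of performance and a Carnot heat engine's efficiency between the same two store temperatures are exact reciprocals, so with ideal machinery the storage is lossless, and machine irreversibilities multiply in. This toy model is a gross simplification of the paper's component-by-component loss analysis, and all temperatures and efficiency factors are illustrative:

```python
def round_trip_efficiency(t_hot, t_cold, eta_pump, eta_engine):
    """Toy PTS round-trip estimate (hypothetical simplification): charging is a
    heat pump with COP = eta_pump * COP_Carnot, discharging is a heat engine
    with efficiency eta_engine * eta_Carnot. In the reversible limit
    (eta_pump = eta_engine = 1) the product is exactly 1: lossless storage."""
    cop = eta_pump * t_hot / (t_hot - t_cold)        # heat stored per unit work in
    eff = eta_engine * (1.0 - t_cold / t_hot)        # work out per unit heat stored
    return cop * eff

print(round_trip_efficiency(800.0, 300.0, 1.0, 1.0))  # reversible limit: 1.0
print(round_trip_efficiency(800.0, 300.0, 0.9, 0.9))  # ~0.81 with machine losses
```

The multiplicative structure shows why PTS efficiency is limited only by process irreversibilities, as the abstract notes, rather than by a Second Law ceiling on either half-cycle alone.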
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
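The bookkeeping behind any error-cancelling scheme of this kind is Hess's law: compute the reaction energy of a bond-type-preserving reaction quantum-chemically, then back out the unknown heat of formation from the experimental values of the reference species. The sketch below uses a simple isodesmic-style reaction for ethanol; the reaction, species and reaction energy are illustrative stand-ins, not taken from the paper.

```python
# Illustrative scheme (bond-type preserving): CH3CH2OH + CH4 -> CH3CH3 + CH3OH

def heat_of_formation(delta_e_rxn, dhf_products, dhf_coreactants):
    """Back out dHf(target) from a computed reaction energy and the known
    dHf of the reference species (all values in kcal/mol)."""
    return sum(dhf_products) - sum(dhf_coreactants) - delta_e_rxn

# Approximate experimental dHf values (kcal/mol):
dhf = {"CH4": -17.9, "C2H6": -20.0, "CH3OH": -48.0}

# Suppose the quantum-chemical reaction energy came out as +6.1 kcal/mol:
delta_e_rxn = 6.1

dhf_ethanol = heat_of_formation(delta_e_rxn,
                                dhf_products=[dhf["C2H6"], dhf["CH3OH"]],
                                dhf_coreactants=[dhf["CH4"]])
print(dhf_ethanol)  # ~ -56.2, close to the experimental value for ethanol
```

Because bond types are conserved across the reaction, systematic errors in the computed electronic energies largely cancel, which is why even modest levels of theory give accurate results in such schemes.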
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role: evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Accurate Thermal Conductivities from First Principles
Carbogno, Christian
2015-03-01
In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
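The Green-Kubo step described above reduces, numerically, to estimating the heat-flux autocorrelation function and integrating it in time. The sketch below shows that estimator on a synthetic, exponentially correlated flux standing in for the ab initio heat flux; the function name, the Ornstein-Uhlenbeck surrogate, and all numerical values are assumptions for illustration, not the authors' machinery.

```python
import numpy as np

def green_kubo_conductivity(J, dt, volume, temperature, kB=1.380649e-23):
    """kappa = V / (3 kB T^2) * integral of <J(t).J(0)> dt (Green-Kubo).
    J is an (nsteps, 3) heat-flux time series; a rectangle rule is used for
    the time integral, truncated at half the trajectory length."""
    n = len(J)
    nlag = n // 2
    acf = np.empty(nlag)
    for lag in range(nlag):
        # Average J(t+lag).J(t) over all available time origins t
        acf[lag] = np.mean(np.einsum("ij,ij->i", J[: n - lag], J[lag:]))
    return volume / (3.0 * kB * temperature**2) * acf.sum() * dt

# Demo on a synthetic exponentially correlated flux (arbitrary units):
rng = np.random.default_rng(0)
dt, tau, nsteps = 1.0e-15, 5.0e-14, 4000
J = np.zeros((nsteps, 3))
for i in range(1, nsteps):
    J[i] = J[i - 1] * (1.0 - dt / tau) + rng.normal(size=3) * np.sqrt(dt)

kappa = green_kubo_conductivity(J, dt, volume=1.0e-27, temperature=300.0)
```

The finite-time and finite-size effects the abstract mentions show up here as the choice of truncation lag and the statistical noise in the autocorrelation tail, which is exactly where convergence problems arise for materials like silicon.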
Accurate Biomass Estimation via Bayesian Adaptive Sampling
Wheeler, K.; Knuth, K.; Castle, P.
2005-12-01
and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.
Optimizing cell arrays for accurate functional genomics.
Fengler, Sven; Bastiaens, Philippe I H; Grecco, Hernán E; Roda-Navarro, Pedro
2012-07-17
Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding rather than migration among neighboring spots was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination is dependent on specific cell properties. Previously published methodological works have focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.
Optimizing cell arrays for accurate functional genomics
Directory of Open Access Journals (Sweden)
Fengler Sven
2012-07-01
Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding rather than migration among neighboring spots was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination is dependent on specific cell properties. Conclusions Previously published methodological works have focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.
How flatbed scanners upset accurate film dosimetry.
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, determined per color channel and per dose delivered to the film.
Important Nearby Galaxies without Accurate Distances
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
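The quoted accuracy of 0.1 in distance modulus translates directly into a fractional distance error via the standard relation mu = 5 log10(d / 10 pc). A quick check (the example modulus of 29.5 is an illustrative value, not one of the program's targets):

```python
def distance_mpc(mu):
    """Distance from distance modulus: mu = 5 * log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6  # parsec -> Mpc

mu = 29.5                      # illustrative modulus, a galaxy near 8 Mpc
d = distance_mpc(mu)
d_hi = distance_mpc(mu + 0.1)  # shift by the quoted 0.1 mag accuracy

print(d)         # ~7.94 Mpc
print(d_hi / d)  # ~1.047, i.e. 0.1 mag corresponds to ~5% in distance
```

So a tip-of-the-red-giant-branch measurement good to 0.1 mag pins the distance to roughly 5%, far tighter than the spread among the conflicting literature estimates.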
Energy Technology Data Exchange (ETDEWEB)
Lee, Ho; Xing Lei [Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847 (United States); Lee, Jeongjin [Department of Digital Media, Catholic University of Korea, Gyeonggi-do, 420-743 (Korea, Republic of); Shin, Yeong Gil [School of Computer Science and Engineering, Seoul National University, Seoul, 151-742 (Korea, Republic of); Lee, Rena, E-mail: leeho@stanford.ed [Department of Radiation Oncology, Ewha Womans University School of Medicine, Seoul, 158-710 (Korea, Republic of)
2010-06-21
This paper presents a fast and accurate marker-based automatic registration technique for aligning uncalibrated projections taken from a transmission electron microscope (TEM) with different tilt angles and orientations. Most existing TEM image alignment methods estimate the similarity between images using the projection model with a least-squares metric and estimate alignment parameters via computationally expensive nonlinear optimization schemes. Approaches based on the least-squares metric, which is sensitive to outliers, may cause misalignment, since automatic tracking methods, though reliable, can produce a few incorrect trajectories due to the large number of marker points. To decrease the influence of outliers, we propose a robust similarity measure using the projection model with a Gaussian weighting function. This function is very effective in suppressing outliers that are far from correct trajectories and thus provides a more robust metric. In addition, we suggest a fast search strategy based on Powell's non-gradient multidimensional optimization scheme to speed up optimization, as only meaningful parameters are considered during iterative projection model estimation. Experimental results show that our method yields more accurate alignment with less computational cost compared to conventional automatic alignment methods.
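The effect of swapping a squared-residual cost for a Gaussian-weighted one can be seen in a few lines. The sketch below is a generic robust-kernel illustration, not the paper's exact similarity measure; the residual values and sigma are assumptions.

```python
import numpy as np

def least_squares_cost(residuals):
    """Classic sum of squared residuals: grows without bound."""
    return np.sum(residuals ** 2)

def gaussian_weighted_cost(residuals, sigma=1.0):
    """Each residual's contribution saturates at 1 instead of growing
    quadratically, so a few wildly wrong marker trajectories cannot
    dominate the fit."""
    return np.sum(1.0 - np.exp(-(residuals ** 2) / (2.0 * sigma ** 2)))

# Reprojection residuals (pixels) for 8 tracked markers; one gross outlier:
r = np.array([0.1, 0.2, 0.15, 0.05, 0.12, 0.3, 0.22, 25.0])

print(least_squares_cost(r))     # dominated by the single outlier (~625)
print(gaussian_weighted_cost(r)) # outlier contributes at most 1.0
```

Under the least-squares metric the optimizer would bend the projection model to chase the bad trajectory; under the saturating Gaussian weighting, the outlier is effectively capped and the inliers decide the alignment.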
A classification scheme for risk assessment methods.
Energy Technology Data Exchange (ETDEWEB)
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the Table represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods so that the method chosen is optimal for the situation given. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method', though oftentimes we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In
Directory of Open Access Journals (Sweden)
Gottesman Irving I
2006-05-01
Full Text Available Abstract Background Two large independent studies funded by the US government have assessed the impact of the Vietnam War on the prevalence of PTSD in US veterans. The National Vietnam Veterans Readjustment Study (NVVRS) estimated the current PTSD prevalence to be 15.2% while the Vietnam Experience Study (VES) estimated the prevalence to be 2.2%. We compared alternative criteria for estimating the prevalence of PTSD using the NVVRS and VES public use data sets collected more than 10 years after the United States withdrew troops from Vietnam. Methods We applied uniform diagnostic procedures to the male veterans from the NVVRS and VES to estimate PTSD prevalences based on varying criteria including one-month and lifetime prevalence estimates, combat and non-combat prevalence estimates, and prevalence estimates using both single and multiple indicator models. Results Using a narrow and specific set of criteria, we derived current prevalence estimates for combat-related PTSD of 2.5% and 2.9% for the VES and the NVVRS, respectively. Using a more broad and sensitive set of criteria, we derived current prevalence estimates for combat-related PTSD of 12.2% and 15.8% for the VES and NVVRS, respectively. Conclusion When comparable methods were applied to available data we reconciled disparate results and estimated similar current prevalences for both narrow and broad definitions of combat-related diagnoses of PTSD.
Morris, Carlos F M; Tahir, Muhammad; Arshid, Samina; Castro, Mariana S; Fontes, Wagner
2015-01-01
Inflammatory cascades and mechanisms are ubiquitous during host responses to various types of insult. Biological models and interventional strategies have been devised as an effort to better understand and modulate inflammation-driven injuries. Amongst these, the two-hit model stands as a plausible and intuitive framework that explains some of the most frequent clinical outcomes seen in injuries like trauma and sepsis. This model states that a first hit serves as a priming event upon which sequential insults can build, culminating in maladaptive inflammatory responses. On a different front, ischemic preconditioning (IPC) has come to light as a readily applicable tool for modulating the inflammatory response to ischemia and reperfusion. The idea is that mild ischemic insults, either remote or local, can cause organs and tissues to be more resilient to further ischemic insults. This seemingly contradictory role that the two models attribute to a first inflammatory hit, as priming in the former and protective in the latter, has set these two theories on opposing corners of the literature. The present review tries to reconcile both models by showing that, rather than debunking each other, each framework offers unique insights into understanding and modulating inflammation-related injuries.
Directory of Open Access Journals (Sweden)
Neil Powell
2017-12-01
Full Text Available This paper considers how to achieve equitable water governance and the flow-on effects it has in terms of supporting sustainable development, drawing on case studies from the international climate change adaptation and governance project (CADWAGO). Water governance, like many other global issues, is becoming increasingly intractable (wicked) with climate change and is, by the international community, being linked to instances of threats to human security, the war in the Sudanese Darfur and more recently the acts of terrorism perpetrated by ISIS. In this paper, we ask the question: how can situations characterized by water controversy (exacerbated by the uncertainties posed by climate change) be reconciled? The main argument is based on a critique of the way the water security discourse appropriates expert (normal) claims about human-biophysical relationships. When water challenges become increasingly securitized by the climate change discourse it becomes permissible to enact processes that legitimately transgress normative positions through post-normal actions. In contrast, the water equity discourse offers an alternative reading of wicked and post-normal water governance situations. We contend that by infusing norm-critical considerations into the process of securitization, new sub-national constellations of agents will be empowered to enact changes, thereby bypassing vicious cycles of power brokering that characterize contemporary processes intended to address controversies.
Wilson, Maximiliano A; Joubert, Sven; Ferré, Perrine; Belleville, Sylvie; Ansaldo, Ana Inés; Joanette, Yves; Rouleau, Isabelle; Brambati, Simona Maria
2012-05-01
Semantic dementia (SD) is a neurodegenerative disease that occurs following the atrophy of the anterior temporal lobes (ATLs). It is characterised by the degradation of semantic knowledge and difficulties in reading exception words (surface dyslexia). This disease has highlighted the role of the ATLs in the process of exception word reading. However, imaging studies in healthy subjects have failed to detect activation of the ATLs during exception word reading. The aim of the present study was to test whether the functional brain regions that mediate exception word reading in normal readers overlap those brain regions atrophied in SD. In Study One, we map the brain regions of grey matter atrophy in AF, a patient with mild SD and surface dyslexia profile. In Study Two, we map the activation pattern associated with exception word compared to pseudoword reading in young, healthy participants using fMRI. The results revealed areas of significant activation in healthy subjects engaged in the exception word reading task in the left anterior middle temporal gyrus, in a region observed to be atrophic in the patient AF. These results reconcile neuropsychological and functional imaging data, revealing the critical role of the left ATL in exception word reading. Copyright © 2012 Elsevier Inc. All rights reserved.
Mitrovica, Jerry X; Hay, Carling C; Morrow, Eric; Kopp, Robert E; Dumberry, Mathieu; Stanley, Sabine
2015-12-01
In 2002, Munk defined an important enigma of 20th century global mean sea-level (GMSL) rise that has yet to be resolved. First, he listed three canonical observations related to Earth's rotation [(i) the slowing of Earth's rotation rate over the last three millennia inferred from ancient eclipse observations, and changes in the (ii) amplitude and (iii) orientation of Earth's rotation vector over the last century estimated from geodetic and astronomic measurements] and argued that they could all be fit by a model of ongoing glacial isostatic adjustment (GIA) associated with the last ice age. Second, he demonstrated that prevailing estimates of the 20th century GMSL rise (~1.5 to 2.0 mm/year), after correction for the maximum signal from ocean thermal expansion, implied mass flux from ice sheets and glaciers at a level that would grossly misfit the residual GIA-corrected observations of Earth's rotation. We demonstrate that the combination of lower estimates of 20th century GMSL rise (up to 1990), improved modeling of the GIA process, and a correction of the eclipse record for a signal due to angular momentum exchange between the fluid outer core and the mantle reconciles all three Earth rotation observations. This resolution adds confidence to recent estimates of individual contributions to 20th century sea-level change and to projections of GMSL rise to the end of the 21st century based on them.
Kulynych, Jennifer; Greely, Henry T
2017-04-01
Widespread use of medical records for research, without consent, attracts little scrutiny compared to biospecimen research, where concerns about genomic privacy prompted recent federal proposals to mandate consent. This paper explores an important consequence of the proliferation of electronic health records (EHRs) in this permissive atmosphere: with the advent of clinical gene sequencing, EHR-based secondary research poses genetic privacy risks akin to those of biospecimen research, yet regulators still permit researchers to call gene sequence data 'de-identified', removing such data from the protection of the federal Privacy Rule and federal human subjects regulations. Medical centers and other providers seeking to offer genomic 'personalized medicine' now confront the problem of governing the secondary use of clinical genomic data as privacy risks escalate. We argue that regulators should no longer permit HIPAA-covered entities to treat dense genomic data as de-identified health information. Even with this step, the Privacy Rule would still permit disclosure of clinical genomic data for research, without consent, under a data use agreement, so we also urge that providers give patients specific notice before disclosing clinical genomic data for research, permitting (where possible) some degree of choice and control. To aid providers who offer clinical gene sequencing, we suggest both general approaches and specific actions to reconcile patients' rights and interests with genomic research.
Greely, Henry T.
2017-01-01
Abstract Widespread use of medical records for research, without consent, attracts little scrutiny compared to biospecimen research, where concerns about genomic privacy prompted recent federal proposals to mandate consent. This paper explores an important consequence of the proliferation of electronic health records (EHRs) in this permissive atmosphere: with the advent of clinical gene sequencing, EHR-based secondary research poses genetic privacy risks akin to those of biospecimen research, yet regulators still permit researchers to call gene sequence data ‘de-identified’, removing such data from the protection of the federal Privacy Rule and federal human subjects regulations. Medical centers and other providers seeking to offer genomic ‘personalized medicine’ now confront the problem of governing the secondary use of clinical genomic data as privacy risks escalate. We argue that regulators should no longer permit HIPAA-covered entities to treat dense genomic data as de-identified health information. Even with this step, the Privacy Rule would still permit disclosure of clinical genomic data for research, without consent, under a data use agreement, so we also urge that providers give patients specific notice before disclosing clinical genomic data for research, permitting (where possible) some degree of choice and control. To aid providers who offer clinical gene sequencing, we suggest both general approaches and specific actions to reconcile patients’ rights and interests with genomic research. PMID:28852559
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Are patients referred to rehabilitation diagnosed accurately?
Tederko, Piotr; Krasuski, Marek; Nyka, Izabella; Mycielski, Jerzy; Tarnacka, Beata
2017-07-17
An accurate diagnosis of the leading health condition and comorbidities is a prerequisite for safe and effective rehabilitation. The problem of diagnostic errors in physical and rehabilitation medicine (PRM) has not been addressed sufficiently. The responsibility of a referring physician is to determine indications and contraindications for rehabilitation. To assess the rate of and risk factors for inaccurate referral diagnoses (RD) in patients referred to a rehabilitation facility. We hypothesized that inaccurate RD would be more common in patients 1) referred by non-PRM physicians; 2) waiting longer for admission; 3) of older age. Retrospective observational study. 1000 randomly selected patients admitted between 2012 and 2016 to a day-rehabilitation center (DRC). University DRC specialized in musculoskeletal diseases. On admission all cases underwent clinical verification of RD. Inaccuracies regarding primary diagnoses and comorbidities were noted. The influence of several factors affecting the probability of inaccurate RD was analyzed with a multiple binary regression model applied to 6 categories of diseases. The rate of inaccurate RD was 25.2%. A higher frequency of inaccurate RD was noted among patients referred by non-PRM specialists (30.3% vs 17.3% in cases referred by PRM specialists). Application of logit regression showed a highly significant influence of the specialty of the referring physician on the odds of inaccurate RD (joint Wald test chi2(6)=38.98, p-value=0.000), controlling for the influence of other variables. This may reflect a suboptimal knowledge of the rehabilitation process and a tendency to neglect comorbidities by non-PRM specialists. The rate of inaccurate RD did not correlate with time between referral and admission (joint Wald test of all odds ratios equal to 1, chi2(6)=5.62, p-value=0.467); however, mean and median waiting times were relatively short (35.7 and 25 days respectively). A high risk of overlooked multimorbidity was
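The size of the reported specialty effect can be sanity-checked from the quoted rates alone. The sketch below computes the unadjusted odds ratio from the 30.3% vs 17.3% figures; note this is a crude check, not the study's covariate-adjusted regression estimate.

```python
# Unadjusted odds ratio of an inaccurate referral diagnosis (RD) for
# non-PRM vs PRM referrers, from the rates quoted in the abstract.

def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

rate_non_prm = 0.303  # inaccurate RD rate, non-PRM referrals
rate_prm = 0.173      # inaccurate RD rate, PRM referrals

or_non_prm = odds(rate_non_prm) / odds(rate_prm)
print(round(or_non_prm, 2))  # ~2.08: roughly double the odds of inaccurate RD
```

An unadjusted odds ratio near 2 is consistent with the highly significant specialty effect the logit regression reports, though the adjusted estimate would differ once the other covariates are controlled for.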
An automated method for accurate vessel segmentation
Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)
2017-05-01
Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions of our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness-based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008
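The idea behind the contrast-enhancement stage, making low-SNR pixels distinguishable by remapping intensities onto the full dynamic range, can be illustrated with a plain global histogram equalization. This is a generic baseline for illustration only; the paper's method is adaptive and progressive, which this sketch does not attempt to reproduce.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image: remap grey levels
    via the cumulative histogram so the output uses the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                 # first occupied grey level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)  # lookup table per grey level
    return lut[img]

# Low-contrast toy image occupying only grey levels 100..130:
rng = np.random.default_rng(1)
img = rng.integers(100, 131, size=(64, 64)).astype(np.uint8)
out = hist_equalize(img)
print(int(img.max()) - int(img.min()), int(out.max()) - int(out.min()))
```

A global remap like this also amplifies background noise, which is presumably why the authors restrict and stage the enhancement around the genuinely ambiguous pixels rather than applying it image-wide.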
Scheme for Quantum Computing Immune to Decoherence
Williams, Colin; Vatan, Farrokh
2008-01-01
A constructive scheme has been devised to enable mapping of any quantum computation into a spintronic circuit in which the computation is encoded in a basis that is, in principle, immune to quantum decoherence. The scheme is implemented by an algorithm that utilizes multiple physical spins to encode each logical bit in such a way that collective errors affecting all the physical spins do not disturb the logical bit. The scheme is expected to be of use to experimenters working on spintronic implementations of quantum logic. Spintronic computing devices use quantum-mechanical spins (typically, electron spins) to encode logical bits. Bits thus encoded (denoted qubits) are potentially susceptible to errors caused by noise and decoherence. The traditional model of quantum computation is based partly on the assumption that each qubit is implemented by use of a single two-state quantum system, such as an electron or other spin-1/2 particle. It can be surprisingly difficult to achieve certain gate operations, most notably those of arbitrary 1-qubit gates, in spintronic hardware according to this model. However, ironically, certain 2-qubit interactions (in particular, spin-spin exchange interactions) can be achieved relatively easily in spintronic hardware. Therefore, it would be fortunate if it were possible to implement any 1-qubit gate by use of a spin-spin exchange interaction. While such a direct representation is not possible, it is possible to achieve an arbitrary 1-qubit gate indirectly by means of a sequence of four spin-spin exchange interactions, which could be implemented by use of four exchange gates. Accordingly, the present scheme provides for mapping any 1-qubit gate in the logical basis into an equivalent sequence of at most four spin-spin exchange interactions in the physical (encoded) basis. The complexity of the mathematical derivation of the scheme from basic quantum principles precludes a description within this article; it must suffice to report
McKenzie, Emily; Potestio, Melissa L; Boyd, Jamie M; Niven, Daniel J; Brundin-Mather, Rebecca; Bagshaw, Sean M; Stelfox, Henry T
2017-12-01
Providers have traditionally established priorities for quality improvement; however, patients and their family members have recently become involved in priority setting. Little is known about how to reconcile priorities of different stakeholder groups into a single prioritized list that is actionable for organizations. To describe the decision-making process for establishing consensus used by a diverse panel of stakeholders to reconcile two sets of quality improvement priorities (provider/decision maker priorities n=9; patient/family priorities n=19) into a single prioritized list. We employed a modified Delphi process with a diverse group of panellists to reconcile priorities for improving care of critically ill patients in the intensive care unit (ICU). Proceedings were audio-recorded, transcribed and analysed using qualitative content analysis to explore the decision-making process for establishing consensus. Nine panellists including three providers, three decision makers and three family members of previously critically ill patients. Panellists rated and revised 28 priorities over three rounds of review and reached consensus on the "Top 5" priorities for quality improvement: transition of patient care from ICU to hospital ward; family presence and effective communication; delirium screening and management; early mobilization; and transition of patient care between ICU providers. Four themes were identified as important for establishing consensus: storytelling (sharing personal experiences), amalgamating priorities (negotiating priority scope), considering evaluation criteria and having a priority champion. Our study demonstrates the feasibility of incorporating families of patients into a multistakeholder prioritization exercise. The approach described can be used to guide consensus building and reconcile priorities of diverse stakeholder groups. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
Transverse Hilbert schemes and completely integrable systems
Directory of Open Access Journals (Sweden)
Donin Niccolò Lora Lamia
2017-12-01
Full Text Available In this paper we consider a special class of completely integrable systems that arise as transverse Hilbert schemes of d points of a complex symplectic surface S projecting onto ℂ via a surjective map p which is a submersion outside a discrete subset of S. We explicitly endow the transverse Hilbert scheme Sp[d] with a symplectic form and an endomorphism A of its tangent space with 2-dimensional eigenspaces and such that its characteristic polynomial is the square of its minimal polynomial, and we show it has the maximal number of commuting Hamiltonians. We then provide the inverse construction, starting from a 2d-dimensional holomorphic integrable system W which has an endomorphism A: TW → TW satisfying the above properties, and recover our initial surface S with W ≌ Sp[d].
A hybrid Lagrangian Voronoi-SPH scheme
Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.
2017-11-01
A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.
Cryptanalysis of Two Fault Countermeasure Schemes
DEFF Research Database (Denmark)
Banik, Subhadeep; Bogdanov, Andrey
2015-01-01
The first countermeasure is meant for the protection of block ciphers like AES. The second countermeasure was proposed in IEEE-HOST 2015 and protects the Grain-128 stream cipher. The design divides the output function used in Grain-128 into two components. The first, called the masking function, masks the input bits to the output function with some additional randomness and computes the value of the function. The second, called the unmasking function, is computed securely using a different register and undoes the effect of the masking with random bits. We will show that there exists a weakness in the way in which both these schemes use the internally generated random bits, which makes these designs vulnerable. We will outline attacks that cryptanalyze the above schemes using 66 and 512 faults, respectively.
Fixed Wordsize Implementation of Lifting Schemes
Directory of Open Access Journals (Sweden)
Tanja Karp
2007-01-01
Full Text Available We present a reversible nonlinear discrete wavelet transform with predefined fixed wordsize based on lifting schemes. Restricting the dynamic range of the wavelet domain coefficients due to a fixed wordsize may result in overflow. We show how this overflow has to be handled in order to maintain reversibility of the transform. We also analyse how large a wordsize of the wavelet coefficients is needed to perform optimal lossless and lossy compression of images. The scheme is advantageous over well-known integer-to-integer transforms since the wordsize of adders and multipliers can be predefined and does not increase steadily. This also results in significant gains in hardware implementations.
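The transform details are not given in the abstract; the sketch below illustrates only the core reversibility argument for fixed-wordsize lifting: each lifting step modifies one channel by a function of the other, so carrying out the additions modulo 2^W keeps every step invertible even when it overflows. The particular predict/update steps and W = 8 are assumptions.

```python
import numpy as np

W = 8                    # fixed wordsize in bits (assumption)
M = 1 << W               # all arithmetic is done modulo 2^W

def forward(x):
    """One lifting stage (predict + update), with sums taken mod 2^W."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = (odd - even) % M             # predict: detail channel
    s = (even + (d >> 1)) % M        # update: smoothed channel
    return s, d

def inverse(s, d):
    even = (s - (d >> 1)) % M        # undo the update step
    odd = (even + d) % M             # undo the predict step
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

rng = np.random.default_rng(1)
x = rng.integers(0, M, size=32)
s, d = forward(x)
assert np.array_equal(inverse(s, d), x)   # reversible despite overflow
```

Because each step only ever adds a function of one channel to the other, the modular wrap-around is undone exactly on the inverse side, which is the sense in which overflow can be handled without losing reversibility.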
New communication schemes based on adaptive synchronization.
Yu, Wenwu; Cao, Jinde; Wong, Kwok-Wo; Lü, Jinhu
2007-09-01
In this paper, adaptive synchronization with unknown parameters is discussed for a unified chaotic system by using the Lyapunov method and the adaptive control approach. Some communication schemes, including chaotic masking, chaotic modulation, and chaotic shift key strategies, are then proposed based on the modified adaptive method. The transmitted signal is masked by a chaotic signal or modulated into the system, which effectively blurs the constructed return map and resists return-map attacks. The driving system, with unknown parameters and functions, is almost completely unknown to attackers, so it is more secure to apply this method to communication. Finally, some simulation examples based on the proposed communication schemes and some cryptanalysis work are given to verify the theoretical analysis in this paper.
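The unified chaotic system and adaptive update laws of the paper are not reproduced here; as a rough illustration of the masking idea, the sketch below synchronizes two Lorenz systems with known, fixed parameters by driving the receiver with the transmitted x-signal (forward-Euler integration; all parameter values are assumptions).

```python
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # standard Lorenz parameters
dt, steps = 1e-3, 20000                    # Euler step and duration (assumed)

x, y, z = 1.0, 1.0, 1.0        # transmitter state
xr, yr, zr = -5.0, 3.0, 10.0   # receiver starts far away

err0 = abs(x - xr)
for _ in range(steps):
    s = x                      # transmitted (driving) signal; a small
                               # message m(t) would be added here
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    dxr = sigma * (yr - xr)    # receiver rebuilds x from its own (yr, zr)
    dyr = s * (rho - zr) - yr  # (yr, zr) subsystem driven by the sent signal
    dzr = s * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xr, yr, zr = xr + dt * dxr, yr + dt * dyr, zr + dt * dzr

err = abs(x - xr)
print(err0, err)   # the receiver converges onto the transmitter's orbit
```

Once synchronized, masking would transmit s = x + m(t) with |m| much smaller than the chaotic carrier, and the receiver would recover m(t) ≈ s - xr.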
Optimization of Train Trip Package Operation Scheme
Directory of Open Access Journals (Sweden)
Lu Tong
2015-01-01
Full Text Available Train trip package transportation is an advanced form of railway freight transportation, realized by a specialized train which has fixed stations, fixed times, and a fixed path. Train trip package transportation has many advantages, such as large volume, long distance, high speed, simple forms of organization, and high margin, so it has become the main mode of railway freight transportation. This paper first analyzes the related factors of train trip package transportation from its organizational forms and characteristics. Then an optimization model for train trip package transportation is established to provide optimum operation schemes. The proposed model is solved by a genetic algorithm. Finally, the model is tested using data from 8 regions. The results show that the proposed method is feasible for solving operation scheme issues of train trip package transportation.
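Neither the model nor the regional data is available here; the following is a generic genetic algorithm skeleton (tournament selection, one-point crossover, bit-flip mutation) applied to a stand-in objective: choosing which of 12 hypothetical train services to run, with the profit values and capacity penalty invented purely for illustration.

```python
import random

random.seed(0)

# Stand-in objective (assumption): profit per selected service minus a
# penalty when more than 6 services are run.
PROFIT = [4, 7, 1, 8, 3, 5, 9, 2, 6, 4, 7, 3]

def fitness(bits):
    used = sum(bits)
    return sum(p for p, b in zip(PROFIT, bits) if b) - 10 * max(0, used - 6)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))   # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]  # bit-flip

pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(40)]
for _ in range(60):                      # generations
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
best = max(pop, key=fitness)
print(best, fitness(best))
```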
A numerical relativity scheme for cosmological simulations
Daverio, David; Dirian, Yves; Mitsou, Ermis
2017-12-01
Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3 + 1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.
Optimizing multiplexing scheme in interferometric microscopy
Tayebi, Behnam; Jaferzadeh, Keyvan; Sharif, Farnaz; Han, Jae-Ho
2016-08-01
In single-exposure off-axis interferometry, multiple signals can be recorded by spatial frequency multiplexing. We investigate optimum conditions for designing 2D sampling schemes to record a larger field of view in off-axis interferometry multiplexing. The spatial resolution of the recorded image is related to the numerical aperture of the system and the sensor pixel size. The spatial resolution should be preserved by avoiding crosstalk in the frequency domain. Furthermore, the field of view depends on the sensor size and magnification of the imaging system. In order to preserve resolution and have a larger field of view, the frequency domain should be designed correctly. The experimental results demonstrate that selecting the wrong geometrical scheme in the frequency domain decreases the recorded image area.
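As a toy illustration of why the frequency-domain layout matters, the sketch below multiplexes two fringe carriers on different frequency axes and verifies in the 2D spectrum that their sidebands stay separated; the grid size and carrier frequencies are arbitrary assumptions.

```python
import numpy as np

# Grid and two carrier frequencies in cycles per frame (assumptions).
N = 128
yy, xx = np.mgrid[0:N, 0:N]
f1, f2 = 16, 24

# Two multiplexed off-axis fringe patterns, one carrier per frequency axis.
holo = np.cos(2 * np.pi * f1 * xx / N) + np.cos(2 * np.pi * f2 * yy / N)

spec = np.abs(np.fft.fftshift(np.fft.fft2(holo)))
c = N // 2
spec[c, c] = 0.0                    # suppress the DC term

# Each carrier produces its own sideband peak, well separated from the other.
print(spec[c, c + f1], spec[c + f2, c])   # both are maximal spectral peaks
```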
Exclusion from the Health Insurance Scheme
2003-01-01
A CERN pensioner, member of the Organization's Health Insurance Scheme (CHIS), recently provided fake documents in support of claims for medical expenses, in order to receive unjustified reimbursement from the CHIS. The Administrator of the CHIS, UNIQA, suspected a case of fraud. Accordingly, an investigation and interview of the person concerned were carried out and brought the Organization to the conclusion that fraud had actually taken place. Consequently, and in accordance with Article VIII 3.12 of the CHIS Rules, it was decided to exclude this member permanently from the CHIS. The Organization takes the opportunity to remind Scheme members that any fraud or attempted fraud established within the framework of the CHIS exposes them to: - disciplinary action, according to the Staff Rules and Regulations, for CERN members of the personnel; - definitive exclusion from the CHIS for members affiliated on a voluntary basis. Human Resources Division Tel. 73635
Directory of Open Access Journals (Sweden)
Rivanda Meira Teixeira
2016-03-01
Full Text Available Women have gained more and more space in various professional areas, and this development also occurs in the field of entrepreneurship. In Brazil, GEM 2013 identified for the first time that the number of new women entrepreneurs was higher than that of male entrepreneurs. However, it is recognized that women entrepreneurs face many difficulties when trying to reconcile their companies with the family. The main objective of this research is to analyse the challenges faced by women entrepreneurs of travel agencies in reconciling the conflict between work and family. This study adopted the multiple-case research strategy; seven women founders and managers of travel agencies were selected in the cities of Aracaju and Barra dos Coqueiros, in the state of Sergipe (east coast of Brazil). In attempting to reconcile these multiple roles, the women often face frustration and guilt; here, the emotional support of husbands and children proves important. It is noticed that the search for balance between the conflicting demands generates emotional and/or physical distress.
Wang, Yu-Nu; Shyu, Yea-Ing Lotus; Chen, Min-Chi; Yang, Pei-Shan
2011-04-01
This paper is a report of a study that examined the effects of work demands, including employment status, work inflexibility and difficulty reconciling work and family caregiving, on role strain and depressive symptoms of adult-child family caregivers of older people with dementia. Family caregivers who are also employed for pay are known to be affected by work demands, i.e. excessive workload and time pressures. However, few studies have shown how these work demands and the reconciliation of work and family caregiving influence caregivers' role strain and depressive symptoms. For this cross-sectional study, secondary data were analysed for 119 adult-child family caregivers of older people with dementia in Taiwan using hierarchical multiple regression. After adjusting for demographic characteristics, resources and role demands overload, family caregivers with full-time jobs (β=0.25) and those with more difficulty reconciling work and caregiving roles (β=0.36) reported more role strain than family caregivers working part-time or unemployed. Family caregivers with more work inflexibility reported more depressive symptoms (β=0.29). Work demands thus influenced family caregivers' role strain and depressive symptoms. Working full-time and having more difficulty reconciling work and caregiving roles predicted role strain; work inflexibility predicted depressive symptoms. These results can help clinicians identify high-risk groups for role strain and depression. Nurses need to assess family caregivers' work flexibility when screening for high-risk groups and encourage them to reconcile working with family-care responsibilities to reduce role strain. © 2010 Blackwell Publishing Ltd.
METAPHORIC MECHANISMS IN IMAGE SCHEME DEVELOPMENT
Pankratova, S.A.
2017-01-01
Problems of knowledge representation by means of images remain cognitively significant and invariably modern. The article examines the heuristic image potential of the literary sphere as a donor of meanings, supporting the metaphoric scheme development of another modern sphere, cinematography. The key factor here is the differentiation between two basically different metaphor types – heuristic epiphora and image diaphora. The author offers a unique methodology for the counting of quantitative par...
[A scheme to support teenagers in care].
Maldera, David
In some family situations, the placement of a teenager, even in the case of a court decision, proves ineffective. The accumulation of all kinds of difficulties requires a different type of support, based on responsiveness, attention and above all time to come together. A dedicated scheme helps to prevent situations of waywardness or marginalisation among these teenagers and to support the families. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
The Emergent Universe scheme and tunneling
Energy Technology Data Exchange (ETDEWEB)
Labraña, Pedro [Departamento de Física, Universidad del Bío-Bío, Avenida Collao 1202, Casilla 5-C, Concepción, Chile and Departament d' Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos, Universitat (Spain)
2014-07-23
We present an alternative scheme for an Emergent Universe scenario, developed previously in Phys. Rev. D 86, 083524 (2012), where the universe is initially in a static state supported by a scalar field located in a false vacuum. The universe begins to evolve when, by quantum tunneling, the scalar field decays into a state of true vacuum. The Emergent Universe models are interesting since they provide specific examples of non-singular inflationary universes.
Failure of a proposed superluminal scheme
Furuya, K.; Milonni, P. W.; Steinberg, A. M.; Wolinsky, M.
1999-02-01
We consider a “superluminal quantum Morse telegraph”, recently proposed by Garuccio, involving a polarization-correlated photon pair and a Michelson interferometer in which one of the mirrors is replaced by a phase-conjugating mirror (PCM). Superluminal information transfer in this scheme is precluded by the impossibility of distinguishing between unpolarized photons prepared by mixing linear polarization states or by mixing circular polarization states.
A Scatter Storage Scheme for Dictionary Lookups
Directory of Open Access Journals (Sweden)
D. M. Murray
1970-09-01
Full Text Available Scatter storage schemes are examined with respect to their applicability to dictionary lookup procedures. Of particular interest are virtual scatter methods which combine the advantages of rapid search speed and reasonable storage requirements. The theoretical aspects of computing hash addresses are developed, and several algorithms are evaluated. Finally, experiments with an actual text lookup process are described, and a possible library application is discussed.
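The paper's virtual scatter methods are not reproduced here; the following minimal sketch shows the basic scatter-storage idea for dictionary lookup, using open addressing with double hashing. The table size and hash choices are assumptions.

```python
class ScatterTable:
    """Minimal scatter-storage dictionary: open addressing, double hashing."""

    def __init__(self, size=101):            # prime table size (assumption)
        self.size = size
        self.slots = [None] * size

    def _probe(self, word):
        h = hash(word) % self.size
        # Second hash gives the probe step; size is prime, so every step
        # value in [1, size-1] visits all slots.
        step = 1 + (hash(word) // self.size) % (self.size - 1)
        for _ in range(self.size):
            yield h
            h = (h + step) % self.size

    def insert(self, word, payload):
        for i in self._probe(word):
            if self.slots[i] is None or self.slots[i][0] == word:
                self.slots[i] = (word, payload)
                return
        raise OverflowError("table full")

    def lookup(self, word):
        for i in self._probe(word):
            if self.slots[i] is None:        # empty slot ends the probe chain
                return None
            if self.slots[i][0] == word:
                return self.slots[i][1]
        return None

table = ScatterTable()
for n, w in enumerate(["scatter", "storage", "scheme", "lookup"]):
    table.insert(w, n)
print(table.lookup("scheme"), table.lookup("missing"))
```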
Cost Comparison Among Provable Data Possession Schemes
2016-03-01
The number of blocks challenged is one more than the total number of blocks or one more than ℓ, whichever is less: min(fs/bs, ℓ) + 1 (Eq. 4.10; fragment from Section 4.3.1, MAC-PDP, on local data experiments). NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis by Stephen J. Bremer, March 2016.
A rational function based scheme for solving advection equation
Energy Technology Data Exchange (ETDEWEB)
Xiao, Feng [Gunma Univ., Kiryu (Japan). Faculty of Engineering; Yabe, Takashi
1995-07-01
A numerical scheme for solving advection equations is presented. The scheme is derived from a rational interpolation function. Some properties of the scheme with respect to convex-concave preserving and monotone preserving are discussed. We find that the scheme is attractive in suppressing overshoots and undershoots even in the vicinity of discontinuities. The scheme can also be easily switched to the CIP (Cubic Interpolated Pseudo-particle) method to obtain third-order accuracy in smooth regions. A number of numerical tests are carried out to show the non-oscillatory and less diffusive nature of the scheme. (author).
Cryptanalytic Performance Appraisal of Improved CCH2 Proxy Multisignature Scheme
Directory of Open Access Journals (Sweden)
Raman Kumar
2014-01-01
Full Text Available Many signature schemes deploying t-out-of-n threshold schemes have been proposed, but they still lack security. In this paper, we discuss the implementation of the improved CCH1 and improved CCH2 proxy multisignature schemes based on an elliptic curve cryptosystem. We present the time complexity, space complexity, and computational overhead of the improved CCH1 and CCH2 proxy multisignature schemes. We present a cryptanalysis of the improved CCH2 proxy multisignature scheme and show that it suffers from various attacks, that is, the forgery attack and the framing attack.
A weak blind signature scheme based on quantum cryptography
Wen, Xiaojun; Niu, Xiamu; Ji, Liping; Tian, Yuan
2009-02-01
In this paper, we present a weak blind signature scheme based on the correlation of EPR (Einstein-Podolsky-Rosen) pairs. Different from classical blind signature schemes and current quantum signature schemes, our quantum blind signature scheme can guarantee not only unconditional security but also the anonymity of the message owner. To achieve this, quantum key distribution and the one-time pad are adopted in our scheme. Experimental analysis shows that our scheme has the characteristics of non-counterfeiting, non-disavowal, blindness and traceability. It has wide application to E-payment systems, E-government, E-business, etc.
Fractal-based image sequence compression scheme
Li, Haibo; Novak, Mirek; Forchheimer, Robert
1993-07-01
The dominant image transformation used in the existing fractal coding schemes is the affine function. Although an affine transformation is easy to compute and understand, its linear approximation ability limits the employment of larger range blocks, that is, it limits further improvement in compression efficiency. We generalize the image transformation from the usual affine form to the more general quadratic form, and provide theoretical requirements for the generalized transformation to be contractive. Based on the self-transformation system (STS) model, an image sequence coding scheme, fractal-based image sequence coding, is proposed. In this coding scheme, our generalized transformation is used to model the self-transformation from the domain block to its range blocks. Experimental results on a real image sequence show that for the same size of blocks, the SNR can be improved by 10 dB, or, for the same SNR of the decoded image sequence, the compression ratio is raised twofold when the new generalized transformation is used to replace the usual affine transformation. In addition, due to the utilization of the STS model, the computational complexity is only linearly related to the size of the 3-D blocks. This provides for fast encoding and decoding.
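The paper's contractivity requirements are not quoted in the abstract; as a small numeric sanity check, the sketch below estimates the Lipschitz constant of a 1-D quadratic map f(x) = ax² + bx + c on [0, 1] and compares it with the analytic bound sup|f'(x)| = max(|b|, |2a + b|). The coefficients are assumptions; contractivity holds when the constant is below 1.

```python
import numpy as np

def lipschitz_estimate(f, lo=0.0, hi=1.0, n=100_001):
    """Sample-based estimate of sup |f(x)-f(y)| / |x-y| on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    y = f(x)
    return np.max(np.abs(np.diff(y) / np.diff(x)))

a, b, c = 0.3, 0.2, 0.1                    # assumed coefficients
f = lambda t: a * t**2 + b * t + c

est = lipschitz_estimate(f)
analytic = max(abs(b), abs(2 * a + b))     # sup of |2a x + b| over [0, 1]
print(est, analytic)                       # both below 1: f is contractive
```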
Doppler Shift Compensation Schemes in VANETs
Directory of Open Access Journals (Sweden)
F. Nyongesa
2015-01-01
Full Text Available Over the last decade vehicle-to-vehicle (V2V) communication has received a lot of attention, as it is a crucial issue in inter-vehicle communication as well as in Intelligent Transportation Systems (ITS). In ITS the focus is placed on the integration of communication between mobile and fixed infrastructure to execute road safety as well as non-safety information dissemination. Safety applications such as emergency alerts lay emphasis on low-latency packet delivery rate (PDR), whereas multimedia and infotainment call for high data rates at low bit error rate (BER). The non-safety information includes multimedia streaming for traffic information and infotainment applications such as playing audio content, utilizing navigation for driving, and accessing the Internet. A lot of vehicular ad hoc network (VANET) research has focused on specific areas including channel multiplexing, antenna diversity, and Doppler shift compensation schemes in an attempt to optimize BER performance. Despite this effort, few surveys have been conducted to highlight the state-of-the-art collection of Doppler shift compensation schemes. Driven by this cause, we survey some of the recent research activities in Doppler shift compensation schemes and highlight challenges and solutions as a stock-taking exercise. Moreover, we present open issues to be further investigated in order to address the challenges of Doppler shift in VANETs.
Progress on Implementing Additional Physics Schemes into ...
The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and options available in the Weather Research and Forecasting (WRF) model are regularly used by the USEPA with the Community Multiscale Air Quality (CMAQ) model to conduct retrospective air quality simulations. These include the Pleim surface layer, the Pleim-Xiu (PX) land surface model with fractional land use for a 40-class National Land Cover Database (NLCD40), the Asymmetric Convective Model 2 (ACM2) planetary boundary layer scheme, the Kain-Fritsch (KF) convective parameterization with subgrid-scale cloud feedback to the radiation schemes and a scale-aware convective time scale, and analysis nudging four-dimensional data assimilation (FDDA). All of these physics modules and options have already been implemented by the USEPA into MPAS-A v4.0, tested, and evaluated (please see the presentations of R. Gilliam and R. Bullock at this workshop). Since the release of MPAS v5.1 in May 2017, work has been under way to implement these preferred physics options into the MPAS-A v5.1 code. Test simulations of a summer month are being conducted on a global variable resolution mesh with the higher resolution cells centered over the contiguous United States. Driving fields for the FDDA and soil nudging are
INFORMATION FROM THE CERN HEALTH INSURANCE SCHEME
Tel : 7-3635
2002-01-01
Please note that, from 1 July 2002, the tariff agreement between CERN and the Hôpital de la Tour will no longer be in force. As a result the members of the CERN Health Insurance Scheme will no longer obtain a 5% discount for quick payment of bills. More information on the termination of the agreement and its implications for our Health Insurance Scheme will be provided in the next issue of the CHIS Bull', due for publication in the first half of July. It will be sent to your home address, so, if you have moved recently, please check that your divisional secretariat has your current address. Tel.: 73635 The Organization's Health Insurance Scheme (CHIS) has launched its own Web pages, located on the Website of the Social & Statutory Conditions Group of HR Division (HR-SOC). The address is short and easy to remember: www.cern.ch/chis The pages currently available concentrate on providing basic information. Over the coming months it is planned to fill out the details and introduce new topics. Please give us ...
Dynamical decoupling schemes derived from Hamilton cycles
Rötteler, Martin
2008-04-01
We address the problem of decoupling the interactions in a spin network governed by a pair-interaction Hamiltonian. Combinatorial schemes for decoupling and for manipulating the couplings of Hamiltonians have been developed, which use selective pulses. In this paper, we consider an additional requirement on these pulse sequences: as few different control operations as possible should be used. This requirement is motivated by the fact that to find an optimal implementation of each individual selective pulse will be expensive since it requires to solve a pulse shaping problem. Hence, it is desirable to use as few different selective pulses as possible. As a first result, we show that for d-dimensional systems, where d ⩾2, the ability to implement only two control operations is sufficient to turn off the time evolution. Next, we address the case of a bipartite system with local control and show that four different control operations are sufficient. Finally, turning to networks consisting of several d-dimensional nodes, we show that decoupling can be achieved if one is able to control a number of different control operations, which is logarithmic in the number of nodes. We give an explicit family of efficient decoupling schemes with logarithmic number of different pulses based on the classic Hamming codes. We also provide a table of the best known decoupling schemes for small networks of qubits.
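The Hamming-code construction itself is not reproduced here; the sketch below illustrates only the logarithmic-scaling argument: give each node a distinct binary label of length ⌈log₂ n⌉ and, in round j, pulse exactly the nodes whose j-th label bit is 1, so that every pair of nodes is selectively addressed in at least one round.

```python
from math import ceil, log2

def decoupling_rounds(n):
    """Assign each node a distinct binary label of length ceil(log2 n);
    round j pulses the nodes whose j-th bit is 1. Returns the per-round
    pulse sets (an illustration of the scaling, not the paper's codes)."""
    m = max(1, ceil(log2(n)))
    labels = [[(i >> j) & 1 for j in range(m)] for i in range(n)]
    return [[i for i in range(n) if labels[i][j]] for j in range(m)]

n = 12
rounds = decoupling_rounds(n)

# Every pair of distinct nodes is separated (exactly one of the two is
# pulsed) in at least one round, which is the pair-decoupling requirement.
def separated(a, b):
    return any((a in r) != (b in r) for r in rounds)

assert all(separated(a, b) for a in range(n) for b in range(a + 1, n))
print(len(rounds))   # number of pulse rounds grows like log2(n)
```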
Zhu, Wuming; Trickey, S B
2017-12-28
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon an analysis of the underlying physics of atoms in B fields that allows the identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of the fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustments to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths of a millihartree to a few millihartrees for multi-electron atoms. The relative errors are similar for different atoms and ions over a large B field range, from a few millionths to a few tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B
Molthan, Andrew L.
2010-01-01
High resolution weather forecast models with explicit prediction of hydrometeor type, size distribution, and fall speed may be useful in the development of precipitation retrievals, by providing representative characteristics of frozen hydrometeors. Several single or double-moment microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, allowing for the prediction of up to three ice species. Each scheme incorporates different assumptions regarding the characteristics of their ice classes, particularly in terms of size distribution, density, and fall speed. In addition to the prediction of hydrometeor content, these schemes must accurately represent the vertical profile of water vapor to account for possible attenuation, along with the size distribution, density, and shape characteristics of ice crystals that are relevant to microwave scattering. An evaluation of a particular scheme requires the availability of field campaign measurements. The Canadian CloudSat/CALIPSO Validation Project (C3VP) obtained measurements of ice crystal shapes, size distributions, fall speeds, and precipitation during several intensive observation periods. In this study, C3VP observations obtained during the 22 January 2007 synoptic-scale snowfall event are compared against WRF model output, based upon forecasts using four single-moment and two double-moment schemes available as of version 3.1. Schemes are compared against aircraft observations by examining differences in size distribution, density, and content. In addition to direct measurements from aircraft probes, simulated precipitation can also be converted to equivalent, remotely sensed characteristics through the use of the NASA Goddard Satellite Data Simulator Unit. Outputs from high resolution forecasts are compared against radar and satellite observations emphasizing differences in assumed crystal shape and size distribution characteristics.
Amir, Sahar Z.
2013-05-01
We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized canonical (NVT) ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chain points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L
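The abstract does not spell out the reweighting formula; a standard way to shift a canonical average from a simulated inverse temperature to a nearby target one is Boltzmann reweighting of the sampled configurations. The sketch below illustrates this on a synthetic two-level system (all inputs hypothetical, not the paper's L-J database):

```python
import numpy as np

def reweight_average(obs, energy, beta_sim, beta_target):
    """Estimate a canonical (NVT) ensemble average at beta_target from
    samples drawn at beta_sim, by Boltzmann reweighting:
    <A>_t = sum_i A_i exp(-(beta_t - beta_s) U_i) / sum_i exp(-(beta_t - beta_s) U_i)."""
    obs = np.asarray(obs, dtype=float)
    energy = np.asarray(energy, dtype=float)
    logw = -(beta_target - beta_sim) * energy
    w = np.exp(logw - logw.max())        # subtract max exponent for stability
    return float(np.sum(w * obs) / np.sum(w))

# Synthetic check: a two-level system with energies 0 and 1.
rng = np.random.default_rng(0)
beta_sim, beta_target = 1.0, 2.0
p1 = np.exp(-beta_sim) / (1.0 + np.exp(-beta_sim))
U = (rng.random(200_000) < p1).astype(float)   # sampled energies at beta_sim
est = reweight_average(U, U, beta_sim, beta_target)   # observable = energy
exact = np.exp(-beta_target) / (1.0 + np.exp(-beta_target))
print(est, exact)
```

Reweighting degrades as the target moves away from the simulated state point, which is why the paper's interpolation from four neighboring chains outperforms extrapolation from a single one.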
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
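The node-centered least-squares gradient at the heart of such discretizations solves a small overdetermined system per node. As a simplified illustration (a linear rather than quadratic fit, on a hypothetical irregular stencil), the unweighted formulation is:

```python
import numpy as np

def lsq_gradient(x0, neighbors, u0, u_neighbors):
    """Unweighted least-squares gradient at a node from its edge-connected
    neighbors: minimize sum_i (g . (x_i - x0) - (u_i - u0))^2 over g."""
    dX = np.asarray(neighbors, dtype=float) - np.asarray(x0, dtype=float)
    dU = np.asarray(u_neighbors, dtype=float) - float(u0)
    g, *_ = np.linalg.lstsq(dX, dU, rcond=None)
    return g

# Sanity check: the fit is exact for a linear field u = 2x + 3y,
# even on an irregular stencil.
x0 = (0.3, 0.7)
nbrs = [(1.0, 0.2), (0.1, 1.5), (-0.4, 0.9), (0.8, -0.3)]
u = lambda p: 2.0 * p[0] + 3.0 * p[1]
grad = lsq_gradient(x0, nbrs, u(x0), [u(p) for p in nbrs])
print(grad)  # close to [2, 3]
```

A quadratic fit, as used in the paper, augments the columns of dX with second-order terms, which is what restores accuracy on the highly irregular and anisotropic meshes studied.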
Uncertainty of Microphysics Schemes in CRMs
Tao, W. K.; van den Heever, S. C.; Wu, D.; Saleeby, S. M.; Lang, S. E.
2015-12-01
Microphysics is the framework through which to understand the links between interactive aerosol, cloud and precipitation processes. These processes play a critical role in the water and energy cycle. CRMs with advanced microphysics schemes have been used to study the interaction between aerosol, cloud and precipitation processes at high resolution. But there are still many uncertainties associated with these microphysics schemes. This has arisen, in part, from the fact that microphysical processes cannot be measured directly; instead, cloud properties, which can be measured, are and have been used to validate model results. Current and future global high-resolution models are increasingly being run at traditional CRM resolutions, using microphysics schemes that were developed in traditional CRMs. A potential NASA satellite mission called the Cloud and Precipitation Processes Mission (CaPPM) is currently being planned for submission to the NASA Earth Science Decadal Survey. This mission could provide the necessary global estimates of cloud and precipitation properties with which to evaluate and improve dynamical and microphysical parameterizations and the feedbacks. In order to facilitate the development of this mission, CRM simulations have been conducted to identify microphysical processes responsible for the greatest uncertainties in CRMs. In this talk, we will present results from numerical simulations conducted using two CRMs (NU-WRF and RAMS) with different dynamics, radiation, land surface and microphysics schemes. Specifically, we will conduct sensitivity tests to examine the uncertainty of some of the key ice processes (i.e., riming, melting, freezing and shedding) in these two microphysics schemes. The idea is to quantify how these two different models respond (surface rainfall and its intensity, strength of cloud drafts, LWP/IWP, convective-stratiform-anvil area distribution) to changes of these key ice
Jha, Pradeep Kumar
Capturing the effects of detailed-chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral mesh for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow
Accurate direct Eulerian simulation of dynamic elastic-plastic flow
Energy Technology Data Exchange (ETDEWEB)
Kamm, James R [Los Alamos National Laboratory; Walter, John W [Los Alamos National Laboratory
2009-01-01
The simulation of dynamic, large strain deformation is an important, difficult, and unsolved computational challenge. Existing Eulerian schemes for dynamic material response are plagued by unresolved issues. We present a new scheme for the first-order system of elasto-plasticity equations in the Eulerian frame. This system has an intrinsic constraint on the inverse deformation gradient. Standard Godunov schemes do not satisfy this constraint. The method of Flux Distributions (FD) was devised to discretely enforce such constraints for numerical schemes with cell-centered variables. We describe a Flux Distribution approach that enforces the inverse deformation gradient constraint. As this approach is new, we do not yet have numerical results to validate our claims. This paper is the first installment of our program to develop this new method.
HYBRID SYSTEM BASED FUZZY-PID CONTROL SCHEMES FOR UNPREDICTABLE PROCESS
Directory of Open Access Journals (Sweden)
M.K. Tan
2011-07-01
Full Text Available In general, the primary aim of polymerization industry is to enhance the process operation in order to obtain high quality and purity product. However, a sudden and large amount of heat will be released rapidly during the mixing process of the two reactants, i.e. phenol and formalin, due to its exothermic behavior. The unpredictable heat will cause deviation of process temperature and hence affect the quality of the product. Therefore, it is vital to control the process temperature during the polymerization. In the modern industry, fuzzy logic is commonly used to auto-tune PID controllers to control the process temperature. However, this method needs an experienced operator to fine tune the fuzzy membership function and universe of discourse via a trial and error approach. Hence, the setting of the fuzzy inference system might not be accurate due to human error. Besides that, control of the process can be challenging due to the rapid changes in the plant parameters, which increase the process complexity. This paper proposes an optimization scheme using a hybrid of Q-learning (QL) and genetic algorithm (GA) to optimize the fuzzy membership function in order to allow the conventional fuzzy-PID controller to control the process temperature more effectively. The performance of the proposed optimization scheme is compared with the existing fuzzy-PID scheme. The results show that the proposed optimization scheme is able to control the process temperature more effectively even if disturbance is introduced.
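The abstract does not detail the QL/GA hybrid; as a minimal illustration of the GA half only, the sketch below tunes the width of a triangular membership function against a surrogate cost (a stand-in for the closed-loop temperature objective). All numbers and the target profile are hypothetical:

```python
import random

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Surrogate objective: squared error of a symmetric "error is small" set
# against a target profile of half-width 0.4 (hypothetical).
xs = [i / 50.0 - 1.0 for i in range(101)]
target = [max(0.0, 1.0 - abs(x) / 0.4) for x in xs]

def cost(width):
    return sum((tri(x, -width, 0.0, width) - t) ** 2 for x, t in zip(xs, target))

def ga_optimize(generations=60, pop_size=20, lo=0.05, hi=1.0):
    """Toy GA: truncation selection plus Gaussian mutation (no Q-learning)."""
    random.seed(7)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]
        children = [min(hi, max(lo, random.choice(parents) + random.gauss(0, 0.05)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cost)

best = ga_optimize()
print(best)  # should land near 0.4, the half-width of the target profile
```

In the paper's scheme the fitness would come from the simulated process response rather than a fixed target curve, and Q-learning would steer the search.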
Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations
Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul
2015-01-01
The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from more sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
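The SRHD system itself is involved, but the central-upwind idea (limited MUSCL reconstruction plus a flux penalized by local propagation speeds) can be shown on a scalar stand-in. The sketch below applies minmod-limited reconstruction with a local Lax-Friedrichs flux, the scalar analogue of the central-upwind flux, to inviscid Burgers' equation; forward Euler replaces the paper's Runge-Kutta stepping for brevity:

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def step_burgers(u, dx, dt):
    """One step of u_t + (u^2/2)_x = 0 on a periodic grid: MUSCL (minmod)
    reconstruction, local Lax-Friedrichs flux using local wave speeds."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))     # limited slopes
    uL = u + 0.5 * s                                      # left state at i+1/2
    uR = np.roll(u - 0.5 * s, -1)                         # right state at i+1/2
    f = lambda q: 0.5 * q * q
    a = np.maximum(np.abs(uL), np.abs(uR))                # local propagation speed
    flux = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.5
dx = 1.0 / n
for _ in range(100):                                      # run past shock formation
    u = step_burgers(u, dx, 0.4 * dx / np.max(np.abs(u)))
print(u.mean())  # conservative update preserves the mean (1.5)
```

For SRHD, the same flux formula is applied componentwise to the conserved variables, with the local speeds taken from the relativistic characteristic speeds.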
A Mass Conservation Scheme for Level Set Method Applied to Multiphase Incompressible Flows
Salih, A.; Ghosh Moulic, S.
2013-06-01
Despite the inherent advantages of the level set method in the computation of multiphase flows, the principal drawback has been the lack of conservation of mass (or volume in incompressible flows). While the level set community has resorted to the use of highly accurate schemes like a fifth-order Weighted Essentially Non-Oscillatory (WENO) scheme for the solution of level set equations, it is seen that for certain classes of problems the volume loss is still high. In order to circumvent this limitation of the level set method, in this paper we propose a volume-reinitialization scheme, wherein volume correction is accomplished by solving an appropriate equation for level set function after every time step. The volume-reinitialization scheme recognizes the local curvature of the interface while correcting the volume loss. The efficacy of the proposed technique has been tested for several problems that include determination of equilibrium shape of free surface in a rotating cylindrical container and simulation of zero-gravity drop oscillations. It is seen that there is a dramatic increase in the performance of the level set method when used in conjunction with volume-reinitialization and this strategy seems to hold promise for a wide class of problems.
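A simplified version of such volume-reinitialization shifts the level set function by a constant chosen so the enclosed volume matches a target (the paper's scheme additionally weights the correction by local interface curvature). The sketch below restores the area of a circle after simulated mass loss; grid sizes and radii are illustrative:

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside commonly used with level sets (phi > 0 inside)."""
    H = 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, H))

def correct_volume(phi, target, cell_area, eps):
    """Shift phi by a constant c so the enclosed area equals `target`;
    vol(c) is monotone increasing in c, so bisection applies."""
    vol = lambda c: float(np.sum(smoothed_heaviside(phi + c, eps)) * cell_area)
    lo, hi = -5 * eps, 5 * eps
    while vol(lo) > target:
        lo -= eps                      # widen bracket if needed
    while vol(hi) < target:
        hi += eps
    for _ in range(200):
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if vol(c) < target else (lo, c)
    return phi + 0.5 * (lo + hi)

n = 256
xs = np.linspace(-1.0, 1.0, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
phi = 0.25 - np.sqrt(X**2 + Y**2)      # eroded interface: radius 0.25
target = np.pi * 0.30**2               # desired area: radius 0.30
phi_new = correct_volume(phi, target, dx * dx, eps=1.5 * dx)
area = float(np.sum(smoothed_heaviside(phi_new, 1.5 * dx)) * dx * dx)
print(area, target)
```

A uniform shift cannot, by itself, respect local curvature, which is exactly the limitation the curvature-aware correction in the paper addresses.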
Recent advances on pumping schemes for mid-IR PCF lasers
Falconi, M. C.; Palma, G.; Starecki, F.; Nazabal, V.; Troles, J.; Adam, J.-L.; Taccheo, S.; Ferrari, M.; Prudenzano, F.
2017-02-01
The design of two pumping schemes for mid-IR lasers based on photonic crystal fibers (PCFs) is illustrated. The PCFs considered in both pumping schemes are made of dysprosium-doped chalcogenide glass Dy3+:Ga5Ge20Sb10S65. The two optical sources are accurately simulated by taking into account the spectroscopic parameters measured on a rare earth-doped glass sample. A home-made numerical model based on power propagation equations and solving the ion population rate equations of the rare earth is developed and employed to perform a feasibility investigation. The first pumping scheme is based on optical power pumping at 1700 nm wavelength and allows beam emission close to 4400 nm wavelength; the efficiency is increased to about η = 22% by integrating a suitable optical amplifier after the laser cavity. The second pumping scheme exploits two pump beams at wavelengths close to 2800 nm and 4100 nm and enables a laser emission close to 4400 nm wavelength with an efficiency higher than η = 30%. Both these sources could promote a number of promising applications in different areas such as satellite remote sensing, laser surgery, chemical/biological spectroscopy and mid-IR optical communication.
High Order Finite Volume Nonlinear Schemes for the Boltzmann Transport Equation
Energy Technology Data Exchange (ETDEWEB)
Bihari, B L; Brown, P N
2005-03-29
The authors apply the nonlinear WENO (Weighted Essentially Nonoscillatory) scheme to the spatial discretization of the Boltzmann Transport Equation modeling linear particle transport. The method is a finite volume scheme which ensures not only conservation, but also provides for a more natural handling of boundary conditions, material properties and source terms, as well as an easier parallel implementation and post processing. It is nonlinear in the sense that the stencil depends on the solution at each time step or iteration level. By biasing the gradient calculation towards the stencil with smaller derivatives, the scheme eliminates the Gibbs phenomenon with oscillations of size O(1) and reduces them to O(h^r), where h is the mesh size and r is the order of accuracy. The current implementation is three-dimensional, generalized for unequally spaced meshes, fully parallelized, and up to fifth order accurate (WENO5) in space. For unsteady problems, the resulting nonlinear spatial discretization yields a set of ODEs in time, which in turn is solved via high order implicit time-stepping with error control. For the steady-state case, they need to solve the non-linear system, typically by Newton-Krylov iterations. There are several numerical examples presented to demonstrate the accuracy, non-oscillatory nature and efficiency of these high order methods, in comparison with other fixed-stencil schemes.
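The solution-dependent stencil biasing can be made concrete with the classical fifth-order WENO reconstruction (Jiang-Shu weights), which blends three third-order candidate stencils according to their smoothness. This generic building block, not the paper's transport-specific implementation, is sketched below:

```python
def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO reconstruction of the left-biased interface value
    v_{i+1/2} from five cell averages v_{i-2}..v_{i+2}."""
    # Three third-order candidate reconstructions
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Smoothness indicators (large where the stencil crosses a discontinuity)
    b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
    b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
    # Nonlinear weights: ideal weights 0.1, 0.6, 0.3, biased away from
    # non-smooth stencils — this is what suppresses Gibbs oscillations
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)

smooth = weno5_reconstruct(1.0, 2.0, 3.0, 4.0, 5.0)   # linear data -> 3.5
jump = weno5_reconstruct(0.0, 0.0, 0.0, 1.0, 1.0)     # stencil avoids the jump
print(smooth, jump)
```

On smooth data all candidates agree and fifth-order accuracy is recovered; near the jump the weights collapse onto the smooth stencil, so the reconstructed value stays near 0 instead of oscillating.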
Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings
Directory of Open Access Journals (Sweden)
Caixue Zhou
2017-01-01
Full Text Available Generalized signcryption (GSC can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.
A New Adaptive Hungarian Mating Scheme in Genetic Algorithms
Directory of Open Access Journals (Sweden)
Chanju Jung
2016-01-01
Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using previously suggested Hungarian mating schemes. Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to select one of these Hungarian mating schemes. Every mated pair of solutions has to vote for the next generation mating scheme. The distance between parents and the distance between parent and offspring are considered when they vote. Two well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only pure and previous hybrid schemes but also existing distance-based mating schemes.
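Maximizing or minimizing the total mating distance is an assignment problem, normally solved with the O(n^3) Hungarian algorithm. The sketch below uses brute force over permutations (feasible only for tiny groups) purely to show what is being optimized; the bitstrings and Hamming distance are illustrative:

```python
from itertools import permutations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def hungarian_mating(group_a, group_b, mode="max"):
    """Pair each solution in group_a with one in group_b so the total mating
    (Hamming) distance is maximized or minimized. Brute force stand-in for
    the Hungarian algorithm."""
    best, best_cost = None, None
    for perm in permutations(range(len(group_b))):
        cost = sum(hamming(group_a[i], group_b[j]) for i, j in enumerate(perm))
        if best_cost is None or (cost > best_cost if mode == "max" else cost < best_cost):
            best, best_cost = perm, cost
    return list(enumerate(best)), best_cost

a = ["0000", "1111", "0101"]
b = ["1111", "0000", "1010"]
pairs_max, total_max = hungarian_mating(a, b, mode="max")
pairs_min, total_min = hungarian_mating(a, b, mode="min")
print(total_max, total_min)
```

In the adaptive scheme of the paper, each generation's pairs vote, based on parent-parent and parent-offspring distances, on whether the next generation should use the max, min, or random variant.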
7 CFR 400.458 - Scheme or device.
2010-01-01
... AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Administrative Remedies for Non-Compliance § 400.458 Scheme or... material scheme or device to obtain catastrophic risk protection, other plans of insurance coverage, or...
Energy Technology Data Exchange (ETDEWEB)
Biermann, F.
2000-12-01
This paper argues that to reconcile the objectives of free trade and environmental protection, limited reforms of international trade law are required. There is a need to guarantee, first, that universally accepted international environmental agreements that mandate trade-restrictions remain compatible with international trade law, in particular with the General Agreement on Tariffs and Trade. Second, it is necessary to ensure that the interests of small and vulnerable states are protected against environmental unilateralism of the major trading nations. This reform agenda could be realized, it is argued, through an authoritative interpretation of international trade law by the Ministerial Conference of the World Trade Organization (WTO). This interpretation should stipulate that environmentally-motivated trade restrictions which are related to processes and production methods, and which are intended to protect environmental goods outside the importing country, be compatible with WTO law, but only if mandated by international environmental agreements that have been previously accepted by the Ministerial Conference. This paper outlines the rationale for such authoritative interpretation and offers a possible legal draft. This clarification of the relationship between international environmental and international trade law would protect the sovereign right of smaller trading nations, particularly developing countries, to enact their own environmental standards as may be appropriate and feasible according to their specific situation. It would also maintain the supremacy of multilateralism in both international trade and environmental policies, as opposed to unilateral action. The principle of international co-operation and the rule of law would be strengthened, and attempts to use the international trade system for the enforcement of unilaterally decided environmental standards would be precluded. (orig.)
Directory of Open Access Journals (Sweden)
Bochaton Audrey
2007-06-01
Full Text Available Abstract Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind
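The deliberate (non-random) first-stage selection can be read as a small combinatorial search: pick the subset of clusters whose combined determinant profile best matches the city-wide distribution. The sketch below illustrates this with entirely hypothetical cluster summaries and a least-squares match criterion (the paper does not prescribe a specific metric):

```python
from itertools import combinations

# Hypothetical candidate clusters: (population share, literacy rate,
# share of majority nationality).
clusters = {
    "A": (0.10, 0.95, 0.90), "B": (0.15, 0.60, 0.40),
    "C": (0.20, 0.80, 0.70), "D": (0.25, 0.70, 0.55),
    "E": (0.30, 0.85, 0.80),
}
population = (0.78, 0.66)   # assumed city-wide literacy / nationality rates

def profile(subset):
    """Population-weighted literacy and nationality profile of a cluster set."""
    w = sum(clusters[c][0] for c in subset)
    lit = sum(clusters[c][0] * clusters[c][1] for c in subset) / w
    nat = sum(clusters[c][0] * clusters[c][2] for c in subset) / w
    return lit, nat

def best_combination(k):
    """Deliberately choose the k clusters whose combined profile most
    closely resembles the overall population."""
    return min(combinations(clusters, k),
               key=lambda s: sum((p - q) ** 2 for p, q in zip(profile(s), population)))

chosen = best_combination(3)
print(chosen)
```

With real data the determinants (here literacy and nationality) are chosen a priori because no health outcome data exist before the survey, exactly as described in the abstract.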
Reconciling Long-Term Trends in Air Quality with Bottom-up Emission Inventories for Los Angeles
Mcdonald, B. C.; Kim, S. W.; Frost, G. J.; Harley, R.; Trainer, M.
2014-12-01
Significant long-term changes in air quality have been observed in the United States over several decades. However, reconciling ambient observations with bottom-up emission inventories has proved challenging. In this study, we perform WRF-Chem modeling in the Los Angeles basin for carbon monoxide (CO), nitrogen oxides (NOx), volatile organic compounds (VOCs), and ozone (O3) over a long time period (1987-2010). To improve reconciliation of emission inventories with atmospheric observations, we incorporate new high-resolution emissions maps of a major to dominant source of urban air pollution, motor vehicles. A fuel-based approach is used to estimate motor vehicle emissions utilizing annual fuel sales reports, traffic count data that capture spatial and temporal patterns of vehicle activity, and pollutant emission factors measured from roadway studies performed over the last twenty years. We also update emissions from stationary sources using Continuous Emissions Monitoring Systems (CEMS) data when available, and use emission inventories developed by the South Coast Air Quality Management District (SCAQMD) and California Air Resources Board (ARB) for other important emission source categories. WRF-Chem modeling is performed in three years where field-intensive measurements were made: 1987 (SCAQS: Southern California Air Quality Study), 2002 (ITCT: Intercontinental Transport and Chemical Transformation Study), and 2010 (CALNEX). We assess the ability of the improved bottom-up emissions inventory to predict long-term changes in ambient levels of CO, NOx, and O3, which are known to have occurred over this time period. We also assess changing spatial and temporal patterns of primary (CO and NOx) and secondary (O3) pollutant concentrations across the Los Angeles basin, which have important implications for human health.
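The fuel-based method reduces, at its core, to multiplying fuel sold by a fleet-average emission factor, then allocating the total in space and time using traffic counts. A minimal sketch with entirely hypothetical numbers (not the study's inventory values):

```python
def fuel_based_emissions(fuel_sales_kg, emission_factor_g_per_kg):
    """Annual fleet emissions in tonnes from fuel sales (kg/yr) and a
    fleet-average emission factor (g pollutant per kg fuel burned)."""
    return fuel_sales_kg * emission_factor_g_per_kg / 1e6

def allocate(total, counts):
    """Distribute an emission total over hours (or grid cells) in
    proportion to traffic counts."""
    s = sum(counts)
    return [total * c / s for c in counts]

annual_co_t = fuel_based_emissions(6.0e9, 20.0)   # hypothetical basin inputs
hourly = allocate(annual_co_t, [40, 10, 80, 120]) # hypothetical count profile
print(annual_co_t, hourly)
```

The strength of the approach is that fuel sales are well reported, so long-term emission trends track measured emission-factor changes rather than modeled vehicle activity.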
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of MOV energy signals of fixed series capacitors (FSC) as input to train the ANN. Such an approach has never been used in any earlier fault analysis algorithms in the last few decades. The proposed scheme uses only single-end measurement energy signals of the MOV in all 3 phases over one cycle duration from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. Feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40 % FSC at Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
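The MOV energy feature itself is just the time integral of instantaneous power absorbed by the varistor over the post-fault cycle. A sketch of that computation with the trapezoidal rule, on a synthetic waveform rather than actual fault records:

```python
import math

def mov_energy(voltage, current, dt):
    """Energy absorbed by an MOV over a window, E = integral v(t) i(t) dt,
    via the trapezoidal rule. One such value per phase, over one post-fault
    cycle, would form an ANN input feature."""
    p = [v * i for v, i in zip(voltage, current)]
    return dt * (sum(p) - 0.5 * (p[0] + p[-1]))

# Synthetic one-cycle check with unit-amplitude in-phase v and i at 50 Hz:
# E = integral of sin^2 over one period = T/2 = 0.01 J.
f, n = 50.0, 2000
dt = 1.0 / (f * n)
t = [k * dt for k in range(n + 1)]
v = [math.sin(2 * math.pi * f * tk) for tk in t]
i = [math.sin(2 * math.pi * f * tk) for tk in t]
E = mov_energy(v, i, dt)
print(E)
```

In the actual scheme, the three per-phase energies computed this way are the single-end quantities fed to the Levenberg-Marquardt-trained ANN.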
Ramakrishnan, Vivek; Ramesh, K.
2017-05-01
Varied spatial resolution of isochromatic fringes over the domain influences the accuracy of fringe order estimation using TFP/RGB photoelasticity. This has been brought out in the first part of the work. The existing scanning schemes do not take this into account, which leads to the propagation of noise from the low spatial resolution zones. In this paper, a method is proposed for creating a whole field map which represents the spatial resolution of the isochromatic fringe pattern. A novel scanning scheme is then proposed whose progression is guided by the spatial resolution of the fringes in the isochromatic image. The efficacy of the scanning scheme is demonstrated using three problems: an inclined crack under bi-axial loading, a thick ring subjected to internal pressure, and a stress-frozen specimen of an aerospace component. The proposed scheme has use in a range of applications. The scanning scheme is effective even if the model has random zones of noise, which is demonstrated using a plate subjected to concentrated load. This aspect is well utilised to extract fringe data from thin slices cut from a stereo-lithographic model that has characteristic random noise due to layered manufacturing.
A Fuzzy Commitment Scheme with McEliece's Cipher
Directory of Open Access Journals (Sweden)
Deo Brat Ojha
2010-04-01
Full Text Available In this paper an attempt has been made to explain a fuzzy commitment scheme combined with the McEliece scheme. The efficiency and security of this cryptosystem are comparatively better than those of other cryptosystems. This scheme is one of the interesting candidates for post-quantum cryptography, hence our interest in combining this cryptosystem with a fuzzy commitment scheme. The concept itself is illustrated with the help of a simple situation, and experimental mathematical verification is provided.
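The generic fuzzy commitment construction (Juels-Wattenberg style) hides a random codeword under the witness and publishes its hash; any witness within the code's correction radius opens the commitment. The sketch below uses a toy repetition code in place of the McEliece/Goppa code of the paper, so it illustrates the commitment mechanics only, not the McEliece security:

```python
import hashlib
import secrets

R = 5  # repetition factor: toy stand-in for a real error-correcting code

def encode(bits):
    return [b for b in bits for _ in range(R)]

def decode(bits):
    """Majority-vote decoder: corrects up to 2 flips per block of 5."""
    return [int(sum(bits[i * R:(i + 1) * R]) > R // 2) for i in range(len(bits) // R)]

def commit(witness):
    """Fuzzy commitment: pick a random codeword c, publish (hash(c), w XOR c)."""
    msg = [secrets.randbelow(2) for _ in range(len(witness) // R)]
    c = encode(msg)
    delta = [w ^ x for w, x in zip(witness, c)]
    return hashlib.sha256(bytes(c)).hexdigest(), delta

def open_commitment(commitment, delta, witness_noisy):
    """Accept iff the noisy witness error-corrects back to the committed codeword."""
    c_noisy = [w ^ d for w, d in zip(witness_noisy, delta)]
    c_rec = encode(decode(c_noisy))
    return hashlib.sha256(bytes(c_rec)).hexdigest() == commitment

w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1] * 2        # hypothetical 20-bit fuzzy witness
h, delta = commit(w)
w_noisy = list(w)
w_noisy[3] ^= 1                                # noise within correction radius
w_noisy[12] ^= 1
print(open_commitment(h, delta, w_noisy))
```

Swapping the repetition code for a Goppa code used in McEliece yields the post-quantum variant the paper describes.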
Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
2013-01-01
Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes hierarchical recognition to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
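The linear discriminant analysis at the core of such a system can be illustrated in its simplest two-class Fisher form: project features onto the direction maximizing between-class over within-class scatter. The toy feature clouds below are synthetic, not facial-expression features:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0), where Sw is the
    pooled within-class scatter matrix."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))   # e.g. "neutral" features
X1 = rng.normal([2.0, 2.0], 0.5, size=(200, 2))   # e.g. "happy" features
w = fisher_lda_direction(X0, X1)
thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())   # midpoint decision threshold
acc = ((X0 @ w < thr).mean() + (X1 @ w > thr).mean()) / 2
print(w, acc)
```

A hierarchical system such as HL-FER applies discriminants like this in stages, first separating expression groups and then the similar expressions within a group.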
Automatic inpainting scheme for video text detection and removal.
Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad
2013-11-01
We present a two-stage framework for automatic video text removal to detect and remove embedded video texts and fill in their remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via an unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects of each frame are analyzed to localize video texts. The detected video text regions are removed, then the video is restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting the anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processes to satisfy visual consistency. The experimental results demonstrate the effectiveness of both our proposed video text detection approach and the video completion technique, and consequently the entire automatic video text removal and restoration process.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component in ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers. Therefore, studies on scheme optimization for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes can be made by the ANP method for natural ecology planning of urban rivers. This method can be used to provide references for sustainable development and construction of urban rivers. The ANP method is also suitable for optimization of schemes for urban green space planning and design.
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes for water transportation projects is the process of choosing a relatively optimal scheme from a number of schemes for water transportation programming and management projects, and it is important in both the theory and the practice of water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the index weights objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method based on the fuzzy preference relation matrix, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme. The research results show that FPRM-PP is intuitive and practical, that the correction range of the fuzzy preference relation matrix A it produces is relatively small, and that the result obtained is both stable and accurate; FPRM-PP can therefore be widely used in the optimal selection among multi-factor decision-making schemes.
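The abstract does not spell out the FPRM-PP algorithm, but the generic projection pursuit step it builds on can be sketched: normalize the index matrix, search for a unit projection direction that spreads the schemes' projected values most, then rank schemes by their projections. The scheme data below are invented, and the fuzzy-preference-relation correction specific to FPRM-PP is omitted.

```python
# Illustrative sketch of the projection pursuit idea for scheme ranking:
# normalize the evaluation indexes, grid-search unit directions, keep the
# direction that maximizes the dispersion of projected values, and rank
# schemes by projection. FPRM-PP's fuzzy preference relation matrix step
# is omitted; the index values below are made up.
import math

def normalize(data):
    """Scale each index (column) to [0, 1]; larger-is-better assumed."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(x - l) / (h - l) if h > l else 0.0
             for x, l, h in zip(row, lo, hi)] for row in data]

def best_projection(data, steps=360):
    """Grid search over 2D unit directions in the first quadrant,
    maximizing the standard deviation of the projected values."""
    best_dir, best_spread = None, -1.0
    for k in range(steps):
        t = (math.pi / 2) * k / steps
        d = (math.cos(t), math.sin(t))
        proj = [row[0] * d[0] + row[1] * d[1] for row in data]
        mean = sum(proj) / len(proj)
        spread = math.sqrt(sum((p - mean) ** 2 for p in proj) / len(proj))
        if spread > best_spread:
            best_dir, best_spread = d, spread
    return best_dir

# Hypothetical schemes scored on two indexes (cost-effectiveness, safety).
schemes = [[0.9, 0.7], [0.4, 0.9], [0.2, 0.3]]
data = normalize(schemes)
d = best_projection(data)
scores = [row[0] * d[0] + row[1] * d[1] for row in data]
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranking)
```

In a real application the direction search runs in as many dimensions as there are indexes (usually with a global optimizer rather than a grid), and the chosen direction doubles as an objective set of index weights.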
Setting aside transactions from pyramid schemes as impeachable ...
African Journals Online (AJOL)
The point of contention in this case was whether the illegality of the business of the scheme was a relevant consideration in determining whether the pay-outs were made in the ordinary course of business of the scheme. This paper discusses pyramid schemes in the context of impeachable dispositions in terms of the ...
A comparative study of prioritized handoff schemes with guard channels in wireless networks
African Journals Online (AJOL)
D. U. Onyishi et al., Nigerian Journal of Technology, Vol. 34, No. 3, July 2015, p. 600. ... application of resource allocation schemes. This gives precedence to handoff calls. Such schemes ...
Matters of Coercion-Resistance in Cryptographic Voting Schemes
Kempka, Carmen
2014-01-01
This work addresses coercion-resistance in cryptographic voting schemes, focusing on three particularly challenging cases: write-in candidates, internet elections and delegated voting. Furthermore, it introduces a taxonomy for analyzing and comparing a wide variety of voting schemes, and reports practical experiences with the voting scheme Bingo Voting.
Efficient rate control scheme using modified inter-layer dependency ...
Indian Academy of Sciences (India)
In this paper, a spatial-resolution-ratio-based MB mode decision scheme is proposed for spatially enhanced layers. The scheme uses the motion estimated at the base layer to encode the respective MBs in the enhancement layers. The spatial–temporal search schemes at the enhancement layers are used to derive motion ...
7 CFR 623.21 - Scheme and device.
2010-01-01
Title 7 (Agriculture), Vol. 6, 2010-01-01. AGRICULTURE, WATER RESOURCES, EMERGENCY WETLANDS RESERVE PROGRAM, § 623.21 Scheme and device. (a) If it is determined by NRCS that a landowner has employed a scheme or device to defeat the purposes of this part, any...
7 CFR 1421.305 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 10, 2010-01-01. § 1421.305 Misrepresentation and scheme or device. (a) A producer shall be ineligible to receive payments under this subpart if it is determined by DAFP, the State committee, or the county committee to have: (1) Adopted any scheme or device...
7 CFR 1430.310 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 10, 2010-01-01. Disaster Assistance Payment Program, § 1430.310 Misrepresentation and scheme or device. (a) In addition to... assistance under this program if the producer is determined by FSA or CCC to have: (1) Adopted any scheme or...
7 CFR 795.17 - Scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 7, 2010-01-01. PROVISIONS COMMON TO MORE THAN ONE PROGRAM, PAYMENT LIMITATION, General, § 795.17 Scheme or device. All or any... person adopts or participates in adopting any scheme or device designed to evade or which has the effect...
7 CFR 760.819 - Misrepresentation, scheme, or device.
2010-01-01
Title 7 (Agriculture), Vol. 7, 2010-01-01. § 760.819 Misrepresentation, scheme, or device. (a) A person is ineligible to receive assistance under this part if it is determined that such person has: (1) Adopted any scheme or device that tends to defeat...
7 CFR 784.9 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 7, 2010-01-01. § 784.9 Misrepresentation and scheme or device. (a) A sheep and lamb operation shall be ineligible to receive assistance...) Adopted any scheme or device that tends to defeat the purpose of this program; (2) Made any fraudulent...
7 CFR 636.14 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 6, 2010-01-01. § 636.14 Misrepresentation and scheme or device. (a) A participant who is determined to have erroneously... they are determined to have knowingly: (1) Adopted any scheme or device that tends to defeat the...
Developing and Rewarding Excellent Teachers: The Scottish Chartered Teacher Scheme
Ingvarson, Lawrence
2009-01-01
The Scottish Chartered Teacher Scheme was designed to recognise and reward teachers who attained high standards of practice. The scheme emerged in 2001 as part of an agreement between government, local employing authorities and teacher organisations. Policies such as the chartered teacher scheme aim to benefit students in two main ways: by…
7 CFR 1470.36 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 10, 2010-01-01. General Administration, § 1470.36 Misrepresentation and scheme or device. (a) If NRCS determines that an... to have: (1) Adopted any scheme or device that tends to defeat the purpose of the program; (2) Made...
7 CFR 1430.610 - Misrepresentation and scheme or device.
2010-01-01
Title 7 (Agriculture), Vol. 10, 2010-01-01. ... Disaster Assistance Payment Program II (DDAP-II), § 1430.610 Misrepresentation and scheme or device. (a) In... receive assistance under this program if the producer is determined by CCC to have: (1) Adopted any scheme...