Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
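The combination step described above lends itself to a compact sketch. The code below averages per-individual cluster membership probabilities from two models, weighted by posterior model probabilities; the cluster labels, probabilities, and weights are invented for illustration, and it is assumed that cluster labels have already been aligned across the two models.

```python
def bma_memberships(memberships_by_model, model_weights):
    """Average per-individual cluster membership probabilities across
    models, weighted by each model's posterior probability (BMA)."""
    averaged = []
    n = len(memberships_by_model[0])
    for i in range(n):
        combined = {}
        for probs, w in zip(memberships_by_model, model_weights):
            for cluster, p in probs[i].items():
                combined[cluster] = combined.get(cluster, 0.0) + w * p
        averaged.append(combined)
    return averaged

# Hypothetical memberships from latent class analysis and grade of membership
lca = [{"A": 0.9, "B": 0.1}, {"A": 0.2, "B": 0.8}]
gom = [{"A": 0.7, "B": 0.3}, {"A": 0.4, "B": 0.6}]
weights = [0.6, 0.4]          # hypothetical posterior model probabilities
averaged = bma_memberships([lca, gom], weights)
```

The averaged memberships can then feed the downstream linkage analysis in place of either single model's clustering.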
A more accurate scheme for calculating Earth's skin temperature
Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden
2009-02-01
The theoretical framework of the vertical discretization of a ground column for calculating Earth’s skin temperature is presented. The suggested discretization is derived from the evenly heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of levels, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme (“op(3,2,0)”) can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m⁻²), from 11% (or 5 W m⁻²) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m⁻²) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m⁻²) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by adding more layers to the vertical discretization. Similar improvements are expected at other locations with different land types, since the numerical error is inherent in the models for all land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.
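As a much-simplified illustration of a discretized ground column (this is a generic explicit finite-volume update, not the paper's op(3,2,0) scheme), the sketch below advances layer temperatures of a 1-D heat-conduction column with insulated boundaries; layer thickness, diffusivity, time step, and the initial profile are all hypothetical.

```python
def step_column(T, dz, kappa, dt):
    """One explicit time step of 1-D heat conduction through a ground
    column, with zero-flux (insulated) boundaries at top and bottom."""
    n = len(T)
    flux = [0.0] * (n + 1)                      # flux[j]: between layers j-1 and j
    for j in range(1, n):
        flux[j] = -kappa * (T[j] - T[j - 1]) / dz
    return [T[j] + dt * (flux[j] - flux[j + 1]) / dz for j in range(n)]

# Hypothetical column: 5 layers of 0.1 m, soil-like diffusivity 5e-7 m^2/s
T = [280.0, 282.0, 284.0, 286.0, 288.0]         # layer temperatures (K), top first
for _ in range(2000):
    T = step_column(T, dz=0.1, kappa=5e-7, dt=5.0)
print(T[0])   # skin (top-layer) temperature warms toward the column mean
```

The stability constraint of this explicit update (kappa*dt/dz² ≤ 1/2) is comfortably satisfied by the values above; the zero-flux boundaries make the column conserve total heat content exactly.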
A fast and accurate dihedral interpolation loop subdivision scheme
Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan
2018-04-01
In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. To solve the problem of surface shrinkage, we keep the limit condition unchanged. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly because the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach uses local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method on various 3D triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
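The refinement test above can be sketched directly: compute the angle between the normals of adjacent faces and refine only where it exceeds a threshold. The vertex coordinates and threshold below are hypothetical.

```python
import math

def normal(a, b, c):
    """Unit normal of triangle (a, b, c)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    mag = math.sqrt(sum(x * x for x in n))
    return [x / mag for x in n]

def dihedral_deg(n1, n2):
    """Angle in degrees between two face normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

# Two coplanar faces give 0 degrees; a right-angle fold gives 90 degrees.
flat = dihedral_deg(normal((0, 0, 0), (1, 0, 0), (0, 1, 0)),
                    normal((1, 0, 0), (1, 1, 0), (0, 1, 0)))
bent = dihedral_deg(normal((0, 0, 0), (1, 0, 0), (0, 1, 0)),
                    normal((0, 0, 0), (1, 0, 0), (0, 0, 1)))
THRESHOLD_DEG = 15.0                   # hypothetical refinement threshold
refine_flat = flat > THRESHOLD_DEG     # flat region: skip refinement
refine_bent = bent > THRESHOLD_DEG     # sharp feature: refine
```

Raising the threshold concentrates subdivision on sharp features and leaves near-planar regions coarse, which is where the cost saving comes from.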
An accurate scheme by block method for third order ordinary ...
African Journals Online (AJOL)
problems of ordinary differential equations is presented in this paper. The approach of collocation approximation is adopted in the derivation of the scheme and then the scheme is applied as simultaneous integrator to special third order initial value problem of ordinary differential equations. This implementation strategy is ...
International Nuclear Information System (INIS)
Mathieu, Mathilde; Ruedinger, Andreas
2016-01-01
The reform for a greater integration of support schemes in the electricity market is not a marginal development, and should allow for a transition period for market actors to adapt. Lessons from the experience of neighboring countries will be valuable, especially in view of greater regional harmonization in the future. Better integration of solutions for reducing demand and greater system flexibility would also be advisable going forward. Finally, it is also essential to evaluate the impact of the reform on the risk of electricity market concentration and a reduced diversity of actors, as well as the potential increase in barriers to entry, which could hinder the emergence of collaborative or citizen projects, as these are crucial for improving project acceptance and sharing RES costs. Through stronger exposure to market signals, market premia can assist the technical and economic integration of renewable energy sources (RES). The resultant advantages in terms of improvements in forecasting and marketing tools, negative price management and support for more valuable technologies and practices in the system depend closely, however, on the precise calibration of the mechanisms involved. To address this, it seems essential to learn from the experiences of neighboring countries and to plan an adequate transition period for all actors to adapt to the change in regulation. The rise in transaction costs and risk premia can lead to additional costs under the new mechanisms. Direct costs, which are linked to the marketing of electricity and to rules aimed at curtailing negative prices, remain limited. However, a cost-benefit analysis must consider the impact of changes in regulation on risk perception and the cost of capital for financing projects, a determinant factor in the economic viability of a project. This further implies a need to consider complementary measures aimed at reducing financial risks in order to limit production costs and incremental costs for society. The push
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
International Nuclear Information System (INIS)
Abgrall, Remi; Mezine, Mohamed
2003-01-01
The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of the scalar advection equation and to the solution of the compressible Euler equations, both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found whereby the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
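The need for a prefilter can be seen in one dimension. A minimal sketch, under the standard definition of the cubic B-spline basis: the basis sums to one at every fractional position, yet feeding raw samples in as coefficients smooths rather than interpolates, which is exactly the bias the recursive prefilter is designed to remove.

```python
def bspline3(x):
    """Cubic B-spline basis function (support |x| < 2)."""
    x = abs(x)
    if x < 1.0:
        return (4.0 - 6.0 * x * x + 3.0 * x ** 3) / 6.0
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def evaluate(coeffs, x):
    """Evaluate sum_k c_k * B3(x - k) at a fractional position x >= 0."""
    k0 = int(x)
    return sum(coeffs[k] * bspline3(x - k)
               for k in range(max(0, k0 - 1), min(len(coeffs), k0 + 3)))

# Partition of unity: constant coefficients reproduce a constant exactly.
unity = evaluate([1.0] * 8, 3.4)
# But a unit sample used directly as a coefficient evaluates to 2/3 at
# its own node, not 1 -- without the prefilter the scheme is biased.
peak = evaluate([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0], 3.0)
```

In practice the coefficients are obtained from the samples by the recursive (IIR) B-spline transform, after which `evaluate` does interpolate the data.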
Development of highly accurate approximate scheme for computing the charge transfer integral
Energy Technology Data Exchange (ETDEWEB)
Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)
2015-08-21
The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetric displacements, they are less efficient at describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
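The Taylor-expansion idea can be sketched generically: expensive evaluations of the transfer integral at a few reference geometries yield finite-difference derivatives, after which the fluctuation along the coordinate is evaluated from the polynomial alone. The functional form of `J(q)` below is a hypothetical stand-in; in the paper each evaluation would be an EOM-CC calculation.

```python
import math

def J(q):
    """Hypothetical transfer-integral profile along one coordinate q."""
    return 0.10 * math.exp(-0.5 * q) + 0.02 * q

def taylor_J(q, q0=0.0, h=1e-3):
    """Second-order Taylor expansion of J about q0, with derivatives
    from central finite differences (three reference evaluations)."""
    j0 = J(q0)
    d1 = (J(q0 + h) - J(q0 - h)) / (2.0 * h)
    d2 = (J(q0 + h) - 2.0 * j0 + J(q0 - h)) / (h * h)
    return j0 + d1 * (q - q0) + 0.5 * d2 * (q - q0) ** 2
```

Three reference calculations replace an arbitrary number of on-the-fly evaluations, which is where the cost saving over the "exact" scheme comes from when a large region of the surface is of interest.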
DEFF Research Database (Denmark)
Fasano, Andrea; Rasmussen, Henrik K.
2017-01-01
A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element method ...
Directory of Open Access Journals (Sweden)
De Vuyst Florian
2016-11-01
Full Text Available In a recent paper [Poncet R., Peybernes M., Gasc T., De Vuyst F. (2016) Performance modeling of a compressible hydrodynamics solver on multicore CPUs, in “Parallel Computing: on the road to Exascale”], we presented a performance analysis of staggered Lagrange-remap schemes, a class of solvers widely used for hydrodynamics applications. This paper is devoted to rethinking and redesigning the Lagrange-remap process to achieve better performance on today’s computing architectures. As an unintended outcome, the analysis has led us to the discovery of a new family of solvers, the so-called Lagrange-flux schemes, which appear promising for the CFD community.
Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin
2018-01-01
The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. The new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with satisfying the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
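The IMEX splitting itself is simple to illustrate. The sketch below is first-order IMEX Euler on a scalar split ODE, not the paper's IMEX-SSP2(2,3,2): the stiff linear term is advanced implicitly (here solvable in closed form), the non-stiff term explicitly, so the time step is not limited by the stiff term.

```python
def imex_euler_step(y, t, dt, lam, f):
    """One IMEX Euler step for y' = lam*y + f(t): the stiff term lam*y
    is treated implicitly, the non-stiff forcing f(t) explicitly."""
    return (y + dt * f(t)) / (1.0 - dt * lam)

# Stiff decay with lam = -1000; dt = 0.01 is five times the explicit
# Euler stability limit 2/|lam|, yet the IMEX step remains stable.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(10):
    y = imex_euler_step(y, t, dt, lam=-1000.0, f=lambda t: 0.0)
    t += dt
print(y)  # stays bounded and positive despite the stiff term
```

Higher-order IMEX Runge-Kutta methods such as the one in the paper apply the same implicit/explicit split stage by stage, with coefficients tuned for accuracy and the SSP property.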
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
A high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations is developed; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
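To make the MUSCL ingredient concrete, here is a textbook second-order MUSCL step with a minmod limiter for 1-D linear advection on a periodic domain (a deliberately simplified relative of the paper's 5th-order scheme; the initial profile is arbitrary test data).

```python
def minmod(a, b):
    """Slope limiter: zero at extrema, smaller-magnitude slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_step(u, c):
    """One step of 1-D linear advection (speed > 0, periodic domain) with
    MUSCL reconstruction and a minmod limiter; c = a*dt/dx is the CFL number."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # limited upwind value at the right face of each cell
    face = [u[i] + 0.5 * (1.0 - c) * slope[i] for i in range(n)]
    return [u[i] - c * (face[i] - face[i - 1]) for i in range(n)]

u = [0.0] * 4 + [1.0] * 4 + [0.0] * 4    # square pulse
u1 = muscl_step(u, c=0.5)
```

For 0 ≤ c ≤ 1 this limited scheme is conservative and keeps the solution within its initial bounds, i.e., it introduces no new extrema at the discontinuities.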
International Nuclear Information System (INIS)
Abgrall, Remi; Mezine, Mohamed
2004-01-01
After having recalled the basic concepts of residual distribution (RD) schemes, we provide a systematic construction of distribution schemes able to handle general unstructured meshes, extending the work of Sidilkover. Then, by using the concept of simple waves, we show how to generalize this technique to symmetrizable linear systems. A stability analysis is provided. We formally extend this construction to the Euler equations. Several test cases are presented to validate our approach.
Directory of Open Access Journals (Sweden)
Andrew Erwin
Full Text Available In this paper, a novel haptic feedback scheme, used for accurately positioning a 1DOF virtual wrist prosthesis through sensory substitution, is presented. The scheme employs a three-node tactor array and discretely and selectively modulates the stimulation frequency of each tactor to relay 11 discrete haptic stimuli to the user. Able-bodied participants were able to move the virtual wrist prosthesis via a surface electromyography based controller. The participants evaluated the feedback scheme without visual or audio feedback, relying on the haptic feedback alone to correctly position the hand. The scheme was evaluated through both normal (perpendicular) and shear (lateral) stimulations applied on the forearm. Normal stimulations were applied through a prototype device previously developed by the authors, while shear stimulations were generated using a ubiquitous coin motor vibrotactor. Trials with no feedback served as a baseline to compare results within the study and to the literature. The results indicated that both normal and shear stimulations enabled accurate positioning of the virtual wrist, with no significant difference between the two. Using haptic feedback was substantially better than no feedback. The results found in this study are significant, since the feedback scheme relays rich haptic information to the user with relatively few tactors and can be learned easily despite a relatively short amount of training. Additionally, the results are important for the haptic community, since they contradict the common conception in the literature that normal stimulation is inferior to shear. From an ergonomic perspective, normal stimulation has the potential to benefit upper limb amputees, since it can operate at lower frequencies than shear-based vibrotactors while also generating less noise. Through further tuning of the novel haptic feedback scheme and normal stimulation device, a compact and comfortable sensory substitution
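One plausible way to relay 11 discrete stimuli through three tactors is to let each stimulus select one tactor and one of a few vibration frequencies. The mapping and frequency values below are hypothetical, chosen only to show the encoding idea, not the paper's calibration.

```python
FREQS_HZ = [60, 100, 160, 250]          # hypothetical frequency levels

def encode(stimulus):
    """Map a stimulus index 0..10 to (tactor index, frequency in Hz):
    consecutive stimuli cycle through frequencies on one tactor before
    moving to the next tactor in the three-node array."""
    if not 0 <= stimulus <= 10:
        raise ValueError("stimulus must be in 0..10")
    tactor, level = divmod(stimulus, len(FREQS_HZ))
    return tactor, FREQS_HZ[level]

codes = [encode(s) for s in range(11)]
```

With four frequency levels, three tactors give up to 12 distinguishable codes, so 11 wrist positions fit with one code to spare.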
Numerical Investigation of a Novel Wiring Scheme Enabling Simple and Accurate Impedance Cytometry
Directory of Open Access Journals (Sweden)
Federica Caselli
2017-09-01
Full Text Available Microfluidic impedance cytometry is a label-free approach for high-throughput analysis of particles and cells. It is based on the characterization of the dielectric properties of single particles as they flow through a microchannel with integrated electrodes. However, the measured signal depends not only on the intrinsic particle properties, but also on the particle trajectory through the measuring region, thus challenging the resolution and accuracy of the technique. In this work we show via simulation that this issue can be overcome without resorting to particle focusing, by means of a straightforward modification of the wiring scheme for the most typical and widely used microfluidic impedance chip.
Accurate and simple measurement method of complex decay schemes radionuclide activity
International Nuclear Information System (INIS)
Legrand, J.; Clement, C.; Bac, C.
1975-01-01
A simple method for the measurement of activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that this efficiency is very high, near 100%. The associated uncertainty is low, in spite of the important uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the ¹⁵²Eu primary reference.
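The near-100% figure follows from a simple combinatorial argument: in a well-type (near-4π) detector a decay is missed only if every photon of the cascade escapes. A minimal sketch, with hypothetical branching ratios and per-photon efficiencies:

```python
def total_efficiency(branches):
    """Probability of detecting at least one photon per decay.
    branches: list of (branching_ratio, [per-photon efficiencies]);
    each branch is missed only if every one of its photons escapes."""
    eps = 0.0
    for ratio, photon_effs in branches:
        miss = 1.0
        for e in photon_effs:
            miss *= (1.0 - e)
        eps += ratio * (1.0 - miss)
    return eps

# Hypothetical example: one branch emitting a two-photon cascade,
# each photon detected with 90% probability -> 99% total efficiency.
eff = total_efficiency([(1.0, [0.9, 0.9])])
```

Because the total efficiency saturates toward 1 as photons are added to the cascade, it is also insensitive to the individual efficiency values, which is why the final uncertainty stays low despite uncertain input parameters.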
Asymptotically stable fourth-order accurate schemes for the diffusion equation on complex shapes
International Nuclear Information System (INIS)
Abarbanel, S.; Ditkowski, A.
1997-01-01
An algorithm which solves the multidimensional diffusion equation on complex shapes to fourth-order accuracy and is asymptotically stable in time is presented. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail. The ability of the paradigm to be applied to arbitrary geometric domains is an important feature of the algorithm.
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report on the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
International Nuclear Information System (INIS)
Chang, Chih-Hao; Liou, Meng-Sing
2007-01-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
Reconciling privacy and security
Lieshout, M.J. van; Friedewald, M.; Wright, D.; Gutwirth, S.
2013-01-01
This paper considers the relationship between privacy and security and, in particular, the traditional "trade-off" paradigm. The issue is this: how, in a democracy, can one reconcile the trend towards increasing security (for example, as manifested by increasing surveillance) with the fundamental
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
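The Jacobi/Gauss-Seidel contrast above can be reproduced on a toy linear system. For the small diagonally dominant matrix below (test data, not a radiative-transfer operator), Gauss-Seidel's asymptotic convergence rate is the square of Jacobi's, so it needs roughly half the iterations, mirroring the factor-2 speed-up reported for the RT problem.

```python
def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every unknown updated from the OLD iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: each update reuses values already refreshed."""
    x = x[:]
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def iterations(sweep, A, b, tol=1e-10, max_it=10000):
    """Count sweeps until successive iterates agree to within tol."""
    x = [0.0] * len(b)
    for k in range(1, max_it + 1):
        x_new = sweep(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return k
        x = x_new
    return max_it

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
```

The same in-place reuse of freshly computed values, applied to the approximate-operator iteration, is what gives the G-S-based RT method its speed and smoothing properties without extra memory.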
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.
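The spectroscopic core of such an approach can be illustrated with a Beer-Lambert least-squares unmixing toy (this is a generic illustration, not the authors' adaptive modelling scheme): measured absorbance at several wavelengths is modeled as a linear mix of known component spectra, and concentrations follow from least squares. All spectra and concentrations below are synthetic.

```python
def solve_2x2(m, v):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def unmix(spectra, absorbance):
    """Least-squares concentrations of two components from the
    normal equations (S^T S) c = S^T a."""
    gram = [[sum(si * sj for si, sj in zip(spectra[i], spectra[j]))
             for j in range(2)] for i in range(2)]
    rhs = [sum(si * a for si, a in zip(spectra[i], absorbance))
           for i in range(2)]
    return solve_2x2(gram, rhs)

glucose = [0.2, 0.5, 0.9, 0.4]   # synthetic absorptivity spectra,
water   = [1.0, 0.8, 0.3, 0.6]   # four wavelengths
# synthetic noiseless measurement: 2.0 units glucose + 0.5 units water
absorbance = [2.0 * g + 0.5 * w for g, w in zip(glucose, water)]
conc = unmix([glucose, water], absorbance)
```

Real tissue spectra involve many more absorbers, scattering, and drift, which is precisely why the paper replaces this fixed linear model with an adaptive one.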
International Nuclear Information System (INIS)
Silva, Goncalo; Talon, Laurent; Ginzburg, Irina
2017-01-01
and FEM is thoroughly evaluated in three benchmark tests, which are run across three distinct permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as a new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise from a deficient accommodation of the bulk solution by the low-accuracy boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of the grid resolution and the TRT free collision parameter on the accuracy and quality of the velocity field, spanning from the Stokes to the Darcy permeability regime. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.
Energy Technology Data Exchange (ETDEWEB)
Lefrancois, Daniel; Dreuw, Andreas, E-mail: dreuw@uni-heidelberg.de [Interdisciplinary Center for Scientific Computing, Ruprecht-Karls University, Im Neuenheimer Feld 205, 69120 Heidelberg (Germany); Rehn, Dirk R. [Departments of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping (Sweden)
2016-08-28
For the calculation of adiabatic singlet-triplet gaps (STGs) in diradicaloid systems, the spin-flip (SF) variant of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator in third-order perturbation theory (SF-ADC(3)) has been applied. By the methodology of the SF approach, the singlet and triplet states are treated on an equal footing, since they are part of the same determinant subspace. This leads to a systematically more accurate description of, e.g., diradicaloid systems than with the corresponding non-SF single-reference methods. Furthermore, using analytical excited-state gradients at the ADC(3) level, geometry optimizations of the singlet and triplet states were performed, yielding a fully consistent description of the systems and small errors in the calculated STGs, ranging between 0.6 and 2.4 kcal/mol with respect to experimental references.
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous, piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time-stepping method has recently been proposed in the design of a fourth-order time-accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current CFD research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations, due to the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity; the problem is particularly acute for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function
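The two-stage fourth-order idea can be sketched on a scalar ODE u' = f(u), where the time derivative of the flux is f_t = f'(u) f(u). The update below follows the generic two-stage L-W construction on a model equation, not the GKS flux function itself:

```python
import math

def two_stage_4th(u0, f, df, t_end, n_steps):
    """Two-stage fourth-order (Lax-Wendroff-type) time stepping for u' = f(u),
    using the time derivative of the flux, f_t = f'(u) f(u)."""
    u, dt = u0, t_end / n_steps
    for _ in range(n_steps):
        ft = df(u) * f(u)                       # flux time derivative at u^n
        u_star = u + 0.5 * dt * f(u) + 0.125 * dt**2 * ft   # half-step state
        ft_star = df(u_star) * f(u_star)        # flux time derivative at u*
        u = u + dt * f(u) + dt**2 / 6.0 * (ft + 2.0 * ft_star)
    return u

# Model problem u' = u with exact solution e^t: halving the step size should
# reduce the error by roughly 2**4 = 16, confirming fourth-order accuracy.
err10 = abs(two_stage_4th(1.0, lambda u: u, lambda u: 1.0, 1.0, 10) - math.e)
err20 = abs(two_stage_4th(1.0, lambda u: u, lambda u: 1.0, 1.0, 20) - math.e)
print(err10, err10 / err20)
```

Only two evaluations of the (expensive) flux and its time derivative are needed per step, which is the cost argument the abstract makes against a four-stage Runge-Kutta method.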
Reconciling Islam and feminism.
Hashim, I
1999-03-01
This paper objects to the popular view that Islam supports a segregated social system in which women are marginalized, and argues that certain Islamic texts are supportive of women's rights. The article proposes reconciling Islam and feminism by returning to the Qur'an. The Qur'an provides rights that address common complaints of women, such as the lack of freedom to make decisions for themselves and the inability to earn an income. One example is a verse (4:34) that is frequently interpreted as giving women complete control over their own income and property. The article also explains how Islam has been used as a method of controlling women, particularly through the practices of veiling and purdah (seclusion). It points out the need to engage with Islam from a position of knowledge, and to ensure that Muslim women have access to this knowledge. Only through such knowledge can women assert their rights and challenge patriarchal interpretations of Islam.
Directory of Open Access Journals (Sweden)
Han Zou
2016-02-01
Full Text Available The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize this information as well. Emerging mobile devices, which are equipped with various sensors, are a feasible and flexible platform for performing indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on mobile devices, based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on-board power-hungry sensors on or off smartly and automatically, optimizing their performance and reducing the power consumption of mobile devices simultaneously. Moreover, BlueDetect can realize seamless positioning and navigation services, especially in semi-outdoor environments, which cannot easily be achieved by GPS or an indoor positioning system (IPS). We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results validate the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.
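Since the abstract does not detail BlueDetect's algorithms, a toy rule-based IO classifier in its spirit might look as follows; the beacon identifiers, RSSI threshold and sensor power policy are all hypothetical:

```python
# Toy indoor-outdoor (IO) detection: iBeacons are assumed deployed inside and
# near building entrances, so hearing a deployed beacon above a noise threshold
# suggests an indoor (or semi-outdoor) context. All values are hypothetical.
RSSI_THRESHOLD = -85  # dBm; weaker readings are treated as noise

def classify_io(beacon_rssi):
    """beacon_rssi: dict mapping beacon id -> RSSI (dBm) from the last BLE scan."""
    strong = [b for b, rssi in beacon_rssi.items() if rssi >= RSSI_THRESHOLD]
    return "indoor" if strong else "outdoor"

def sensor_policy(state):
    # Power policy: switch off the power-hungry GPS indoors,
    # keep the low-power BLE scan always on.
    return {"gps": state == "outdoor", "ble": True}

print(classify_io({"entrance-1": -70, "lobby-3": -90}))
print(sensor_policy("indoor"))
```

A real system would add hysteresis and scan-rate control so the state does not flap at a doorway, which is where most of the energy savings are won or lost.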
Reconciling current approaches to blindsight
DEFF Research Database (Denmark)
Overgaard, Morten; Mogensen, Jesper
2015-01-01
After decades of research, blindsight is still a mysterious and controversial topic in consciousness research. Currently, many researchers tend to think of it as an ideal phenomenon for investigating neural correlates of consciousness, whereas others believe that blindsight is in fact a kind of degraded vision rather than "truly blind" vision. This article considers both perspectives and finds that each has difficulty accommodating all existing evidence about blindsight. In order to reconcile the perspectives, we suggest two specific criteria for a good model of blindsight, able to encompass all…
Reconcile: A Coreference Resolution Research Platform
Energy Technology Data Exchange (ETDEWEB)
Stoyanov, V; Cardie, C; Gilbert, N; Riloff, E; Buttler, D; Hysom, D
2009-10-29
Despite the availability of standard data sets and metrics, approaches to the problem of noun phrase coreference resolution are hard to compare empirically due to differing evaluation settings, stemming in part from the lack of comprehensive coreference resolution research platforms. In this tech report we present Reconcile, a coreference resolution research platform that aims to facilitate the implementation of new approaches to coreference resolution as well as the comparison of existing approaches. We discuss Reconcile's architecture and give results of running Reconcile on six data sets using four evaluation metrics, showing that Reconcile's performance is comparable to state-of-the-art systems in coreference resolution.
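One of the standard coreference metrics such a platform must implement is MUC link-based scoring. A minimal sketch, assuming the common formulation in which precision is recall with the roles of key (gold) and response (system) swapped:

```python
def muc_recall(key, response):
    """MUC recall. key and response are lists of sets of mention ids.
    Mentions missing from the response split a key chain into singletons."""
    num = den = 0
    for chain in key:
        # Parts of the key chain induced by response chains (disjoint, nonempty).
        parts = {frozenset(r & chain) for r in response if r & chain}
        covered = set().union(*parts) if parts else set()
        n_parts = len(parts) + len(chain - covered)  # uncovered -> singletons
        num += len(chain) - n_parts                  # "links" recovered
        den += len(chain) - 1                        # links in the key chain
    return num / den if den else 0.0

def muc_f1(key, response):
    r = muc_recall(key, response)
    p = muc_recall(response, key)  # precision: same computation, roles swapped
    return 2 * p * r / (p + r) if p + r else 0.0

key = [{"a", "b", "c"}, {"d", "e"}]
resp = [{"a", "b"}, {"c", "d"}, {"e"}]
print(muc_recall(key, resp), muc_f1(key, resp))
```

For this toy pair, recall is 1/3 (one of three key links recovered) and precision is 1/2, giving F1 = 0.4; singleton response chains contribute nothing, which is the well-known blind spot of MUC that motivates also reporting B³ and CEAF.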
Reconciling Work and Family Life
DEFF Research Database (Denmark)
Holt, Helle
The problems of balancing work and family life have in recent years been heavily debated in the countries of the European Union. This anthology deals with the question of how to obtain a better balance between work and family life, with a focus on the role of companies. The anthology tries to shed some light on questions such as: How can companies become more family-friendly? What are the barriers, and how can they be overcome? What is the social outcome when companies play an active role in employees' possibilities for combining family life and work life? How are the solutions to work/family imbalance related to the growing social problems associated with unemployment? The anthology is the result of a research network on "Workplace Contributions to Reconcile Work and Family Life" funded by the European Commission, DG V, and coordinated by the editors.
Accurate thermoelastic tensor and acoustic velocities of NaCl
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)
2015-12-15
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
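A common way to encode a highly accurate compression EoS is the third-order Birch-Murnaghan parametrisation. The sketch below uses illustrative, approximately NaCl-like parameter values, not the paper's fitted values:

```python
def birch_murnaghan_p(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan pressure; with K0 in GPa, P is in GPa.
    V0: zero-pressure volume, K0: bulk modulus, K0p: its pressure derivative."""
    x = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * K0 * (x**3.5 - x**2.5) * (1.0 + 0.75 * (K0p - 4.0) * (x - 1.0))

# Illustrative NaCl-like parameters (approximate literature-style values,
# not the fitted EoS of the paper).
V0, K0, K0p = 44.8, 24.0, 5.0   # cm^3/mol, GPa, dimensionless
for f in (1.00, 0.95, 0.90):
    print(f, birch_murnaghan_p(f * V0, V0, K0, K0p))
```

In a hybrid scheme of the kind described, an EoS like this, fitted to experiment, supplies the pressure-volume anchor against which the ab initio thermoelastic tensor is corrected.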
Reconciling Anti-essentialism and Quantitative Methodology
DEFF Research Database (Denmark)
Jensen, Mathias Fjællegaard
2017-01-01
Quantitative methodology has a contested role in feminist scholarship, which remains almost exclusively qualitative. Considering Irigaray's notion of mimicry, Spivak's strategic essentialism, and Butler's contingent foundations, the essentialising implications of quantitative methodology may prove… the potential to reconcile anti-essentialism and quantitative methodology, and thus to make peace in the quantitative/qualitative Paradigm Wars…
Reconciling tensor and scalar observables in G-inflation
Ramírez, Héctor; Passaglia, Samuel; Motohashi, Hayato; Hu, Wayne; Mena, Olga
2018-04-01
The simple m²φ² potential as an inflationary model is coming under increasing tension with limits on the tensor-to-scalar ratio r and measurements of the scalar spectral index n_s. Cubic Galileon interactions in the context of the Horndeski action can potentially reconcile the observables. However, we show that this cannot be achieved with only a constant Galileon mass scale, because the interactions turn off too slowly, leading also to gradient instabilities after inflation ends. Allowing for a more rapid transition can reconcile the observables but moderately breaks the slow-roll approximation, leading to a relatively large and negative running of the tilt α_s that can be of order n_s − 1. We show that the observables on CMB and large-scale structure scales can be predicted accurately using the optimized slow-roll approach instead of the traditional slow-roll expansion. Upper limits on |α_s| place a lower bound of r ≳ 0.005 and, conversely, a given r places a lower bound on |α_s|, both of which are potentially observable with next-generation CMB and large-scale structure surveys.
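The tension for the m²φ² model can be reproduced with textbook slow-roll formulas (standard results, not this paper's optimized slow-roll computation):

```python
# Standard slow-roll predictions for V = (1/2) m^2 phi^2 in reduced Planck
# units: inflation ends at phi_end^2 = 2 (epsilon = 1), and after N e-folds
# phi^2 = 4N + 2, giving n_s ~ 1 - 2/N and r ~ 8/N.
def m2phi2_observables(N):
    phi2 = 4.0 * N + 2.0
    eps = 2.0 / phi2              # first slow-roll parameter (V'/V)^2 / 2
    eta = 2.0 / phi2              # second slow-roll parameter V''/V
    n_s = 1.0 - 6.0 * eps + 2.0 * eta
    r = 16.0 * eps
    return n_s, r

n_s, r = m2phi2_observables(60.0)
print(n_s, r)
```

For N = 60 this gives n_s ≈ 0.967 and r ≈ 0.13, and it is this r, well above current upper limits, that the Galileon interactions are invoked to suppress.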
International Nuclear Information System (INIS)
Hwang, Tsang-Lin; Zijl, Peter C.M. van; Mori, Susumu
1998-01-01
Measurement of exchange rates between water and NH protons by magnetization transfer methods is often complicated by artifacts, such as intramolecular NOEs, TOCSY transfer from Cα protons coincident with the water frequency, or exchange-relayed NOEs from fast-exchanging hydroxyl or amine protons. By applying the Phase-Modulated CLEAN chemical EXchange (CLEANEX-PM) spin-locking sequence, 135°(x) 120°(−x) 110°(x) 110°(−x) 120°(x) 135°(−x), during the mixing period, these artifacts can be eliminated, revealing an unambiguous water-NH exchange spectrum. In this paper, the CLEANEX-PM mixing scheme is combined with Fast-HSQC (FHSQC) detection and used to obtain accurate chemical exchange rates from initial-slope analysis for a sample of ¹⁵N-labeled staphylococcal nuclease. The results are compared to rates obtained using Water EXchange filter (WEX) II-FHSQC and spin-echo-filtered WEX II-FHSQC measurements, and clearly identify the spurious NOE contributions in the exchange system.
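The initial-slope analysis itself is simple: at short mixing times the water-NH cross-peak buildup is linear in t_m with slope k. The sketch below uses a simplified two-site buildup model and hypothetical rate constants (not the staphylococcal nuclease data) to show the rate being recovered from a short-time fit:

```python
import numpy as np

# Simplified two-site buildup (not the full CLEANEX expression):
#   I(t_m)/I0 = k/(k+R) * (1 - exp(-(k+R) t_m)) ~ k t_m  for small t_m,
# so the exchange rate k is the initial slope. k and R are hypothetical.
k_true, R = 30.0, 10.0                               # s^-1
t = np.array([0.001, 0.002, 0.003, 0.004, 0.005])    # mixing times (s)
I = k_true / (k_true + R) * (1.0 - np.exp(-(k_true + R) * t))

# Fit I = k*t + c*t^2 through the origin; the t^2 term absorbs early curvature
# so the linear coefficient approximates the initial slope k.
A = np.column_stack([t, t * t])
(k_fit, _), *_ = np.linalg.lstsq(A, I, rcond=None)
print(k_fit)
```

Restricting the fit to short mixing times (here ≤ 5 ms for rates of tens per second) keeps the curvature correction small; longer mixing times would bias the slope downward.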
Has bioscience reconciled mind and body?
Davies, Carmel; Redmond, Catherine; O'Toole, Sinead; Coughlan, Barbara
2016-09-01
The aim of this discursive paper is to explore the question 'has biological science reconciled mind and body?'. The paper is inspired by the recognition that bioscience has a historical reputation for privileging the body over the mind. This disregard for the mind (emotions and behaviour) cast bioscience within a 'mind-body problem' paradigm and led to inherent limitations in its capacity to contribute to understanding the complex nature of health. This is a discursive paper. Literature from the history and sociology of science and from psychoneuroimmunology (1975-2015) informs the arguments in this paper. The historical and sociological literature provides the basis for a socio-cultural debate on mind-body considerations in science since the 1970s. The psychoneuroimmunology literature draws on mind-body bioscientific theory to demonstrate how science is reconciling mind and body and advancing its understanding of the interconnections between emotions, behaviour and health. Using sociological and biological evidence, this paper demonstrates how bioscience is embracing and advancing its understanding of mind-body interconnectedness. It does this by demonstrating the emotional and behavioural alterations caused by two common phenomena: prolonged chronic peripheral inflammation and prolonged psychological stress. The evidence and arguments provided have global currency and advance understanding of the inter-relationship between emotions, behaviour and health. This paper shows how bioscience has reconciled mind and body, and in doing so has advanced an understanding of science's contribution to the inter-relationship between emotions, behaviour and health. The biological evidence supporting mind-body science has relevance to clinical practice for nurses and other healthcare professions. This paper discusses how this evidence can inform and enhance clinical practice directly and through research, education and policy. © 2015 John Wiley
Reconciling atmospheric temperatures in the early Archean
DEFF Research Database (Denmark)
Pope, Emily Catherine; Bird, Dennis K.; Rosing, Minik Thorleif
rock record. The goal of this study is to compile and reconcile Archean geologic and geochemical features that are in some way controlled by surface temperature and/or atmospheric composition, so that at the very least paleoclimate models can be checked against physical limits. Data used to this end include… weathering on climate). Selective alteration of δD in Isua rocks to values of −130 to −100‰ post-dates the ca. 3.55 Ga Ameralik dikes, but may be associated with a poorly defined 2.6-2.8 Ga metamorphic event that is coincident with the amalgamation of the "Kenorland supercontinent."
Reconciling controversies about the 'global warming hiatus'.
Medhaug, Iselin; Stolpe, Martin B; Fischer, Erich M; Knutti, Reto
2017-05-03
Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the 'global warming hiatus', caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of 'hiatus' and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.
Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
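The leading-order term of such finite-size corrections for a point charge in a cubic lattice-sum box is analytic, which makes the box-size dependence easy to see. The sketch below computes only this Wigner self-interaction term; sign conventions vary in the literature, and the full numerical and analytical schemes of the paper additionally require Poisson-Boltzmann calculations that are not reproduced here:

```python
# Leading-order (point-charge) periodicity correction for the charging free
# energy of a net charge q in a cubic periodic box of edge L under lattice-sum
# electrostatics in a solvent of permittivity eps_s.
XI_EW = -2.837297                 # Wigner constant for a cubic lattice
COUL = 138.935458                 # kJ mol^-1 nm e^-2, Coulomb constant 1/(4 pi eps0)

def wigner_correction(q_e, L_nm, eps_s):
    """Self-interaction/periodicity term in kJ/mol (here taken with the sign
    convention that it is added to the raw lattice-sum charging free energy)."""
    return -XI_EW * COUL * q_e**2 / (2.0 * eps_s * L_nm)

# Box edges quoted in the abstract; eps_s ~ 78.4 for water at room temperature.
for L in (7.42, 11.02):
    print(L, wigner_correction(1.0, L, 78.4))
```

The term decays only as 1/L, so it never vanishes at affordable box sizes; and because it ignores the protein's own charge and the solvent exclusion it represents, it is far smaller than the up-to-17 kJ/mol raw effects quoted above, which is why the paper's PB-based schemes are needed.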
Reconciling Contracts and Relational Governance through Strategic Contracting
DEFF Research Database (Denmark)
Petersen, Bent; Østergaard, Kim
2018-01-01
on contract types, such as strategic versus conventional, may reconcile the enduring research controversy between the substitution and complements perspectives. Practical implications: Today, formal contracts with foreign distributors tend to resemble "prenuptial agreements". The opportunity for relational…
DEFF Research Database (Denmark)
van Leeuwen, Theo
2013-01-01
This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.
Further test of new pairing scheme used in overhaul of BCS theory
International Nuclear Information System (INIS)
Zheng, X.H.; Walmsley, D.G.
2014-01-01
Highlights: • Explanation of a new pairing scheme to overhaul BCS theory. • Prediction of superconductor properties from normal-state resistivity. • Applications to Nb, Pb, Al, Ta, Mo, Ir and W, with T_c between 9.5 and 0.012 K. • High accuracy compared with the measured energy gaps of Nb, Pb, Al and Ta. • Prediction of the energy gap for Mo, Ir and W (so far not measured). - Abstract: A new electron pairing scheme, rectifying a fundamental flaw of the BCS theory, is tested extensively. It postulates that superconductivity arises solely from residual umklapp scattering when it is not in competition for the same destination electron states with normal scattering. It reconciles a long-standing theoretical discrepancy in the strength of the electron–phonon interaction between the normal and superconductive states. The new scheme is exploited to calculate the superconductive electron–phonon spectral density, α²F(ν), entirely on the basis of normal-state electrical resistivity. This leads to first-principles superconductive properties (zero-temperature energy gap and tunnelling conductance) for seven metals, which turn out to be highly accurate when compared with known data; in other cases experimental verification is invited. The transition temperatures involved vary over almost three orders of magnitude: from 9.5 K for niobium to 0.012 K for tungsten.
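A simple baseline against which first-principles gap predictions are usually compared is the weak-coupling BCS relation Δ₀ = 1.764 k_B T_c. The sketch below applies it to some of the listed metals; the T_c values for Pb and Al are standard literature values rather than figures from the abstract, and strong-coupling metals such as Pb are known to deviate noticeably from the weak-coupling ratio:

```python
# Weak-coupling BCS estimate of the zero-temperature energy gap,
# Delta_0 = 1.764 k_B T_c (equivalently 2*Delta_0 ~ 3.53 k_B T_c).
K_B = 8.617333e-5  # Boltzmann constant, eV/K

def bcs_gap_mev(tc_kelvin):
    return 1.764 * K_B * tc_kelvin * 1e3  # gap in meV

# T_c: Nb and W as quoted in the abstract; Pb and Al from standard tables.
for metal, tc in [("Nb", 9.5), ("Pb", 7.2), ("Al", 1.18), ("W", 0.012)]:
    print(metal, round(bcs_gap_mev(tc), 4))
```

For Nb this gives about 1.44 meV, close to the measured gap, while for W the predicted gap is only a couple of microelectronvolts, illustrating why it has not yet been measured.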
J.K. Hoogland (Jiri); C.D.D. Neumann
2000-01-01
In this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work, where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing
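As context for fitting finite-difference schemes to pricing problems, a standard explicit scheme for the Black-Scholes equation on a log-price grid can be checked against the closed-form solution. This is a generic textbook scheme, not the tradables-based construction of the article:

```python
import math
import numpy as np

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes European call, used to check the FD scheme."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def fd_call(S0, K, r, sigma, T, M=400, N_t=1000):
    """Explicit finite differences on x = ln S, marching back from the payoff."""
    x = np.linspace(math.log(K) - 4.0, math.log(K) + 4.0, M + 1)
    dx, dt = x[1] - x[0], T / N_t          # stable if sigma^2 dt / dx^2 <= 1
    V = np.maximum(np.exp(x) - K, 0.0)     # terminal payoff
    a = 0.5 * sigma**2 * dt / dx**2        # diffusion coefficient
    b = (r - 0.5 * sigma**2) * dt / (2.0 * dx)  # drift coefficient
    for n in range(1, N_t + 1):
        V[1:-1] = (V[1:-1] + a * (V[2:] - 2 * V[1:-1] + V[:-2])
                   + b * (V[2:] - V[:-2]) - r * dt * V[1:-1])
        V[0] = 0.0                                            # deep out-of-the-money
        V[-1] = math.exp(x[-1]) - K * math.exp(-r * n * dt)   # deep in-the-money
    return float(np.interp(math.log(S0), x, V))

print(fd_call(100, 100, 0.05, 0.2, 1.0), bs_call(100, 100, 0.05, 0.2, 1.0))
```

The article's idea of anchoring the scheme to exact solutions goes further than this plain comparison: the exact solutions are built into the discretisation itself rather than used only as an after-the-fact benchmark.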
(Ir)reconcilable differences? The debate concerning nursing and technology.
Sandelowski, M
1997-01-01
To review and critique the debate concerning nursing and technology. Technology has been considered both at one with and at odds with nursing. Mitcham's (1994) concepts of technological optimism and romanticism. Nursing literature since 1960. Historical analysis. Technological optimists in nursing have viewed technology as an extension of, and as readily assimilable into, humanistic nursing practice, and nursing as socially advantaged by technology. Technological romantics have viewed technology as irreconcilable with nursing culture, as an expression of masculine culture, and as recirculating existing gender and social inequalities. Both optimists and romantics essentialize technology and nursing, treating the two as singular and fixed entities. The (ir)reconcilability of nursing and technology may be a function of how devices are used by people in different contexts, or of the (ir)reconcilability of views of technology in nursing.
How self-interactions can reconcile sterile neutrinos with cosmology.
Hannestad, Steen; Hansen, Rasmus Sloth; Tram, Thomas
2014-01-24
Short baseline neutrino oscillation experiments have shown hints of the existence of additional sterile neutrinos in the eV mass range. However, such neutrinos seem incompatible with cosmology because they have too large an impact on cosmic structure formation. Here we show that new interactions in the sterile neutrino sector can prevent their production in the early Universe and reconcile short baseline oscillation experiments with cosmology.
Reconciling Ethnic and National Identities in a Divided Society: The ...
African Journals Online (AJOL)
Reconciling Ethnic and National Identities in a Divided Society: The Nigerian Dilemma of Nation-State Building. Abu Bakarr Bah. Abstract. « Réconcilier les Identités Ethniques et Nationales dans une Société Divisée : Le Dilemme Nigérian de la Construction de l'État-Nation ». Summary (translated from the French): this is a theoretical analysis and ...
ENSEMBLE methods to reconcile disparate national long range dispersion forecasts
Mikkelsen, Torben; Galmarini, S.; Bianconi, R.; French, S.
2003-01-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose to reconcile among disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an a...
A Novel Iris Segmentation Scheme
Directory of Open Access Journals (Sweden)
Chen-Chung Liu
2014-01-01
Full Text Available One of the key steps in the iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows of a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate the outlier points and extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides a more effective and efficient iris segmentation than other conventional methods.
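The abstract does not include the fitting procedure itself, but the Delogne-Kåsa method it names is a standard algebraic least-squares circle fit. A minimal sketch, with synthetic data and variable names chosen for illustration rather than taken from the paper:

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (Delogne-Kasa style) least-squares circle fit.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense,
    then recovers the radius from r^2 = c + a^2 + b^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy points on a circle of radius 3 centred at (1, 2)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 1 + 3 * np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = 2 + 3 * np.sin(t) + 0.01 * rng.standard_normal(t.size)
cx, cy, r = kasa_circle_fit(x, y)
```

Because the fit reduces to one linear solve, it is much cheaper than a Hough-transform accumulator, which is presumably part of the paper's motivation for choosing it.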
Additive operator-difference schemes splitting schemes
Vabishchevich, Petr N
2013-01-01
Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
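As a sketch of the first class mentioned, splitting with respect to spatial variables, the following advances the 2-D heat equation by treating each direction in sequence within one time step (Lie splitting). Grid size, time step, and periodic boundaries are assumptions made for the demo, not taken from the book:

```python
import numpy as np

def step_1d(u, dt, dx, axis):
    """One explicit diffusion sub-step along a single spatial axis."""
    lap = (np.roll(u, 1, axis) - 2 * u + np.roll(u, -1, axis)) / dx ** 2
    return u + dt * lap

n, dx = 64, 1.0 / 64
dt = 0.2 * dx ** 2                 # stable for each 1-D sub-step
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0            # point heat source
for _ in range(100):
    u = step_1d(u, dt, dx, axis=0) # x-direction sub-step
    u = step_1d(u, dt, dx, axis=1) # y-direction sub-step
```

Each sub-step only couples points along one axis, which is what makes implicit variants of this idea (alternating direction implicit methods) reducible to cheap tridiagonal solves.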
ENSEMBLE methods to reconcile disparate national long range dispersion forecasts
DEFF Research Database (Denmark)
Mikkelsen, Torben; Galmarini, S.; Bianconi, R.
2003-01-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose to reconcile among disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making "ENSEMBLE" procedures and web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development.
Understanding the persistence of measles: reconciling theory, simulation and observation.
Keeling, Matt J; Grenfell, Bryan T
2002-01-01
Ever since the pattern of localized extinction associated with measles was discovered by Bartlett in 1957, many models have been developed in an attempt to reproduce this phenomenon. Recently, the use of constant infectious and incubation periods, rather than the more convenient exponential forms, has been presented as a simple means of obtaining realistic persistence levels. However, this result appears at odds with rigorous mathematical theory; here we reconcile these differences. Using a deterministic approach, we parameterize a variety of models to fit the observed biennial attractor, thus determining the level of seasonality by the choice of model. We can then compare fairly the persistence of the stochastic versions of these models, using the 'best-fit' parameters. Finally, we consider the differences between the observed fade-out pattern and the more theoretically appealing 'first passage time'. PMID:11886620
Reconciling controversies about the ‘global warming hiatus’
Medhaug, Iselin; Stolpe, Martin B.; Fischer, Erich M.; Knutti, Reto
2017-05-01
Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the ‘global warming hiatus’, caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of ‘hiatus’ and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.
Reconciling parenting and smoking in the context of child development.
Bottorff, Joan L; Oliffe, John L; Kelly, Mary T; Johnson, Joy L; Chan, Anna
2013-08-01
In this article we explore the micro-social context of parental tobacco use in the first years of a child's life and early childhood. We conducted individual interviews with 28 mothers and fathers during the 4 years following the birth of their child. Using grounded theory methods, we identified the predominant explanatory concept in parents' accounts as the need to reconcile being a parent and smoking. Desires to become smoke-free coexisted with five types of parent-child interactions: (a) protecting the defenseless child, (b) concealing smoking and cigarettes from the mimicking child, (c) reinforcing smoking as bad with the communicative child, (d) making guilt-driven promises to the fearful child, and (e) relinquishing personal responsibility to the autonomous child. We examine the agency of the child in influencing parents' smoking practices, the importance of children's observational learning in the early years, and the reciprocal nature of parent-child interactions related to parents' smoking behavior.
Accurate and Simple Calibration of DLP Projector Systems
DEFF Research Database (Denmark)
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...
Spectrally accurate contour dynamics
International Nuclear Information System (INIS)
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
Watt, Melissa H; Eaton, Lisa A; Dennis, Alexis C; Choi, Karmel W; Kalichman, Seth C; Skinner, Donald; Sikkema, Kathleen J
2016-01-01
Due to high rates of fetal alcohol spectrum disorder (FASD) in South Africa, reducing alcohol use during pregnancy is a pressing public health priority. The aim of this study was to qualitatively explore knowledge and attitudes about maternal alcohol consumption among women who reported alcohol use during pregnancy. The study was conducted in Cape Town, South Africa. Participants were pregnant or within 1 year postpartum and self-reported alcohol use during pregnancy. In-depth interviews explored personal experiences with drinking during pregnancy, community norms and attitudes towards maternal drinking, and knowledge about FASD. Transcripts were analyzed using a content analytic approach, including narrative memos and data display matrices. Interviews revealed competing attitudes. Women received anti-drinking messages from several sources, but these sources were not highly valued and the messages often contradicted social norms. Women were largely unfamiliar with FASD, and their knowledge of impacts of fetal alcohol exposure was often inaccurate. Participants' personal experiences influenced their attitudes about the effects of alcohol during pregnancy, which led to internalization of misinformation. The data revealed a moral conflict that confronted women in this setting, leaving women feeling judged, ambivalent, or defensive about their behaviors, and ultimately creating uncertainty about their alcohol use behaviors. Data revealed the need to deliver accurate information about the harms of fetal alcohol exposure through sources perceived as trusted and reliable. Individual-level interventions to help women reconcile competing attitudes and identify motivations for reducing alcohol use during pregnancy would be beneficial.
A Practical Voter-Verifiable Election Scheme.
Chaum, D; Ryan, PYA; Schneider, SA
2005-01-01
We present an election scheme designed to allow voters to verify that their vote is accurately included in the count. The scheme provides a high degree of transparency whilst ensuring the secrecy of votes. Assurance is derived from close auditing of all the steps of the vote recording and counting process with minimal dependence on the system components. Thus, assurance arises from verification of the election rather than having to place trust in the correct behaviour of components of the vot...
Duru, Kenneth; Virta, Kristoffer
2014-01-01
to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions
TVD schemes in one and two space dimensions
International Nuclear Information System (INIS)
Leveque, R.J.; Goodman, J.B. (New York Univ., NY)
1985-01-01
The recent development of schemes which are second order accurate in smooth regions has made it possible to overcome certain difficulties which used to arise in numerical computations of discontinuous solutions of conservation laws. The present investigation is concerned with scalar conservation laws, taking into account the employment of total variation diminishing (TVD) schemes. The concept of a TVD scheme was introduced by Harten et al. (1976). Harten et al. first constructed schemes which are simultaneously TVD and second order accurate on smooth solutions. In the present paper, a summary is provided of recently conducted work in this area. Attention is given to TVD schemes in two space dimensions, a second order accurate TVD scheme in one dimension, and the entropy condition and spreading of rarefaction waves. 19 references
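A minimal sketch of the kind of method the abstract describes: a second-order MUSCL-type TVD scheme with a minmod limiter for linear advection. This particular construction is illustrative, not the specific scheme of the paper:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, c):
    """One step of a MUSCL-type TVD scheme for u_t + u_x = 0, CFL number c.

    Periodic boundaries; second-order accurate in smooth regions,
    total variation diminishing at discontinuities (for 0 < c <= 1).
    """
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
    ul = u + 0.5 * (1 - c) * s                         # interface states
    flux = c * ul                                      # upwind flux (speed a = 1)
    return u - (flux - np.roll(flux, 1))

n = 200
u = np.where((np.arange(n) > 50) & (np.arange(n) < 100), 1.0, 0.0)
tv0 = np.abs(np.diff(u)).sum()     # total variation of the square wave
for _ in range(100):
    u = tvd_step(u, 0.5)
tv1 = np.abs(np.diff(u)).sum()     # does not exceed tv0
```

The limiter is what reconciles the two goals named in the abstract: it keeps the scheme second-order where the solution is smooth, while reverting toward first-order upwinding near jumps so that no new oscillations (new total variation) are created.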
Reconciling projections of the Antarctic contribution to sea level rise
Edwards, Tamsin; Holden, Philip; Edwards, Neil; Wernecke, Andreas
2017-04-01
Two recent studies of the Antarctic contribution to sea level rise this century had best estimates that differed by an order of magnitude (around 10 cm and 1 m by 2100). The first, Ritz et al. (2015), used a model calibrated with satellite data, giving a 5% probability of exceeding 30cm by 2100 for sea level rise due to Antarctic instability. The second, DeConto and Pollard (2016), used a model evaluated with reconstructions of palaeo-sea level. They did not estimate probabilities, but using a simple assumption here about the distribution shape gives up to a 5% chance of Antarctic contribution exceeding 2.3 m this century with total sea level rise approaching 3 m. If robust, this would have very substantial implications for global adaptation to climate change. How are we to make sense of this apparent inconsistency? How much is down to the data - does the past tell us we will face widespread and rapid Antarctic ice losses in the future? How much is due to the mechanism of rapid ice loss ('cliff failure') proposed in the latter paper, or other parameterisation choices in these low resolution models (GRISLI and PISM, respectively)? How much is due to choices made in the ensemble design and calibration? How do these projections compare with high resolution, grounding line resolving models such as BISICLES? Could we reduce the huge uncertainties in the palaeo-study? Emulation provides a powerful tool for understanding these questions and reconciling the projections. By describing the three numerical ice sheet models with statistical models, we can re-analyse the ensembles and re-do the calibrations under a common statistical framework. This reduces uncertainty in the PISM study because it allows massive sampling of the parameter space, which reduces the sensitivity to reconstructed palaeo-sea level values and also narrows the probability intervals because the simple assumption about distribution shape above is no longer needed. We present reconciled probabilistic
Sman, van der R.G.M.
2006-01-01
In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
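The claimed equivalence can be checked on the simplest case. Assuming a D1Q2 lattice for pure diffusion (a choice made here for illustration; the paper treats more general lattices and flows), collision with relaxation parameter equal to 1 relaxes the populations fully to equilibrium, and one lattice Boltzmann step collapses to a finite-difference update:

```python
import numpy as np

# D1Q2 lattice Boltzmann for pure diffusion, BGK relaxation parameter = 1.
# Collision then replaces both populations with the equilibrium rho/2, so
# one LB step reduces to rho(x, t+1) = (rho(x-1) + rho(x+1)) / 2,
# i.e. the FTCS finite-difference scheme with D*dt/dx^2 = 1/2.
n = 64
rho0 = np.zeros(n)
rho0[n // 2] = 1.0

f_plus, f_minus = rho0 / 2, rho0 / 2   # start at equilibrium
rho_fd = rho0.copy()
for _ in range(20):
    rho = f_plus + f_minus             # zeroth moment: density
    f_plus = np.roll(rho / 2, 1)       # collide (relax fully) + stream right
    f_minus = np.roll(rho / 2, -1)     # collide (relax fully) + stream left
    rho_fd = 0.5 * (np.roll(rho_fd, 1) + np.roll(rho_fd, -1))
rho_lb = f_plus + f_minus              # agrees with rho_fd to machine precision
```

Away from relaxation parameter 1 the two schemes differ, which is why the abstract singles out this special case.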
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)
2003-11-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose to reconcile among disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)
Reconciling societal and scientific definitions for the monsoon
Reeve, Mathew; Stephenson, David
2014-05-01
Science defines the monsoon in numerous ways. We can apply these definitions to forecast data, reanalysis data, observations, GCMs and more. In a basic research setting, we hope that this work will advance science and our understanding of the monsoon system. In an applied research setting, we often hope that this work will benefit a specific stakeholder or community. We may want to inform a stakeholder when the monsoon starts, now and in the future. However, what happens if the stakeholders cannot relate to the information because their perceptions do not align with the monsoon definition we use in our analysis? We can resolve this either by teaching the stakeholders or learning from them about how they define the monsoon and when they perceive it to begin. In this work we reconcile different scientific monsoon definitions with the perceptions of agricultural communities in Bangladesh. We have developed a statistical technique that rates different scientific definitions against the people's perceptions of when the monsoon starts and ends. We construct a probability mass function (pmf) around each of the respondent's answers in a questionnaire survey. We can use this pmf to analyze the time series of monsoon onsets and withdrawals from the different scientific definitions. We can thereby quantitatively judge which definition may be most appropriate for a specific applied research setting.
Ravens reconcile after aggressive conflicts with valuable partners.
Fraser, Orlaith N; Bugnyar, Thomas
2011-03-25
Reconciliation, a post-conflict affiliative interaction between former opponents, is an important mechanism for reducing the costs of aggressive conflict in primates and some other mammals as it may repair the opponents' relationship and reduce post-conflict distress. Opponents who share a valuable relationship are expected to be more likely to reconcile as for such partners the benefits of relationship repair should outweigh the risk of renewed aggression. In birds, however, post-conflict behavior has thus far been marked by an apparent absence of reconciliation, suggested to result either from differing avian and mammalian strategies or because birds may not share valuable relationships with partners with whom they engage in aggressive conflict. Here, we demonstrate the occurrence of reconciliation in a group of captive subadult ravens (Corvus corax) and show that it is more likely to occur after conflicts between partners who share a valuable relationship. Furthermore, former opponents were less likely to engage in renewed aggression following reconciliation, suggesting that reconciliation repairs damage caused to their relationship by the preceding conflict. Our findings suggest not only that primate-like valuable relationships exist outside the pair bond in birds, but that such partners may employ the same mechanisms in birds as in primates to ensure that the benefits afforded by their relationships are maintained even when conflicts of interest escalate into aggression. These results provide further support for a convergent evolution of social strategies in avian and mammalian species.
Reconciling change blindness with long-term memory for objects.
Wood, Katherine; Simons, Daniel J
2017-02-01
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
Thermally-Driven Mantle Plumes Reconcile Hot-spot Observations
Davies, D.; Davies, J.
2008-12-01
Hot-spots are anomalous regions of magmatism that cannot be directly associated with plate tectonic processes (e.g. Morgan, 1972). They are widely regarded as the surface expression of upwelling mantle plumes. Hot-spots exhibit variable life-spans, magmatic productivity and fixity (e.g. Ito and van Keken, 2007). This suggests that a wide range of upwelling structures coexist within Earth's mantle, a view supported by geochemical and seismic evidence, but, thus far, not reproduced by numerical models. Here, results from a new, global, 3-D spherical, mantle convection model are presented, which better reconcile hot-spot observations, the key modification from previous models being increased convective vigor. Model upwellings show broad-ranging dynamics; some drift slowly, while others are more mobile, displaying variable life-spans, intensities and migration velocities. Such behavior is consistent with hot-spot observations, indicating that the mantle must be simulated at the correct vigor and in the appropriate geometry to reproduce Earth-like dynamics. Thermally-driven mantle plumes can explain the principal features of hot-spot volcanism on Earth.
Reconciling medical expenditure estimates from the MEPS and NHEA, 2007.
Bernard, Didem; Cowan, Cathy; Selden, Thomas; Cai, Liming; Catlin, Aaron; Heffler, Stephen
2012-01-01
Provide a comparison of health care expenditure estimates for 2007 from the Medical Expenditure Panel Survey (MEPS) and the National Health Expenditure Accounts (NHEA). Reconciling these estimates serves two important purposes. First, it is an important quality assurance exercise for improving and ensuring the integrity of each source's estimates. Second, the reconciliation provides a consistent baseline of health expenditure data for policy simulations. Our results assist researchers to adjust MEPS to be consistent with the NHEA so that the projected costs as well as budgetary and tax implications of any policy change are consistent with national health spending estimates. The data sources are the Medical Expenditure Panel Survey, produced by the Agency for Healthcare Research and Quality and the National Center for Health Statistics, and the National Health Expenditure Accounts, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. In this study, we focus on the personal health care (PHC) sector, which includes the goods and services rendered to treat or prevent a specific disease or condition in an individual. The official 2007 NHEA estimate for PHC spending is $1,915 billion and the MEPS estimate is $1,126 billion. Adjusting the NHEA estimates for differences in underlying populations, covered services, and other measurement concepts reduces the NHEA estimate for 2007 to $1,366 billion. As a result, MEPS is $240 billion, or 17.6 percent, less than the adjusted NHEA total.
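The closing figures can be checked directly; a trivial arithmetic sketch of the reported reconciliation (amounts in billions of 2007 US dollars):

```python
# Figures as reported in the abstract.
nhea_official = 1915   # NHEA personal health care estimate
nhea_adjusted = 1366   # after aligning populations, covered services, concepts
meps = 1126            # MEPS estimate

gap = nhea_adjusted - meps                  # 240
pct = round(100 * gap / nhea_adjusted, 1)   # 17.6
print(gap, pct)
```

Note that the gap is quoted against the adjusted NHEA total, not the official $1,915 billion figure; against the unadjusted total the shortfall would appear far larger.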
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
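The stability contrast the abstract describes can be seen on a scalar model problem, a deliberately simplified stand-in for the shock-boundary layer flow (the decay rate and step size below are illustrative):

```python
import numpy as np

lam = 1000.0          # stiff decay rate in u' = -lam * u
dt = 0.02             # lam * dt = 20: far outside explicit RK2 stability
u_cn, u_rk2 = 1.0, 1.0
for _ in range(50):
    # Crank-Nicolson (trapezoidal) step: A-stable, bounded for any dt > 0
    u_cn = u_cn * (1 - 0.5 * lam * dt) / (1 + 0.5 * lam * dt)
    # Explicit RK2 amplification factor 1 - z + z^2/2 with z = lam*dt = 20
    z = lam * dt
    u_rk2 *= 1 - z + z ** 2 / 2    # grows by a factor 181 per step
# u_cn stays bounded (though weakly damped and sign-alternating at
# large lam*dt), while u_rk2 diverges explosively.
```

This is the trade-off in the paper: the implicit scheme buys time steps tens of times larger than the explicit stability limit, at the cost of solving a nonlinear system each step, and accuracy (not stability) then sets the usable step size.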
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Scheme Program Documentation Tools
DEFF Research Database (Denmark)
Nørmark, Kurt
2004-01-01
are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which, in a systematic way, makes an XML language available as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource.
Kalnisky, Esther; Baratz, Lea
2018-01-01
This study investigates the manner in which new and veteran Ethiopian immigrant students in Israel perceive their identity by investigating their attitudes towards children's books written in both Hebrew and Amharic. Two major types of identity were revealed: (1) a non-reconciled identity that seeks to minimise the visibility of one's ethnic…
International Nuclear Information System (INIS)
Humphries, C.E.L.; Humphries, R.N.; Wesemann, H.
1999-01-01
Since the 1940s the restoration of opencast coal sites in the UK has been predominantly to productive agriculture and forestry. With new UK government policies on sustainability and biodiversity such land uses may no longer be acceptable or appropriate in the upland areas of South Wales. A scheme was prepared for the upland Nant Helen site with the objective of restoring the landscape ecology of the site; it included acid grassland to provide the landscape setting and for grazing. The scheme met with the approval of the planning authority. An initial forty hectares (about 13% of the site) was restored between 1993 and 1996. While the approved low intensity grazing and low fertilizer regime met the requirements of the planning authority and the statutory agencies, it was not meeting the expectations of the grazers who had grazing rights to the land. To help reconcile the apparent conflict a fertilizer trial was set up. The trial demonstrated that additional fertilizer and intensive grazing were required to meet the nutritional needs of sheep. It also showed typical upland stocking densities of sheep could be achieved with the acid grassland without the need for reseeding with lowland types. However, this was not acceptable to the authority and agencies, as such fertilizer and grazing regimes would be detrimental to the landscape and ecological objectives of the restoration scheme. A compromise was agreed whereby grazing intensity and additional fertilizer have been zoned. This has been implemented and is working to the satisfaction of all parties. Without the fertilizer trial it is unlikely that the different interests could have been reconciled.
Kravitz, Richard L; Bell, Robert A
2013-01-01
Over the past 30 years, patients' options for accessing information about prescription drugs have expanded dramatically. In this narrative review, we address four questions: (1) What information sources are patients exposed to, and are they paying attention? (2) Is the information they hear credible and accurate? (3) When patients ask for a prescription, what do they really want and need? Finally, (4) How can physicians reconcile what patients hear, want, and need? A critical synthesis of the literature is reported. Observations indicate that the public is generally aware of and attends to a growing body of health information resources, including traditional news media, advertising, and social networking. However, lay audiences often have no reliable way to assess the accuracy of health information found in the media, on the Internet, or in direct-to-consumer advertising. This inability to assess the information can lead to decision paralysis, with patients questioning what is known, what is knowable, and what their physicians know. Many patients have specific expectations for the care they wish to receive and have little difficulty making those expectations known. However, there are hazards in assuming that patients' expressed desires are direct reflections of their underlying wants or needs. In trying to reconcile patients' wants and needs for information about prescription medicines, a combination of policy and clinical initiatives may offer greater promise than either approach alone. Patients are bombarded by information about medicines. The problem is not a lack of information; rather, it is knowing what information to trust. Making sure patients get the medications they need and are prepared to take them safely requires a combination of policy and clinical interventions.
Reconciling divergent trends and millennial variations in Holocene temperatures
Marsicek, Jeremiah; Shuman, Bryan N.; Bartlein, Patrick J.; Shafer, Sarah L.; Brewer, Simon
2018-02-01
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in ‘growing degree days’—a measure of the accumulated warmth above five degrees Celsius per year—of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis
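Pyramid decompositions of this kind can be illustrated in a few lines of code. The sketch below is a generic two-operator pyramid (pairwise averaging as the analysis operator, nearest-neighbour expansion as synthesis) with perfect reconstruction; it illustrates the general idea only, not the axiomatic scheme of the report, and all names are ours.

```python
# Minimal 1-D pyramid decomposition sketch (illustrative only).
# analyse() splits a signal into a coarse approximation plus one detail
# residual per level; synthesise() inverts it exactly.

def downsample(x):
    # Pairwise averaging: analysis operator (length assumed even).
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(c):
    # Nearest-neighbour expansion: synthesis operator.
    out = []
    for v in c:
        out.extend([v, v])
    return out

def analyse(x, levels):
    details = []
    for _ in range(levels):
        c = downsample(x)
        # Detail = what the coarse level cannot represent.
        details.append([a - b for a, b in zip(x, upsample(c))])
        x = c
    return x, details

def synthesise(coarse, details):
    x = coarse
    for d in reversed(details):
        x = [a + b for a, b in zip(upsample(x), d)]
    return x

signal = [4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0]
coarse, details = analyse(signal, 2)
assert synthesise(coarse, details) == signal  # perfect reconstruction
```

The key property any pyramid scheme must satisfy is exactly the final assertion: coarse approximation plus stored residuals reproduce the input.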
Directory of Open Access Journals (Sweden)
R. Sitharthan
2016-09-01
Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides a suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid: namely, grid connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosures, through which the proposed adaptive protection scheme recovers faster from the fault and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through the time domain simulations carried out in the PSCAD/EMTDC software environment.
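The core adaptive idea, re-deriving the relay's fault-current setting as the operating point changes, can be sketched abstractly. The class and margin below are hypothetical illustrations of that principle, not the paper's relay model or its PSCAD implementation.

```python
# Hypothetical sketch of an adaptive overcurrent relay: the trip
# threshold tracks the measured pre-fault current, so one relay can
# protect both grid-connected operation (large fault currents) and
# islanded operation (inverter-limited fault currents).
class AdaptiveRelay:
    def __init__(self, margin=1.5):
        self.margin = margin      # illustrative safety factor
        self.pickup = None        # trip threshold, amperes

    def update(self, measured_load_current):
        # Re-derive the trip threshold from the current operating point.
        self.pickup = self.margin * measured_load_current

    def trips(self, fault_current):
        return self.pickup is not None and fault_current > self.pickup

relay = AdaptiveRelay()
relay.update(100.0)          # grid-connected: 100 A load -> 150 A pickup
assert relay.trips(400.0)    # large grid-fed fault current trips
relay.update(40.0)           # islanded: lower load -> 60 A pickup
assert relay.trips(90.0)     # smaller fault still exceeds adapted pickup
assert not relay.trips(50.0)
```

A fixed-threshold relay set for the grid-connected case (150 A here) would miss the 90 A islanded fault; updating the pickup is what the abstract's "instantly updates relay fault current" refers to.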
Readiness to reconcile and post-traumatic distress in German survivors of wartime rapes in 1945.
Eichhorn, S; Stammel, N; Glaesmer, H; Klauer, T; Freyberger, H J; Knaevelsrud, C; Kuwert, P
2015-05-01
Sexual violence and wartime rapes are prevalent crimes in violent conflicts all over the world. Processes of reconciliation are growing challenges in post-conflict settings. Despite this, so far few studies have examined the psychological consequences and their mediating factors. Our study aimed at investigating the degree of longtime readiness to reconcile and its associations with post-traumatic distress within a sample of German women who experienced wartime rapes in 1945. A total of 23 wartime rape survivors were compared to age- and gender-matched controls with WWII-related non-sexual traumatic experiences. Readiness to reconcile was assessed with the Readiness to Reconcile Inventory (RRI-13). The German version of the Post-traumatic Diagnostic Scale (PDS) was used to assess post-traumatic stress disorder (PTSD) symptomatology. Readiness to reconcile in wartime rape survivors was higher in those women who reported less post-traumatic distress, whereas the subscale "openness to interaction" showed the strongest association with post-traumatic symptomatology. Moreover, wartime rape survivors reported fewer feelings of revenge than women who experienced other traumatization in WWII. Our results are in line with previous research, indicating that readiness to reconcile impacts healing processes in the context of conflict-related traumatic experiences. Given the long-lasting post-traumatic symptomatology we observed, our findings highlight the need for psychological treatment of wartime rape survivors worldwide; future research should continue to focus on reconciliation within the therapeutic process.
An efficient numerical scheme for the simulation of parallel-plate active magnetic regenerators
DEFF Research Database (Denmark)
Torregrosa-Jaime, Bárbara; Corberán, José M.; Payá, Jorge
2015-01-01
A one-dimensional model of a parallel-plate active magnetic regenerator (AMR) is presented in this work. The model is based on an efficient numerical scheme which has been developed after analysing the heat transfer mechanisms in the regenerator bed. The new finite difference scheme optimally com...... to the fully implicit scheme, the proposed scheme achieves more accurate results, prevents numerical errors and requires less computational effort. In AMR simulations the new scheme can reduce the computational time by 88%....
Threshold Signature Schemes Application
Directory of Open Access Journals (Sweden)
Anastasiya Victorovna Beresneva
2015-10-01
Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical usability of threshold schemes in mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given, which could reduce the level of counterfeit electronic documents signed by a group of users.
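The Lagrange-interpolation construction the abstract mentions is the polynomial machinery underlying many threshold schemes. As a hedged illustration (a plain (t, n) Shamir secret sharing over a prime field, not a full threshold signature), it might look like:

```python
# (t, n) Shamir sharing: any t of n shares recover the secret via
# Lagrange interpolation at x = 0. Illustrative prime and names.
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is mod P

def share(secret, t, n):
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Real threshold *signature* schemes layer this sharing under a signature algorithm so that t parties can jointly sign without ever reassembling the key; only the polynomial building block is shown here. (`pow(den, -1, P)` needs Python 3.8+.)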
Accurate method of the magnetic field measurement of quadrupole magnets
International Nuclear Information System (INIS)
Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.
1983-01-01
We present an accurate method for the magnetic field measurement of quadrupole magnets. The method of obtaining the information of the field gradient and the effective focussing length is given. A new scheme to obtain the information of the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)
DEFF Research Database (Denmark)
Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela
2013-01-01
Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...
Energy Technology Data Exchange (ETDEWEB)
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs, and making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
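Although the paper's implementation is in Scheme using continuations, the two ingredients the abstract names, a result table plus a set of currently active calls, can be sketched in Python. The decorator below is our simplified stand-in: instead of the fixpoint iteration a real tabling engine performs, a re-entrant call with the same arguments simply receives a caller-supplied bottom value.

```python
# Illustrative Python analogue of tabled execution (not the paper's
# Scheme implementation): a result table plus an active-call set that
# cuts same-argument recursion cycles.

def tabled(bottom):
    def decorate(f):
        table, active = {}, set()
        def wrapper(*args):
            if args in table:                 # memoized result
                return table[args]
            if args in active:                # re-entrant call: cut cycle
                return bottom
            active.add(args)
            try:
                table[args] = f(*args)
            finally:
                active.remove(args)
            return table[args]
        return wrapper
    return decorate

# Reachability in a cyclic graph: naive recursion would loop forever on
# the a -> b -> a cycle; the active-call set makes the query well-defined.
edges = {"a": ["b"], "b": ["c", "a"], "c": []}

@tabled(bottom=False)
def reaches(src, dst):
    return src == dst or any(reaches(n, dst) for n in edges[src])

print(reaches("a", "c"))   # True, despite the cycle
```

This sketch is sound for this query but, unlike a real tabling engine, it caches results computed against the bottom value instead of iterating them to a fixpoint.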
Evaluating statistical cloud schemes
Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix
2015-01-01
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...
Gamma spectrometry; level schemes
International Nuclear Information System (INIS)
Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.
1977-01-01
The research presented dealt with: a new beta emitter, an isomer of ¹³¹Sn; the ¹³⁶I levels fed through the radioactive decay of ¹³⁶Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for ¹⁴⁵La, ¹⁴⁵Ce, ¹⁴⁵Pr); the A=147 chain (La and Ce β decay, and the level schemes of ¹⁴⁷Ce and ¹⁴⁷Pr) [fr]
International Nuclear Information System (INIS)
2002-04-01
This scheme defines the objectives for renewable energies and the rational use of energy in the framework of the national energy policy. It evaluates the needs and potential of the regions and recommends joint actions between the government and the territorial organizations. The document is presented in four parts: the situation, stakes and forecasts; possible actions for new measures; scheme management; and an analysis of the regional contributions. (A.L.B.)
Asynchronous discrete event schemes for PDEs
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection-diffusion equation and advection-diffusion-reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
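The event-driven principle can be sketched for 1-D diffusion: each face of the grid schedules a one-quantum mass transfer whose waiting time is inversely proportional to the face flux. This toy version is our own simplification, not the authors' scheme; it keeps events in a priority queue and invalidates stale ones with per-face version counters.

```python
# Toy asynchronous event-driven 1-D diffusion: one event moves one
# quantum of mass through one face, down the local gradient.
import heapq

def simulate(mass, D, dx, quantum, t_end):
    nfaces = len(mass) - 1
    version = [0] * nfaces          # cheap invalidation of stale events
    heap = []                       # (event_time, version_at_schedule, face)

    def flux(i):
        # magnitude of the discrete diffusive flux across face i
        return D * abs(mass[i] - mass[i + 1]) / dx

    def schedule(i, now):
        f = flux(i)
        if f > 0:                   # zero flux: no event for this face
            heapq.heappush(heap, (now + quantum / f, version[i], i))

    for i in range(nfaces):
        schedule(i, 0.0)
    while heap:
        t, v, i = heapq.heappop(heap)
        if t > t_end:
            break
        if v != version[i]:         # state changed since scheduling: skip
            continue
        # move one quantum through face i, down the gradient
        if mass[i] > mass[i + 1]:
            mass[i] -= quantum; mass[i + 1] += quantum
        else:
            mass[i + 1] -= quantum; mass[i] += quantum
        for j in (i - 1, i, i + 1): # only neighbouring fluxes changed
            if 0 <= j < nfaces:
                version[j] += 1
                schedule(j, t)
    return mass

state = simulate([8.0, 0.0, 0.0, 0.0], D=1.0, dx=1.0, quantum=1.0, t_end=5.0)
assert abs(sum(state) - 8.0) < 1e-9   # mass is conserved exactly
```

Note the two properties the abstract emphasizes: the scheme is local (an event touches only one face and reschedules only its neighbours) and self-adaptive (steep gradients generate frequent events, flat regions generate none).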
International Nuclear Information System (INIS)
Berman, Jules
2005-01-01
For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: 1) the morphologic separation between epithelial and non-epithelial tumors; 2) the grouping of tumors based on shared cellular functions; 3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; 4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; 5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage, and 6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical value of a robust tumor classification. A
Reconciling Gases With Glasses: Magma Degassing, Overturn and Mixing at Kilauea Volcano, Hawai`i
Edmonds, M.; Gerlach, T. M.
2006-12-01
well as between them; this has important implications for volcano monitoring. Application of this new, remote and accurate technique to measure volcanic gases allows data concerning the volatile budget, both from glasses and from gases, to be reconciled and used in tandem to provide more detailed and complete models for magma migration, storage and transport at Kilauea Volcano.
Reconciling threshold and subthreshold expansions for pion-nucleon scattering
Siemens, D.; Ruiz de Elvira, J.; Epelbaum, E.; Hoferichter, M.; Krebs, H.; Kubis, B.; Meißner, U.-G.
2017-07-01
Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion-nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of 1/mN corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.
Law, James; Huby, Guro; Irving, Anne-Marie; Pringle, Ann-Marie; Conochie, Douglas; Haworth, Catherine; Burston, Amanda
2010-01-01
Background: It is widely accepted that service users should be actively involved in new service developments, but there remain issues about how best to consult with them and how to reconcile their views with those of service providers. Aims: This paper uses data from The Aphasia in Scotland study, set up by NHS Quality Improvement Scotland to…
England, Richard
2009-01-01
Since before the time of writers such as Plato in his "Republic" and "Timaeus"; Martianus Capella in "The Marriage of Mercury and Philology"; Boethius in "De institutione musica"; Kepler in "The Harmony of the Universe"; and many others, there have been attempts to reconcile the various disciplines in the sciences, arts, humanities, and religion…
Allen, Michele L; Garcia-Huidobro, Diego; Bastian, Tiana; Hurtado, G Ali; Linares, Roxana; Svetaz, María Veronica
2017-06-01
Participatory research (PR) trials aim to achieve the dual, and at times competing, demands of producing an intervention and research process that address community perspectives and priorities, while establishing intervention effectiveness. To identify research and community priorities that must be reconciled in the areas of collaborative processes, study design and aim and study implementation quality in order to successfully conduct a participatory trial. We describe how this reconciliation was approached in the smoking prevention participatory trial Padres Informados/Jovenes Preparados (Informed Parents/Prepared Youth) and evaluate the success of our reconciled priorities. Data sources to evaluate success of the reconciliations included a survey of all partners regarding collaborative group processes, intervention participant recruitment and attendance and surveys of enrolled study participants assessing intervention outcomes. While we successfully achieved our reconciled collaborative processes and implementation quality goals, we did not achieve our reconciled goals in study aim and design. Due in part to the randomized wait-list control group design chosen in the reconciliation process, we were not able to demonstrate overall efficacy of the intervention or offer timely services to families in need of support. Achieving the goals of participatory trials is challenging but may yield community and research benefits. Innovative research designs are needed to better support the complex goals of participatory trials.
Reconciling Ourselves to Reality: Arendt, Education and the Challenge of Being at Home in the World
Biesta, Gert
2016-01-01
In this paper, I explore the educational significance of the work of Hannah Arendt through reflections on four papers that constitute this special issue. I focus on the challenge of reconciling ourselves to reality, that is, of being at home in the world. Although Arendt's idea of being at home in the world is connected to her explorations of…
Towards Symbolic Encryption Schemes
DEFF Research Database (Denmark)
Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik
2012-01-01
Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have natural correspondence to standard cryptographic encryption schemes.
Energy Technology Data Exchange (ETDEWEB)
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility than FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
A new numerical scheme for the simulation of active magnetic regenerators
DEFF Research Database (Denmark)
Torregrosa-Jaime, B.; Engelbrecht, Kurt; Payá, J.
2014-01-01
A 1D model of a parallel-plate active magnetic regenerator (AMR) has been developed based on a new numerical scheme. With respect to the implicit scheme, the new scheme achieves accurate results, minimizes computational time and prevents numerical errors. The model has been used to check the boun...
New analytic unitarization schemes
International Nuclear Information System (INIS)
Cudell, J.-R.; Predazzi, E.; Selyugin, O. V.
2009-01-01
We consider two well-known classes of unitarization of Born amplitudes of hadron elastic scattering. The standard class, which saturates at the black-disk limit, includes the standard eikonal representation, while the other class, which goes beyond the black-disk limit to reach the full unitarity circle, includes the U matrix. It is shown that the basic properties of these schemes are independent of the functional form used for the unitarization, and that U-matrix and eikonal schemes can be extended to have similar properties. A common form of unitarization is proposed, interpolating between both classes. The correspondence with different nonlinear equations is also briefly examined.
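In one common convention (shown only as an illustration; the paper's own parameterizations may differ), the two classes act on an impact-parameter Born input as:

```latex
% Illustrative forms only; conventions differ between papers.
% Eikonal unitarization of a Born input \chi(s,b):
h_{\mathrm{eik}}(s,b) \;=\; \frac{e^{2i\chi(s,b)} - 1}{2i},
\qquad \text{which saturates at the black-disk limit,}
% versus a U-matrix form that can fill the full unitarity circle:
h_{U}(s,b) \;=\; \frac{U(s,b)}{1 - i\,U(s,b)}.
```

The qualitative distinction the abstract draws is visible directly: the first form is bounded by the black-disk value however large the input grows, while the second approaches the full unitarity bound as the input increases.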
Indian Academy of Sciences (India)
Electronic Commerce – Payment Schemes. V Rajaraman. Resonance – Journal of Science Education, Volume 6, Issue 2, February 2001, pp 6-13. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/02/0006-0013
Ronald, R.; Smith, S.J.; Elsinga, M.; Eng, O.S.; Fox O'Mahony, L.; Wachter, S.
2012-01-01
Contractual saving schemes for housing are institutionalised savings programmes normally linked to rights to loans for home purchase. They are diverse types as they have been developed differently in each national context, but normally fall into categories of open, closed, compulsory, and ‘free
Alternative reprocessing schemes evaluation
International Nuclear Information System (INIS)
1979-02-01
This paper reviews the parameters which determine the inaccessibility of the plutonium in reprocessing plants. Among the various parameters, the physical and chemical characteristics of the materials, the various processing schemes and the confinement are considered. The emphasis is placed on the latter parameter, and the advantages of an increased confinement in the so-called PIPEX reprocessing plant type are presented.
Introduction to association schemes
Seidel, J.J.
1991-01-01
The present paper gives an introduction to the theory of association schemes, following Bose-Mesner (1959), Biggs (1974), Delsarte (1973), Bannai-Ito (1984) and Brouwer-Cohen-Neumaier (1989). Apart from definitions and many examples, also several proofs and some problems are included. The paragraphs
Reaction schemes of immunoanalysis
International Nuclear Information System (INIS)
Delaage, M.; Barbet, J.
1991-01-01
The authors apply a general theory for multiple equilibria to the reaction schemes of immunoanalysis, competition and sandwich. This approach allows the manufacturer to optimize the system and provide the user with interpolation functions for the standard curve and its first derivative as well, thus giving access to variance [fr
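A typical standard-curve interpolation function of the kind the abstract refers to is the four-parameter logistic widely used in immunoassay; the sketch below (our illustration, not the authors' specific model) returns the curve and its first derivative.

```python
# Four-parameter logistic (4PL) standard curve and its derivative.
# a: response at zero dose, d: response at infinite dose,
# c: inflection point (EC50), b: slope factor. Illustrative only.

def four_pl(x, a, b, c, d):
    return d + (a - d) / (1.0 + (x / c) ** b)

def four_pl_deriv(x, a, b, c, d):
    # d/dx of the expression above, with u = (x/c)**b and du/dx = b*u/x.
    u = (x / c) ** b
    return -(a - d) * b * u / (x * (1.0 + u) ** 2)

# With a=0, b=1, c=1, d=2 the curve is 2 - 2/(1+x): value 1 and
# slope 1/2 at x = 1.
assert abs(four_pl(1.0, 0.0, 1.0, 1.0, 2.0) - 1.0) < 1e-12
assert abs(four_pl_deriv(1.0, 0.0, 1.0, 1.0, 2.0) - 0.5) < 1e-12

# Cross-check the derivative numerically at another point and slope.
x0, h = 0.7, 1e-6
num = (four_pl(x0 + h, 0.0, 1.5, 1.0, 2.0)
       - four_pl(x0 - h, 0.0, 1.5, 1.0, 2.0)) / (2 * h)
assert abs(num - four_pl_deriv(x0, 0.0, 1.5, 1.0, 2.0)) < 1e-6
```

Supplying the analytic derivative alongside the curve is what gives the assay user access to variance propagation along the standard curve.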
Alternative health insurance schemes
DEFF Research Database (Denmark)
Keiding, Hans; Hansen, Bodil O.
2002-01-01
In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market. Either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where however, the level...... competitive insurance; this situation turns out to be at least as good as either of the alternatives...
Accurate Evaluation of Quantum Integrals
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
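The extrapolation step itself is easy to demonstrate. The sketch below applies one level of Richardson's extrapolation to a central-difference derivative (a stand-in for the paper's finite-difference Schrödinger solver; function and step names are ours), cancelling the leading O(h²) error term.

```python
# Richardson extrapolation sketch: combine the h and h/2 estimates.
# D(h) = f'(x) + c2*h^2 + c4*h^4 + ...  so  (4*D(h/2) - D(h)) / 3
# eliminates the h^2 term, leaving an O(h^4) estimate.
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

x, h = 1.0, 0.1
exact = math.cos(x)                          # derivative of sin
err_plain = abs(central_diff(math.sin, x, h) - exact)
err_rich = abs(richardson(math.sin, x, h) - exact)
assert err_rich < err_plain / 100            # orders of magnitude gain
```

The same weighting, applied to expectation values computed on meshes h and h/2, is what lets a crude mesh yield high-accuracy expectation values, as the abstract notes.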
Reconciling water harvesting and soil erosion control by thoughtful implementation of SWC measures
Bellin, N.; Vanacker, V.; van Wesemael, B.
2012-04-01
-agricultural catchments have been found only partially filled with sediments. Extensive reforestation programs, recovery of natural vegetation (dense matorral) and abandonment of agricultural fields in the Sierras led to a strong reduction of the sediment transport towards the river system. Although the effect of the check dams on the transport of sediment has not been important, the check dams have played a major role in flood control in the area. Our data indicate that thoughtful design of SWC schemes is necessary to reconcile water harvesting, erosion mitigation and flood control. Currently, the erosion hotspots are clearly localized in the agricultural fields, and not in the marginal lands in the Sierras. The combination of on-site and off-site SWC measures in the agricultural areas is highly efficient to reduce fluxes of sediment and surface water.
On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme
Directory of Open Access Journals (Sweden)
Wang Daoshun
2010-01-01
Full Text Available Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to a more general case of greyscale -VSS scheme.
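The XOR-based share generation the abstract describes can be sketched for one row of a binary image. This is a minimal (n, n) XOR sharing with XOR reconstruction; the paper's OR-decodable greyscale construction is more involved, so treat this only as an illustration of the Boolean machinery.

```python
# Minimal (n, n) XOR secret sharing for one row of a binary image.
# Any n-1 shares are uniformly random; all n XOR back to the secret.
import random

def make_shares(secret_bits, n):
    # n-1 independent random shares; the last is chosen so that the
    # XOR of all n shares equals the secret.
    shares = [[random.randint(0, 1) for _ in secret_bits]
              for _ in range(n - 1)]
    last = list(secret_bits)
    for s in shares:
        last = [a ^ b for a, b in zip(last, s)]
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = [a ^ b for a, b in zip(out, s)]
    return out

secret = [1, 0, 1, 1, 0, 0, 1, 0]   # one row of a binary image
shares = make_shares(secret, 3)
assert reconstruct(shares) == secret   # exact, expansion-free recovery
```

Note what the abstract contrasts: XOR reconstruction is exact and needs no pixel expansion, but requires computation; OR-decodable visual schemes trade that exactness for decoding by the human eye alone.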
Selectively strippable paint schemes
Stein, R.; Thumm, D.; Blackford, Roger W.
1993-03-01
In order to meet the requirements of more environmentally acceptable paint stripping processes, many different removal methods are under evaluation. These new processes can be divided into mechanical and chemical methods. ICI has developed a paint scheme with an intermediate coat and a fluid-resistant polyurethane topcoat which can be stripped chemically in a short period of time with methylene chloride-free and phenol-free paint strippers.
Scalable Nonlinear Compact Schemes
Energy Technology Data Exchange (ETDEWEB)
Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)
2014-04-01
In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
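For reference, the serial baseline such a parallel solver is measured against is the Thomas algorithm: O(N) forward elimination plus back substitution. A sketch (our own, with illustrative names), assuming a diagonally dominant system as in the abstract:

```python
# Thomas algorithm: serial O(N) solve of a tridiagonal system A x = d.
def thomas(a, b, c, d):
    """a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system: 4 on the diagonal, -1 off it
# (dominance is what makes the reduced iteration of the parallel
# scheme converge in few steps).
n = 5
a = [0.0] + [-1.0] * (n - 1)
b = [4.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [1.0] * n
x = thomas(a, b, c, d)
for i in range(n):                            # verify A @ x == d
    r = b[i] * x[i]
    if i > 0:
        r += a[i] * x[i - 1]
    if i < n - 1:
        r += c[i] * x[i + 1]
    assert abs(r - d[i]) < 1e-12
```

The substructuring approach in the abstract partitions such a system across processors, solves a small reduced system for the interface unknowns, and recovers this serial recurrence locally, which is why its cost per processor stays comparable to the Thomas algorithm.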
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
DEFF Research Database (Denmark)
Bruni, Roberto; Corradini, Andrea; Gadducci, Fabio
2015-01-01
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: A system is considered to be adaptive with respect to an environment whenever the system is able to satisfy its goals irrespectively of the environment perturbations. Modeling and programming engineering activities often take a white-box perspective: A system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending...
Multigrid time-accurate integration of Navier-Stokes equations
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
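The idea of driving a fully implicit time discretization to convergence with an explicit steady-state solver can be illustrated on the scalar model problem y' = -y. This is only a toy sketch: plain pseudo-time Euler iterations stand in for the paper's four-stage Runge-Kutta with local time stepping, residual smoothing, and multigrid, and all step sizes are made up.

```python
def dual_time_step(y_prev, dt, n_pseudo=60, dtau=0.05):
    # One backward-Euler step of the model problem y' = -y, converged
    # by explicit pseudo-time marching instead of a factorization.
    y = y_prev
    for _ in range(n_pseudo):
        residual = (y - y_prev) / dt + y  # unsteady residual R(y)
        y -= dtau * residual              # drive R(y) -> 0 in pseudo-time
    return y

# one physical step of size dt = 0.1 from y = 1; the implicit
# (backward Euler) solution of this step is y_prev / (1 + dt)
y1 = dual_time_step(1.0, dt=0.1)
```

Because the physical time discretization is implicit, its stability restriction disappears; only the inner pseudo-time iteration needs acceleration, which is exactly where the explicit multigrid machinery is reused.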
Reconciling the Chinese Financial Development with its Economic Growth: A Discursive Essay
Maswana, Jean-Claude
2005-01-01
China's strong economic performance and its financial development outcomes are extremely difficult to reconcile with the dominant verdict that its financial system is seriously inefficient. Using an evolutionary perspective as a metaphor, this essay suggests that adaptive efficiency criteria may help solve the apparent puzzle. An adaptive efficiency criterion offers conceptual as well as methodological approaches to resolving this puzzle and contradiction. The essay's discussions r...
The Threat Detection System that Cried Wolf: Reconciling Developers with Operators
2017-01-01
…taking the chance that a true threat will not appear. This article reviews statistical concepts to reconcile the performance metrics that summarize a… Although these concepts are already well known within the statistics and human factors communities, they are not often immediately understood in the DoD and DHS
AOM reconciling of crystal field parameters for UCl3, UBr3, UI3 series
Gajek, Z.; Mulak, J.
1990-07-01
Available inelastic neutron scattering interpretations of the crystal field effect in the uranium trihalides have been verified in terms of the Angular Overlap Model. For UCl3, a good reconciliation of the INS and optical interpretations of the crystal field effect has been obtained. On the contrary, the parameterizations for UBr3 and UI3 were found to be highly artificial, and it is suggested that experimentalists reinterpret their INS spectra.
AOM reconciling of crystal field parameters for UCl3, UBr3, UI3 series
International Nuclear Information System (INIS)
Gajek, Z.; Mulak, J.
1990-01-01
Available inelastic neutron scattering interpretations of the crystal field effect in the uranium trihalides have been verified in terms of the Angular Overlap Model. For UCl3, a good reconciliation of the INS and optical interpretations of the crystal field effect has been obtained. On the contrary, the parameterizations for UBr3 and UI3 were found to be highly artificial, and it is suggested that experimentalists reinterpret their INS spectra.
ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics
Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.
2018-03-01
We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved spacetimes. In this paper we assume the background spacetime to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully-discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time-accurate local time stepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a-posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed spacetimes. Finally, we present a performance and accuracy comparison between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.
Directory of Open Access Journals (Sweden)
Marie-Eve Lamontagne
2010-12-01
Full Text Available Background: Having a common vision among network stakeholders is an important ingredient to developing a performance evaluation process. Consensus methods may be a viable means to reconcile the perceptions of different stakeholders about the dimensions to include in a performance evaluation framework. Objectives: To determine whether individual organizations within traumatic brain injury (TBI) networks differ in perceptions about the importance of performance dimensions for the evaluation of TBI networks, and to explore the extent to which group consensus sessions could reconcile these perceptions. Methods: We used TRIAGE, a consensus technique that combines an individual and a group data collection phase, to explore the perceptions of network stakeholders and to reach a consensus within structured group discussions. Results: One hundred and thirty-nine professionals from 43 organizations within eight TBI networks participated in the individual data collection; 62 professionals from these same organizations contributed to the group data collection. The extent of consensus based on questionnaire results (i.e., the individual data collection) was low; however, 100% agreement was obtained for each network during the consensus group sessions. The median importance scores and mean ranks attributed to the dimensions by individuals compared to groups did not differ greatly. Group discussions were found useful in understanding the reasons motivating the scoring, for resolving differences among participants, and for harmonizing their values. Conclusion: Group discussions, as part of a consensus technique, appear to be a useful process to reconcile diverging perceptions of network performance among stakeholders.
Directory of Open Access Journals (Sweden)
Karena Shaw
2013-05-01
Full Text Available Shale gas proponents argue this unconventional fossil fuel offers a “bridge” towards a cleaner energy system by offsetting higher-carbon fuels such as coal. The technical feasibility of reconciling shale gas development with climate action remains contested. However, we here argue that governance challenges are both more pressing and more profound. Reconciling shale gas and climate action requires institutions capable of responding effectively to uncertainty; intervening to mandate emissions reductions and internalize costs to industry; and managing the energy system strategically towards a lower carbon future. Such policy measures prove challenging, particularly in jurisdictions that stand to benefit economically from unconventional fuels. We illustrate this dilemma through a case study of shale gas development in British Columbia, Canada, a global leader on climate policy that is nonetheless struggling to manage gas development for mitigation. The BC case is indicative of the constraints jurisdictions face both to reconcile gas development and climate action, and to manage the industry adequately to achieve social licence and minimize resistance. More broadly, the case attests to the magnitude of change required to transform our energy systems to mitigate climate change.
Towards accurate emergency response behavior
International Nuclear Information System (INIS)
Sargent, T.O.
1981-01-01
Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and it is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
Liu, Meilin
2012-08-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan
2012-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
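A minimal scalar PE(CE)^m predictor-corrector of the traditional kind referenced above — a forward-Euler predictor with a trapezoidal corrector applied m times — might look like this. It is illustrative only: the paper's scheme uses specially derived coefficients with controllable accuracy rather than these classical ones.

```python
import math

def pece_step(f, t, y, h, m=2):
    # P,E: predict with forward Euler and evaluate
    yp = y + h * f(t, y)
    # (CE)^m: correct with the trapezoidal rule, re-evaluating m times
    for _ in range(m):
        yp = y + 0.5 * h * (f(t, y) + f(t + h, yp))
    return yp

# integrate y' = -y from y(0) = 1 up to t = 1
f = lambda t, y: -y
h, y = 0.001, 1.0
for k in range(1000):
    y = pece_step(f, k * h, y, h)
# y is now close to exp(-1)
```

Each corrector pass re-evaluates f at the predicted point, so m trades extra function evaluations for a closer approximation to the fully implicit corrector.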
Yasas, F M
1977-01-01
In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report the problems they faced in fulfilling their roles. From these grass-roots experiences, they made an analysis of the job, determining what knowledge, attitudes and skills it required. Analyses of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. Trainees also learned how to bring the problems they encountered to government structures for policy making and decisions. The students' tasks were to identify the skills needed for role performance through job analysis, daily diaries and project histories; to analyze the particular community through village profiles; to produce indigenous teaching materials; and to practice role skills through actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers were written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.
Bonus schemes and trading activity
Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.
2014-01-01
Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of
DEFF Research Database (Denmark)
Juhl, Hans Jørn; Stacey, Julia
2001-01-01
In the spring of 2001, MAPP carried out an extensive consumer study with special emphasis on the Nordic environmentally friendly label 'the swan'. The purpose was to find out how much consumers actually know and use various labelling schemes. 869 households were contacted and asked to fill in a questionnaire … take it into consideration when I go shopping. The respondent was asked to pick the most suitable answer, which described her use of each label. 29% - also called 'the labelling blind' - responded that they basically only knew the recycling label and the Government-controlled organic label 'Ø-mærket'. Another segment of 6…
International Nuclear Information System (INIS)
Grashilin, V.A.; Karyshev, Yu.Ya.
1982-01-01
A 6-cycle scheme of step motor control is described. The block diagram and the basic circuit of the step motor control are presented. The step motor control comprises a pulse shaper, an electronic commutator and power amplifiers. Supplying the step motor from a 6-cycle electronic commutator provides higher reliability and accuracy than a 3-cycle commutator. Step motor operation is controlled by a program supplied by an external source of control signals. Time-dependent diagrams for step motor control are presented. The specifications of the step motor are given.
Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass
Energy Technology Data Exchange (ETDEWEB)
Bahl, Henning; Hollik, Wolfgang [Max-Planck Institut fuer Physik, Munich (Germany); Heinemeyer, Sven [Campus of International Excellence UAM+CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Instituto de Fisica Teorica, (UAM/CSIC), Madrid (Spain); Instituto de Fisica Cantabria (CSIC-UC), Santander (Spain); Weiglein, Georg [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)
2018-01-15
Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs. (orig.)
Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass
International Nuclear Information System (INIS)
Bahl, Henning; Hollik, Wolfgang; Heinemeyer, Sven; Weiglein, Georg
2017-06-01
Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs.
Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass
Energy Technology Data Exchange (ETDEWEB)
Bahl, Henning; Hollik, Wolfgang [Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinemeyer, Sven [Campus of International Excellence UAM+CSIC, Madrid (Spain); Univ. Autonoma de Madrid (Spain). Inst. de Fisica Teorica; Instituto de Fisica Cantabria (CSIC-UC), Santander (Spain); Weiglein, Georg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2017-06-15
Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs.
Efficient Scheme for Chemical Flooding Simulation
Directory of Open Access Journals (Sweden)
Braconnier Benjamin
2014-07-01
Full Text Available In this paper, we investigate an efficient implicit scheme for the numerical simulation of the chemical enhanced oil recovery technique for oil fields. For the sake of brevity, we focus only on flows with polymer to describe the physical and numerical models. In this framework, we consider a black-oil model upgraded with polymer modeling. We assume the polymer is only transported in the water phase or adsorbed on the rock following a Langmuir isotherm. The polymer reduces the water phase mobility, which can drastically change the behavior of water-oil interfaces. We then propose a fractional step technique to solve the system implicitly. The first step is devoted to the resolution of the black-oil subsystem and the second to the polymer mass conservation. In this way, the Jacobian matrices coming from the implicit formulation have a moderate size and preserve solver efficiency. Nevertheless, the coupling between the black-oil subsystem and the polymer is not fully resolved. For efficiency and accuracy comparison, we propose an explicit scheme for the polymer, for which large time steps are prohibited by the CFL (Courant-Friedrichs-Lewy) criterion and which consequently approximates the coupling accurately. Three numerical experiments with polymer are simulated: a core flood, a 5-spot reservoir with surfactant and ions, and a 3D real case. Comparisons are performed between the explicit and implicit polymer schemes. They prove that our implicit polymer scheme is efficient and robust, and resolves the coupling physics accurately. The development and the simulations have been performed with the software PumaFlow [PumaFlow (2013) Reference manual, release V600, Beicip Franlab].
Packet reversed packet combining scheme
International Nuclear Information System (INIS)
Bhunia, C.T.
2006-07-01
The packet combining scheme is a well-defined, simple error correction scheme using erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols do. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit location of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, offers higher throughput. (author)
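The core idea — transmitting the second copy bit-reversed, so that channel errors hitting the same position corrupt different logical bits — can be illustrated with a toy example. This is an illustrative reconstruction, not the paper's exact protocol; `corrupt` is a stand-in channel model and the packet and error positions are made up.

```python
def corrupt(bits, error_positions):
    # toy channel: flip the bits at the given channel positions
    return [b ^ (1 if i in error_positions else 0) for i, b in enumerate(bits)]

packet = [1, 0, 1, 1, 0, 0, 1, 0]

# plain packet combining: both copies sent as-is; errors at the same
# channel position are invisible to the XOR comparison
rx1_plain = corrupt(packet, {3})
rx2_plain = corrupt(packet, {3})
diff_plain = [a ^ b for a, b in zip(rx1_plain, rx2_plain)]  # all zeros

# packet *reversed* packet combining: the second copy is sent
# bit-reversed, so the same channel position hits a different logical bit
rx1 = corrupt(packet, {3})
rx2 = corrupt(packet[::-1], {3})[::-1]    # reverse back at the receiver
diff = [a ^ b for a, b in zip(rx1, rx2)]  # flags logical bits 3 and 4
```

In the plain scheme the two copies agree everywhere and the errors go undetected, whereas the reversed copy exposes both affected logical positions as candidates for correction.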
International Nuclear Information System (INIS)
Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu
2014-01-01
We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks using loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components for every user make our scheme easier for many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)
When Is Network Lasso Accurate?
Directory of Open Access Journals (Sweden)
Alexander Jung
2018-01-01
Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
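The network Lasso objective described above — a local loss on sampled nodes plus the graph total variation as regularizer — favors signals that are constant over well-connected clusters. A toy comparison on a chain graph (hypothetical numbers; scalar signals, squared loss, and the value of `lam` are all assumptions for illustration):

```python
# chain graph with two natural clusters {0,1,2} and {3,4,5}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
samples = {0: 1.0, 4: 5.0}   # observed signal values at sampled nodes
lam = 0.5                    # regularization strength (arbitrary)

def nlasso_objective(x):
    # squared loss on the sampled nodes + graph total variation
    loss = sum((x[i] - y) ** 2 for i, y in samples.items())
    tv = sum(abs(x[i] - x[j]) for i, j in edges)
    return loss + lam * tv

x_clustered = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]  # constant on each cluster
x_ramp = [1.0, 1.8, 2.6, 3.4, 4.2, 5.0]       # smooth interpolation
```

The piecewise-constant candidate fits both samples exactly and concentrates all of its total variation on the single bridge edge, so it attains a lower objective than the smooth ramp — the mechanism by which the regularizer propagates the two samples across their clusters.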
Reconciling Top-Down and Bottom-Up Estimates of Oil and Gas Methane Emissions in the Barnett Shale
Hamburg, S.
2015-12-01
Top-down approaches that use aircraft, tower, or satellite-based measurements of well-mixed air to quantify regional methane emissions have typically estimated higher emissions from the natural gas supply chain when compared to bottom-up inventories. A coordinated research campaign in October 2013 used simultaneous top-down and bottom-up approaches to quantify total and fossil methane emissions in the Barnett Shale region of Texas. Research teams have published individual results including aircraft mass-balance estimates of regional emissions and a bottom-up, 25-county region spatially-resolved inventory. This work synthesizes data from the campaign to directly compare top-down and bottom-up estimates. A new analytical approach uses statistical estimators to integrate facility emission rate distributions from unbiased and targeted high emission site datasets, which more rigorously incorporates the fat-tail of skewed distributions to estimate regional emissions of well pads, compressor stations, and processing plants. The updated spatially-resolved inventory was used to estimate total and fossil methane emissions from spatial domains that match seven individual aircraft mass balance flights. Source apportionment of top-down emissions between fossil and biogenic methane was corroborated with two independent analyses of methane and ethane ratios. Reconciling top-down and bottom-up estimates of fossil methane emissions leads to more accurate assessment of natural gas supply chain emission rates and the relative contribution of high emission sites. These results increase our confidence in our understanding of the climate impacts of natural gas relative to more carbon-intensive fossil fuels and the potential effectiveness of mitigation strategies.
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
International Nuclear Information System (INIS)
Deslattes, R.D.
1987-01-01
Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data
Could a scheme for licensing smokers work in Australia?
Magnusson, Roger S; Currow, David C
2013-08-05
In this article, we evaluate the possible advantages and disadvantages of a licensing scheme that would require adult smokers to verify their right to purchase tobacco products at point of sale using a smart-card licence. A survey of Australian secondary school students conducted in 2011 found that half of 17-year-old smokers and one-fifth of 12-year-old smokers believed it was "easy" or "very easy" to purchase cigarettes themselves. Reducing tobacco use by adolescents now is central to the future course of the current epidemic of tobacco-caused disease, since most current adult smokers began to smoke as adolescents--at a time when they were unable to purchase tobacco lawfully. The requirement for cigarette retailers to reconcile all stock purchased from wholesalers against a digital record of retail sales to licensed smokers would create a robust incentive for retailers to comply with laws that prohibit tobacco sales to children. Foreseeable objections to introducing a smokers licence need to be taken into account, but once we move beyond the "shock of the new", it is difficult to identify anything about a smokers licence that is particularly offensive or demeaning. A smoker licensing scheme deserves serious consideration for its potential to dramatically curtail retailers' violation of the law against selling tobacco to minors, to impose stricter accountability for sale of a uniquely harmful drug and to allow intelligent use of information about smokers' purchases to help smokers quit.
Zahner, William; Dent, Nick
2014-01-01
Sometimes a student's unexpected solution turns a routine classroom task into a real problem, one that the teacher cannot resolve right away. Although not knowing the answer can be uncomfortable for a teacher, these moments of uncertainty are also an opportunity to model authentic problem solving. This article describes such a moment in Zahner's…
A Note on Symplectic, Multisymplectic Scheme in Finite Element Method
Institute of Scientific and Technical Information of China (English)
GUO Han-Ying; JI Xiao-Mei; LI Yu-Qi; WU Ke
2001-01-01
We find that, with a uniform mesh, the numerical schemes derived from the finite element method preserve a symplectic structure in the one-dimensional case and a multisymplectic structure in the two-dimensional case, respectively. These results are in fact the intrinsic reason why numerical experiments show that such finite element algorithms are accurate in practice.
An extrapolation scheme for solid-state NMR chemical shift calculations
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme yields satisfactory solid-state NMR magnetic shielding constants. With the extrapolation scheme, the estimated values depend only weakly on the low-level density functional theory calculation. Our approach is thus efficient, because only a rough low-level calculation is needed in the extrapolation scheme.
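The abstract does not spell out the extrapolation formula. One common composite pattern consistent with the description — correcting a cheap low-level calculation with a high-minus-low difference evaluated on a smaller model — would look like this; the formula, function name, and all numbers are assumptions, shown purely to illustrate the idea.

```python
def extrapolated_shielding(periodic_low, cluster_low, cluster_high):
    # Assumed composite form: correct a cheap low-level periodic
    # calculation by the high-minus-low difference on a cluster model.
    return periodic_low + (cluster_high - cluster_low)

# made-up shielding constants in ppm
sigma = extrapolated_shielding(periodic_low=120.0,
                               cluster_low=118.0,
                               cluster_high=121.5)
```

In such schemes, errors of the low-level method largely cancel in the difference, which is why the final estimate depends only weakly on the low-level calculation.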
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
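For contrast with the spherical-wave-expansion approach described above, a naive quadrature estimate of directivity from power-pattern samples, D = 4π U_max / P_rad, can be sketched as follows. A Hertzian dipole, whose exact directivity is 1.5 (≈1.76 dBi), serves as the test case; the grid sizes are arbitrary.

```python
import math

def directivity(u, n_theta=180, n_phi=360):
    # D = 4*pi*U_max / P_rad, with P_rad estimated by midpoint
    # quadrature of the radiation intensity over the far-field sphere.
    dt = math.pi / n_theta
    dp = 2.0 * math.pi / n_phi
    p_rad, u_max = 0.0, 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dt
        for j in range(n_phi):
            val = u(th, (j + 0.5) * dp)
            u_max = max(u_max, val)
            p_rad += val * math.sin(th) * dt * dp
    return 4.0 * math.pi * u_max / p_rad

# Hertzian dipole pattern U ~ sin^2(theta); exact directivity is 1.5
d_est = directivity(lambda th, ph: math.sin(th) ** 2)
```

The spherical-wave-expansion formula of the paper achieves the same integral with far fewer samples; the brute-force grid above merely shows what is being computed.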
Modified Aggressive Packet Combining Scheme
International Nuclear Information System (INIS)
Bhunia, C.T.
2010-06-01
In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested in recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining scheme (PRPC, proposed by Bhunia). These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in uplink wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)
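The majority packet combining idea (MjPC) mentioned above can be sketched in a few lines: given several received copies of the same packet, take a bitwise majority vote. This is an illustrative reconstruction of the voting principle only, not Leung's APC algorithm or the letter's modifications.

```python
# Bitwise majority voting over repeated copies of a packet, the idea
# behind majority packet combining (MjPC). Illustrative sketch only;
# APC itself combines erroneous copies more aggressively.

def majority_combine(copies):
    """Bitwise majority vote over equal-length bit-string copies."""
    n = len(copies)
    return ''.join(
        '1' if sum(c[i] == '1' for c in copies) > n // 2 else '0'
        for i in range(len(copies[0]))
    )

sent = '10110100'
# Three received copies, each hit by a single bit error at a different
# position: the vote recovers the transmitted packet.
received = ['11110100', '10111100', '10110110']
print(majority_combine(received) == sent)  # True
```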
Transmission usage cost allocation schemes
International Nuclear Information System (INIS)
Abou El Ela, A.A.; El-Sehiemy, R.A.
2009-01-01
This paper presents different suggested transmission usage cost allocation (TCA) schemes for the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. There are two proposed flow-based TCA schemes (FTCA). The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of power flows in transmission lines and power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)
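The simplest of the methods mentioned above, pro rata allocation, just splits the total transmission cost in proportion to each participant's measured usage. A minimal sketch with hypothetical users and numbers (not the paper's flow-based FTCA schemes):

```python
# Pro rata transmission-usage cost allocation: each user pays in
# proportion to its measured usage. Users and values are hypothetical;
# the paper's flow-based (FTCA) schemes use network sensitivities instead.

def pro_rata_allocation(total_cost, usage_mw):
    total_usage = sum(usage_mw.values())
    return {user: total_cost * mw / total_usage
            for user, mw in usage_mw.items()}

charges = pro_rata_allocation(1000.0, {'G1': 300.0, 'G2': 100.0, 'L1': 600.0})
print(charges['L1'])          # 600.0: 60% of usage -> 60% of the cost
print(sum(charges.values()))  # 1000.0: the allocation is revenue-neutral
```

Revenue neutrality (charges summing exactly to the total cost) is one of the "desirable apportioning properties" any TCA scheme must satisfy.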
Efficiency of High-Order Accurate Difference Schemes for the Korteweg-de Vries Equation
Directory of Open Access Journals (Sweden)
Kanyuta Poochinapan
2014-01-01
Full Text Available Two numerical models to obtain the solution of the KdV equation are proposed. Numerical tools, compact fourth-order and standard fourth-order finite difference techniques, are applied to the KdV equation. The fundamental conservative properties of the equation are preserved by the finite difference methods. Linear stability of the two methods is established via von Neumann analysis. The new methods give second- and fourth-order accuracy in time and space, respectively. The numerical experiments show that the proposed methods improve the accuracy of the solution significantly.
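A standard fourth-order central difference of the kind used as a building block above can be checked directly. The sketch below applies the generic five-point stencil to a periodic test function; it is not the paper's compact or conservative KdV discretization.

```python
import numpy as np

# Standard fourth-order central difference on a periodic grid:
#   u_x(i) ~ (-u[i+2] + 8u[i+1] - 8u[i-1] + u[i-2]) / (12h)
# Generic stencil only, not the compact/conservative KdV schemes of
# the paper.
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]
u = np.sin(x)

ux = (-np.roll(u, -2) + 8 * np.roll(u, -1)
      - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * h)

err = np.max(np.abs(ux - np.cos(x)))   # truncation error scales as h^4
```

Halving h reduces err by roughly a factor of 16, which is how fourth-order spatial accuracy is verified in practice.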
DEFF Research Database (Denmark)
He, Jinwei; Li, Yun Wei; Blaabjerg, Frede
2013-01-01
To address inaccurate power sharing problems in autonomous islanding microgrids, an enhanced droop control method through adaptive virtual impedance adjustment is proposed. First, a term associated with DG reactive power, imbalance power or harmonic power is added to the conventional real power...
DEFF Research Database (Denmark)
Christiansen, Anders Vest; Auken, Esben; Kirkegaard, Casper
2016-01-01
Airborne transient electromagnetic (TEM) methods target a range of applications that all rely on analysis of extremely large datasets, but with widely varying requirements with regard to accuracy and computing time. Certain applications have larger intrinsic tolerances with regard to modelling...... inaccuracy, and there can be varying degrees of tolerance throughout different phases of interpretation. It is thus desirable to be able to tune a custom balance between accuracy and compute time when modelling of airborne datasets. This balance, however, is not necessarily easy to obtain in practice....... Typically, a significant reduction in computational time can only be obtained by moving to a much simpler physical description of the system, e.g. by employing a simpler forward model. This will often lead to a significant loss of accuracy, without an indication of computational precision. We demonstrate...
A digital memories based user authentication scheme with privacy preservation.
Directory of Open Access Journals (Sweden)
JunLiang Liu
Full Text Available The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attacks, key-loggers, shoulder-surfing and social engineering. As a result, a large number of alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information, or require extra hardware (such as a USB key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design, which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. We also prove the superior reliability and security of our scheme compared to other schemes, and present a performance analysis and promising evaluation results.
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
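NumPy ships a Legendre least-squares fit that can stand in for the kind of parametric fitting described above. The sketch below recovers known expansion coefficients from noise-free data; it uses the library routine, not the authors' explicit analytic expressions.

```python
import numpy as np
from numpy.polynomial import legendre

# Least-squares fit in the Legendre basis. NumPy's legfit plays the
# role of the parametric fit discussed above; the paper derives its
# own explicit expressions and compares against SVD-based fitting.
x = np.linspace(-1.0, 1.0, 201)
true_coeffs = [0.5, -1.0, 2.0]           # c0*P0(x) + c1*P1(x) + c2*P2(x)
y = legendre.legval(x, true_coeffs)

fitted = legendre.legfit(x, y, deg=2)
print(np.allclose(fitted, true_coeffs))  # True: noise-free data is recovered
```

Because the Legendre polynomials are orthogonal on [-1, 1], the fitted coefficients are well conditioned, which is part of why such bases suit large data sets.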
International Nuclear Information System (INIS)
Botchorishvili, Ramaz; Pironneau, Olivier
2003-01-01
We develop here a new class of finite volume schemes on unstructured meshes for scalar conservation laws with stiff source terms. The schemes are of equilibrium type: they admit uniform bounds on approximate solutions, satisfy cell entropy inequalities, and are exact for some equilibrium states. Convergence is investigated in the framework of kinetic schemes. Numerical tests show high computational efficiency and a significant advantage over the standard cell-centered discretization of source terms. Equilibrium-type schemes produce accurate results even on test problems for which the standard approach fails. For some numerical tests they exhibit an exponential convergence rate. In two of our numerical tests an equilibrium-type scheme with 441 nodes on a triangular mesh is more accurate than a standard scheme with 5000² grid points
A Hierarchical Control Scheme for Reactive Power and Harmonic Current Sharing in Islanded Microgrids
DEFF Research Database (Denmark)
Lorzadeh, Iman; Firoozabadi, Mehdi Savaghebi; Askarian Abyaneh, Hossein
2015-01-01
In this paper, a hierarchical control scheme consisting of primary and secondary levels is proposed for achieving accurate reactive power and harmonic currents sharing among interface inverters of distributed generators (DGs) in islanded microgrids. Firstly, fundamental and main harmonic componen...
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.; Shanker, Balasubramaniam; Bagci, Hakan
2013-01-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
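The classification step described above rests on MinHash-style similarity search (LSH Forest indexes MinHash signatures). A from-scratch MinHash sketch over k-mers, with made-up sequences, shows the principle; GROOT's actual index, variation graphs and hierarchical alignment are far more involved.

```python
import hashlib

# MinHash over k-mers: the similarity-search primitive behind LSH
# Forest indexing. Toy reconstruction with invented sequences; GROOT's
# real pipeline adds variation graphs and graph-traversal alignment.

def kmer_set(seq, k=7):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(kmers, num_hashes=64):
    # One salted hash family per signature slot; keep the minimum value.
    return [min(int(hashlib.md5((str(salt) + km).encode()).hexdigest(), 16)
                for km in kmers)
            for salt in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

ref_a = "ATGGCGTACGTTAGCCGATAGGCTAAGCTTACGGATCCATTGCA"   # hypothetical gene A
ref_b = "TTTTAAAACCCCGGGGTTTTAAAACCCCGGGGTTTTAAAACCCC"   # hypothetical gene B
read = ref_a[5:40].replace("GATAGG", "GATCGG")           # sub-read, one SNP

sig_read = minhash_signature(kmer_set(read))
sim_a = estimated_jaccard(sig_read, minhash_signature(kmer_set(ref_a)))
sim_b = estimated_jaccard(sig_read, minhash_signature(kmer_set(ref_b)))
print(sim_a > sim_b)  # True: the read is assigned to its source gene
```

Signature comparison is O(num_hashes) regardless of gene length, which is what makes the index fast enough for laptop-scale metagenome classification.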
Stock, Ron; Scott, Jim; Gurtel, Sharon
2009-05-01
Although medication safety has largely focused on reducing medication errors in hospitals, the scope of adverse drug events in the outpatient setting is immense. A fundamental problem occurs when a clinician lacks immediate access to an accurate list of the medications that a patient is taking. Since 2001, PeaceHealth Medical Group (PHMG), a multispecialty physician group, has been using an electronic prescribing system that includes medication-interaction warnings and allergy checks. Yet most practitioners recognized the remaining potential for error, especially because there was no assurance regarding the accuracy of information on the electronic medical record (EMR)-generated medication list. PeaceHealth developed and implemented a standardized approach to (1) review and reconcile the medication list for every patient at each office visit and (2) report on the results obtained within the PHMG clinics. In 2005, PeaceHealth established the ambulatory medication reconciliation project to develop a reliable, efficient process for maintaining accurate patient medication lists. Each of PeaceHealth's five regions created a medication reconciliation task force to redesign its clinical practice, incorporating the systemwide aims and agreed-on key process components for every ambulatory visit. Implementation of the medication reconciliation process at the PHMG clinics resulted in a substantial increase in the number of accurate medication lists, with fewer discrepancies between what the patient is actually taking and what is recorded in the EMR. The PeaceHealth focus on patient safety, and particularly the reduction of medication errors, has involved a standardized approach for reviewing and reconciling medication lists for every patient visiting a physician office. The standardized processes can be replicated at other ambulatory clinics, whether or not electronic tools are available.
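At its core, the reconciliation step compares what the EMR lists against what the patient reports taking, which is a set comparison. A schematic sketch with hypothetical drug names, not PeaceHealth's actual workflow:

```python
# Schematic medication-list reconciliation: compare the EMR list with
# what the patient reports actually taking. Drug names are hypothetical;
# the real process wraps clinical review around this comparison.

def reconcile_medications(emr_list, patient_reported):
    emr, reported = set(emr_list), set(patient_reported)
    return {
        "confirmed": sorted(emr & reported),
        "not_actually_taken": sorted(emr - reported),   # flag for removal
        "missing_from_emr": sorted(reported - emr),     # add after review
    }

result = reconcile_medications(
    ["lisinopril", "metformin", "simvastatin"],
    ["lisinopril", "metformin", "ibuprofen"],
)
print(result["missing_from_emr"])    # ['ibuprofen']
print(result["not_actually_taken"])  # ['simvastatin']
```

Each discrepancy bucket corresponds to a clinician action, which is why surfacing the two difference sets at every visit reduces the discrepancies the article reports.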
Hybrid flux splitting schemes for numerical resolution of two-phase flows
Energy Technology Data Exchange (ETDEWEB)
Flaatten, Tore
2003-07-01
This thesis deals with the construction of numerical schemes for approximating solutions to a hyperbolic two-phase flow model. Numerical schemes for hyperbolic models are commonly divided into two main classes: Flux Vector Splitting (FVS) schemes, which are based on scalar computations, and Flux Difference Splitting (FDS) schemes, which are based on matrix computations. FVS schemes are more efficient than FDS schemes, but FDS schemes are more accurate. The canonical FDS schemes are the approximate Riemann solvers, which are based on a local decomposition of the system into its full wave structure. In this thesis the mathematical structure of the model is exploited to construct a class of hybrid FVS/FDS schemes, denoted as Mixture Flux (MF) schemes. This approach is based on a splitting of the system into two components associated with the pressure and volume fraction variables respectively, and builds upon hybrid FVS/FDS schemes previously developed for one-phase flow models. Through analysis and numerical experiments it is demonstrated that the MF approach provides several desirable features, including (1) improved efficiency compared to standard approximate Riemann solvers, (2) robustness under stiff conditions, and (3) accuracy on linear and nonlinear phenomena. In particular it is demonstrated that the framework allows for an efficient weakly implicit implementation, focusing on an accurate resolution of slow transients relevant for the petroleum industry. (author)
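The FVS idea, splitting the flux into components carried by right- and left-going waves and upwinding each, can be shown on the simplest hyperbolic model, linear advection. This one-equation sketch is a stand-in for the thesis's two-phase system, where the same idea is applied to the pressure and volume-fraction components.

```python
import numpy as np

# Flux vector splitting on linear advection u_t + a u_x = 0:
# split f(u) = a*u into f+ = max(a,0)*u and f- = min(a,0)*u and upwind
# each part. A one-equation stand-in for the thesis's two-phase model.
a, N, cfl = 1.0, 100, 0.5
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / abs(a)
u = np.sin(2.0 * np.pi * x)
u0 = u.copy()

ap, am = max(a, 0.0), min(a, 0.0)
for _ in range(int(round(1.0 / dt))):        # advect one full period
    flux = ap * u + am * np.roll(u, -1)      # F_{i+1/2} = f+(u_i) + f-(u_{i+1})
    u = u - dt / dx * (flux - np.roll(flux, 1))

err = np.max(np.abs(u - u0))  # first-order scheme: diffusive but stable
```

The scalar max/min splitting is what makes FVS cheap; FDS schemes would instead decompose the jump u_{i+1} − u_i over the system's wave structure.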
Accurate performance analysis of opportunistic decode-and-forward relaying
Tourki, Kamel
2011-07-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end outage probability at a transmission rate R. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
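The closed-form/simulation cross-check described above can be reproduced for the simplest ingredient, a single Rayleigh-faded link, where the outage probability at rate R is 1 − exp(−(2^R − 1)/γ̄). A Monte Carlo sketch of that cross-check (not the paper's full relay-selection analysis):

```python
import numpy as np

# Outage probability of a single Rayleigh-faded link at rate R:
# analytically P_out = 1 - exp(-(2^R - 1)/mean_snr). Monte Carlo
# cross-check in the spirit of the paper's simulation validation;
# the opportunistic-relaying expressions themselves are more involved.
rng = np.random.default_rng(0)
mean_snr, R = 10.0, 1.0                  # average SNR (linear), rate b/s/Hz

snr = rng.exponential(mean_snr, size=1_000_000)   # Rayleigh -> exponential SNR
p_out_mc = np.mean(np.log2(1.0 + snr) < R)
p_out_exact = 1.0 - np.exp(-(2.0 ** R - 1.0) / mean_snr)

print(abs(p_out_mc - p_out_exact) < 0.005)  # True: simulation matches theory
```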
Preuß, M
2015-07-01
Today, an increasing proportion of society has to reconcile eldercare and work. This task poses challenges, which people meet by adjusting their everyday living arrangements. Such coping strategies have so far received little attention in research on the reconciliation of elder care and employment. Knowledge about how people actively deal with this parallel involvement in both spheres of life is vital for deriving precisely tailored support measures for employed caregivers. One goal of this article is to provide insight into the reconciling activities of employed women who provide care, and to identify the factors that determine those actions. Moreover, an ideal typology is presented that systematizes these associations. With this ideal typology, conceptual instruments have been developed that illustrate the complex reality of reconciliation actions and their dependence on various coping resources. In gerontological practice, these findings may support the design of intervention strategies tailored to the individual situation that address the everyday level of action and strengthen the capacities of those affected.
Baldwin, A; Mills, J; Birks, M; Budden, L
2017-12-01
Role modelling by experienced nurses, including nurse academics, is a key factor in the process of preparing undergraduate nursing students for practice, and may contribute to longevity in the workforce. A grounded theory study was undertaken to investigate the phenomenon of nurse academics' role modelling for undergraduate students. The study sought to answer the research question: how do nurse academics role model positive professional behaviours for undergraduate students? The aims of this study were to: theorise a process of nurse academic role modelling for undergraduate students; describe the elements that support positive role modelling by nurse academics; and explain the factors that influence the implementation of academic role modelling. The study sample included five second year nursing students and sixteen nurse academics from Australia and the United Kingdom. Data was collected from observation, focus groups and individual interviews. This study found that in order for nurse academics to role model professional behaviours for nursing students, they must reconcile their own professional identity. This paper introduces the theory of reconciling professional identity and discusses the three categories that comprise the theory, creating a context for learning, creating a context for authentic rehearsal and mirroring identity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Criel Bart
2007-07-01
Full Text Available Abstract Background Despite the promotion of Community Health Insurance (CHI) in Uganda in the second half of the 1990s, mainly under the impetus of external aid organisations, overall membership has remained low. Today, some 30,000 persons are enrolled in about a dozen different schemes located in Central and Southern Uganda. Moreover, most of these schemes were created some 10 years ago but since then, only one or two new schemes have been launched. The dynamic of CHI has apparently come to a halt. Methods A case study evaluation was carried out on two selected CHI schemes: the Ishaka and the Save for Health Uganda (SHU) schemes. The objective of this evaluation was to explore the reasons for the limited success of CHI. The evaluation involved a review of the schemes' records, key informant interviews and exit polls with both insured and non-insured patients. Results Our research points to a series of not mutually exclusive explanations for this under-achievement at both the demand and the supply side of health care delivery. On the demand side, the following elements have been identified: lack of basic information on the schemes' design and operation, limited understanding of the principles underlying CHI, limited community involvement and lack of trust in the management of the schemes, and, last but not least, problems in people's ability to pay the insurance premiums. On the supply side, we have identified the following explanations: limited interest and knowledge of CHI among health care providers and managers, and the absence of a coherent policy framework for the development of CHI. Conclusion The policy implications of this study refer to the need for the government to provide the necessary legislative, technical and regulatory support to CHI development. The main policy challenge, however, is the need to reconcile the government of Uganda's interest in promoting CHI with the current policy of abolition of user fees in public facilities.
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...
Accurate thickness measurement of graphene
International Nuclear Information System (INIS)
Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T
2016-01-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)
International Nuclear Information System (INIS)
Lee, Goung Jin; Kim, Soong Pyung
1990-01-01
In solving convection-diffusion phenomena, it is common to use the central difference scheme or the upwind scheme. The central difference scheme has second-order accuracy, while the upwind scheme is only first-order accurate. However, since the variation arising in the convection-diffusion problem is exponential, the central difference scheme ceases to be a good method for anything but extremely small values of Δx. At large values of Δx, which is all one can afford in most practical problems, it is the upwind scheme that gives more reasonable results than the central scheme. But in the conventional upwind scheme, since the accuracy is only first order, false diffusion is somewhat large, and when the real diffusion is smaller than the numerical diffusion, solutions may be very erroneous. So in this paper, a method to reduce the numerical diffusion of the upwind scheme is studied. The developed scheme uses the same number of nodes as the conventional upwind scheme, but it considers the direction of flow more carefully. In conclusion, the developed scheme shows very good results: it can reduce false diffusion greatly at the cost of a small increase in complexity. An algorithm for the developed scheme is presented in the appendix. (Author)
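The exponential-variation argument above is easy to demonstrate: on a steady 1D convection-diffusion problem with a coarse grid (cell Peclet number 5), the central scheme oscillates while first-order upwind stays monotone. A minimal sketch of that comparison, not the authors' improved scheme:

```python
import numpy as np

# Steady convection-diffusion  u * phi_x = Gamma * phi_xx  on [0, 1]
# with phi(0) = 0, phi(1) = 1, at cell Peclet number u*dx/Gamma = 5.
# Central differencing oscillates; first-order upwind stays monotone
# (at the price of the numerical diffusion the abstract's scheme aims
# to reduce). Minimal sketch, not the authors' improved scheme.
u_vel, gamma, N = 50.0, 1.0, 10          # global Peclet 50, dx = 0.1
dx = 1.0 / N
F, D = u_vel / dx, gamma / dx**2         # convection and diffusion strengths

def solve(aW, aE):
    """Solve the tridiagonal system aP*phi_P = aW*phi_W + aE*phi_E."""
    aP = aW + aE
    A = np.zeros((N - 1, N - 1))
    b = np.zeros(N - 1)
    for i in range(N - 1):
        A[i, i] = aP
        if i > 0:
            A[i, i - 1] = -aW
        if i < N - 2:
            A[i, i + 1] = -aE
        else:
            b[i] += aE * 1.0             # boundary value phi(1) = 1
    return np.linalg.solve(A, b)

phi_central = solve(aW=D + F / 2, aE=D - F / 2)   # central differencing
phi_upwind = solve(aW=D + F, aE=D)                # upwind for u > 0

print(phi_central.min() < 0)                             # True: oscillation
print(0 <= phi_upwind.min() and phi_upwind.max() <= 1)   # True: monotone
```

The east coefficient D − F/2 goes negative once the cell Peclet number exceeds 2, which is exactly the point where central differencing loses boundedness.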
International Nuclear Information System (INIS)
Minesaki, Yukitaka
2013-01-01
For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators
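The conserved-quantity behavior claimed above for the two-body limit is characteristic of symplectic integrators generally. A leapfrog (Störmer-Verlet) sketch on a Kepler orbit, a generic integrator rather than Minesaki's scheme, shows the same hallmarks: bounded energy error and exactly conserved angular momentum.

```python
import numpy as np

# Leapfrog (Stoermer-Verlet) on the planar Kepler problem, GM = 1.
# Generic symplectic integrator, not Minesaki's discrete-time
# three-body scheme, but with the same hallmarks: bounded energy
# error and angular momentum conserved to round-off.

def accel(r):
    return -r / np.linalg.norm(r) ** 3

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])                 # circular orbit
h, steps = 0.01, 10_000                  # roughly 16 orbital periods

E0 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)
L0 = r[0] * v[1] - r[1] * v[0]
for _ in range(steps):
    v = v + 0.5 * h * accel(r)           # half kick
    r = r + h * v                        # drift
    v = v + 0.5 * h * accel(r)           # half kick
E = 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)
L = r[0] * v[1] - r[1] * v[0]

energy_err = abs(E - E0)                 # bounded, O(h^2)
angmom_err = abs(L - L0)                 # zero up to round-off
```

Angular momentum is exact here because both the kick (central force) and the drift leave r × v unchanged; energy is only approximately conserved, but without secular drift.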
Coordinated renewable energy support schemes
DEFF Research Database (Denmark)
Morthorst, P.E.; Jensen, S.G.
2006-01-01
. The first example covers countries with regional power markets that also regionalise their support schemes, the second countries with separate national power markets that regionalise their support schemes. The main findings indicate that the almost ideal situation exists if the region prior to regionalising...
CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...
African Journals Online (AJOL)
This paper describes new nonlinear backward differentiation schemes for the numerical solution of nonlinear initial value problems of first-order ordinary differential equations. The schemes are based on rational interpolation obtained from canonical polynomials. They are A-stable. The test problems show that they give ...
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.
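One of the standard geometric algorithms underlying particle tracers of this kind is the Boris pusher, which conserves particle speed exactly in a pure magnetic field. A minimal sketch of that standard algorithm, illustrative only and not APT's actual implementation:

```python
import numpy as np

# Boris rotation for a charged particle in a uniform magnetic field
# (E = 0, q/m = 1). A standard geometric pusher of the kind APT builds
# on; illustrative, not APT's implementation. In a pure magnetic field
# the Boris step is an exact rotation, so particle speed is conserved
# to round-off over arbitrarily many steps.
B = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.5, 0.2])
dt, steps = 0.1, 10_000

speed0 = np.linalg.norm(v)
t = 0.5 * dt * B                         # half-step rotation vector
s = 2.0 * t / (1.0 + t @ t)
for _ in range(steps):
    v_prime = v + np.cross(v, t)
    v = v + np.cross(v_prime, s)

speed_err = abs(np.linalg.norm(v) - speed0)   # stays at round-off level
```

This absence of secular drift in conserved quantities is exactly the long-term accuracy property the APT abstract emphasizes.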
Hybrid Modulation Scheme for Cascaded H-Bridge Inverter Cells
African Journals Online (AJOL)
C. I. Odeh
Validation of the control technique is done through simulations and experiments ... AND and OR operations. Referring to ... MATLAB/SIMULINK environment.
A Method for Capturing and Reconciling Stakeholder Intentions Based on the Formal Concept Analysis
Aoyama, Mikio
Information systems are ubiquitous in our daily life. Thus, information systems need to work appropriately anywhere at any time for everybody. Conventional information systems engineering tends to engineer systems from the viewpoint of systems functionality. However, the diversity of the usage context requires fundamental change compared to our current thinking on information systems; from the functionality the systems provide to the goals the systems should achieve. The intentional approach embraces the goals and related aspects of the information systems. This chapter presents a method for capturing, structuring and reconciling diverse goals of multiple stakeholders. The heart of the method lies in the hierarchical structuring of goals by goal lattice based on the formal concept analysis, a semantic extension of the lattice theory. We illustrate the effectiveness of the presented method through application to the self-checkout systems for large-scale supermarkets.
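The goal lattice described above is built from the formal concepts of a stakeholder-goal context. A brute-force enumeration of concepts for a toy context (hypothetical stakeholders and goals, not the chapter's self-checkout case study) shows the mechanics:

```python
from itertools import combinations

# Formal Concept Analysis by brute force: enumerate all formal concepts
# (extent, intent) of a tiny stakeholder -> goals context. Stakeholders
# and goals are hypothetical, not the chapter's self-checkout study.
context = {
    "customer":  {"fast_checkout", "privacy"},
    "store":     {"fast_checkout", "low_cost"},
    "regulator": {"privacy", "auditability"},
}
all_goals = set().union(*context.values())

def intent(objs):
    """Goals shared by every stakeholder in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_goals)

def extent(goals):
    """Stakeholders holding every goal in goals."""
    return {o for o, g in context.items() if goals <= g}

concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        shared = intent(set(objs))
        concepts.add((frozenset(extent(shared)), frozenset(shared)))

print(len(concepts))  # 7 concepts form the goal lattice for this context
print((frozenset(context), frozenset()) in concepts)  # True: the top concept
```

Ordering the concepts by extent inclusion yields the goal lattice in which shared goals sit above, and stakeholder-specific goals below, exposing where intentions agree and where they must be reconciled.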
The Effect(s) of Teen Pregnancy: Reconciling Theory, Methods, and Findings.
Diaz, Christina J; Fiel, Jeremy E
2016-02-01
Although teenage mothers have lower educational attainment and earnings than women who delay fertility, causal interpretations of this relationship remain controversial. Scholars argue that there are reasons to predict negative, trivial, or even positive effects, and different methodological approaches provide some support for each perspective. We reconcile this ongoing debate by drawing on two heuristics: (1) each methodological strategy emphasizes different women in estimation procedures, and (2) the effects of teenage fertility likely vary in the population. Analyses of the Child and Young Adult Cohorts of the National Longitudinal Survey of Youth (N = 3,661) confirm that teen pregnancy has negative effects on most women's attainment and earnings. More striking, however, is that effects on college completion and early earnings vary considerably and are most pronounced among those least likely to experience an early pregnancy. Further analyses suggest that teen pregnancy is particularly harmful for those with the brightest socioeconomic prospects and who are least prepared for the transition to motherhood.
Deference or Interrogation? Contrasting Models for Reconciling Religion, Gender and Equality
Directory of Open Access Journals (Sweden)
Moira Dustin
2012-01-01
Full Text Available Abstract Since the late 1990s, the extension of the equality framework in the United Kingdom has been accompanied by the recognition of religion within that framework and new measures to address religious discrimination. This development has been contested, with many arguing that religion is substantively different from other discrimination grounds and that increased protection against religious discrimination may undermine equality for other marginalized groups, in particular women and lesbian, gay, bisexual and transgender (LGBT) people. This paper considers these concerns from the perspective of minoritized women in the UK. It analyses two theoretical approaches to reconciling religious claims with gender equality, one based on privileging and the other on challenging religious claims, before considering which, if either, reflects experiences in the UK in recent years and what this means for gender equality.
International Nuclear Information System (INIS)
Mlinar, Vladan
2015-01-01
To facilitate the design and optimization of nanomaterials for a given application it is necessary to understand the relationship between structure and physical properties. For large nanomaterials, there is imprecise structural information so the full structure is only resolved at the level of partial representations. Here we show how to reconcile partial structural representations using constraints from structural characterization measurements and theory to maximally exploit the limited amount of data available from experiment. We determine a range of parameter space where predictive theory can be used to design and optimize the structure. Using an example of variation of chemical composition profile across the interface of two nanomaterials, we demonstrate how, given experimental and theoretical constraints, to find a region of structure-parameter space within which computationally explored partial representations of the full structure will have observable real-world counterparts. (paper)
Wilk, Szymon; Michalowski, Martin; Michalowski, Wojtek; Hing, Marisela Mainegra; Farion, Ken
2011-01-01
This paper describes a new methodological approach to reconciling adverse and contradictory activities (called points of contention) occurring when a patient is managed according to two or more concurrently used clinical practice guidelines (CPGs). The need to address these inconsistencies occurs when a patient with more than one disease, each of which is a comorbid condition, has to be managed according to different treatment regimens. We propose an automatic procedure that constructs a mathematical guideline model using the Constraint Logic Programming (CLP) methodology, uses this model to identify and mitigate encountered points of contention, and revises the considered CPGs accordingly. The proposed procedure is used as an alerting mechanism and coupled with a guideline execution engine warns the physician about potential problems with the concurrent application of two or more guidelines. We illustrate the operation of our procedure in a clinical scenario describing simultaneous use of CPGs for duodenal ulcer and transient ischemic attack.
A method for accurate computation of elastic and discrete inelastic scattering transfer matrix
International Nuclear Information System (INIS)
Garcia, R.D.M.; Santina, M.D.
1986-05-01
A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt
Czech, Brian
2008-12-01
The conflict between economic growth and biodiversity conservation is understood in portions of academia and sometimes acknowledged in political circles. Nevertheless, there is not a unified response. In political and policy circles, the environmental Kuznets curve (EKC) is posited to solve the conflict between economic growth and environmental protection. In academia, however, the EKC has been deemed fallacious in macroeconomic scenarios and largely irrelevant to biodiversity. A more compelling response to the conflict is that it may be resolved with technological progress. Herein I review the conflict between economic growth and biodiversity conservation in the absence of technological progress, explore the prospects for technological progress to reconcile that conflict, and provide linguistic suggestions for describing the relationships among economic growth, technological progress, and biodiversity conservation. The conflict between economic growth and biodiversity conservation is based on the first two laws of thermodynamics and principles of ecology such as trophic levels and competitive exclusion. In this biophysical context, the human economy grows at the competitive exclusion of nonhuman species in the aggregate. Reconciling the conflict via technological progress has not occurred and is infeasible because of the tight linkage between technological progress and economic growth at current levels of technology. Surplus production in existing economic sectors is required for conducting the research and development necessary for bringing new technologies to market. Technological regimes also reflect macroeconomic goals, and if the goal is economic growth, reconciliatory technologies are less likely to be developed. As the economy grows, the loss of biodiversity may be partly mitigated with end-use innovation that increases technical efficiency, but this type of technological progress requires policies that are unlikely if the conflict between economic growth
Reconciling Long-Wavelength Dynamic Topography, Geoid Anomalies and Mass Distribution on Earth
Hoggard, M.; Richards, F. D.; Ghelichkhan, S.; Austermann, J.; White, N.
2017-12-01
Since the first satellite observations in the late 1950s, we have known that the Earth's non-hydrostatic geoid is dominated by spherical harmonic degree 2 (wavelengths of 16,000 km). Peak amplitudes are approximately ± 100 m, with highs centred on the Pacific Ocean and Africa, encircled by lows in the vicinity of the Pacific Ring of Fire and at the poles. Initial seismic tomography models revealed that the shear-wave velocity, and therefore presumably the density structure, of the lower mantle is also dominated by degree 2. Anti-correlation of slow, probably low-density, regions beneath geoid highs indicates that the mantle is affected by large-scale flow. Thus, buoyant features are rising and exert viscous normal stresses that act to deflect the surface and core-mantle boundary (CMB). Pioneering studies in the 1980s showed that a viscosity jump between the upper and lower mantle is required to reconcile these geoid and tomographically inferred density anomalies. These studies also predict 1-2 km of dynamic topography at the surface, dominated by degree 2. In contrast to this prediction, a global observational database of oceanic residual depth measurements indicates that degree 2 dynamic topography has peak amplitudes of only 500 m. Here, we attempt to reconcile observations of dynamic topography, geoid, gravity anomalies and CMB topography using instantaneous flow kernels. We exploit a density structure constructed from blended seismic tomography models, combining deep mantle imaging with higher resolution upper mantle features. Radial viscosity structure is discretised, and we invert for the best-fitting viscosity profile using a conjugate gradient search algorithm, subject to damping. Our results suggest that, due to strong sensitivity to radial viscosity structure, the Earth's geoid seems to be compatible with only ± 500 m of degree 2 dynamic topography.
Egli, Lukas; Meyer, Carsten; Scherber, Christoph; Kreft, Holger; Tscharntke, Teja
2018-05-01
Closing yield gaps within existing croplands, and thereby avoiding further habitat conversions, is a prominently and controversially discussed strategy to meet the rising demand for agricultural products, while minimizing biodiversity impacts. The agricultural intensification associated with such a strategy poses additional threats to biodiversity within agricultural landscapes. The uneven spatial distribution of both yield gaps and biodiversity provides opportunities for reconciling agricultural intensification and biodiversity conservation through spatially optimized intensification. Here, we integrate distribution and habitat information for almost 20,000 vertebrate species with land-cover and land-use datasets. We estimate that projected agricultural intensification between 2000 and 2040 would reduce the global biodiversity value of agricultural lands by 11%, relative to 2000. Contrasting these projections with spatial land-use optimization scenarios reveals that 88% of projected biodiversity loss could be avoided through globally coordinated land-use planning, implying huge efficiency gains through international cooperation. However, global-scale optimization also implies a highly uneven distribution of costs and benefits, resulting in distinct "winners and losers" in terms of national economic development, food security, food sovereignty or conservation. Given conflicting national interests and lacking effective governance mechanisms to guarantee equitable compensation of losers, multinational land-use optimization seems politically unlikely. In turn, 61% of projected biodiversity loss could be avoided through nationally focused optimization, and 33% through optimization within just 10 countries. Targeted efforts to improve the capacity for integrated land-use planning for sustainable intensification especially in these countries, including the strengthening of institutions that can arbitrate subnational land-use conflicts, may offer an effective, yet
Good governance for pension schemes
Thornton, Paul
2011-01-01
Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.
Optimum RA reactor fuelling scheme
International Nuclear Information System (INIS)
Strugar, P.; Nikolic, V.
1965-10-01
An ideal reactor refuelling scheme could be achieved only by continuous movement of fuel elements in the core, which is not possible, so approximations are applied. One possible approximation is discontinuous movement of groups of fuel elements in the radial direction. This enables higher burnup, especially if axial exchange is possible. Analysis of refuelling schemes in the RA reactor core, and of schemes that mix fresh and used fuel elements, shows that 30% higher burnup can be achieved by applying mixing, and even 40% if the reactivity gained by reducing the experimental space is taken into account. Up to now, a mean burnup of 4400 MWd/t has been achieved, and the proposed fuelling scheme with reduced experimental space could achieve a mean burnup of 6300 MWd/t, which means about 25 MWd/t per fuel channel [sr
Numerical schemes for explosion hazards
International Nuclear Information System (INIS)
Therme, Nicolas
2015-01-01
In nuclear facilities, internal or external explosions can cause confinement breaches and release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level-set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
Bozkurt, Gulay
2017-01-01
This article examines the literature associated with social constructivism. It discusses whether social constructivism succeeds in reconciling individual cognition with social teaching and learning practices. After reviewing the meaning of individual cognition and social constructivism, two views--Piaget and Vygotsky's--accounting for learning…
Energy Technology Data Exchange (ETDEWEB)
Boyd, G.A.
1995-06-01
The project is motivated by recommendations that were made by industry in a number of different forums: the Industry Workshop of the White House Conference on Climate Change, and more recently, industry consultations for EPAct Section 131(c) and Section 160(b). These recommendations were related to reconciling conflicts in environmental goals, productivity improvements and increased energy efficiency in the industrial sector.
Breeding schemes in reindeer husbandry
Directory of Open Access Journals (Sweden)
Lars Rönnegård
2003-04-01
Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G) and the influence of selection on the inbreeding effective population size (Ne) for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.
Third Order Reconstruction of the KP Scheme for Model of River Tinnelva
Directory of Open Access Journals (Sweden)
Susantha Dissanayake
2017-01-01
Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate the flow of rivers, the flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), hence can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended into a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic-type PDEs. The simulation results indicated that the 3rd order KP scheme shows better stability compared to the 2nd order scheme. Computational time for the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is less than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme shows a more accurate solution.
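The semi-discrete construction described in this abstract can be illustrated with a minimal sketch. The following is a 2nd-order central scheme of the Kurganov family for a scalar conservation law u_t + f(u)_x = 0 on a periodic grid, with minmod-limited piecewise-linear reconstruction; it is not the authors' CWENO extension or the Saint-Venant system, and the function names and default Burgers flux are illustrative:

```python
import numpy as np

def minmod(a, b):
    # Minmod limiter: smallest-magnitude slope, zero at extrema.
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kp_rhs(u, dx, f=lambda u: 0.5 * u**2, fprime=lambda u: u):
    """Semi-discrete 2nd-order central-upwind right-hand side for
    u_t + f(u)_x = 0 on a periodic grid (illustrative sketch only)."""
    up = np.roll(u, -1)                              # u_{j+1}
    um = np.roll(u, 1)                               # u_{j-1}
    s = minmod(u - um, up - u) / dx                  # limited slopes
    uL = u + 0.5 * dx * s                            # left state at x_{j+1/2}
    uR = np.roll(u - 0.5 * dx * s, -1)               # right state at x_{j+1/2}
    a = np.maximum(np.abs(fprime(uL)), np.abs(fprime(uR)))  # local speeds
    H = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)  # numerical flux
    return -(H - np.roll(H, 1)) / dx                 # flux difference per cell
```

The resulting ODE system du/dt = kp_rhs(u, dx) can then be handed to any time integrator (e.g. an explicit Runge-Kutta step), mirroring the semi-discrete structure the abstract describes.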
The new Exponential Directional Iterative (EDI) 3-D Sn scheme for parallel adaptive differencing
International Nuclear Information System (INIS)
Sjoden, G.E.
2005-01-01
The new Exponential Directional Iterative (EDI) discrete ordinates (Sn) scheme for 3-D Cartesian Coordinates is presented. The EDI scheme is a logical extension of the positive, efficient Exponential Directional Weighted (EDW) Sn scheme currently used as the third level of the adaptive spatial differencing algorithm in the PENTRAN parallel discrete ordinates solver. Here, the derivation and advantages of the EDI scheme are presented; EDI uses EDW-rendered exponential coefficients as initial starting values to begin a fixed point iteration of the exponential coefficients. One issue that required evaluation was an iterative cutoff criterion to prevent the application of an unstable fixed point iteration; although this was needed in some cases, it was readily treated with a default to EDW. Iterative refinement of the exponential coefficients in EDI typically converged in fewer than four fixed point iterations. Moreover, EDI yielded more accurate angular fluxes compared to the other schemes tested, particularly in streaming conditions. Overall, it was found that the EDI scheme was up to an order of magnitude more accurate than the EDW scheme on a given mesh interval in streaming cases, and is potentially a good candidate as a fourth-level differencing scheme in the PENTRAN adaptive differencing sequence. The 3-D Cartesian computational cost of EDI was only about 20% more than the EDW scheme, and about 40% more than Diamond Zero (DZ). More evaluation and testing are required to determine suitable upgrade metrics for EDI to be fully integrated into the current adaptive spatial differencing sequence in PENTRAN. (author)
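The iterative refinement with an instability cutoff described above can be sketched generically: a fixed-point iteration started from an initial estimate (playing the role of the EDW-rendered coefficients) that defaults back to that estimate if the iterates run away. This scalar stand-in is illustrative only, not PENTRAN's actual coefficient update; the function name, tolerance and cutoff are assumptions:

```python
def refine_coefficients(g, x0, tol=1e-8, max_iter=4, growth_cutoff=10.0):
    """Fixed-point refinement x <- g(x) from an initial estimate x0.
    Falls back to x0 if the iteration appears unstable (sketch only)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        # Cutoff criterion: abandon an apparently divergent iteration
        # and default to the initial (EDW-like) value.
        if abs(x_new - x0) > growth_cutoff * max(abs(x0), 1.0):
            return x0
        if abs(x_new - x) < tol:   # converged
            return x_new
        x = x_new
    return x
```

With a contractive map, convergence in a handful of iterations is typical, consistent with the "fewer than four fixed point iterations" observed for EDI; with a divergent map, the cutoff restores the starting value.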
Defect correction and multigrid for an efficient and accurate computation of airfoil flows
Koren, B.
1988-01-01
Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction
Controlled braking scheme for a wheeled walking aid
Coyle, Eugene; O'Dwyer, Aidan; Young, Eileen; Sullivan, Kevin; Toner, A.
2006-01-01
A wheeled walking aid with an embedded controlled braking system is described. The frame of the prototype is based on combining features of standard available wheeled walking aids. A braking scheme has been designed using hydraulic disc brakes to facilitate accurate and sensitive controlled stopping of the walker by the user, and if called upon, by automatic action. Braking force is modulated via a linear actuating stepping motor. A microcontroller is used for control of both stepper movement...
Multiuser switched diversity scheduling schemes
Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim
2012-01-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
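The threshold-based, ordered probing idea behind switched-diversity scheduling can be sketched in a small Monte Carlo comparison against the full-feedback benchmark. This is a minimal illustration under assumed Rayleigh fading with unit-mean SNR; the function names, fallback rule (schedule the last probed user if none qualifies) and parameters are assumptions, not the paper's exact system model:

```python
import numpy as np

rng = np.random.default_rng(0)

def switched_scheduler(snr, threshold):
    """Probe users in a fixed order; schedule the first whose SNR exceeds
    the threshold (fallback: last user probed). Returns (index, feedback count)."""
    for k, s in enumerate(snr):
        if s >= threshold:
            return k, k + 1
    return len(snr) - 1, len(snr)

def simulate(n_users=8, n_slots=10000, threshold=1.0):
    # Rayleigh fading: exponentially distributed SNR with unit mean.
    snr = rng.exponential(1.0, size=(n_slots, n_users))
    fb, rate_sw, rate_full = 0, 0.0, 0.0
    for slot in snr:
        k, used = switched_scheduler(slot, threshold)
        fb += used
        rate_sw += np.log2(1 + slot[k])
        rate_full += np.log2(1 + slot.max())   # full-feedback benchmark
    return fb / n_slots, rate_sw / n_slots, rate_full / n_slots
```

Running `simulate()` shows the trade-off the abstract describes: far fewer feedback messages per slot than the `n_users` required by full feedback, at a modest loss in average rate.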
Nonlinear secret image sharing scheme.
Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which relies on linear-combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they are exposed to a security threat known as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear-combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme incorporating steganography. To achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean operation, define a new variable m, and change the range of the prime p in the sharing procedure. We evaluate the efficiency and security of the proposed scheme using the embedding capacity and PSNR: on average, the PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
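The linear Shamir construction that this scheme builds on (and whose Tompa-Woll weakness motivates the nonlinear variant) can be sketched for a single byte. The nonlinear polynomial, the XOR-LSB embedding and the paper's variable m are not reproduced here; the prime 257, seed and function names are illustrative choices:

```python
import random

P = 257  # prime just above the byte range (illustrative choice)

def share_byte(secret, t, n, seed=42):
    """Split one byte into n shares, any t of which reconstruct it
    (classic Shamir (t, n)-threshold sharing over GF(P))."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i (never at x = 0).
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m_, (xm, _) in enumerate(shares):
            if m_ != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret
```

Any t of the n shares suffice; fewer than t reveal nothing about the secret, which is the property the image sharing scheme inherits per pixel block.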
Multiuser switched diversity scheduling schemes
Shaqfeh, Mohammad
2012-09-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
Space-Time Transformation in Flux-form Semi-Lagrangian Schemes
Directory of Open Access Journals (Sweden)
Peter C. Chu Chenwu Fan
2010-01-01
Full Text Available With a finite volume approach, a flux-form semi-Lagrangian (TFSL) scheme with space-time transformation was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Unlike existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step using the characteristic-line concept. The TFSL scheme not only keeps the good features of semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
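The key idea, trading the temporal flux integral for a spatial integral over the departure interval, can be sketched in its simplest form: first-order, constant advection speed c > 0 on a periodic grid. This toy version (not the paper's second-order TFSL scheme; the function name is an assumption) already exhibits the absence of a Courant-number restriction:

```python
import numpy as np

def fsl_step(u, c, dt, dx):
    """One step of a 1st-order flux-form semi-Lagrangian update for
    u_t + c u_x = 0, c > 0, on a periodic grid. The time-integrated flux
    through each face equals the mass in the departure interval swept
    past the face, so CFL > 1 is allowed (illustrative sketch)."""
    N = len(u)
    cfl = c * dt / dx
    n_whole = int(np.floor(cfl))   # whole cells swept past a face
    frac = cfl - n_whole           # fractional remainder of the sweep
    F = np.zeros(N)                # mass through face j+1/2, in units of dx
    for j in range(N):
        F[j] = sum(u[(j - k) % N] for k in range(n_whole)) \
               + frac * u[(j - n_whole) % N]
    return u - (F - np.roll(F, 1))  # conservative flux-difference update
```

For an integer Courant number the update is an exact shift of the profile, and total mass is conserved for any time step, illustrating why the approach has no Courant-number limitation.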
Reconciling the good patient persona with problematic and non-problematic humour: a grounded theory.
McCreaddie, May; Wiggins, Sally
2009-08-01
Humour is a complex phenomenon, incorporating cognitive, emotional, behavioural, physiological and social aspects. Research to date has concentrated on reviewing (rehearsed) humour and 'healthy' individuals via correlation studies using personality-trait-based measurements, principally on psychology students in laboratory conditions. Nurses are key participants in modern healthcare interactions; however, little is known about their (spontaneous) humour use. A middle-range theory that accounted for humour use in CNS-patient interactions was the aim of the study. The study reviewed the antecedents of humour, exploring the use of humour in relation to (motivational) humour theories. Twenty Clinical Nurse Specialist (CNS)-patient interactions and their respective peer groups were studied in a country of the United Kingdom. An evolved constructivist grounded theory approach investigated a complex and dynamic phenomenon in situated contexts. Naturally occurring interactions provided the basis of the data corpus, with follow-up interviews, focus groups, observation and field notes. A constant comparative approach to data collection and analysis was applied until theoretical sufficiency, incorporating an innovative interpretative and illustrative framework. This paper reports the grounded theory and is principally based upon 20 CNS-patient interactions and follow-up data. The negative case analysis and peer group interactions will be reported in separate publications. The theory purports that patients use humour to reconcile a good patient persona. The core category of the good patient persona, two of its constituent elements (compliance, sycophancy), the conditions under which it emerges and how this relates to the use of humour are outlined and discussed. In seeking to establish and maintain a meaningful and therapeutic interaction with the CNS, patients enact a good patient persona to varying degrees depending upon the situated context. The good patient persona needs to be maintained within the
Free will: A case study in reconciling phenomenological philosophy with reductionist sciences.
Hong, Felix T
2015-12-01
Phenomenology aspires to philosophical analysis of humans' subjective experience while it strives to avoid pitfalls of subjectivity. The first step towards naturalizing phenomenology - making phenomenology scientific - is to reconcile phenomenology with modern physics, on the one hand, and with modern cellular and molecular neuroscience, on the other hand. In this paper, free will is chosen for a case study to demonstrate the feasibility. Special attention is paid to maintain analysis with mathematical precision, if possible, and to evade the inherent deceptive power of natural language. Laplace's determinism is re-evaluated along with the concept of microscopic reversibility. A simple and transparent version of proof demonstrates that microscopic reversibility is irreconcilably incompatible with macroscopic irreversibility, contrary to Boltzmann's claim. But the verdict also exalts Boltzmann's statistical mechanics to the new height of a genuine paradigm shift, thus cutting the umbilical cord linking it to Newtonian mechanics. Laplace's absolute determinism must then be replaced with a weaker form of causality called quasi-determinism. Biological indeterminism is also affirmed with numerous lines of evidence. The strongest evidence is furnished by ion channel fluctuations, which obey an indeterministic stochastic phenomenological law. Furthermore, quantum indeterminacy is shown to be relevant in biology, contrary to the opinion of Erwin Schrödinger. In reconciling phenomenology of free will with modern sciences, three issues - alternativism, intelligibility and origination - of free will must be accounted for. Alternativism and intelligibility can readily be accounted for by quasi-determinism. In order to account for origination of free will, the concept of downward causation must be invoked. However, unlike what is commonly believed, there is no evidence that downward causation can influence, shield off, or overpower low-level physical forces already known to
Electrical Injection Schemes for Nanolasers
DEFF Research Database (Denmark)
Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten
2014-01-01
Three electrical injection schemes based on recently demonstrated electrically pumped photonic crystal nanolasers have been numerically investigated: 1) a vertical p-i-n junction through a post structure; 2) a lateral p-i-n junction with a homostructure; and 3) a lateral p-i-n junction… For this analysis, the properties of the different schemes, i.e., electrical resistance, threshold voltage, threshold current, and internal efficiency as energy requirements for optical interconnects, are compared and the physics behind the differences is discussed…
Signal multiplexing scheme for LINAC
International Nuclear Information System (INIS)
Sujo, C.I.; Mohan, Shyam; Joshi, Gopal; Singh, S.K.; Karande, Jitendra
2004-01-01
For the proper operation of the LINAC, some signals, RF (radio frequency) as well as LF (low frequency), have to be available at the Master Control Station (MCS). These signals are needed to control, calibrate and characterize the RF fields in the resonators. This can be achieved by properly multiplexing the various signals locally and then routing the selected signals to the MCS. A multiplexing scheme that allows the signals from the selected cavity to reach the MCS has been designed and implemented. High isolation between channels and low insertion loss for a given signal are important issues when selecting the multiplexing scheme. (author)
Capacity-achieving CPM schemes
Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido
2008-01-01
The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...
International Nuclear Information System (INIS)
Ardisson, Claire; Ardisson, Gerard.
1976-01-01
A 165Ho level scheme was constructed, which led to the interpretation of sixty γ rays belonging to the decay of 165Dy. A new 702.9 keV level was identified as the 5/2- member of the 1/2-[541] Nilsson orbit. [fr
Homogenization scheme for acoustic metamaterials
Yang, Min; Ma, Guancong; Wu, Ying; Yang, Zhiyu; Sheng, Ping
2014-01-01
the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost
Homogenization scheme for acoustic metamaterials
Yang, Min
2014-02-26
We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus, can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From Green's theorem applied to the exterior-domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments, and the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.
New practicable Siberian Snake schemes
International Nuclear Information System (INIS)
Steffen, K.
1983-07-01
Siberian Snake schemes can be inserted in ring accelerators to make the spin tune almost independent of energy. Two such schemes are suggested here which lend themselves particularly well to practical application over a wide energy range. Being composed of horizontal and vertical bending magnets, the proposed snakes are designed to have a small maximum beam excursion in one plane. By applying in this plane a bending correction that varies with energy, they can be operated at fixed geometry in the other plane where most of the bending occurs, thus avoiding complicated magnet motion or excessively large magnet apertures that would otherwise be needed for large energy variations. The first of the proposed schemes employs a pair of standard-type Siberian Snakes, i.e. of the usual 1st and 2nd kind, which rotate the spin about the longitudinal and the transverse horizontal axis, respectively. The second scheme employs a pair of novel-type snakes which rotate the spin about either one of the horizontal axes that are at 45° to the beam direction. In obvious reference to these axes, they are called left-pointed and right-pointed snakes. (orig.)
Nonlinear Secret Image Sharing Scheme
Directory of Open Access Journals (Sweden)
Sang-Ho Shin
2014-01-01
To evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t log2(m) bits per pixel (bpp), respectively.
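For context, PSNR is the standard image-quality metric behind the dB figure quoted above; a minimal sketch of how it is computed (the sample pixel rows are our own illustration, not data from the paper):

```python
import math

def psnr(original, distorted, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    if len(original) != len(distorted):
        raise ValueError("sequences must have equal length")
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# A cover row and the same row after embedding nudges two pixel values by 1.
cover = [10, 20, 30, 40]
stego = [10, 21, 30, 39]
print(round(psnr(cover, stego), 2))  # 51.14
```

Higher PSNR means less visible distortion from embedding; values above roughly 40 dB, as reported in the abstract, are generally considered imperceptible.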
Reconciling the Reynolds number dependence of scalar roughness length and laminar resistance
Li, D.; Rigden, A. J.; Salvucci, G.; Liu, H.
2017-12-01
The scalar roughness length and laminar resistance are necessary for computing scalar fluxes in numerical simulations and experimental studies. Their dependence on flow properties such as the Reynolds number remains controversial. In particular, two important power laws (1/4 and 1/2), proposed by Brutsaert and Zilitinkevich, respectively, are commonly seen in various parameterizations and models. Building on a previously proposed phenomenological model for interactions between the viscous sublayer and the turbulent flow, it is shown here that the two scaling laws can be reconciled. The "1/4" power law corresponds to the situation where the vertical diffusion is balanced by the temporal change or advection due to a constant velocity in the viscous sublayer, while the "1/2" power law scaling corresponds to the situation where the vertical diffusion is balanced by the advection due to a linear velocity profile in the viscous sublayer. In addition, the recently proposed "1" power law scaling is also recovered, which corresponds to the situation where molecular diffusion dominates the scalar budget in the viscous sublayer. The formulation proposed here provides a unified framework for understanding the onset of these different scaling laws and offers a new perspective on how to evaluate them experimentally.
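As a rough numerical illustration (our own sketch, with placeholder prefactors rather than values from the cited studies), the practical difference between these power laws lies in how fast the interfacial resistance grows with the roughness Reynolds number:

```python
# Hedged sketch: dimensionless laminar (interfacial) resistance scaling as
# prefactor * Re*^exponent for the three power laws discussed in the abstract.
# Prefactors are placeholders; only the growth rates are compared.
def laminar_resistance(re_star, exponent, prefactor=1.0):
    """Dimensionless laminar resistance ~ prefactor * Re*^exponent."""
    return prefactor * re_star ** exponent

for exponent, label in [(0.25, "1/4 law (Brutsaert)"),
                        (0.5, "1/2 law (Zilitinkevich)"),
                        (1.0, "1 law (molecular diffusion)")]:
    growth = laminar_resistance(1000, exponent) / laminar_resistance(10, exponent)
    print(f"{label}: resistance grows by x{growth:.1f} as Re* goes 10 -> 1000")
```

Over two decades of Re*, the 1/4 law gives only a ~3x increase in resistance while the linear law gives 100x, which is why distinguishing the regimes matters for flux parameterizations.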
Reconciling results of LSND, MiniBooNE and other experiments with soft decoherence
Farzan, Yasaman; Smirnov, Alexei Yu
2008-01-01
We propose an explanation of the LSND signal via quantum decoherence of the mass states, which leads to damping of the interference terms in the oscillation probabilities. The decoherence parameters as well as their energy dependence are chosen in such a way that the damping affects only oscillations with the large (atmospheric) $\Delta m^2$ and rapidly decreases with the neutrino energy. This allows us to reconcile the positive LSND signal with MiniBooNE and other null-result experiments. The standard explanations of solar, atmospheric, KamLAND and MINOS data are not affected. No new particles, and in particular no sterile neutrinos, are needed. The LSND signal is controlled by the 1-3 mixing angle $\theta_{13}$ and, depending on the degree of damping, yields $0.0014 < \sin^2\theta_{13} < 0.034$ at $3\sigma$. The scenario can be tested at upcoming $\theta_{13}$ searches: while the comparison of near and far detector measurements at reactors should lead to a null result, a positive signal for $\theta_{13...
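The damping mechanism described above can be sketched in a minimal two-flavor analogue (our own illustrative toy, not the paper's full three-neutrino formulas): decoherence multiplies the interference term by a factor exp(-gamma), so strong damping drives the probability to its oscillation-averaged value.

```python
import math

def survival(theta, phase, gamma=0.0):
    """Two-flavor survival probability; exp(-gamma) damps the interference term."""
    return 1.0 - 0.5 * math.sin(2 * theta) ** 2 * (1.0 - math.exp(-gamma) * math.cos(phase))

theta, phase = 0.1, 1.5  # stand-in mixing angle and oscillation phase
no_damping = survival(theta, phase)               # gamma = 0: standard oscillation
full_damping = survival(theta, phase, gamma=50.0) # interference washed out

# gamma = 0 recovers the usual formula 1 - sin^2(2*theta) * sin^2(phase/2)
standard = 1.0 - math.sin(2 * theta) ** 2 * math.sin(phase / 2) ** 2
print(abs(no_damping - standard) < 1e-12)  # True
print(round(full_damping, 6))              # 0.980265, i.e. 1 - sin^2(2*theta)/2
```

The paper's point is that choosing gamma large for the atmospheric mass splitting at LSND energies, but rapidly decreasing with energy, produces a signal at LSND while leaving higher-energy null experiments untouched.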
Deep Mixing of 3He: Reconciling Big Bang and Stellar Nucleosynthesis
International Nuclear Information System (INIS)
Eggleton, P P; Dearborn, D P; Lattanzio, J
2006-01-01
Low-mass stars, ∼1-2 solar masses, near the Main Sequence are efficient at producing 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. In this paper we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.
Worden, John R; Bloom, A Anthony; Pandey, Sudhanshu; Jiang, Zhe; Worden, Helen M; Walker, Thomas W; Houweling, Sander; Röckmann, Thomas
2017-12-20
Several viable but conflicting explanations have been proposed to explain the recent ~8 p.p.b. per year increase in atmospheric methane after 2006, equivalent to a net emissions increase of ~25 Tg CH4 per year. A concurrent increase in atmospheric ethane implicates a fossil source; a concurrent decrease in the heavy isotope content of methane points toward a biogenic source, while other studies propose a decrease in the chemical sink (OH). Here we show that biomass burning emissions of methane decreased by 3.7 (±1.4) Tg CH4 per year from the 2001-2007 to the 2008-2014 time periods using satellite measurements of CO and CH4, nearly twice the decrease expected from prior estimates. After updating both the total and isotopic budgets for atmospheric methane with these revised biomass burning emissions (and assuming no change to the chemical sink), we find that fossil fuels contribute between 12 and 19 Tg CH4 per year to the recent atmospheric methane increase, thus reconciling the isotopic- and ethane-based results.
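As a hedged back-of-envelope check of the equivalence quoted above between ~8 p.p.b. per year and ~25 Tg CH4 per year (using standard textbook values for the atmospheric mass and molar masses, not figures from the paper):

```python
# Convert a mixing-ratio growth rate (ppb/yr) into a methane mass rate (Tg/yr).
M_AIR_KG = 5.15e18             # total mass of the atmosphere, kg (textbook value)
MW_CH4, MW_AIR = 16.04, 28.97  # molar masses, g/mol

tg_per_ppb = M_AIR_KG * (MW_CH4 / MW_AIR) * 1e-9 / 1e9  # Tg of CH4 per 1 ppb
burden_increase = 8.0 * tg_per_ppb                       # Tg CH4 per year
print(f"1 ppb CH4 ~ {tg_per_ppb:.2f} Tg; 8 ppb/yr ~ {burden_increase:.0f} Tg/yr")
```

The simple burden conversion gives roughly 23 Tg per year; the abstract's ~25 Tg per year of net emissions is slightly larger because emissions must also offset the chemical sink, which this sketch ignores.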
Carlisle, Nancy B; Woodman, Geoffrey F
2013-10-01
Maintaining a representation in working memory has been proposed to be sufficient for the execution of top-down attentional control. Two recent electrophysiological studies that recorded event-related potentials (ERPs) during similar paradigms have tested this proposal, but have reported contradictory findings. The goal of the present study was to reconcile these previous reports. To this end, we used the stimuli from one study (Kumar, Soto, & Humphreys, 2009) combined with the task manipulations from the other (Carlisle & Woodman, 2011b). We found that when an item matching a working memory representation was presented in a visual search array, we could use ERPs to quantify the size of the covert attention effect. When the working memory matches were consistently task-irrelevant, we observed a weak attentional bias to these items. However, when the same item indicated the location of the search target, we found that the covert attention effect was approximately four times larger. This shows that simply maintaining a representation in working memory is not equivalent to having a top-down attentional set for that item. Our findings indicate that high-level goals mediate the relationship between the contents of working memory and perceptual attention.
Reconciling the self and morality: an empirical model of moral centrality development.
Frimer, Jeremy A; Walker, Lawrence J
2009-11-01
Self-interest and moral sensibilities generally compete with one another, but for moral exemplars, this tension appears to not be in play. This study advances the reconciliation model, which explains this anomaly within a developmental framework by positing that the relationship between the self's interests and moral concerns ideally transforms from one of mutual competition to one of synergy. The degree to which morality is central to an individual's identity-or moral centrality-was operationalized in terms of values advanced implicitly in self-understanding narratives; a measure was developed and then validated. Participants were 97 university students who responded to a self-understanding interview and to several measures of morally relevant behaviors. Results indicated that communal values (centered on concerns for others) positively predicted and agentic (self-interested) values negatively predicted moral behavior. At the same time, the tendency to coordinate both agentic and communal values within narrative thought segments positively predicted moral behavior, indicating that the 2 motives can be adaptively reconciled. Moral centrality holds considerable promise in explaining moral motivation and its development.
Contrasting microbial community assembly hypotheses: a reconciling tale from the Río Tinto.
Palacios, Carmen; Zettler, Erik; Amils, Ricardo; Amaral-Zettler, Linda
2008-01-01
The Río Tinto (RT) is distinguished from other acid mine drainage systems by its natural and ancient origins. Microbial life from all three domains flourishes in this ecosystem, but bacteria dominate metabolic processes that perpetuate environmental extremes. While the patchy geochemistry of the RT likely influences the dynamics of bacterial populations, demonstrating which environmental variables shape microbial diversity and unveiling the mechanisms underlying observed patterns remain major challenges in microbial ecology whose answers rely upon detailed assessments of community structures coupled with fine-scale measurements of physico-chemical parameters. By using high-throughput environmental tag sequencing we achieved saturation of richness estimators for the first time in the RT. We found that environmental factors dictate the distribution of the most abundant taxa in this system, but stochastic niche differentiation processes, such as mutation and dispersal, also contribute to observed diversity patterns. We predict that studies providing clues to the evolutionary and ecological processes underlying microbial distributions will reconcile the ongoing debate between the Baas Becking vs. Hubbell community assembly hypotheses.
Contrasting microbial community assembly hypotheses: a reconciling tale from the Río Tinto.
Directory of Open Access Journals (Sweden)
Carmen Palacios
Full Text Available The Río Tinto (RT) is distinguished from other acid mine drainage systems by its natural and ancient origins. Microbial life from all three domains flourishes in this ecosystem, but bacteria dominate metabolic processes that perpetuate environmental extremes. While the patchy geochemistry of the RT likely influences the dynamics of bacterial populations, demonstrating which environmental variables shape microbial diversity and unveiling the mechanisms underlying observed patterns remain major challenges in microbial ecology whose answers rely upon detailed assessments of community structures coupled with fine-scale measurements of physico-chemical parameters. By using high-throughput environmental tag sequencing we achieved saturation of richness estimators for the first time in the RT. We found that environmental factors dictate the distribution of the most abundant taxa in this system, but stochastic niche differentiation processes, such as mutation and dispersal, also contribute to observed diversity patterns. We predict that studies providing clues to the evolutionary and ecological processes underlying microbial distributions will reconcile the ongoing debate between the Baas Becking vs. Hubbell community assembly hypotheses.
Reconcilability of Socio-Economic Development and Environmental Conservation in Sub-Saharan Africa
Rudi, Lisa-Marie; Azadi, Hossein; Witlox, Frank
2012-04-01
Are the achievements of sustainable development and the improvement of environmental standards mutually exclusive in the 21st century? Is there a possibility to combine the two? This study investigates the mutual exclusiveness of the two policy areas and examines the necessity and possibility of combining them, with reference to Sub-Saharan Africa (SSA). After describing the historical, geographical, and climatic backgrounds of SSA, negative effects of global warming and local environmentally harmful practices are discussed. Subsequently, the appropriate development measures for the region are elaborated in order to understand their compatibility with improving the environment. It is concluded that to change the dependency on agriculture, the economy needs to be restructured towards technologies. Furthermore, it is found that there is a direct link between global warming and economic efficiency. Theories which imply that some regions are simply 'too poor to be green' are investigated and rebutted by another theory, which states that it is indeed possible to industrialize in an environmentally friendly way. It follows that environmental and development measures are interconnected, equally important and can be reconciled. The paper finally concludes that the threat posed by global warming and by previously practised environmentally harmful local measures may be so pressing that adopting a 'develop first and clean up later' approach would be too tragic.
Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.
Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C
2006-12-08
Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.
Reconciling international human rights and cultural relativism: the case of female circumcision.
James, Stephen A
1994-01-01
How can we reconcile, in a non-ethnocentric fashion, the enforcement of international, universal human rights standards with the protection of cultural diversity? Examining this question, taking the controversy over female circumcision as a case study, this article will try to bridge the gap between the traditional anthropological view that human rights are non-existent -- or completely relativised to particular cultures -- and the view of Western naturalistic philosophers (including Lockeian philosophers in the natural rights tradition, and Aquinas and neo-Thomists in the natural law tradition) that they are universal -- simply derived from a basic human nature we all share. After briefly defending a universalist conception of human rights, the article will provide a critique of female circumcision as a human rights violation by three principal means: by an internal critique of the practice using the condoning cultures' own functionalist criteria; by identifying supra-national norms the cultures subscribe to which conflict with the practice; and by the identification of traditional and novel values in the cultures, conducive to those norms. Through this analysis, it will be seen that cultural survival, diversity and flourishing need not be incompatible with upholding international, universal human rights standards.
Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface
Keitzl, T.; Mellado, J. P.; Notz, D.
2016-12-01
The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100 rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors of up to 40% in ice-ablation rates estimated from field measurements based on the three-equation formulation.
Campbell, S M; Sheaff, R; Sibbald, B; Marshall, M N; Pickard, S; Gask, L; Halliwell, S; Rogers, A; Roland, M O
2002-03-01
To investigate the concept of clinical governance being advocated by primary care groups/trusts (PCG/Ts), approaches being used to implement clinical governance, and potential barriers to its successful implementation in primary care. Qualitative case studies using semi-structured interviews and documentation review. Twelve purposively sampled PCG/Ts in England. Fifty senior staff including chief executives, clinical governance leads, mental health leads, and lay board members. Participants' perceptions of the role of clinical governance in PCG/Ts. PCG/Ts recognise that the successful implementation of clinical governance in general practice will require cultural as well as organisational changes, and the support of practices. They are focusing their energies on supporting practices and getting them involved in quality improvement activities. These activities include, but move beyond, conventional approaches to quality assessment (audit, incentives) to incorporate approaches which emphasise corporate and shared learning. PCG/Ts are also engaged in setting up systems for monitoring quality and for dealing with poor performance. Barriers include structural barriers (weak contractual levers to influence general practices), resource barriers (perceived lack of staff or money), and cultural barriers (suspicion by practice staff or problems overcoming the perceived blame culture associated with quality assessment). PCG/Ts are focusing on setting up systems for implementing clinical governance which seek to emphasise developmental and supportive approaches which will engage health professionals. Progress is intentionally incremental but formidable challenges lie ahead, not least reconciling the dual role of supporting practices while monitoring (and dealing with poor) performance.
An intercomparison of biogenic emissions estimates from BEIS2 and BIOME: Reconciling the differences
Energy Technology Data Exchange (ETDEWEB)
Wilkinson, J.G. [Alpine Geophysics, Pittsburgh, PA (United States); Emigh, R.A. [Alpine Geophysics, Boulder, CO (United States); Pierce, T.E. [Atmospheric Characterization and Modeling Division/NOAA, Research Triangle Park, NC (United States)
1996-12-31
Biogenic emissions play a critical role in urban and regional air quality. For instance, biogenic emissions contribute upwards of 76% of the daily hydrocarbon emissions in the Atlanta, Georgia airshed. The Biogenic Emissions Inventory System-Version 2.0 (BEIS2) and the Biogenic Model for Emissions (BIOME) are two models that compute biogenic emissions estimates. BEIS2 is a FORTRAN-based system, and BIOME is an ARC/INFO®- and SAS®-based system. Although the technical formulations of the models are similar, the models produce different biogenic emissions estimates for what appear to be essentially the same inputs. The goals of our study are the following: (1) determine why BIOME and BEIS2 produce different emissions estimates; (2) attempt to understand the impacts that the differences have on the emissions estimates; (3) reconcile the differences where possible; and (4) present a framework for the use of BEIS2 and BIOME. In this study, we used the Coastal Oxidant Assessment for Southeast Texas (COAST) biogenics data, which were supplied to us courtesy of the Texas Natural Resource Conservation Commission (TNRCC), and we extracted the BEIS2 data for the same domain. We compared the emissions estimates of the two models using their respective data sets: BIOME using TNRCC data and BEIS2 using BEIS2 data.
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, with 10-15 iterations sufficing to reduce the residuals by 10 orders of magnitude. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.
International Nuclear Information System (INIS)
Shin, J. K.; Choi, Y. D.
1992-01-01
The QUICKER scheme has several attractive properties. However, under highly convective conditions it produces overshoots, and possibly some oscillations, on each side of steps in the dependent variable when the flow is convected at an angle oblique to the grid lines. Fortunately, it is possible to modify the QUICKER scheme using non-linear and linear functional relationships. Details of the development of the polynomial upwinding scheme are given in this paper, where it is shown that this non-linear scheme also has third-order accuracy. The polynomial upwinding scheme is used as the basis for the SHARPER and SMARTER schemes. Another revised scheme (QUICKUP) was developed by partially modifying the QUICKER scheme using the CDS and UPWIND schemes. These revised schemes are tested on the well-known benchmark flows: two-dimensional pure convection flow over an oblique step, lid-driven cavity flow, and buoyancy-driven cavity flow. The revised schemes remain absolutely monotonic, without overshoot or oscillation. The QUICKUP scheme is more accurate than any other scheme in relative accuracy. In high-Reynolds-number lid-driven cavity flow, the SMARTER and SHARPER schemes retain a lower computational cost than the QUICKER and QUICKUP schemes, but they predict lower velocity values than the QUICKER scheme, which is strongly affected by overshoot and undershoot. Also, in buoyancy-driven cavity flow, the SMARTER, SHARPER and QUICKUP schemes give acceptable results. (Author)
Secure Dynamic access control scheme of PHR in cloud computing.
Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching
2012-12-01
With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely applied. A new style of medical information exchange system, "personal health records (PHR)", is gradually being developed. A PHR is a kind of health record maintained and recorded by individuals. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records. Such management is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud. Besides, storing PHRs on a Cloud server requires a secure protection scheme to encrypt the medical records of each patient. In the encryption process, it is a challenge to achieve accurate access to medical records while retaining flexibility and efficiency. A new PHR access control scheme under Cloud computing environments is proposed in this study. Using a Lagrange interpolation polynomial to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for large numbers of users. Moreover, this scheme dynamically supports multi-users in Cloud computing environments with personal privacy and offers legal authorities access to PHRs. From security and effectiveness analyses, the proposed PHR access
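The abstract names a Lagrange interpolation polynomial as the mathematical basis of the access scheme; a minimal Shamir-style (k, n) threshold sketch built on the same mathematics, under our own assumptions about the construction (the paper's exact protocol is not reproduced here), looks like this:

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus, chosen here purely for illustration

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P - 2, P) is the modular inverse of den (P is prime)
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of 5 shares suffice -> True
```

In an access-control setting, the "secret" would be a record-encryption key and the shares would be distributed among authorized parties, so that no party below the threshold can decrypt alone.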
Fast and accurate determination of modularity and its effect size
International Nuclear Information System (INIS)
Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I
2015-01-01
We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all of these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)
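The spectral algorithm itself is not reproduced here, but the quantity it maximizes is easy to state in code; a minimal sketch (our own illustration) of Newman-Girvan modularity for a given partition of an undirected graph:

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q = sum over communities of [L_c/m - (d_c/2m)^2],
    where m is the edge count, L_c the intra-community edges, d_c the degree sum."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for nodes in communities:
        nodes = set(nodes)
        l_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: the natural two-community split.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 4))  # 0.3571
```

The z-score the paper proposes would compare a network's maximum Q against the mean and variance of maximum modularity in an Erdős–Rényi ensemble with the same numbers of nodes and links; that ensemble calculation is the paper's contribution and is not sketched here.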
An Integrative Approach to Accurate Vehicle Logo Detection
Directory of Open Access Journals (Sweden)
Hao Pan
2013-01-01
Accurate vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, namely vehicle region detection and small RoI segmentation, rapidly localize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.
Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion
International Nuclear Information System (INIS)
Cui, Xia; Yuan, Guang-wei; Shen, Zhi-jun
2016-01-01
Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis on time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, both the implicitly balanced (IB) and linearly implicit (LI) methods were shown asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for flux-limiters on boundary, and discrete schemes with second-order accuracy on global spatial domain are acquired consequently. Then by employing formal asymptotic analysis, the first-order asymptotic-preserving property for these schemes and furthermore for the fully discrete schemes is shown. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence as theory predicts, hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second order accurate schemes by asymmetric approach for boundary flux-limiter. • Show first order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.
Support Schemes and Ownership Structures
DEFF Research Database (Denmark)
Ropenus, Stephanie; Schröder, Sascha Thorsten; Costa, Ana
In recent years, fuel cell based micro-combined heat and power has received increasing attention due to its potential contribution to energy savings, efficiency gains, customer proximity and flexibility in operation and capacity size. The FC4Home project assesses technical and economic aspects … of support scheme simultaneously affects risk and technological development, which is the focus of Section 4. Subsequent to this conceptual overview, Section 5 takes a glance at the national application of support schemes for mCHP in practice, notably in the three country cases of the FC4Home project, Denmark, France and Portugal. Another crucial aspect for the diffusion of the mCHP technology is possible ownership structures. These may range from full consumer ownership to ownership by utilities and energy service companies, which is discussed in Section 6. Finally, a conclusion (Section 7) wraps up …
[PICS: pharmaceutical inspection cooperation scheme].
Morénas, J
2009-01-01
The Pharmaceutical Inspection Co-operation Scheme (PICS) is a structure comprising 34 participating authorities located worldwide (October 2008). It was created in 1995 on the basis of the Pharmaceutical Inspection Convention (PIC), established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP); to train inspectors (by way of an annual seminar and expert circles related notably to active pharmaceutical ingredients [API], quality risk management and computerized systems, useful for the writing of inspection aide-memoires); to maintain high standards among GMP inspectorates (through regular crossed audits); and to provide a forum for exchanges on technical matters between inspectors, as well as between inspectors and the pharmaceutical industry.
Project financing renewable energy schemes
International Nuclear Information System (INIS)
Brandler, A.
1993-01-01
The viability of many Renewable Energy projects is critically dependent upon the ability of these projects to secure the necessary financing on acceptable terms. The principal objective of the study was to provide an overview to project developers of project financing techniques and the conditions under which project finance for Renewable Energy schemes could be raised, focussing on the potential sources of finance, the typical project financing structures that could be utilised for Renewable Energy schemes and the risk/return and security requirements of lenders, investors and other potential sources of financing. A second objective is to describe the appropriate strategy and tactics for developers to adopt in approaching the financing markets for such projects. (author)
Network Regulation and Support Schemes
DEFF Research Database (Denmark)
Ropenus, Stephanie; Schröder, Sascha Thorsten; Jacobsen, Henrik
2009-01-01
At present, there exists no explicit European policy framework on distributed generation. Various Directives encompass distributed generation; inherently, their implementation is at the discretion of the Member States. The latter have adopted different kinds of support schemes, ranging from feed-in tariffs to market-based quota systems, and network regulation approaches, comprising rate-of-return and incentive regulation. National regulation and the vertical structure of the electricity sector shape the incentives of market agents, notably of distributed generators and network operators. This article seeks to investigate the interactions between the policy dimensions of support schemes and network regulation and how they affect the deployment of distributed generation. Firstly, a conceptual analysis examines how the incentives of the different market agents are affected. In particular ...
Distance labeling schemes for trees
DEFF Research Database (Denmark)
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben
2016-01-01
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille ...] ... variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that (1/4)·log²(n) bits are needed and that (1/2)·log²(n) bits are sufficient. We also give (1 + ε)-stretch labeling schemes using Theta ...
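As a concrete illustration of what a distance labeling scheme computes, the sketch below labels each node with its root path, a naive scheme using far more than log²(n) bits per label and shown purely for exposition, not the paper's construction. The distance between two nodes is recovered from the labels alone via their longest common prefix; the tree representation and function names are ours.

```python
# Toy distance labeling for trees: each node's label is the tuple of child
# indices along its root path. Distance is computed from two labels alone.

def make_labels(tree, root=0):
    """tree: adjacency dict {node: [children...]}. Returns {node: label tuple}."""
    labels = {root: ()}
    stack = [root]
    while stack:
        u = stack.pop()
        for i, c in enumerate(tree.get(u, [])):
            labels[c] = labels[u] + (i,)   # extend the parent's root path
            stack.append(c)
    return labels

def distance(label_u, label_v):
    """Tree distance from the labels: depths minus twice the LCA depth."""
    lcp = 0
    for a, b in zip(label_u, label_v):
        if a != b:
            break
        lcp += 1
    return (len(label_u) - lcp) + (len(label_v) - lcp)
```

The longest common prefix of the two root paths identifies the lowest common ancestor, so `distance` is exact; the schemes in the abstract achieve the same query with exponentially shorter labels.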
Small-scale classification schemes
DEFF Research Database (Denmark)
Hertzum, Morten
2004-01-01
Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers ...
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2012-01-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
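The PE(CE)^m idea, predict with an explicit multistep formula and then apply m correct-evaluate passes, can be sketched for a generic ODE y' = f(t, y). This is a hedged illustration with a textbook Adams-Bashforth-2 predictor and trapezoidal corrector, not the authors' MOT-specific update.

```python
# One PE(CE)^m step: Predict (AB2), then m rounds of Correct-Evaluate
# (trapezoidal rule). f_prev is f at the previous time level.
def pece_m_step(f, t, y, f_prev, h, m=2):
    fy = f(t, y)                                   # E at the current level
    y_new = y + h * (1.5 * fy - 0.5 * f_prev)      # P: AB2 predictor
    for _ in range(m):                             # (CE)^m passes
        f_new = f(t + h, y_new)                    # E: evaluate
        y_new = y + 0.5 * h * (fy + f_new)         # C: trapezoidal correct
    return y_new, fy                               # fy becomes next f_prev
```

Each extra corrector pass pulls the update closer to the implicit trapezoidal solution while keeping the step fully explicit, which mirrors how the abstract's scheme buys stability without a solve.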
In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli
Directory of Open Access Journals (Sweden)
Matthew Almond Sochor
2014-07-01
Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
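A minimal sketch of the kind of thermodynamic linkage described, with invented constants rather than the paper's fitted parameters: effector binding depletes the DNA-binding-competent repressor pool, operator occupancy follows a simple binding isotherm, and expression is taken as the unoccupied fraction.

```python
# Toy thermodynamic model of repression. All constants (K_op, K_eff, n_eff)
# are illustrative assumptions, not values from the study.
def operator_occupancy(R_total, effector, K_op, K_eff, n_eff=2):
    # fraction of repressor still able to bind DNA, assuming each of n_eff
    # independent effector sites inactivates the repressor when occupied
    active = R_total * (1.0 / (1.0 + effector / K_eff)) ** n_eff
    return active / (K_op + active)          # binding isotherm on the operator

def relative_expression(R_total, effector, K_op, K_eff):
    # expression proportional to the fraction of time the operator is free
    return 1.0 - operator_occupancy(R_total, effector, K_op, K_eff)
```

With no effector and repressor well above K_op the gene is nearly silent; saturating effector releases the operator and expression approaches the unrepressed level, which is the qualitative behavior the paper's full model quantifies.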
Cambridge community Optometry Glaucoma Scheme.
Keenan, Jonathan; Shahid, Humma; Bourne, Rupert R; White, Andrew J; Martin, Keith R
2015-04-01
With a higher life expectancy, there is an increased demand for hospital glaucoma services in the United Kingdom. The Cambridge community Optometry Glaucoma Scheme (COGS) was initiated in 2010, in which new referrals for suspected glaucoma are evaluated by community optometrists with a special interest in glaucoma, with virtual electronic review and validation by a consultant ophthalmologist with a special interest in glaucoma. 1733 patients were evaluated by this scheme between 2010 and 2013. Clinical assessment is performed by the optometrist at a remote site. Goldmann applanation tonometry, pachymetry, monoscopic colour optic disc photographs and automated Humphrey visual field testing are performed. A clinical decision is made as to whether a patient has glaucoma or is a suspect, and the patient is referred on or discharged as a false positive referral. The clinical findings, optic disc photographs and visual field test results are transmitted electronically for virtual review by a consultant ophthalmologist. The main outcome measure was the number of false positive referrals from initial referral into the scheme. Of the patients, 46.6% were discharged at assessment and a further 5.7% were discharged following virtual review. Of the patients initially discharged, 2.8% were recalled following virtual review. Following assessment at the hospital, a further 10.5% were discharged after a single visit. The COGS community-based glaucoma screening programme is a safe and effective way of evaluating glaucoma referrals in the community and reducing false-positive referrals for glaucoma into the hospital system. © 2014 Royal Australian and New Zealand College of Ophthalmologists.
New schemes for particle accelerators
International Nuclear Information System (INIS)
Nishida, Y.
1985-01-01
In the present paper, the authors propose new schemes for realizing the v_p×B accelerator without using a plasma system to produce the strong longitudinal waves. The first method uses a grating to obtain extended interaction between an electron beam moving along the grating surface and a light beam incident along the same surface. Here, the light beam propagates obliquely to the grating grooves to produce a strong electric field, and the electron beam propagates parallel to the light beam. A static magnetic field is applied perpendicular to the grating surface. In this system, the beam interacts synchronously with the p-polarized wave, whose electric field is parallel to the grating surface. Another scheme uses a delay circuit: here, the light beam propagates obliquely between a pair of arrays of conductor fins or slots. The phase velocity of the spatial harmonics in the y-direction (at right angles to the array of slots) is slower than the speed of light. With the aid of a powerful laser or microwave source, it should be possible to miniaturise linacs by using the v_p×B effect and the schemes proposed here
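The slow-wave behaviour in the second scheme rests on the standard spatial-harmonic relation v_n = ω/(k + 2πn/d) for a structure of period d. The snippet below evaluates it with generic numbers, not values from the paper, to show that higher harmonics fall below the speed of light, which is what allows synchronism with the electron beam.

```python
# Phase velocity of the n-th spatial (Floquet) harmonic in a periodic
# slow-wave structure: v_n = omega / (k0 + 2*pi*n/d). Frequencies and the
# period below are illustrative assumptions.
import math

C = 299_792_458.0               # speed of light, m/s

def harmonic_phase_velocity(freq_hz, k0, period_m, n):
    omega = 2.0 * math.pi * freq_hz
    return omega / (k0 + 2.0 * math.pi * n / period_m)
```

For a 30 GHz wave launched at the speed of light along a 5 mm period structure, the n = 1 harmonic travels at roughly c/3, slow enough to stay in step with a sub-relativistic beam.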
Reconciling the understanding of 'hydrophobicity' with physics-based models of proteins.
Harris, Robert C; Pettitt, B Montgomery
2016-03-02
The idea that a 'hydrophobic energy' drives protein folding, aggregation, and binding by favoring the sequestration of bulky residues from water into the protein interior is widespread. The solvation free energies (ΔGsolv) of small nonpolar solutes increase with surface area (A), and the free energies of creating macroscopic cavities in water increase linearly with A. These observations seem to imply that there is a hydrophobic component (ΔGhyd) of ΔGsolv that increases linearly with A, and this assumption is widely used in implicit solvent models. However, some explicit-solvent molecular dynamics studies appear to contradict these ideas. For example, one definition (ΔG(LJ)) of ΔGhyd is that it is the free energy of turning on the Lennard-Jones (LJ) interactions between the solute and solvent. However, ΔG(LJ) decreases with A for alanine and glycine peptides. Here we argue that these apparent contradictions can be reconciled by defining ΔGhyd to be a near hard core insertion energy (ΔGrep), as in the partitioning proposed by Weeks, Chandler, and Andersen. However, recent results have shown that ΔGrep is not a simple function of geometric properties of the molecule, such as A and the molecular volume, and that the free energy of turning on the attractive part of the LJ potential cannot be computed from first-order perturbation theory for proteins. The theories that have been developed from these assumptions to predict ΔGhyd are therefore inadequate for proteins.
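For contrast, the surface-area assumption the abstract critiques is the linear model used in many implicit-solvent codes. A minimal sketch, with a commonly quoted surface-tension coefficient that is an assumption here, not a value from the paper:

```python
# Linear surface-area (SASA) model of the hydrophobic term:
# dG_hyd = gamma * A + b. Both constants are illustrative defaults.
GAMMA = 0.005   # kcal/mol per A^2 (typical default, assumption)
B = 0.0         # offset, kcal/mol

def dg_hydrophobic(area_A2):
    """Surface-area-proportional estimate of the hydrophobic free energy."""
    return GAMMA * area_A2 + B
```

The abstract's argument is precisely that ΔGrep, unlike this model, is not a simple linear function of A for proteins.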
Lemaire, Gilles; Gastal, François; Franzluebbers, Alan; Chabbi, Abad
2015-11-01
A need to increase agricultural production across the world to ensure continued food security appears to be at odds with the urgency to reduce the negative environmental impacts of intensive agriculture. Around the world, intensification has been associated with massive simplification and uniformity at all levels of organization, i.e., field, farm, landscape, and region. Therefore, we postulate that negative environmental impacts of modern agriculture are due more to production simplification than to inherent characteristics of agricultural productivity. Thus by enhancing diversity within agricultural systems, it should be possible to reconcile high quantity and quality of food production with environmental quality. Intensification of livestock and cropping systems separately within different specialized regions inevitably leads to unacceptable environmental impacts because of the overly uniform land use system in intensive cereal areas and excessive N-P loads in intensive animal areas. The capacity of grassland ecosystems to couple C and N cycles through microbial-soil-plant interactions as a way for mitigating the environmental impacts of intensive arable cropping system was analyzed in different management options: grazing, cutting, and ley duration, in order to minimize trade-offs between production and the environment. We suggest that integrated crop-livestock systems are an appropriate strategy to enhance diversity. Sod-based rotations can temporally and spatially capture the benefits of leys for minimizing environmental impacts, while still maintaining periods and areas of intensive cropping. Long-term experimental results illustrate the potential of such systems to sequester C in soil and to reduce and control N emissions to the atmosphere and hydrosphere.
Schneider, David P.; Deser, Clara
2017-09-01
Recent work suggests that natural variability has played a significant role in the increase of Antarctic sea ice extent during 1979-2013. The ice extent has responded strongly to atmospheric circulation changes, including a deepened Amundsen Sea Low (ASL), which in part has been driven by tropical variability. Nonetheless, this increase has occurred in the context of externally forced climate change, and it has been difficult to reconcile observed and modeled Antarctic sea ice trends. To understand observed-model disparities, this work defines the internally driven and radiatively forced patterns of Antarctic sea ice change and exposes potential model biases using results from two sets of historical experiments of a coupled climate model compared with observations. One ensemble is constrained only by external factors such as greenhouse gases and stratospheric ozone, while the other explicitly accounts for the influence of tropical variability by specifying observed SST anomalies in the eastern tropical Pacific. The latter experiment reproduces the deepening of the ASL, which drives an increase in regional ice extent due to enhanced ice motion and sea surface cooling. However, the overall sea ice trend in every ensemble member of both experiments is characterized by ice loss and is dominated by the forced pattern, as given by the ensemble-mean of the first experiment. This pervasive ice loss is associated with a strong warming of the ocean mixed layer, suggesting that the ocean model does not locally store or export anomalous heat efficiently enough to maintain a surface environment conducive to sea ice expansion. The pervasive upper-ocean warming, not seen in observations, likely reflects ocean mean-state biases.
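The forced/internal separation used here follows the standard ensemble decomposition: the radiatively forced pattern is estimated by the ensemble mean, and each member's internally driven component is its deviation from that mean. A minimal sketch, with array shapes and names of our choosing:

```python
# Ensemble decomposition of trend maps into forced and internal components.
import numpy as np

def decompose(ensemble):
    """ensemble: array (n_members, ...) of, e.g., sea-ice trend maps."""
    forced = ensemble.mean(axis=0)          # forced-response estimate
    internal = ensemble - forced            # internal variability per member
    return forced, internal
```

By construction the internal components average to zero across members, so any signal common to every member, such as the pervasive ice loss described above, shows up in the forced pattern.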
Between Scylla and Charybdis: reconciling competing data management demands in the life sciences.
Bezuidenhout, Louise M; Morrison, Michael
2016-05-17
The widespread sharing of biological and biomedical data is recognised as a key element in facilitating translation of scientific discoveries into novel clinical applications and services. At the same time, twenty-first century states are increasingly concerned that this data could also be used for purposes of bioterrorism. There is thus a tension between the desire to promote the sharing of data, as encapsulated by the Open Data movement, and the desire to prevent this data from 'falling into the wrong hands' as represented by 'dual use' policies. Both frameworks posit a moral duty for life sciences researchers with respect to how they should make their data available. However, Open Data and dual use concerns are rarely discussed in concert and their implementation can present scientists with potentially conflicting ethical requirements. Both dual use and Open Data policies frame scientific data and data dissemination in particular, though different, ways. As such they contain implicit models for how data is translated. Both approaches are limited by a focus on abstract conceptions of data and data sharing. This works to impede consensus-building between the two ethical frameworks. As an alternative, this paper proposes that an ethics of responsible management of scientific data should be based on a more nuanced understanding of the everyday data practices of life scientists. Responsibility for these 'micromovements' of data must consider the needs and duties of scientists as individuals and as collectively-organised groups. Researchers in the life sciences are faced with conflicting ethical responsibilities to share data as widely as possible, but prevent it being used for bioterrorist purposes. In order to reconcile the responsibilities posed by the Open Data and dual use frameworks, approaches should focus more on the everyday practices of laboratory scientists and less on abstract conceptions of data.
Davies, Althea L; White, Rehema M
2012-12-15
The challenges of integrated, adaptive and ecosystem management are leading government agencies to adopt participatory modes of engagement. Collaborative governance is a form of participation in which stakeholders co-produce goals and strategies and share responsibilities and resources. We assess the potential and challenges of collaborative governance as a mechanism to provide an integrated, ecosystem approach to natural resource management, using red deer in Scotland as a case study. Collaborative Deer Management Groups offer a well-established example of a 'bridging organisation', intended to reduce costs and facilitate decision making and learning across institutions and scales. We examine who initiates collaborative processes and why, what roles different actors adopt and how these factors influence the outcomes, particularly at a time of changing values, management and legislative priorities. Our findings demonstrate the need for careful consideration of where and how shared responsibility might be best implemented and sustained as state agencies often remain key to the process, despite the partnership intention. Differing interpretations between agencies and landowners of the degree of autonomy and division of responsibilities involved in 'collaboration' can create tension, while the diversity of landowner priorities brings additional challenges for defining shared goals in red deer management and in other cases. Effective maintenance depends on appropriate role allocation and adoption of responsibilities, definition of convergent values and goals, and establishing communication and trust in institutional networks. Options that may help private stakeholders offset the costs of accepting responsibility for delivering public benefits need to be explicitly addressed to build capacity and support adaptation. This study indicates that collaborative governance has the potential to help reconcile statutory obligations with stakeholder empowerment. The potential of
Flávio, H M; Ferreira, P; Formigo, N; Svendsen, J C
2017-10-15
Agriculture is widespread across the EU and has caused considerable impacts on freshwater ecosystems. To revert the degradation caused to streams and rivers, research and restoration efforts have been developed to recover ecosystem functions and services, with the European Water Framework Directive (WFD) playing a significant role in strengthening the progress. Analysing recent peer-reviewed European literature (2009-2016), this review explores 1) the conflicts and difficulties faced when restoring agriculturally impacted streams, 2) the aspects relevant to effectively reconcile agricultural land uses and healthy riverine ecosystems and 3) the effects and potential shortcomings of the first WFD management cycle. Our analysis reveals significant progress in restoration efforts, but it also demonstrates an urgent need for a higher number and detail of restoration projects reported in the peer-reviewed literature. The first WFD cycle ended in 2015 without reaching the goal of good ecological status in many European water-bodies. Addressing limitations reported in recent papers, including difficulties in stakeholder integration and importance of small headwater streams, is crucial. Analysing recent developments on stakeholder engagement through structured participatory processes will likely reduce perception discrepancies and increase stakeholder interest during the next WFD planning cycle. Despite an overall dominance of nutrient-related research, studies are spreading across many important topics (e.g. stakeholder management, land use conflicts, climate change effects), which may play an important role in guiding future policy. Our recommendations are important for the second WFD cycle because they 1) help secure the development and dissemination of science-based restoration strategies and 2) provide guidance for future research needs. Copyright © 2017 Elsevier B.V. All rights reserved.
RECONCILING THE OBSERVED STAR-FORMING SEQUENCE WITH THE OBSERVED STELLAR MASS FUNCTION
International Nuclear Information System (INIS)
Leja, Joel; Van Dokkum, Pieter G.; Franx, Marijn; Whitaker, Katherine E.
2015-01-01
We examine the connection between the observed star-forming sequence (SFR ∝ M^α) and the observed evolution of the stellar mass function in the range 0.2 < z < 2.5. We find that the star-forming sequence cannot have a slope α ≲ 0.9 at all masses and redshifts, because this would result in a much higher number density at 10 < log(M/M_☉) < 11 by z = 1 than is observed. We show that a transition in the slope of the star-forming sequence, such that α = 1 at log(M/M_☉) < 10.5 and α = 0.7 − 0.13z (Whitaker et al.) at log(M/M_☉) > 10.5, greatly improves agreement with the evolution of the stellar mass function. We then derive a star-forming sequence that reproduces the evolution of the mass function by design. This star-forming sequence is also well described by a broken power law, with a shallow slope at high masses and a steep slope at low masses. At z = 2, it is offset by ∼0.3 dex from the observed star-forming sequence, consistent with the mild disagreement between the cosmic star formation rate (SFR) and recent observations of the growth of the stellar mass density. It is unclear whether this problem stems from errors in stellar mass estimates, errors in SFRs, or other effects. We show that a mass-dependent slope is also seen in other self-consistent models of galaxy evolution, including semianalytical, hydrodynamical, and abundance-matching models. As part of the analysis, we demonstrate that neither mergers nor hidden low-mass quiescent galaxies are likely to reconcile the evolution of the mass function and the star-forming sequence. These results are supported by observations from Whitaker et al
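The proposed broken power law can be written down directly. The sketch below implements the quoted slopes, with the normalization and break mass treated as assumptions since the abstract does not give the actual zero-points:

```python
# Broken power-law star-forming sequence: slope alpha = 1 below the break
# mass and alpha = 0.7 - 0.13*z above it (slopes quoted in the abstract;
# log_a and logM_break are placeholder assumptions).
import numpy as np

def log_sfr(logM, z, log_a=0.0, logM_break=10.5):
    alpha_low, alpha_high = 1.0, 0.7 - 0.13 * z
    logM = np.asarray(logM, dtype=float)
    low = log_a + alpha_low * (logM - logM_break)
    high = log_a + alpha_high * (logM - logM_break)
    return np.where(logM < logM_break, low, high)
```

Anchoring both branches at the break mass keeps the sequence continuous, with the shallow high-mass slope flattening further at higher redshift.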
Johnson, A.; Reinhard, C. T.; Romaniello, S. J.; Greaney, A. T.; Garcia-Robledo, E.; Revsbech, N. P.; Canfield, D. E.; Lyons, T. W.; Anbar, A. D.
2016-12-01
The Archean-Proterozoic transition is marked by the first appreciable accumulation of O2 in Earth's oceans and atmosphere at 2.4 billion years ago (Ga). However, this Great Oxidation Event (GOE) is not the first evidence for O2 in Earth's surface environment. Paleoredox proxies preserved in ancient marine shales (Mo, Cr, Re, U) suggest transient episodes of oxidative weathering before the GOE, perhaps as early as 3.0 Ga. One marine shale in particular, the 2.5 Ga Mount McRae Shale of Western Australia, contains a euxinic interval with Mo enrichments up to 50 ppm. This enrichment is classically interpreted as the result of oxidative weathering of sulfides on the continental surface. However, prior weathering models based on experiments suggested that sulfides require large amounts of O2 [>10⁻⁴ present atmospheric level (PAL) pO2] to produce this weathering signature, in conflict with estimates of Archean pO2 from non-mass-dependent (NMD) sulfur isotope anomalies ... molybdenite from 3-700 nM O2 (equivalent at equilibrium to 10⁻⁵-10⁻³ PAL) to measure oxidation kinetics as a function of the concentration of dissolved O2. We measured rates by injecting oxygenated water at a steady flow rate and monitoring dissolved O2 concentrations with LUMOS sensors. Our data extend the O2 range explored in pyrite oxidation experiments by three orders of magnitude and provide the first rates for molybdenite oxidation at O2 concentrations potentially analogous to those characteristic of the Archean atmosphere. Our results show that pyrite and molybdenite oxidize significantly more rapidly at lower O2 levels than previously thought. As a result, our revised weathering model demonstrates that the Mo enrichments observed in late Archean marine shales are potentially attainable at extremely low atmospheric pO2 values (e.g., <10⁻⁵ PAL), reconciling large sedimentary Mo enrichments with co-occurring NMD sulfur isotope anomalies.
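Rate data of this kind are commonly summarized by a power law r = k[O2]^n fitted in log-log space. The sketch below performs such a fit on invented data; the abstract does not report the study's actual rate-law parameters:

```python
# Least-squares fit of log r = log k + n * log c to concentration/rate pairs.
import math

def fit_power_law(conc, rate):
    """Returns (k, n) for the power law rate = k * conc**n."""
    xs = [math.log(c) for c in conc]
    ys = [math.log(r) for r in rate]
    n_pts = len(xs)
    xbar = sum(xs) / n_pts
    ybar = sum(ys) / n_pts
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope      # (k, n)
```

A fitted exponent n well below 1 would express the paper's qualitative finding that oxidation rates stay surprisingly high as dissolved O2 drops.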
Dotta, G; Phalan, B; Silva, T W; Green, R; Balmford, A
2016-06-01
Globally, agriculture is the greatest source of threat to biodiversity, through both ongoing conversion of natural habitat and intensification of existing farmland. Land sparing and land sharing have been suggested as alternative approaches to reconcile this threat with the need for land to produce food. To examine which approach holds most promise for grassland species, we examined how bird population densities changed with farm yield (production per unit area) in the Campos of Brazil and Uruguay. We obtained information on biodiversity and crop yields from 24 sites that differed in agricultural yield. Density-yield functions were fitted for 121 bird species to describe the response of population densities to increasing farm yield, measured in terms of both food energy and profit. We categorized individual species according to how their population changed across the yield gradient as being positively or negatively affected by farming and according to whether the species' total population size was greater under land-sparing, land-sharing, or an intermediate strategy. Irrespective of the yield, most species were negatively affected by farming. Increasing yields reduced densities of approximately 80% of bird species. We estimated land sparing would result in larger populations than other sorts of strategies for 67% to 70% of negatively affected species, given current production levels, including three threatened species. This suggests that increasing yields in some areas while reducing grazing to low levels elsewhere may be the best option for bird conservation in these grasslands. Implementing such an approach would require conservation and production policies to be explicitly linked to support yield increases in farmed areas and concurrently guarantee that larger areas of lightly grazed natural grasslands are set aside for conservation. © 2015 Society for Conservation Biology.
Zebker, H. A.; Wye, L. C.; Janssen, M.; Paganelli, F.; Cassini RADAR Team
2006-12-01
We observe Titan, Saturn's largest moon, using active and passive microwave instruments carried on board the Cassini spacecraft. The 2.2-cm wavelength penetrates the thick atmosphere and provides surface measurements at resolutions from 10-200 km over much of the satellite's surface. The emissivity and reflectivity of surface features are generally anticorrelated, and both values are fairly high. Inversion of either set of data alone yields dielectric constants ranging from 1.5 to 3 or 4, consistent with an icy hydrocarbon or water ice composition. However, the dielectric constants retrieved from radiometric data alone are usually less than those inferred from backscatter measurements, a discrepancy consistent with similar analyses dating back to lunar observations in the 1960's. Here we seek to reconcile Titan's reflectivity and emissivity observations using a single physical model of the surface. Our approach is to calculate the energy scattered by Titan's surface and near subsurface, with the remainder absorbed. In equilibrium the absorption equals the emission, so that both the reflectivity and emissivity are described by the model. We use a form of the Kirchhoff model for modeling surface scatter, and a model based on weak localization of light for the volume scatter. With this model we present dielectric constant and surface roughness parameters that match both sets of Cassini RADAR observations over limited regions on Titan's surface, helping to constrain the composition and roughness of the surface. Most regions display electrical properties consistent with solid surfaces, however some of the darker "lake-like" features at higher latitudes can be modeled as either solid or liquid materials. The ambiguity arises from the limited set of observational angles available.
A Memory Efficient Network Encryption Scheme
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we study the two most widely used encryption schemes in network applications. Shortcomings were found in both: each scheme either consumes more memory to achieve high throughput, or uses little memory but delivers low throughput. As the number of internet users grows each day, the need has arisen for a scheme with both low memory requirements and high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme achieves high throughput together with low memory requirements.
An Arbitrated Quantum Signature Scheme without Entanglement*
International Nuclear Information System (INIS)
Li Hui-Ran; Luo Ming-Xing; Peng Dai-Yuan; Wang Xiao-Jun
2017-01-01
Several quantum signature schemes have recently been proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature, as one nontrivial scheme, has attracted great interest because of its usefulness and efficiency. Unfortunately, previous schemes cannot withstand Trojan horse and denial-of-service (DoS) attacks, and lack unforgeability and non-repudiation. In this paper, we propose an improved arbitrated quantum signature that addresses these security issues with an honest arbitrator. Our scheme uses qubit states rather than entangled states. More importantly, the qubit scheme achieves unforgeability and non-repudiation, and is also secure against other known quantum attacks. (paper)
A second-order iterative implicit-explicit hybrid scheme for hyperbolic systems of conservation laws
International Nuclear Information System (INIS)
Dai, Wenlong; Woodward, P.R.
1996-01-01
An iterative implicit-explicit hybrid scheme is proposed for hyperbolic systems of conservation laws. Each wave in a system may be treated implicitly, explicitly, or partially implicitly and partially explicitly, depending on its associated Courant number in each numerical cell, and the scheme is able to switch smoothly between implicit and explicit calculations. The scheme is of Godunov-type in both explicit and implicit regimes, is in strict conservation form, and is accurate to second order in both space and time for all Courant numbers. The computer code for the scheme is easy to vectorize. The multicolors proposed in this paper may reduce the number of iterations required to reach a converged solution by several orders of magnitude for a large time step. The features of the scheme are demonstrated through numerical examples. 38 refs., 12 figs
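A hedged sketch of the core idea, not the authors' Godunov-type formulation: per cell, an upwind update for linear advection is blended between explicit and implicit treatment according to the local Courant number, and the implicit coupling is converged by Gauss-Seidel iteration.

```python
# Per-cell implicit/explicit blending for 1D linear advection (periodic BC).
# theta = 0 where the local Courant number C <= 1 (pure explicit upwind);
# theta -> 1 for large C (mostly implicit). The weighting rule is illustrative.
import numpy as np

def hybrid_advection_step(u, courant, sweeps=50):
    n = len(u)
    c = np.asarray(courant, dtype=float)
    theta = np.clip(1.0 - 1.0 / np.maximum(c, 1e-12), 0.0, 1.0)
    # explicit part of the theta-weighted upwind update
    rhs = u - c * (1.0 - theta) * (u - np.roll(u, 1))
    u_new = u.copy()
    for _ in range(sweeps):                    # iterate the implicit coupling
        for i in range(n):
            im1 = (i - 1) % n
            u_new[i] = (rhs[i] + c[i] * theta[i] * u_new[im1]) \
                       / (1.0 + c[i] * theta[i])
    return u_new
```

Where the Courant number is at most one the sweeps leave the explicit answer untouched, while cells with large time steps are stabilized implicitly, mirroring the smooth explicit/implicit switching described in the abstract.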
TE/TM scheme for computation of electromagnetic fields in accelerators
International Nuclear Information System (INIS)
Zagorodnov, Igor; Weiland, Thomas
2005-01-01
We propose a new two-level economical conservative scheme for short-range wake field calculation in three dimensions. The scheme does not have dispersion in the longitudinal direction and is staircase free (second-order convergent). Unlike the finite-difference time domain (FDTD) method, it is based on a TE/TM-like splitting of the field components in time. Additionally, it uses an enhanced alternating-direction splitting of the transverse space operator that makes the scheme computationally as effective as the conventional FDTD method. Unlike the FDTD ADI and low-order Strang methods, the splitting error in our scheme is only of fourth order. As numerical examples show, the new scheme is much more accurate on the long-time scale than the conventional FDTD approach.
About efficient quasi-Newtonian schemes for variational calculations in nuclear structure
International Nuclear Information System (INIS)
Puddu, G.
2009-01-01
The Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newtonian scheme is known as the most efficient scheme for variational calculations of energies. This scheme is actually a member of a one-parameter family of variational methods, known as the Broyden β-family. In some applications to light nuclei using microscopically derived effective Hamiltonians starting from accurate nucleon-nucleon potentials, we actually found other members of the same family with better performance than the BFGS method. We also extend the Broyden β-family of algorithms to a two-parameter family of rank-three updates with even better performance. (orig.)
A national quality control scheme for serum HGH assays
International Nuclear Information System (INIS)
Hunter, W.M.; McKenzie, I.
1979-01-01
In the autumn of 1975 the Supraregional Assay Service established a Quality Control Sub-Committee, and the inter-laboratory QC Scheme for growth hormone (HGH) assays described here has served, in many respects, as a pilot scheme for protein RIA. Major improvements in accuracy, precision, and between-laboratory agreement can be brought about by intensively interactive quality control schemes. A common standard is essential and should consist of ampoules used for one or only a small number of assays. Accuracy and agreement were not good enough for the overall means to serve as target values, but a group of 11 laboratories were sufficiently accurate to provide a 'reference group mean' to serve in that role. Gross non-specificity was related to poor assay design and was quickly eliminated. Within-laboratory between-batch variability was much worse than that normally claimed for simple protein hormone RIA. A full report on this Scheme will appear shortly in Annals of Clinical Biochemistry. (Auth.)
Equipment upgrade - Accurate positioning of ion chambers
International Nuclear Information System (INIS)
Doane, Harry J.; Nelson, George W.
1990-01-01
Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure, and installation are described.
Decoupling schemes for the SSC Collider
International Nuclear Information System (INIS)
Cai, Y.; Bourianoff, G.; Cole, B.; Meinke, R.; Peterson, J.; Pilat, F.; Stampke, S.; Syphers, M.; Talman, R.
1993-05-01
A decoupling system is designed for the SSC Collider. This system can accommodate three decoupling schemes by using 44 skew quadrupoles in different configurations. Several decoupling schemes are studied and compared in this paper.
Renormalization scheme-invariant perturbation theory
International Nuclear Information System (INIS)
Dhar, A.
1983-01-01
A complete solution to the problem of the renormalization scheme dependence of perturbative approximants to physical quantities is presented. An equation is derived which determines any physical quantity implicitly as a function of only scheme independent variables. (orig.)
Wireless Broadband Access and Accounting Schemes
Institute of Scientific and Technical Information of China (English)
[Author not listed]
2003-01-01
In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts the RADIUS protocol, while the access systems adopt the SSH and SSL protocols, respectively.
Tightly Secure Signatures From Lossy Identification Schemes
Abdalla, Michel; Fouque, Pierre-Alain; Lyubashevsky, Vadim; Tibouchi, Mehdi
2015-01-01
In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short exponent discrete log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the or...
Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme
Energy Technology Data Exchange (ETDEWEB)
Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
A computer code based on the Differential Algebraic Cubic Interpolated Propagation (CIP) scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in phase space over one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It has been successfully applied to a number of equations: the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term, and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher-dimensional problems. (author)
Robust second-order scheme for multi-phase flow computations
Shahbazi, Khosro
2017-06-01
A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters that ensure the maximum principle or positivity of the various field variables, including the density, pressure, and the order parameters identifying each phase. The two-phase flow model considered consists of the Euler equations of gas dynamics augmented by advection equations for the two parameters of the stiffened-gas equation of state characterizing each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure, and order parameters reconstructed using a high-order WENO scheme. A proof of positivity preservation and accuracy is given, and the convergence and robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock wave interaction with a cluster of tightly packed air or helium bubbles placed in a body of liquid water is also demonstrated, as is the superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter requires approximately 50% more inter-processor message communication than the corresponding scheme without the limiter, but only 10% more total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
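The linear-scaling limiter of Zhang and Shu that guides this design admits a compact sketch (illustrative only; in the paper it is applied to the quadrature values of density, pressure, and order parameters inside the WENO reconstruction):

```python
def positivity_limit(point_vals, avg, eps=1e-13):
    """Zhang-Shu-style linear scaling limiter: pull reconstructed
    quadrature-point values toward the cell average just enough that
    the minimum stays at or above eps.  The cell average (assumed to
    satisfy avg > eps) is a fixed point of the scaling, so the cell
    mean, and hence conservation, is preserved."""
    m = min(point_vals)
    if m >= eps:
        return list(point_vals)          # nothing to do
    theta = (avg - eps) / (avg - m)      # 0 < theta < 1 when m < eps < avg
    return [avg + theta * (p - avg) for p in point_vals]
```

After limiting, the smallest point value lands exactly at eps, and a high-order reconstruction that was already positive passes through untouched, which is why the limiter does not degrade accuracy in smooth regions.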
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us, combining fourth-order Runge-Kutta time stepping with a fourth-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
Optimal Sales Schemes for Network Goods
DEFF Research Database (Denmark)
Parakhonyak, Alexei; Vikander, Nick
consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...
THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES
African Journals Online (AJOL)
Various Automatic Repeat Request (ARQ) schemes have been used to combat errors that befall information transmitted in digital communication systems. Such schemes include simple ARQ, mixed-mode ARQ and hybrid ARQ (HARQ). In this study we introduce extended ARQ schemes and derive.
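For context, the baseline against which such throughput analyses start is simple to state: in stop-and-wait ARQ with frame-error rate p, each frame needs a geometrically distributed number of transmissions with mean 1/(1−p), so throughput efficiency is 1 − p (ignoring propagation and acknowledgment overhead). A small simulation (function name and parameters are our own) checks this:

```python
import random

def arq_throughput_sim(p_err, n_frames=20000, seed=1):
    """Monte-Carlo throughput of simple stop-and-wait ARQ: each frame is
    retransmitted until received error-free; throughput = delivered
    frames / total transmissions.  Expected value is 1 - p_err."""
    rng = random.Random(seed)
    tx = 0
    for _ in range(n_frames):
        while True:
            tx += 1
            if rng.random() >= p_err:   # frame got through
                break
    return n_frames / tx
```

Mixed-mode and hybrid schemes improve on this baseline by correcting some errors instead of always retransmitting.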
Arbitrated quantum signature scheme with message recovery
International Nuclear Information System (INIS)
Lee, Hwayean; Hong, Changho; Kim, Hyunsang; Lim, Jongin; Yang, Hyung Jin
2004-01-01
Two quantum signature schemes with message recovery, relying on the availability of an arbitrator, are proposed. One scheme uses a public board and the other does not. Both schemes provide confidentiality of the message and a higher efficiency in transmission.
Improvement of a land surface model for accurate prediction of surface energy and water balances
International Nuclear Information System (INIS)
Katata, Genki
2009-02-01
In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement the carbon exchanges between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics, etc., the model is suited to evaluating the effects of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)
Agricultural ammonia emissions in China: reconciling bottom-up and top-down estimates
Directory of Open Access Journals (Sweden)
L. Zhang
2018-01-01
Full Text Available Current estimates of agricultural ammonia (NH3) emissions in China differ by more than a factor of 2, hindering our understanding of their environmental consequences. Here we apply both bottom-up statistical and top-down inversion methods to quantify NH3 emissions from agriculture in China for the year 2008. We first assimilate satellite observations of NH3 column concentration from the Tropospheric Emission Spectrometer (TES) using the GEOS-Chem adjoint model to optimize Chinese anthropogenic NH3 emissions at the 1∕2° × 2∕3° horizontal resolution for March–October 2008. Optimized emissions show a strong summer peak, with emissions about 50 % higher in summer than in spring and fall, which is underestimated in current bottom-up NH3 emission estimates. To reconcile the latter with the top-down results, we revisit the processes of agricultural NH3 emissions and develop an improved bottom-up inventory of Chinese NH3 emissions from fertilizer application and livestock waste at the 1∕2° × 2∕3° resolution. Our bottom-up emission inventory includes more detailed information on crop-specific fertilizer application practices and better accounts for meteorological modulation of NH3 emission factors in China. We find that annual anthropogenic NH3 emissions are 11.7 Tg for 2008, with 5.05 Tg from fertilizer application and 5.31 Tg from livestock waste. The two sources together account for 88 % of total anthropogenic NH3 emissions in China. Our bottom-up emission estimates also show a distinct seasonality peaking in summer, consistent with top-down results from the satellite-based inversion. Further evaluations using surface network measurements show that the model driven by our bottom-up emissions reproduces the observed spatial and seasonal variations of NH3 gas concentrations and ammonium (NH4+) wet deposition fluxes over China well, providing additional credibility to the improvements we have made to our
The challenge of reconciling development objectives in the context of demographic change
Directory of Open Access Journals (Sweden)
John Provo
2011-04-01
Full Text Available This paper considers whether the US Appalachian Regional Commission (ARC) Asset-Based Development Initiative (ABDI) reconciles economic development objectives in communities experiencing demographic change. Through a case study approach utilizing key informant interviews in Southwest Virginia communities and a review of ARC-funded projects, the authors consider two main questions. Did community leadership change or adapt to the program? Were new projects demonstrably different in objectives, content, or outcomes from past projects? Economic and demographic similarities between Alpine and Appalachian communities, particularly in the role of in-migrants, suggest that this study's findings will be relevant for other mountain regions and could contribute to a conversation among international scholars of mountain development.
Adam, L.; Frehner, M.; Sauer, K. M.; Toy, V.; Guerin-Marthe, S.; Boulton, C. J.
2017-12-01
Reconciling experimental and static-dynamic numerical estimations of seismic anisotropy in Alpine Fault mylonites
Quartzo-feldspathic mylonites and schists are the main contributors to seismic wave anisotropy in the vicinity of the Alpine Fault (New Zealand). We must determine how the physical properties of rocks like these influence elastic wave anisotropy if we want to unravel the reasons for heterogeneous seismic wave propagation and interpret deformation processes in fault zones. To study such controls on velocity anisotropy we can: 1) experimentally measure elastic wave anisotropy on cores at in-situ conditions, or 2) estimate wave velocities by static (effective medium averaging) or dynamic (finite element) modelling based on EBSD data or photomicrographs. Here we compare all three approaches in a study of schist and mylonite samples from the Alpine Fault. Volumetric proportions of intrinsically anisotropic micas in cleavage domains and comparatively isotropic quartz+feldspar in microlithons commonly vary significantly within one sample. Our analysis examines the effects of these phases and their arrangement, and further addresses how heterogeneity influences elastic wave anisotropy. We compare P-wave seismic anisotropy estimates based on millimetre-scale ultrasonic waves under in-situ conditions with simulations that account for micrometre-scale variations in the elastic properties of constituent minerals, using the MTEX toolbox and finite-element wave propagation on EBSD images. We observe that the sorts of variations in the distribution of micas and quartz+feldspar within any one of our real core samples can change the elastic wave anisotropy by 10
REM-3D Reference Datasets: Reconciling large and diverse compilations of travel-time observations
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
A three-dimensional Reference Earth model (REM-3D) should ideally represent the consensus view of long-wavelength heterogeneity in the Earth's mantle through the joint modeling of large and diverse seismological datasets. This requires reconciliation of datasets obtained using various methodologies and identification of consistent features. The goal of REM-3D datasets is to provide a quality-controlled and comprehensive set of seismic observations that would not only enable construction of REM-3D, but also allow identification of outliers and assist in more detailed studies of heterogeneity. The community response to data solicitation has been enthusiastic with several groups across the world contributing recent measurements of normal modes, (fundamental mode and overtone) surface waves, and body waves. We present results from ongoing work with body and surface wave datasets analyzed in consultation with a Reference Dataset Working Group. We have formulated procedures for reconciling travel-time datasets that include: (1) quality control for salvaging missing metadata; (2) identification of and reasons for discrepant measurements; (3) homogenization of coverage through the construction of summary rays; and (4) inversions of structure at various wavelengths to evaluate inter-dataset consistency. In consultation with the Reference Dataset Working Group, we retrieved the station and earthquake metadata in several legacy compilations and codified several guidelines that would facilitate easy storage and reproducibility. We find strong agreement between the dispersion measurements of fundamental-mode Rayleigh waves, particularly when made using supervised techniques. The agreement deteriorates substantially in surface-wave overtones, for which discrepancies vary with frequency and overtone number. A half-cycle band of discrepancies is attributed to reversed instrument polarities at a limited number of stations, which are not reflected in the instrument response history
Weisbin, Charles R.; Clark, Pamela; Elfes, Alberto; Smith, Jeffrey H.; Mrozinski, Joseph; Adumitroaie, Virgil; Hua, Hook; Shelton, Kacie; Lincoln, William; Silberg, Robert
2010-01-01
Virtually every NASA space-exploration mission represents a compromise between the interests of two expert, dedicated, but very different communities: scientists, who want to go quickly to the places that interest them most and spend as much time there as possible conducting sophisticated experiments, and the engineers and designers charged with maximizing the probability that a given mission will be successful and cost-effective. Recent work at NASA's Jet Propulsion Laboratory (JPL) seeks to enhance communication between these two groups, and to help them reconcile their interests, by developing advanced modeling capabilities with which they can analyze the achievement of science goals and objectives against engineering design and operational constraints. The analyses conducted prior to this study have been point-design driven. Each analysis has been of one hypothetical case which addresses the question: given a set of constraints, how much science can be done? But the constraints imposed by the architecture team (e.g., rover speed, time allowed for extravehicular activity (EVA), the number of sites at which science experiments are to be conducted) are all in early development and carry a great deal of uncertainty. Variations can be incorporated into the analysis, and indeed that has been done in sensitivity studies designed to see which constraint variations have the greatest impact on results. But if a very large number of variations can be analyzed all at once, producing a table that includes virtually the entire trade space under consideration, then we have a tool that enables scientists and mission architects to ask the inverse question: for a given desired level of science (or any other objective), what is the range of constraints that would be needed? With this tool, mission architects could determine, for example, what combinations of rover speed, EVA duration, and other constraints produce the desired results. Further, this tool would help them identify which
Energy Technology Data Exchange (ETDEWEB)
Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)
2016-06-08
In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid and uses dual cells intermediately while updating the numerical solution, to avoid resolving the Riemann problems arising at the cell interfaces. We then apply the numerical scheme to solve a classical drift-flux problem. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the potential of the proposed scheme.
Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A
2012-01-01
Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts
REMINDER: Saved Leave Scheme (SLS)
2003-01-01
Transfer of leave to saved leave accounts Under the provisions of the voluntary saved leave scheme (SLS), a maximum total of 10 days'* annual and compensatory leave (excluding saved leave accumulated in accordance with the provisions of Administrative Circular No 22B) can be transferred to the saved leave account at the end of the leave year (30 September). We remind you that unused leave of all those taking part in the saved leave scheme at the closure of the leave year accounts is transferred automatically to the saved leave account on that date. Therefore, staff members have no administrative steps to take. In addition, the transfer, which eliminates the risk of omitting to request leave transfers and rules out calculation errors in transfer requests, will be clearly shown in the list of leave transactions that can be consulted in EDH from October 2003 onwards. Furthermore, this automatic leave transfer optimizes staff members' chances of benefiting from a saved leave bonus provided that they ar...
Quantum Secure Communication Scheme with W State
International Nuclear Information System (INIS)
Wang Jian; Zhang Quan; Tang Chaojng
2007-01-01
We present a quantum secure communication scheme using three-qubit W state. It is unnecessary for the present scheme to use alternative measurement or Bell basis measurement. Compared with the quantum secure direct communication scheme proposed by Cao et al. [H.J. Cao and H.S. Song, Chin. Phys. Lett. 23 (2006) 290], in our scheme, the detection probability for an eavesdropper's attack increases from 8.3% to 25%. We also show that our scheme is secure for a noise quantum channel.
Labeling schemes for bounded degree graphs
DEFF Research Database (Denmark)
Adjiashvili, David; Rotbart, Noy Galil
2014-01-01
We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...
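As a warm-up to such labeling schemes (a classic 2 log n-bit construction, not the paper's log n + O(1) scheme), each tree vertex can be labeled with its own id and its parent's id, after which adjacency is decided from the two labels alone:

```python
def tree_adjacency_labels(parent):
    """Classic 2*log(n)-bit adjacency labeling for a rooted tree:
    label(v) = (v, parent[v]).  The root carries a sentinel parent id.
    This warm-up is far less compact than the paper's log n + O(1)
    scheme for bounded-degree trees."""
    return {v: (v, p) for v, p in parent.items()}

def adjacent(lab_u, lab_v):
    # Two vertices are adjacent iff one is the parent of the other,
    # which is decidable from the labels without consulting the tree.
    return lab_u[0] == lab_v[1] or lab_v[0] == lab_u[1]
```

The whole point of the research area is shaving the label length from 2 log n toward the information-theoretic log n bits while keeping the adjacency decoder this simple.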
Double beta decay in the generalized seniority scheme
International Nuclear Information System (INIS)
Pittel, S.; Engel, J.; Vogel, P.; Ji Xiangdong
1990-01-01
A generalized-seniority truncation scheme is used in shell-model calculations of double beta decay matrix elements. Calculations are carried out for 78Ge, 82Se and 128,130Te. Matrix elements calculated for the two-neutrino decay mode are small compared to weak-coupling shell-model calculations and support the suppression mechanism first observed in the quasi-particle random phase approximation. Matrix elements for the neutrinoless mode are similar to those of the weak-coupling shell model, suggesting that these matrix elements can be pinned down fairly accurately. (orig.)
Closed-Loop Autofocus Scheme for Scanning Electron Microscope
Directory of Open Access Journals (Sweden)
Cui Le
2015-01-01
Full Text Available In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal (in-focus) position of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated using a tungsten gun SEM under various experimental conditions, such as varying raster scan speed and magnification, in real time. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
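The gradient-based sharpness maximization loop can be sketched as follows (the golden-section search, the synthetic `acquire` interface, and all names are our own illustration, not the paper's algorithm):

```python
def sharpness(img):
    """Gradient-energy (Tenengrad-like) sharpness score of a 2-D image,
    given as a list of rows of pixel intensities."""
    s = 0.0
    for y in range(len(img) - 1):
        for x in range(len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            s += gx * gx + gy * gy
    return s

def autofocus(acquire, z_min, z_max, tol=1e-3):
    """Golden-section search for the focus setting maximizing sharpness;
    `acquire(z)` stands in for grabbing a frame at focus position z and
    the score is assumed unimodal in z."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = z_min, z_max
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = sharpness(acquire(c)), sharpness(acquire(d))
    while b - a > tol:
        if fc < fd:          # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = sharpness(acquire(d))
        else:                # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = sharpness(acquire(c))
    return (a + b) / 2
```

Each iteration needs only one new frame, which matters when every probe of the sharpness function costs an SEM acquisition.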
Ringnalda, Allard|info:eu-repo/dai/nl/305951696
2014-01-01
Copyright law and cultural heritage policy are an odd couple. Although they have the same aims – or, more accurately, should have the same aims – they are often in conflict. Cultural heritage policy aims to preserve and make accessible works that are deemed to be part of our shared culture – books,
Jaffé, Rodolfo; Prous, Xavier; Zampaulo, Robson; Giannini, Tereza C; Imperatriz-Fonseca, Vera L; Maurity, Clóvis; Oliveira, Guilherme; Brandi, Iuri V; Siqueira, José O
2016-01-01
Caves pose significant challenges for mining projects, since they harbor many endemic and threatened species, and must therefore be protected. Recent discussions between academia, environmental protection agencies, and industry partners, have highlighted problems with the current Brazilian legislation for the protection of caves. While the licensing process is long, complex and cumbersome, the criteria used to assign caves into conservation relevance categories are often subjective, with relevance being mainly determined by the presence of obligate cave dwellers (troglobites) and their presumed rarity. However, the rarity of these troglobitic species is questionable, as most remain unidentified to the species level and their habitats and distribution ranges are poorly known. Using data from 844 iron caves retrieved from different speleology reports for the Carajás region (South-Eastern Amazon, Brazil), one of the world's largest deposits of high-grade iron ore, we assess the influence of different cave characteristics on four biodiversity proxies (species richness, presence of troglobites, presence of rare troglobites, and presence of resident bat populations). We then examine how the current relevance classification scheme ranks caves with different biodiversity indicators. Large caves were found to be important reservoirs of biodiversity, so they should be prioritized in conservation programs. Our results also reveal spatial autocorrelation in all the biodiversity proxies assessed, indicating that iron caves should be treated as components of a cave network immersed in the karst landscape. Finally, we show that by prioritizing the conservation of rare troglobites, the current relevance classification scheme is undermining overall cave biodiversity and leaving ecologically important caves unprotected. We argue that conservation efforts should target subterranean habitats as a whole and propose an alternative relevance ranking scheme, which could help simplify the
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
2018-01-01
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids, where the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and to provide high-order accuracy.
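The Banks-Henshaw construction itself is involved, but its central idea, adding a dissipation term built from the time difference of the solution to the classical second-order-form update, can be sketched in one dimension. The grid size, CFL number, and dissipation coefficient below are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

def d2(u, dx):
    # Periodic second difference, approximates u_xx.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def d4(u):
    # Periodic undivided fourth difference; feeds the upwind-style dissipation.
    return (np.roll(u, -2) - 4.0 * np.roll(u, -1) + 6.0 * u
            - 4.0 * np.roll(u, 1) + np.roll(u, 2))

def solve(n=200, c=1.0, cfl=0.8, t_end=0.5, diss=0.01):
    dx = 1.0 / n
    dt = cfl * dx / c
    nt = int(round(t_end / dt))
    x = np.arange(n) * dx
    u_prev = np.sin(2.0 * np.pi * x)          # u(x, 0); initial velocity is zero
    # Second-order Taylor start for the first step.
    u = u_prev + 0.5 * (c * dt)**2 * d2(u_prev, dx)
    for _ in range(nt - 1):
        # Leapfrog update plus dissipation acting on the time difference,
        # mimicking (not reproducing) the second-order-form upwind idea.
        u_next = (2.0 * u - u_prev + (c * dt)**2 * d2(u, dx)
                  - diss * (c * dt / dx) * d4(u - u_prev))
        u_prev, u = u, u_next
    return x, u, nt * dt

x, u, t = solve()
exact = np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * t)
err = float(np.abs(u - exact).max())
```

Because the dissipation acts on `u - u_prev`, which is O(dt), it vanishes with the time step and leaves the second-order accuracy of the underlying scheme essentially intact for smooth solutions.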
Fragment separator momentum compression schemes
Energy Technology Data Exchange (ETDEWEB)
Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)
2011-07-21
We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.
Fragment separator momentum compression schemes
International Nuclear Information System (INIS)
Bandura, Laura; Erdelyi, Bela; Hausmann, Marc; Kubo, Toshiyuki; Nolen, Jerry; Portillo, Mauricio; Sherrill, Bradley M.
2011-01-01
We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.
Electrical injection schemes for nanolasers
DEFF Research Database (Denmark)
Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten
2013-01-01
The performance of injection schemes among recently demonstrated electrically pumped photonic crystal nanolasers has been investigated numerically. The computation has been carried out at room temperature using a commercial semiconductor simulation software. For the simulations two electrical...... of 3 InGaAsP QWs on an InP substrate has been chosen for the modeling. In the simulations the main focus is on the electrical and optical properties of the nanolasers i.e. electrical resistance, threshold voltage, threshold current and wallplug efficiency. In the current flow evaluation the lowest...... threshold current has been achieved with the lateral electrical injection through the BH; while the lowest resistance has been obtained from the current post structure even though this model shows a higher current threshold because of the lack of carrier confinement. Final scope of the simulations...
Scheme of thinking quantum systems
International Nuclear Information System (INIS)
Yukalov, V I; Sornette, D
2009-01-01
A general approach describing quantum decision procedures is developed. The approach can be applied to quantum information processing, quantum computing, creation of artificial quantum intelligence, as well as to analyzing decision processes of human decision makers. Our basic point is to consider an active quantum system possessing its own strategic state. Processing information by such a system is analogous to the cognitive processes associated to decision making by humans. The algebra of probability operators, associated with the possible options available to the decision maker, plays the role of the algebra of observables in quantum theory of measurements. A scheme is advanced for a practical realization of decision procedures by thinking quantum systems. Such thinking quantum systems can be realized by using spin lattices, systems of magnetic molecules, cold atoms trapped in optical lattices, ensembles of quantum dots, or multilevel atomic systems interacting with electromagnetic field
International Nuclear Information System (INIS)
Morch, Stein
2004-01-01
The article asserts that there could be an investment boom for wind, hydro and bio power in a common Norwegian-Swedish market scheme for green certificates. The Swedish authorities are ready, and the Norwegian government is preparing a report to the Norwegian Parliament. What are the ambitions of Norway, and will hydro power be included? A green certificate market common to several countries has never before been established and requires the solution of many challenging problems. In Sweden, certificate support is expected to promote primarily bioenergy, wind power and small-scale hydro power. In Norway there is an evident potential for wind power, and more hydro power can be developed if desired.
Pomeranchuk conjecture and symmetry schemes
Energy Technology Data Exchange (ETDEWEB)
Galindo, A.; Morales, A.; Ruegg, H. [Junta de Energia Nuclear, Madrid (Spain); European Organization for Nuclear Research, Geneva (Switzerland); University of Geneva, Geneva (Switzerland)
1963-01-15
Pomeranchuk has conjectured that the cross-sections for charge-exchange processes vanish asymptotically as the energy tends to infinity. (By ''charge'' is meant any internal quantum number, such as electric charge, hypercharge, ...). It has been stated by several people that this conjecture implies equalities among the total cross-sections whenever any symmetry scheme is invoked for the strong interactions, but to our knowledge no explicit general proof of this statement has been given so far. We give this proof for any compact Lie group. We also prove, under certain assumptions, that the equality of the total cross-sections implies that s^{-1} times the charge-exchange forward scattering absorptive amplitudes tend to zero as s → ∞.
More accurate picture of human body organs
International Nuclear Information System (INIS)
Kolar, J.
1985-01-01
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency
Accurate activity recognition in a home setting
van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.
2008-01-01
A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its
Highly accurate surface maps from profilometer measurements
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
Matroids and quantum-secret-sharing schemes
International Nuclear Information System (INIS)
Sarvepalli, Pradeep; Raussendorf, Robert
2010-01-01
A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.
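The classical matroid-induced schemes the abstract refers to include Shamir's polynomial scheme as the canonical example. A minimal sketch of classical threshold secret sharing (not the paper's quantum construction; the prime, threshold, and secret below are illustrative choices) makes the access-structure idea concrete:

```python
import random

P = 2**31 - 1  # Mersenne prime; the field size is an illustrative choice

def make_shares(secret, t, n, rng=random.Random(42)):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])
```

Any 3 of the 5 shares determine the degree-2 polynomial and hence the secret, while any 2 shares leave it information-theoretically hidden; the authorized subsets here are exactly the bases-containing sets of a uniform matroid.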
DSMC-LBM mapping scheme for rarefied and non-rarefied gas flows
Di Staso, G.; Clercx, H.J.H.; Succi, S.; Toschi, F.
2016-01-01
We present the formulation of a kinetic mapping scheme between the Direct Simulation Monte Carlo (DSMC) and the Lattice Boltzmann Method (LBM), which is at the basis of the hybrid model used to couple the two methods with a view to efficiently and accurately simulating isothermal flows characterized by
How can conceptual schemes change teaching?
Wickman, Per-Olof
2012-03-01
Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.
[Occlusal schemes of complete dentures--a review of the literature].
Tarazi, E; Ticotsky-Zadok, N
2007-01-01
movements). Linear occlusion scheme occludes cuspless teeth with anatomic teeth that have been modified (bladed teeth) in order to achieve linear occlusal contacts. Linear contacts are the pin-point contacts of the tips of the cusps of the bladed teeth against cuspless teeth that create a plane. The specific design of positioning upper modified teeth on the upper denture and non anatomic teeth on the lower one is called lingualized occlusion. It is characterized by contacts of only the lingual (palatinal, to be more accurate) cusps of the upper teeth with the lower teeth. The lingualized occlusal scheme provides better aesthetics than the monoplane occlusion scheme, and better stability (in the case of resorbed residual ridges) than the bilateral occlusion scheme of anatomic teeth. The results of studies that compared different occlusal schemes may well be summarized as inconclusive. However, it does seem that patients preferred anatomic or semi-anatomic (modified) teeth, and that chewing efficiency with anatomic and modified teeth was better than with non anatomic teeth. Similar results were found in studies of occlusal schemes of implant-supported lower dentures opposed by complete upper dentures. There isn't one occlusal scheme that fits all patients in need of complete dentures; in fact, in many cases more than one occlusal scheme might be adequate. Selection of an occlusal scheme for a patient should include correlation of the characteristics of the patient with those of the various occlusal schemes. The characteristics of the patient include: height and width of the residual ridge, aesthetic demands of the patient, skeletal relations (class I/II/III), neuromuscular control, and tendency for para-functional activity. The multiple characteristics of the occlusal schemes were reviewed in this article. Considering all of those factors in relation to a specific patient, the dentist should be able to decide on the most suitable occlusal scheme for the case.
A strong shock tube problem calculated by different numerical schemes
Lee, Wen Ho; Clancy, Sean P.
1996-05-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) MESA code, (2) UNICORN code, (3) Schulz hydro, and (4) modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN codes are both of second order and use different monotonic advection methods to avoid the Gibbs phenomena. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and density ratio of 10^3 in an ideal gas. For the no mass-matching case, Schulz hydro is better than the TVD scheme. In the case of mass-matching, there is no difference between them. MESA and UNICORN results are nearly the same. However, the computed positions such as the contact discontinuity (i.e. the material interface) are not as accurate as the Lagrangian methods.
A strong shock tube problem calculated by different numerical schemes
International Nuclear Information System (INIS)
Lee, W.H.; Clancy, S.P.
1996-01-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) MESA code, (2) UNICORN code, (3) Schulz hydro, and (4) modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN codes are both of second order and use different monotonic advection methods to avoid the Gibbs phenomena. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and density ratio of 10^3 in an ideal gas. For the no mass-matching case, Schulz hydro is better than the TVD scheme. In the case of mass-matching, there is no difference between them. MESA and UNICORN results are nearly the same. However, the computed positions such as the contact discontinuity (i.e. the material interface) are not as accurate as the Lagrangian methods. copyright 1996 American Institute of Physics
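A shock tube of this kind can be set up with any robust conservative scheme. The sketch below uses a first-order Lax-Friedrichs discretization of the 1D Euler equations; it is not one of the four codes compared, and it uses a milder 10^4 pressure ratio (with the paper's 10^3 density ratio) so the first-order example stays well behaved.

```python
import numpy as np

GAMMA = 1.4  # ideal-gas ratio of specific heats

def flux(U):
    """Euler fluxes from conserved variables U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, u * (E + p)]), p, u

def lax_friedrichs(n=400, steps=300, cfl=0.25):
    dx = 1.0 / n
    left = np.arange(n) < n // 2
    rho = np.where(left, 1000.0, 1.0)        # density ratio 10^3
    p = np.where(left, 1.0e4, 1.0)           # pressure ratio 10^4 (paper: 10^9)
    U = np.array([rho, np.zeros(n), p / (GAMMA - 1.0)])  # initially at rest
    for _ in range(steps):
        F, p, u = flux(U)
        a = np.sqrt(GAMMA * p / U[0])
        dt = cfl * dx / np.max(np.abs(u) + a)            # adaptive CFL step
        Un = (0.5 * (np.roll(U, -1, axis=1) + np.roll(U, 1, axis=1))
              - 0.5 * dt / dx * (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1)))
        Un[:, 0], Un[:, -1] = U[:, 0], U[:, -1]          # waves never reach ends
        U = Un
    return U

U = lax_friedrichs()
F, p, u = flux(U)
```

Lax-Friedrichs is far too diffusive to resolve the contact discontinuity sharply, which is exactly the kind of smearing the abstract attributes to the Eulerian codes relative to the Lagrangian ones.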
Development of a reference scheme for MOX lattice physics calculations
International Nuclear Information System (INIS)
Finck, P.J.; Stenberg, C.G.; Roy, R.
1998-01-01
The US program to dispose of weapons-grade Pu could involve the irradiation of mixed-oxide (MOX) fuel assemblies in commercial light water reactors. This will require licensing acceptance because of the modifications to the core safety characteristics. In particular, core neutronics will be significantly modified, thus making it necessary to validate the standard suites of neutronics codes for that particular application. Validation criteria are still unclear, but it seems reasonable to expect that the same level of accuracy will be expected for MOX as that which has been achieved for UO2. Commercial lattice physics codes are invariably claimed to be accurate for MOX analysis but often lack independent confirmation of their performance on a representative experimental database. Argonne National Laboratory (ANL) has started implementing a public domain suite of codes to provide for a capability to perform independent assessments of MOX core analyses. The DRAGON lattice code was chosen, and fine group ENDF/B-VI.04 and JEF-2.2 libraries have been developed. The objective of this work is to validate the DRAGON algorithms with respect to continuous-energy Monte Carlo for a suite of realistic UO2-MOX benchmark cases, with the aim of establishing a reference DRAGON scheme with a demonstrated high level of accuracy and no computing resource constraints. Using this scheme as a reference, future work will be devoted to obtaining simpler and less costly schemes that preserve accuracy as much as possible
Resonance ionization scheme development for europium
Energy Technology Data Exchange (ETDEWEB)
Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institiut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institiut für Physik (Germany)
2017-11-15
Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.
Secure RAID Schemes for Distributed Storage
Huang, Wentao; Bruck, Jehoshua
2016-01-01
We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...
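The paper's secure RAID constructions are more elaborate, but the XOR-based systematic erasure coding they build on can be illustrated with a plain RAID-4-style parity sketch; this recovers any one lost node and does not reproduce the paper's secrecy layer against eavesdroppers (block contents and names below are invented for illustration).

```python
def xor_bytes(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data_blocks):
    """Systematic encoding: data blocks stored as-is, plus one XOR parity block."""
    return data_blocks + [xor_bytes(data_blocks)]

def recover(blocks, lost_index):
    """Rebuild the block at lost_index as the XOR of all surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_bytes(survivors)

data = [b"node0--A", b"node1--B", b"node2--C"]
stored = encode(data)              # 3 data nodes + 1 parity node
rebuilt = recover(stored, lost_index=1)
```

The systematic property the abstract emphasizes is visible here: reads of intact data need no decoding at all, since the data blocks are stored verbatim and only repairs touch the parity.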
Nie, W.; Zaitchik, B. F.; Kumar, S.; Rodell, M.
2017-12-01
Advanced Land Surface Models (LSM) offer a powerful tool for studying and monitoring hydrological variability. Highly managed systems, however, present a challenge for these models, which typically have simplified or incomplete representations of human water use, if the process is represented at all. GRACE, meanwhile, detects the total change in water storage, including change due to human activities, but does not resolve the source of these changes. Here we examine recent groundwater declines in the US High Plains Aquifer (HPA), a region that is heavily utilized for irrigation and that is also affected by episodic drought. To understand observed decline in groundwater (well observation) and terrestrial water storage (GRACE) during a recent multi-year drought, we modify the Noah-MP LSM to include a groundwater pumping irrigation scheme. To account for seasonal and interannual variability in active irrigated area we apply a monthly time-varying greenness vegetation fraction (GVF) dataset to the model. A set of five experiments were performed to study the impact of irrigation with groundwater withdrawal on the simulated hydrological cycle of the HPA and to assess the importance of time-varying GVF when simulating drought conditions. The results show that including the groundwater pumping irrigation scheme in Noah-MP improves model agreement with GRACE mascon solutions for TWS and well observations of groundwater anomaly in the southern HPA, including Texas and Kansas, and that accounting for time-varying GVF is important for model realism under drought. Results for the HPA in Nebraska are mixed, likely due to misrepresentation of the recharge process. This presentation will highlight the value of the GRACE constraint for model development, present estimates of the relative contribution of climate variability and irrigation to declining TWS in the HPA under drought, and identify opportunities to integrate GRACE-FO with models for water resource monitoring in heavily
Which Quantum Theory Must be Reconciled with Gravity? (And What Does it Mean for Black Holes?)
Directory of Open Access Journals (Sweden)
Matthew J. Lake
2016-10-01
Full Text Available We consider the nature of quantum properties in non-relativistic quantum mechanics (QM) and relativistic quantum field theories, and examine the connection between formal quantization schemes and intuitive notions of wave-particle duality. Based on the map between classical Poisson brackets and their associated commutators, such schemes give rise to quantum states obeying canonical dispersion relations, obtained by substituting the de Broglie relations into the relevant (classical) energy-momentum relation. In canonical QM, this yields a dispersion relation involving ℏ but not c, whereas the canonical relativistic dispersion relation involves both. Extending this logic to the canonical quantization of the gravitational field gives rise to loop quantum gravity, and a map between classical variables containing G and c, and associated commutators involving ℏ. This naturally defines a “wave-gravity duality”, suggesting that a quantum wave packet describing self-gravitating matter obeys a dispersion relation involving G, c and ℏ. We propose an Ansatz for this relation, which is valid in the semi-Newtonian regime of both QM and general relativity. In this limit, space and time are absolute, but imposing v_max = c allows us to recover the standard expressions for the Compton wavelength λ_C and the Schwarzschild radius r_S within the same ontological framework. The new dispersion relation is based on “extended” de Broglie relations, which remain valid for slow-moving bodies of any mass m. These reduce to canonical form for m ≪ m_P, yielding λ_C from the standard uncertainty principle, whereas, for m ≫ m_P, we obtain r_S as the natural radius of a self-gravitating quantum object. Thus, the extended de Broglie theory naturally gives rise to a unified description of black holes and fundamental particles in the semi-Newtonian regime.
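The two length scales recovered in the semi-Newtonian limit take their textbook forms (standard expressions, not the paper's extended relations); writing them side by side shows why the Planck mass marks the crossover between the particle-like and black-hole-like regimes:

```latex
\lambda_C = \frac{\hbar}{m c}, \qquad r_S = \frac{2 G m}{c^2},
\qquad \lambda_C \sim r_S \;\Longleftrightarrow\; m \sim m_P = \sqrt{\frac{\hbar c}{G}} .
```

Here λ_C is the reduced Compton wavelength; since λ_C falls as 1/m while r_S grows as m, the two curves cross at m of order m_P, up to an O(1) factor.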
Accurate guitar tuning by cochlear implant musicians.
Directory of Open Access Journals (Sweden)
Thomas Lu
Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected result that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
A new access scheme in OFDMA systems
Institute of Scientific and Technical Information of China (English)
GU Xue-lin; YAN Wei; TIAN Hui; ZHANG Ping
2006-01-01
This article presents a dynamic random access scheme for orthogonal frequency division multiple access (OFDMA) systems. The key features of the proposed scheme are: it is a combination of both the distributed and the centralized schemes; it can accommodate several delay sensitivity classes; and it can adjust the number of random access channels in a media access control (MAC) frame and the access probability according to the outcome of mobile terminals' access attempts in previous MAC frames. For packet-based networks with fluctuating populations, the proposed scheme possibly leads to high average user satisfaction.
A Spatial Domain Quantum Watermarking Scheme
International Nuclear Information System (INIS)
Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang
2016-01-01
This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is the key to achieving it, and this paper gives a feasible quantum circuit for the presented scheme. In order to give the quantum circuit, a new quantum multi-control rotation gate, which can be achieved with quantum basic gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides running the given quantum circuit in reverse, the paper gives another watermark extracting algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Differing from other quantum watermarking schemes, all given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme and is not based on any transform algorithm on quantum images. Meanwhile, it can keep the watermark secure even if it has been found. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental results show that the scheme performs well in terms of visual quality and embedding capacity. (paper)
Quantum signature scheme for known quantum messages
International Nuclear Information System (INIS)
Kim, Taewan; Lee, Hyang-Sook
2015-01-01
When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes which are possible to sign for not only known quantum messages but also unknown quantum messages. However, since the arbitrated quantum signature schemes need the help of a trusted arbitrator in each verification of the signature, it is known that the schemes are not convenient in practical use. If we consider only known quantum messages such as the above situation, there can exist a quantum signature scheme with more efficient structure. In this paper, we present a new quantum signature scheme for known quantum messages without the help of an arbitrator. Differing from arbitrated quantum signature schemes based on the quantum one-time pad with the symmetric key, since our scheme is based on quantum public-key cryptosystems, the validity of the signature can be verified by a receiver without the help of an arbitrator. Moreover, we show that our scheme provides the functions of quantum message integrity, user authentication and non-repudiation of the origin as in digital signature schemes. (paper)
A comparative study of upwind and MacCormack schemes for CAA benchmark problems
Viswanathan, K.; Sankar, L. N.
1995-01-01
In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.
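The flux-splitting idea described above can be sketched for 1-D linear advection, where the signals crossing a cell face are separated by the sign of the wave speed. This is a minimal first-order sketch with assumed grid parameters; the paper's schemes are higher order and formulated in curvilinear coordinates.

```python
import numpy as np

# Minimal flux-splitting sketch for 1-D linear advection u_t + a u_x = 0.
# f+ carries right-running signals into a cell from the left; f- carries
# left-running signals from the right.
a, dx, dt, nsteps = 1.0, 0.02, 0.01, 50    # CFL = a*dt/dx = 0.5
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)        # smooth initial pulse at x = 0.3

for _ in range(nsteps):
    fp = max(a, 0.0) * u                   # right-running flux (active, a > 0)
    fm = min(a, 0.0) * u                   # left-running flux (zero here)
    u = (u - dt / dx * (fp - np.roll(fp, 1))     # upwind difference for f+
           - dt / dx * (np.roll(fm, -1) - fm))   # downwind difference for f-

# after t = 0.5 the pulse has advected to x ~ 0.8, smeared by upwind diffusion
print(f"peak at x = {x[np.argmax(u)]:.2f}, peak height = {u.max():.2f}")
```

The first-order upwind choice makes the transported pulse visibly diffuse, which is exactly why higher-order interpolating polynomials are used in practice.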
Numerical study of read scheme in one-selector one-resistor crossbar array
Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin
2015-12-01
A comprehensive numerical circuit analysis of read schemes for a one-selector one-resistor (1S1R) crossbar array is carried out. Three schemes, the ground, V/2, and V/3 schemes, are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate the entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential for successfully evaluating the electrical specifications of selectors for suppressing the intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
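A fixed-point iteration of the kind described can be sketched as follows: each node voltage is repeatedly replaced by the conductance-weighted average of its neighbours until Kirchhoff's current law is satisfied. Array size, resistances, and the linear (selector-free) cell model are all illustrative assumptions, not the paper's values.

```python
import numpy as np

# Jacobi-style iteration for node voltages in a crossbar read under the
# "ground" scheme: selected word line driven at V_read, all others grounded.
N = 4
R_cell = np.full((N, N), 1e6)          # unselected cells: high resistance
sel_row, sel_col = 1, 2
R_cell[sel_row, sel_col] = 1e4         # selected cell: low resistance
G_cell = 1.0 / R_cell
G_line = 1.0                           # 1-ohm wire segment between nodes
V_read = 1.0

V_wl = np.zeros((N, N))                # word-line node voltages
V_bl = np.zeros((N, N))                # bit-line node voltages

for _ in range(20000):
    Vw, Vb = V_wl.copy(), V_bl.copy()  # previous iterate
    for i in range(N):
        for j in range(N):
            # word-line node (i, j): row neighbours + cell down to bit line
            g = G_cell[i, j]; s = G_cell[i, j] * Vb[i, j]
            if j > 0:
                g += G_line; s += G_line * Vw[i, j - 1]
            else:                      # driven end of the row wire
                g += G_line; s += G_line * (V_read if i == sel_row else 0.0)
            if j < N - 1:
                g += G_line; s += G_line * Vw[i, j + 1]
            V_wl[i, j] = s / g
            # bit-line node (i, j): column neighbours + cell up to word line
            g = G_cell[i, j]; s = G_cell[i, j] * Vw[i, j]
            if i > 0:
                g += G_line; s += G_line * Vb[i - 1, j]
            if i < N - 1:
                g += G_line; s += G_line * Vb[i + 1, j]
            else:                      # sensed end of the column, held at 0 V
                g += G_line
            V_bl[i, j] = s / g
    if max(np.abs(V_wl - Vw).max(), np.abs(V_bl - Vb).max()) < 1e-9:
        break

I_sel = (V_wl[sel_row, sel_col] - V_bl[sel_row, sel_col]) * G_cell[sel_row, sel_col]
print(f"selected-cell read current ~ {I_sel * 1e6:.1f} uA")
```

With these illustrative numbers the selected-cell current comes out near V_read divided by the low-resistance-state value, since the line drops are small; shrinking the cell-to-line resistance ratio makes the sneak-path and line-resistance effects the abstract mentions visible.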
TE/TM alternating direction scheme for wake field calculation in 3D
Energy Technology Data Exchange (ETDEWEB)
Zagorodnov, Igor [Institut fuer Theorie Elektromagnetischer Felder (TEMF), Technische Universitaet Darmstadt, Schlossgartenstrasse 8, D-64289 Darmstadt (Germany)]. E-mail: zagor@temf.de; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder (TEMF), Technische Universitaet Darmstadt, Schlossgartenstrasse 8, D-64289 Darmstadt (Germany)
2006-03-01
In the future, accelerators with very short bunches will be used. This demands the development of new numerical approaches for long-time calculation of electromagnetic fields in the vicinity of relativistic bunches. The conventional FDTD scheme, used in MAFIA, ABCI and other wake and PIC codes, suffers from numerical grid dispersion and the staircase approximation problem. As an effective cure for the dispersion problem, a numerical scheme without dispersion in the longitudinal direction can be used, as shown by Novokhatski et al. [Transition dynamics of the wake fields of ultrashort bunches, TESLA Report 2000-03, DESY, 2000] and Zagorodnov et al. [J. Comput. Phys. 191 (2003) 525]. In this paper, a new economical conservative scheme for short-range wake field calculation in 3D is presented. As numerical examples show, the new scheme is much more accurate on long time scales than the conventional FDTD approach.
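The grid-dispersion issue can be illustrated in 1-D, where the Yee/FDTD leapfrog run exactly at the Courant limit (the "magic" time step c·Δt = Δx) transports a pulse with no dispersion error at all; away from that limit, dispersion reappears. This is a normalized-units sketch on a periodic grid, not the paper's TE/TM scheme.

```python
import numpy as np

# 1-D Maxwell (c = 1): E_t = -H_x, H_t = -E_x on a staggered periodic grid.
# At the magic time step dt = dx, a right-moving pulse translates exactly
# one cell per step.
n, steps = 100, 25
i = np.arange(n)
E = np.exp(-0.05 * (i - 30.0) ** 2)    # pulse E(x, 0) = f(x)
H = np.roll(E, -1)                     # H at t = -dt/2 on staggered points

for _ in range(steps):
    H = H - (np.roll(E, -1) - E)       # H_{i+1/2}: t = n-1/2 -> n+1/2
    E = E - (H - np.roll(H, 1))        # E_i:       t = n     -> n+1

# exact solution is pure translation: E(x, t) = f(x - t)
err = np.abs(E - np.roll(np.exp(-0.05 * (i - 30.0) ** 2), steps)).max()
print(f"deviation from exact translation: {err:.2e}")
```

In 2-D and 3-D no single time step removes dispersion in all directions simultaneously, which is why schemes that are dispersion-free along the beam (longitudinal) direction are attractive for wake-field problems.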
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins... Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A, "A TSP Software Maintenance... Life Cycle", CrossTalk, March, 2005. 2. Koch, Alan S, "TSP Can Be the Building Blocks for CMMI", CrossTalk, March, 2005. 3. Hodgins, Brad, Rickets
Highly Accurate Prediction of Jobs Runtime Classes
Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi
2016-01-01
Separating the short jobs from the long ones is a known technique for improving scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
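The mixture-modelling step can be sketched with a hand-rolled 1-D EM fit followed by a threshold where the two weighted densities cross. The data are synthetic and the CART classifier the paper trains on job features is omitted here.

```python
import numpy as np

# Two-component Gaussian mixture fit to synthetic log-runtimes via EM,
# then a short/long threshold at the crossing of the weighted densities.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.5, 700),    # "short" jobs
                    rng.normal(6.0, 1.0, 300)])   # "long" jobs

mu = np.array([x.min(), x.max()])                 # crude initialization
sig = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibility of each component for each point
    pdf = (w / (sig * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2)
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter updates
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)

# threshold: point between the means where the weighted densities are equal
grid = np.linspace(mu.min(), mu.max(), 10001)
dens = (w / (sig * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((grid[:, None] - mu) / sig) ** 2)
threshold = grid[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))]
print(f"means ~ {np.sort(mu).round(2)}, short/long threshold ~ {threshold:.2f}")
```

Jobs whose predicted (log-)runtime falls below the threshold would be routed to the "short" queue; the overlap of the two Gaussians is what makes a learned classifier, rather than the raw threshold, worthwhile.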
Accurate multiplicity scaling in isotopically conjugate reactions
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of multiplicity distributions is presented. The distributions of π- mesons (negative particles) and π+ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor for the unit function Ψ(z) needed to obtain the desired multiplicity distribution. 29 refs.; 6 figs
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
Accurate performance analysis of opportunistic decode-and-forward relaying
Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim
2011-01-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may
An accurate projection algorithm for array processor based SPECT systems
International Nuclear Information System (INIS)
King, M.A.; Schwinger, R.B.; Cool, S.L.
1985-01-01
A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array-processor based computer system. The algorithm makes use of an accurate representation of pixel activity (a uniform square pixel model of the intensity distribution), and executes rapidly because the array-based algorithm and the Fast Fourier Transform (FFT) are handled efficiently on parallel processing hardware. The algorithm consists of a pixel-driven nearest neighbour projection operation onto an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to the original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency-space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and the evaluation of regions of interest in dynamic and gated SPECT
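The pixel-driven, subdivided-bin step can be sketched as follows. The geometry, bin counts, and subdivision factor are illustrative, and the follow-up FFT convolution with the projected square-pixel footprint is omitted.

```python
import numpy as np

# Pixel-driven nearest-neighbour projection onto subdivided detector bins.
def project(image, theta, n_bins, subdiv=4):
    ny, nx = image.shape
    yc, xc = np.indices(image.shape) + 0.5          # pixel centres
    xc -= nx / 2.0
    yc -= ny / 2.0
    t = xc * np.cos(theta) + yc * np.sin(theta)     # position along detector
    fine = np.zeros(n_bins * subdiv)
    idx = np.clip(((t + n_bins / 2.0) * subdiv).astype(int),
                  0, n_bins * subdiv - 1)           # nearest fine bin
    np.add.at(fine, idx.ravel(), image.ravel())     # deposit pixel activity
    return fine.reshape(n_bins, subdiv).sum(axis=1) # compress to bin size

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                             # a uniform square source
p0 = project(img, 0.0, 32)
print(f"total counts preserved: {p0.sum()} == {img.sum()}")
```

Depositing into finer bins before compressing reduces the nearest-neighbour quantization error, which is the role the subdivided bins play in the algorithm above.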
Accurate Recovery of H i Velocity Dispersion from Radio Interferometers
Energy Technology Data Exchange (ETDEWEB)
Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)
2017-05-01
Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
Accurate measurement of RF exposure from emerging wireless communication systems
International Nuclear Information System (INIS)
Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass
2013-01-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation, but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In that case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
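For reference, the RMS level and crest factor of an idealized duty-cycled burst signal, the quantities such probes must report correctly, can be computed directly. The waveform is an assumed rectangular burst train, not a real WiMAX/LTE/WiFi trace, whose OFDM envelope is far richer.

```python
import numpy as np

# RMS and crest factor of a rectangular burst train.
duty, peak, n = 0.25, 2.0, 10000
k = np.arange(n)
sig = np.where((k % 1000) < 1000 * duty, peak, 0.0)   # 10 bursts, 25% on-time

rms = np.sqrt(np.mean(sig ** 2))      # analytically peak * sqrt(duty) = 1.0
crest = peak / rms                    # analytically 1 / sqrt(duty)  = 2.0
print(f"RMS = {rms:.3f}, crest factor = {crest:.3f}")
```

A probe that averages the envelope rather than the power will misreport exactly such signals, which is the systematic error the paper's correction factors address.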
Toward accurate and fast iris segmentation for iris biometrics.
He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao
2009-09-01
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
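The pulling-and-pushing refinement can be sketched as Hooke-like spring forces from edge points acting on a trial circle. This is a simplified toy on synthetic edge points; the paper's full model also handles noncircular boundaries via spline-based edge fitting.

```python
import numpy as np

# Each edge point exerts a restoring force proportional to its radial
# error; the net force refines the centre, the mean error the radius.
def fit_circle(points, center, radius, iters=100, k=0.5):
    c = np.asarray(center, dtype=float)
    r = float(radius)
    for _ in range(iters):
        d = points - c
        dist = np.linalg.norm(d, axis=1)
        force = (dist - r)[:, None] * (d / dist[:, None])  # spring forces
        c += k * force.mean(axis=0)      # net force pulls/pushes the centre
        r += k * (dist - r).mean()       # mean radial error adjusts radius
    return c, r

rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2 * np.pi, 200)
pts = np.c_[50 + 20 * np.cos(ang), 60 + 20 * np.sin(ang)]
pts += rng.normal(0.0, 0.3, pts.shape)                # noisy "edge points"
c, r = fit_circle(pts, center=(45.0, 55.0), radius=15.0)
print(f"centre ~ ({c[0]:.1f}, {c[1]:.1f}), radius ~ {r:.1f}")
```

Starting from a deliberately wrong centre and radius, the iteration settles onto the generating circle; in the paper the rough starting point comes from the Adaboost-cascade detector.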
Accurate prediction of the enthalpies of formation for xanthophylls.
Lii, Jenn-Huei; Liao, Fu-Xing; Hu, Ching-Han
2011-11-30
This study investigates the applications of computational approaches to the prediction of enthalpies of formation (ΔH(f)) for C-, H-, and O-containing compounds. The MM4 molecular mechanics method, density functional theory (DFT) combined with the atomic equivalent (AE) and group equivalent (GE) schemes, and DFT-based correlation corrected atomization (CCAZ) were used. We emphasized the application to xanthophylls, C-, H-, and O-containing carotenoids which consist of ∼100 atoms and extended π-delocalization systems. Within the training set, MM4 predictions are more accurate than those obtained using AE and GE; however, a systematic underestimation was observed in the extended systems. ΔH(f) for the training set molecules predicted by CCAZ combined with DFT are in very good agreement with the G3 results. The average absolute deviations (AADs) of CCAZ combined with B3LYP and MPWB1K are 0.38 and 0.53 kcal/mol compared with the G3 data, and 0.74 and 0.69 kcal/mol compared with the available experimental data, respectively. The consistency of the CCAZ approach for the selected xanthophylls is revealed by the AAD of 2.68 kcal/mol between B3LYP-CCAZ and MPWB1K-CCAZ. Copyright © 2011 Wiley Periodicals, Inc.
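The atomic-equivalent (AE) bookkeeping benchmarked above can be illustrated in a few lines: an enthalpy of formation is estimated as the computed molecular energy minus a sum of fitted per-atom "equivalents". The equivalents and the example energy below are placeholders, not the paper's fitted values.

```python
# ΔH(f) via atomic equivalents. In a real AE scheme the per-atom values are
# fitted so that systematic errors of the electronic-structure method cancel.
HARTREE_TO_KCAL = 627.509
atom_equiv = {"C": -38.10, "H": -0.58, "O": -75.10}   # hypothetical, hartree

def dhf_from_ae(e_molecule, formula):
    """ΔH(f) in kcal/mol from a molecular energy (hartree) via AE."""
    ref = sum(n * atom_equiv[atom] for atom, n in formula.items())
    return (e_molecule - ref) * HARTREE_TO_KCAL

# toy "methane": energy 0.03 hartree below the sum of its equivalents
print(round(dhf_from_ae(-40.45, {"C": 1, "H": 4}), 1))
```

Group equivalents (GE) work the same way but assign fitted values per bonded group rather than per atom, capturing more of the local chemistry.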
Song, Zhixin; Tang, Wenzhong; Shan, Baoqing
2017-10-01
Evaluating heavy metal pollution status and ecological risk in river sediments is a complex task, requiring consideration of contaminant pollution levels as well as the effects of biological processes within the river system. There are currently no simple or low-cost approaches to heavy metal assessment in river sediments. Here, we introduce a system of assessment for the pollution status of heavy metals in river sediments, using measurements of Cd in the Shaocun River sediments as a case study. This system can be used to identify high-risk zones of the river that should be given more attention. First, we evaluated the pollution status of Cd in the river sediments based on their total Cd content, and calculated a risk assessment using local geochemical background values at various sites along the river. Second, we used both acetic acid and ethylenediaminetetraacetic acid (EDTA) to extract fractions of Cd from the sediments, and used diffusive gradients in thin films (DGT) to evaluate the bioavailability of Cd; DGT thus provided a measure of the potentially bioavailable concentrations of Cd in the sediments. Last, we measured Cd contents in plant tissue collected at the same sites for comparison with our other measures. A Pearson's correlation analysis showed that Cd-Plant correlated significantly with Cd-HAc (r = 0.788, P < 0.01), Cd-EDTA (r = 0.925, P < 0.01), Cd-DGT (r = 0.976, P < 0.01), and Cd-Total (r = 0.635, P < 0.05). We demonstrate that this system of assessment is a useful means of assessing heavy metal pollution status and ecological risk in river sediments. Copyright © 2017 Elsevier Ltd. All rights reserved.
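The Pearson correlations reported above can be reproduced in form from paired measurements. The abstract does not give the raw data, so the paired values below are purely illustrative.

```python
import numpy as np

# Pearson's r computed from scratch on hypothetical paired Cd measurements.
def pearson_r(a, b):
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

cd_dgt   = [0.10, 0.22, 0.35, 0.41, 0.58, 0.73]   # hypothetical DGT-labile Cd
cd_plant = [0.90, 1.80, 3.10, 3.50, 5.20, 6.40]   # hypothetical plant-tissue Cd
r = pearson_r(cd_dgt, cd_plant)
print(f"r = {r:.3f}")
```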
International Nuclear Information System (INIS)
Thompson, K.G.
2000-01-01
In this work, we develop a new spatial discretization scheme that may be used to numerically solve the neutron transport equation. This new discretization extends the family of corner balance spatial discretizations to include spatial grids of arbitrary polyhedra. This scheme enforces balance on subcell volumes called corners. It produces a lower triangular matrix for sweeping, is algebraically linear, is non-negative in a source-free absorber, and produces a robust and accurate solution in thick diffusive regions. Using an asymptotic analysis, we design the scheme so that in thick diffusive regions it will attain the same solution as an accurate polyhedral diffusion discretization. We then refine the approximations in the scheme to reduce numerical diffusion in vacuums, and we attempt to capture a second order truncation error. After we develop this Upstream Corner Balance Linear (UCBL) discretization we analyze its characteristics in several limits. We complete a full diffusion limit analysis showing that we capture the desired diffusion discretization in optically thick and highly scattering media. We review the upstream and linear properties of our discretization and then demonstrate that our scheme captures strictly non-negative solutions in source-free purely absorbing media. We then demonstrate the minimization of numerical diffusion of a beam and then demonstrate that the scheme is, in general, first order accurate. We also note that for slab-like problems our method actually behaves like a second-order method over a range of cell thicknesses that are of practical interest. We also discuss why our scheme is first order accurate for truly 3D problems and suggest changes in the algorithm that should make it a second-order accurate scheme. Finally, we demonstrate 3D UCBL's performance on several very different test problems. We show good performance in diffusive and streaming problems. We analyze truncation error in a 3D problem and demonstrate robustness in a
Anonymous Credential Schemes with Encrypted Attributes
Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.
2011-01-01
In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted
Community healthcare financing scheme: findings among residents ...
African Journals Online (AJOL)
... none were active participants, as 2 (0.6%) were indifferent. There was a statistically significant relationship (Fisher's exact test, P < 0.0001) between sex and knowledge of the scheme. Conclusion: Knowledge of the scheme was poor among the majority of the respondents, and none were active participants. Bribery and corruption was the ...
Improved Load Shedding Scheme considering Distributed Generation
DEFF Research Database (Denmark)
Das, Kaushik; Nitsas, Antonios; Altin, Müfit
2017-01-01
With high penetration of distributed generation (DG), conventional under-frequency load shedding (UFLS) faces many challenges and may not perform as expected. This article proposes new UFLS schemes, which are designed to overcome the shortcomings of the traditional load shedding scheme...
A generalized scheme for designing multistable continuous ...
Indian Academy of Sciences (India)
In this paper, a generalized scheme is proposed for designing multistable continuous dynamical systems. The scheme is based on the concept of partial synchronization of states and the concept of constants of motion. The most important observation is that by coupling two m-dimensional dynamical systems, multistable ...
Consolidation of the health insurance scheme
Association du personnel
2009-01-01
In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).
A hierarchical classification scheme of psoriasis images
DEFF Research Database (Denmark)
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...
Privacy Preserving Mapping Schemes Supporting Comparison
Tang, Qiang
2010-01-01
To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. A PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >
Mixed ultrasoft/norm-conserved pseudopotential scheme
DEFF Research Database (Denmark)
Stokbro, Kurt
1996-01-01
A variant of the Vanderbilt ultrasoft pseudopotential scheme, where the norm conservation is released for only one or a few angular channels, is presented. Within this scheme some difficulties of the truly ultrasoft pseudopotentials are overcome without sacrificing the pseudopotential softness. (...
Ferreira, Iuri E P; Zocchi, Silvio S; Baron, Daniel
2017-11-01
Reliable fertilizer recommendations depend on the correctness of the crop production models fitted to the data, but crop models are generally built empirically, neglecting important physiological aspects of the response to fertilizers, or they are based on laws of plant mineral nutrition seen by many authors as conflicting theories: Liebig's Law of the Minimum and Mitscherlich's Law of Diminishing Returns. We developed a new approach to modelling the crop response to fertilizers that reconciles these laws. In this study, Liebig's Law is applied at the cellular level to explain plant production and, as a result, crop models compatible with the Law of Diminishing Returns are derived. Some classical crop models appear here as special cases of our methodology, and a new interpretation of Mitscherlich's Law is also provided. Copyright © 2017 Elsevier Inc. All rights reserved.
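The two laws being reconciled give qualitatively different dose-response curves, which can be compared directly. All parameters are illustrative, not fitted to any dataset.

```python
import numpy as np

# Mitscherlich: smooth diminishing returns toward an asymptote A.
# Liebig (linear-plateau form): linear response capped by the limiting factor.
A, c, b = 100.0, 0.05, 5.0        # asymptote, efficiency, native soil supply
slope = 1.2                        # assumed Liebig response per unit nutrient
x = np.linspace(0.0, 120.0, 7)     # fertilizer doses

mitscherlich = A * (1.0 - np.exp(-c * (x + b)))
liebig = np.minimum(A, slope * (x + b))
for xi, m, l in zip(x, mitscherlich, liebig):
    print(f"dose {xi:5.1f}: Mitscherlich {m:6.1f}, Liebig {l:6.1f}")
```

The Mitscherlich curve bends continuously toward the asymptote, while the Liebig form rises linearly and then plateaus abruptly; the paper's cell-level argument shows how the first can emerge from the second.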
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
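The random-sampling segmentation step can be sketched in RANSAC style: repeatedly hypothesize a line from two sampled skeleton points and keep the hypothesis with the most inliers. This is a simplified stand-in; the paper's feasibility-domain analysis and noise bounds are not reproduced.

```python
import numpy as np

# RANSAC-style recovery of a line primitive from noisy skeleton points.
def ransac_line(pts, n_iter=200, tol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0.0:
            continue
        n = np.array([-d[1], d[0]]) / norm       # unit normal of sampled line
        dist = np.abs((pts - pts[i]) @ n)        # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(2)
xs = rng.uniform(0.0, 100.0, 150)
line_pts = np.c_[xs, 0.5 * xs + 10.0] + rng.normal(0.0, 0.3, (150, 2))
clutter = rng.uniform(0.0, 100.0, (50, 2))       # non-line "noise" pixels
mask = ransac_line(np.vstack([line_pts, clutter]))
print(f"{mask.sum()} of 200 points accepted as the line")
```

The accepted points would then be least-squares fitted and removed before searching for the next primitive, which mirrors the segment-and-simplify loop described above.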
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting window into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Accurate Charge Densities from Powder Diffraction
DEFF Research Database (Denmark)
Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob
Synchrotron powder X-ray diffraction has in recent years advanced to a level, where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal...... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge densities studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...
Arbitrarily accurate twin composite π -pulse sequences
Torosov, Boyan T.; Vitanov, Nikolay V.
2018-04-01
We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
Systematization of Accurate Discrete Optimization Methods
Directory of Open Access Journals (Sweden)
V. A. Ovchinnikov
2015-01-01
Full Text Available This paper considers exact methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to practical problems. The article presents an analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks that can be solved with each method are specified.
Labelling schemes: From a consumer perspective
DEFF Research Database (Denmark)
Juhl, Hans Jørn; Stacey, Julia
2000-01-01
Labelling of food products attracts a lot of political attention these days. As a result of a number of food scandals, most European countries have acknowledged the need for more information and better protection of consumers. Labelling schemes are one way of informing and guiding consumers....... However, initiatives in relation to labelling schemes seldom take their point of departure in consumers' needs and expectations; and in many cases, the schemes are defined by the institutions guaranteeing the label. It is therefore interesting to study how consumers actually value labelling schemes....... A recent MAPP study has investigated the value consumers attach to the Government-controlled labels 'Ø-mærket' and 'Den Blå Lup' and the private supermarket label 'Mesterhakket' when they purchase minced meat. The results reveal four consumer segments that use labelling schemes for food products very...
Birkhoffian Symplectic Scheme for a Quantum System
International Nuclear Information System (INIS)
Su Hongling
2010-01-01
In this paper, a classical system of ordinary differential equations is built to describe a kind of n-dimensional quantum system. The absorption spectrum and the density of states of the system are defined from both the quantum and the classical points of view. From the Birkhoffian form of the equations, a Birkhoffian symplectic scheme is derived for solving the n-dimensional equations by using the generating function method. Besides preserving the Birkhoffian structure, the new scheme is proven to preserve the discrete local energy conservation law of the system with zero vector f. Some numerical experiments for a 3-dimensional example show that the new scheme can simulate the general Birkhoffian system better than the implicit midpoint scheme, which is well known to be a symplectic scheme for Hamiltonian systems. (general)
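For reference, the implicit midpoint rule used above as the comparison scheme can be applied to the simplest Hamiltonian system, the harmonic oscillator, where one step reduces to a Cayley transform that is orthogonal, so the quadratic energy is conserved to rounding error. This sketch does not reproduce the Birkhoffian scheme itself; for a nonlinear system each midpoint step would instead require a Newton solve.

```python
import numpy as np

# Implicit midpoint rule for H = (p^2 + q^2)/2, z = (q, p), z' = J z.
# For a linear system the step is the Cayley transform of (h/2) J.
h, steps = 0.1, 1000
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
step = np.linalg.solve(np.eye(2) - 0.5 * h * J, np.eye(2) + 0.5 * h * J)

z = np.array([1.0, 0.0])                 # q = 1, p = 0
energies = []
for _ in range(steps):
    z = step @ z
    energies.append(0.5 * float(z @ z))
print(f"energy drift over {steps} steps: {max(energies) - min(energies):.2e}")
```

A non-symplectic method such as explicit Euler would show a steadily growing energy in the same experiment, which is the behaviour structure-preserving schemes are designed to avoid.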
Autonomous droop scheme with reduced generation cost
DEFF Research Database (Denmark)
Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede
2013-01-01
The droop scheme has been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For a standalone microgrid, where a centralized management system is not viable, proportional power sharing based droop might not suit well since...... DGs are usually of different types, unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG, since all these factors directly or indirectly contribute to the Total Generation Cost (TGC......) of the overall microgrid. Compared with the traditional scheme, the proposed scheme retains its simplicity, which certainly is a feature preferred by the industry. The overall performance of the proposed scheme has been verified through simulation and experiment....
Five challenges to reconcile agricultural land use and forest ecosystem services in Southeast Asia.
Carrasco, L R; Papworth, S K; Reed, J; Symes, W S; Ickowitz, A; Clements, T; Peh, K S-H; Sunderland, T
2016-10-01
Southeast Asia possesses the highest rates of tropical deforestation globally and exceptional levels of species richness and endemism. Many countries in the region are also recognized for their food insecurity and poverty, making the reconciliation of agricultural production and forest conservation a particular priority. This reconciliation requires recognition of the trade-offs between competing land-use values and the subsequent incorporation of this information into policy making. To date, such reconciliation has been relatively unsuccessful across much of Southeast Asia. We propose an ecosystem services (ES) value-internalization framework that identifies the key challenges to such reconciliation. These challenges include lack of accessible ES valuation techniques; limited knowledge of the links between forests, food security, and human well-being; weak demand and political will for the integration of ES in economic activities and environmental regulation; a disconnect between decision makers and ES valuation; and lack of transparent discussion platforms where stakeholders can work toward consensus on negotiated land-use management decisions. Key research priorities to overcome these challenges are developing easy-to-use ES valuation techniques; quantifying links between forests and well-being that go beyond economic values; understanding factors that prevent the incorporation of ES into markets, regulations, and environmental certification schemes; understanding how to integrate ES valuation into policy making processes; and determining how to reduce corruption and power plays in land-use planning processes. © 2016 Society for Conservation Biology.
Reconciling semiclassical and Bohmian mechanics. II. Scattering states for discontinuous potentials
International Nuclear Information System (INIS)
Trahan, Corey; Poirier, Bill
2006-01-01
In a previous paper [B. Poirier, J. Chem. Phys. 121, 4501 (2004)] a unique bipolar decomposition, Ψ = Ψ₁ + Ψ₂, was presented for stationary bound states Ψ of the one-dimensional Schroedinger equation, such that the components Ψ₁ and Ψ₂ approach their semiclassical WKB analogs in the large action limit. Moreover, by applying the Madelung-Bohm ansatz to the components rather than to Ψ itself, the resultant bipolar Bohmian mechanical formulation satisfies the correspondence principle. As a result, the bipolar quantum trajectories are classical-like and well behaved, even when Ψ has many nodes or is wildly oscillatory. In this paper, the previous decomposition scheme is modified in order to achieve the same desirable properties for stationary scattering states. Discontinuous potential systems are considered (hard wall, step potential, and square barrier/well), for which the bipolar quantum potential is found to be zero everywhere, except at the discontinuities. This approach leads to an exact numerical method for computing stationary scattering states of any desired boundary conditions, and reflection and transmission probabilities. The continuous potential case will be considered in a companion paper [C. Trahan and B. Poirier, J. Chem. Phys. 124, 034116 (2006), following paper].
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-07-01
The packet combining (PC) scheme is a simple, well-defined error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput than other basic ARQ protocols, the PC scheme fails to correct errors when errors occur in the same bit locations of both copies. In a previous work, the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle a situation where a packet has more than one erroneous bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed to offer higher throughput in previous studies; however, neither adequate investigation nor exact analysis was done to substantiate this claim. In this work, an exact analysis of both PRPC and MPC is carried out and the results are reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)
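The basic packet-combining idea can be sketched: XOR two received copies to locate the bit positions where they disagree, then search over those positions for a candidate that passes the integrity check. A toy sketch using CRC-32 as the check (function names and packet contents are illustrative; real protocols append the CRC to the packet itself):

```python
import binascii
from itertools import product

def packet_combine(copy1, copy2, check):
    # Bits where the two received copies disagree are the candidate
    # error locations; try every way of picking each bit from either copy.
    diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for choice in product((0, 1), repeat=len(diff)):
        cand = list(copy1)
        for pos, pick in zip(diff, choice):
            if pick:
                cand[pos] = copy2[pos]
        if check(cand):
            return cand
    return None  # e.g. when both copies erred in the same bit positions

sent = [1, 0, 1, 1, 0, 1, 0, 0]
crc = binascii.crc32(bytes(sent))            # computed by the transmitter
check = lambda cand: binascii.crc32(bytes(cand)) == crc

copy1 = [1, 0, 0, 1, 0, 1, 0, 0]             # bit 2 flipped in transit
copy2 = [1, 0, 1, 1, 1, 1, 0, 0]             # bit 4 flipped in transit
recovered = packet_combine(copy1, copy2, check)
```

When both copies are corrupted in the same position, the XOR reveals no candidate location and combining fails, which is exactly the limitation PRPC is designed to address.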
Analysis of central and upwind compact schemes
International Nuclear Information System (INIS)
Sengupta, T.K.; Ganeriwal, G.; De, S.
2003-01-01
Central and upwind compact schemes for spatial discretization have been analyzed with respect to accuracy in spectral space, numerical stability and dispersion relation preservation. A von Neumann matrix spectral analysis is developed here to analyze spatial discretization schemes for any explicit and implicit schemes to investigate the full domain simultaneously. This allows one to evaluate various boundary closures and their effects on the domain interior. The same method can be used for stability analysis performed for the semi-discrete initial boundary value problems (IBVP). This analysis tells one about the stability for every resolved length scale. Some well-known compact schemes that were found to be G-K-S and time stable are shown here to be unstable for selective length scales by this analysis. This is attributed to boundary closure and we suggest special boundary treatment to remove this shortcoming. To demonstrate the asymptotic stability of the resultant schemes, numerical solution of the wave equation is compared with analytical solution. Furthermore, some of these schemes are used to solve two-dimensional Navier-Stokes equation and a computational acoustic problem to check their ability to solve problems for long time. It is found that those schemes, that were found unstable for the wave equation, are unsuitable for solving incompressible Navier-Stokes equation. In contrast, the proposed compact schemes with improved boundary closure and an explicit higher-order upwind scheme produced correct results. The numerical solution for the acoustic problem is compared with the exact solution and the quality of the match shows that the used compact scheme has the requisite DRP property
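The spectral-space accuracy comparison described above can be illustrated for interior stencils by their modified wavenumbers: for the classical fourth-order Padé compact scheme (α = 1/4) the modified wavenumber is k'h = (3/2)·sin(kh)/(1 + (1/2)·cos(kh)), versus (8·sin(kh) − sin(2kh))/6 for the explicit fourth-order central stencil. A sketch of the interior analysis only (the paper's full-domain matrix analysis with boundary closures is not reproduced):

```python
import numpy as np

kh = 2.0                                           # a marginally resolved scale
exact = kh                                         # ideal spectral differentiation
central2 = np.sin(kh)                              # 2nd-order central difference
central4 = (8 * np.sin(kh) - np.sin(2 * kh)) / 6   # 4th-order central difference
compact4 = 1.5 * np.sin(kh) / (1 + 0.5 * np.cos(kh))  # 4th-order Pade compact

# At this wavenumber the compact scheme's modified wavenumber is far
# closer to the exact value than either explicit stencil's.
```

This resolution advantage of compact schemes at high wavenumbers is precisely why boundary closures, which can destabilize those same scales, need the careful treatment the abstract describes.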
Accurate shear measurement with faint sources
International Nuclear Information System (INIS)
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finiteness of the source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys
How Accurately can we Calculate Thermal Systems?
International Nuclear Information System (INIS)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-01-01
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, this will eventually lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors
Accurate control testing for clay liner permeability
Energy Technology Data Exchange (ETDEWEB)
Mitchell, R J
1991-08-01
Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
An empirical comparison of alternative schemes for combining electricity spot price forecasts
International Nuclear Information System (INIS)
Nowotarski, Jakub; Raviv, Eran; Trück, Stefan; Weron, Rafał
2014-01-01
In this comprehensive empirical study we critically evaluate the use of forecast averaging in the context of electricity prices. We apply seven averaging and one selection scheme and perform a backtesting analysis on day-ahead electricity prices in three major European and US markets. Our findings support the additional benefit of combining forecasts of individual methods for deriving more accurate predictions; however, the performance is not uniform across the considered markets and periods. In particular, equally weighted pooling of forecasts emerges as a simple, yet powerful technique compared with other schemes that rely on estimated combination weights, but only when there is no individual predictor that consistently outperforms its competitors. Constrained least squares regression (CLS) offers a balance between robustness against such well performing individual methods and relatively accurate forecasts, on average better than those of the individual predictors. Finally, some popular forecast averaging schemes – like ordinary least squares regression (OLS) and Bayesian Model Averaging (BMA) – turn out to be unsuitable for predicting day-ahead electricity prices. - Highlights: • So far the most extensive study on combining forecasts for electricity spot prices • 12 stochastic models, 8 forecast combination schemes and 3 markets considered • Our findings support the additional benefit of combining forecasts for deriving more accurate predictions • Methods that allow for unconstrained weights, such as OLS averaging, should be avoided • We recommend a backtesting exercise to identify the preferred forecast averaging method for the data at hand
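Equally weighted pooling, the simple benchmark highlighted above, is just the arithmetic mean of the individual forecasts. A synthetic sketch of why it helps when no single predictor dominates (the "price" series and noise levels are made up for illustration; real day-ahead prices and the paper's 12 stochastic models are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
price = 40.0 + 10.0 * np.sin(np.linspace(0, 20, n))   # stand-in spot price

# Two unbiased forecasters with independent errors of equal size.
fc1 = price + rng.normal(0, 3, n)
fc2 = price + rng.normal(0, 3, n)
pooled = 0.5 * (fc1 + fc2)                            # equal-weight combination

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - price) ** 2)))
# Averaging independent errors shrinks the RMSE toward 3/sqrt(2) ~ 2.12.
```

When one forecaster consistently dominates, equal weights drag the combination toward the weaker model, which is where estimated-weight schemes such as CLS earn their keep.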
Agreeable fancy or disagreeable truth? Reconciling self-enhancement and self-verification.
Swann, W B; Pelham, B W; Krull, D S
1989-11-01
Three studies asked why people sometimes seek positive feedback (self-enhance) and sometimes seek subjectively accurate feedback (self-verify). Consistent with self-enhancement theory, people with low self-esteem as well as those with high self-esteem indicated that they preferred feedback pertaining to their positive rather than negative self-views. Consistent with self-verification theory, the very people who sought favorable feedback pertaining to their positive self-conceptions sought unfavorable feedback pertaining to their negative self-views, regardless of their level of global self-esteem. Apparently, although all people prefer to seek feedback regarding their positive self-views, when they seek feedback regarding their negative self-views, they seek unfavorable feedback. Whether people self-enhance or self-verify thus seems to be determined by the positivity of the relevant self-conceptions rather than their level of self-esteem or the type of person they are.
Analysis of a fourth-order compact scheme for convection-diffusion
International Nuclear Information System (INIS)
Yavneh, I.
1997-01-01
In 1984, Gupta et al. introduced a compact fourth-order finite-difference convection-diffusion operator with some very favorable properties. In particular, this scheme does not seem to suffer excessively from spurious oscillatory behavior, and it converges with standard methods such as Gauss-Seidel or SOR (hence, multigrid) regardless of the diffusion. This scheme has been rederived, developed (including some variations), and applied in both convection-diffusion and Navier-Stokes equations by several authors. Accurate solutions to high Reynolds-number flow problems at relatively coarse resolutions have been reported. These solutions were often compared to those obtained by lower order discretizations, such as second-order central differences and first-order upstream discretizations. The latter, it was stated, achieved far less accurate results due to the artificial viscosity, which the compact scheme did not include. We show here that, while the compact scheme indeed does not suffer from a cross-stream artificial viscosity (as does the first-order upstream scheme when the characteristic direction is not aligned with the grid), it does include a streamwise artificial viscosity that is inversely proportional to the natural viscosity. This term is not always benign. 7 refs., 1 fig., 1 tab
Directory of Open Access Journals (Sweden)
S. Szopa
2005-01-01
Full Text Available The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes. The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350,000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species, and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-01
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marchingon-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, latetime instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit P E(CE) m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
Directory of Open Access Journals (Sweden)
Lilja Jóhannesdóttir
2017-03-01
Full Text Available Intensified agricultural practices have driven biodiversity loss throughout the world, and although many actions aimed at halting and reversing these declines have been developed, their effectiveness depends greatly on the willingness of stakeholders to take part in conservation management. Knowledge of the willingness and capacity of landowners to engage with conservation can therefore be key to designing successful management strategies in agricultural land. In Iceland, agriculture is currently at a relatively low intensity but is very likely to expand in the near future. At the same time, Iceland supports internationally important breeding populations of many ground-nesting birds that could be seriously impacted by further expansion of agricultural activities. To understand the views of Icelandic farmers toward bird conservation, given the current potential for agricultural expansion, 62 farms across Iceland were visited and farmers were interviewed, using a structured questionnaire survey in which respondents indicated their likely responses to a series of possible future actions. Most farmers intend to increase the area of cultivated land in the near future, and despite considering having rich birdlife on their land to be very important, most also report they are unlikely to specifically consider bird conservation in their management, even if financial compensation were available. However, as no agri-environment schemes are currently in place in Iceland, this concept is highly unfamiliar to Icelandic farmers. Nearly all respondents were unwilling, and thought it would be impossible, to delay harvest, but many were willing to consider sparing important patches of land and/or maintaining existing pools within fields (a key habitat feature for breeding waders). Farmers' views on the importance of having rich birdlife on their land and their willingness to participate in bird conservation provide a potential platform for the codesign of conservation management with landowners.
Symmetric weak ternary quantum homomorphic encryption schemes
Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao
2016-03-01
Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability of p_k = 1/3^(3n); thus the schemes can better protect the privacy of users' data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users' private quantum information can be well protected in a distributed computing environment.
Ponzi scheme diffusion in complex networks
Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue
2017-08-01
Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both a higher average degree and a power-law degree distribution will reduce the critical spreading threshold and will speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening situation of fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
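The potential-investor-divestor (PID) structure described above is compartmental, analogous to an SIR epidemic: potential investors become investors on contact with investors, and investors eventually divest. A minimal well-mixed (homogeneous) ODE sketch with illustrative rates (the paper's network simulations and fund-flux function are not reproduced):

```python
# PID compartments: P (potential investors), I (investors), D (divestors).
# dP/dt = -beta*P*I/N,  dI/dt = beta*P*I/N - gamma*I,  dD/dt = gamma*I
beta, gamma = 0.8, 0.3        # recruitment and divestment rates (illustrative)
N = 1000.0
P, I, D = 990.0, 10.0, 0.0
dt = 0.01

peak_I = I
for _ in range(5000):         # 50 time units of forward-Euler integration
    new_inv = beta * P * I / N * dt
    out_inv = gamma * I * dt
    P -= new_inv
    I += new_inv - out_inv
    D += out_inv
    peak_I = max(peak_I, I)
# The investor count rises to a peak, then decays: diffusion, saturation,
# and collapse, mirroring the scheme's life cycle in the abstract.
```

Since beta/gamma > 1, the scheme initially spreads; faster diffusion (larger beta) raises the peak and brings the collapse forward, which is the qualitative behavior the abstract reports for networked versions of the model.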
Optimal Face-Iris Multimodal Fusion Scheme
Directory of Open Access Journals (Sweden)
Omid Sharifi
2016-06-01
Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
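Score-level fusion, the simplest of the fusion levels mentioned above, combines the per-modality matcher scores with weights (which the paper's BSA step would tune). A toy weighted-sum sketch in which neither modality alone separates genuine from impostor attempts, but the fused score does; all score values are made up for illustration:

```python
# Per-modality matcher scores in [0, 1]; higher = more likely genuine.
genuine  = [(0.9, 0.5), (0.4, 0.9)]    # (face, iris) score pairs
impostor = [(0.5, 0.6), (0.5, 0.45)]

def fuse(face, iris, w=0.5):
    # Weighted-sum score-level fusion; w would be optimized (e.g. by BSA).
    return w * face + (1 - w) * iris

fused_genuine  = [fuse(f, i) for f, i in genuine]    # [0.70, 0.65]
fused_impostor = [fuse(f, i) for f, i in impostor]   # [0.55, 0.475]
# Every fused genuine score exceeds every fused impostor score, even
# though each single modality overlaps across the two classes.
```

This complementarity, with one modality compensating where the other is weak, is the core rationale for multimodal fusion given in the abstract.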
Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging
Desmal, Abdulla
2016-03-01
Electromagnetic imaging is the problem of determining material properties from scattered fields measured away from the domain under investigation. Solving this inverse problem is a challenging task because (i) it is ill-posed due to the presence of (smoothing) integral operators used in the representation of scattered fields in terms of material properties, and scattered fields are obtained at a finite set of points through noisy measurements; and (ii) it is nonlinear simply due to the fact that scattered fields are nonlinear functions of the material properties. The work described in this thesis tackles the ill-posedness of the electromagnetic imaging problem using sparsity-based regularization techniques, which assume that the scatterer(s) occupy only a small fraction of the investigation domain. More specifically, four novel imaging methods are formulated and implemented. (i) Sparsity-regularized Born iterative method iteratively linearizes the nonlinear inverse scattering problem and each linear problem is regularized using an improved iterative shrinkage algorithm enforcing the sparsity constraint. (ii) Sparsity-regularized nonlinear inexact Newton method calls for the solution of a linear system involving the Frechet derivative matrix of the forward scattering operator at every iteration step. For faster convergence, the solution of this matrix system is regularized under the sparsity constraint and preconditioned by leveling the matrix singular values. (iii) Sparsity-regularized nonlinear Tikhonov method directly solves the nonlinear minimization problem using Landweber iterations, where a thresholding function is applied at every iteration step to enforce the sparsity constraint. (iv) This last scheme is accelerated using a projected steepest descent method when it is applied to three-dimensional investigation domains. Projection replaces the thresholding operation and enforces the sparsity constraint. Numerical experiments, which are carried out using
An accurate reactive power control study in virtual flux droop control
Wang, Aimeng; Zhang, Jia
2017-12-01
This paper investigates the problem of reactive power sharing based on a virtual flux droop method. Firstly, the flux droop control method is derived, where complicated multiple feedback loops and parameter regulation are avoided. Then, the reasons for inaccurate reactive power sharing are theoretically analyzed. Further, a novel reactive power control scheme is proposed which consists of three parts: compensation control, voltage recovery control and flux droop control. Finally, the proposed reactive power control strategy is verified in a simplified microgrid model with two parallel DGs. The simulation results show that the proposed control scheme can achieve accurate reactive power sharing and zero deviation of voltage. Meanwhile, it offers the advantages of simple control and excellent dynamic and static performance.
Accurate outage analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim
2011-01-01
In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. We first derive the exact statistics of received signal-to-noise (SNR) over each hop with co-located relays, in terms of probability density function (PDF). Then, the PDFs are used to determine very accurate closed-form expression for the outage probability for a transmission rate R. Furthermore, we perform asymptotic analysis and we deduce the diversity order of the scheme. We validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
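The agreement between closed-form expressions and simulation claimed above can be illustrated on the simplest ingredient: for a Rayleigh-faded link with average SNR γ̄, the SNR is exponentially distributed and the outage probability at rate R is P_out = 1 − exp(−(2^R − 1)/γ̄). A Monte Carlo check of that single-link building block (the full dual-hop opportunistic-relaying PDFs derived in the paper are not reproduced here; parameter values are illustrative):

```python
import math
import random

random.seed(1)
avg_snr = 10.0     # average SNR, linear scale (illustrative)
R = 2.0            # target rate, bits/s/Hz
thresh = 2 ** R - 1

# Closed-form outage probability for an exponentially distributed SNR.
exact = 1 - math.exp(-thresh / avg_snr)

# Monte Carlo: draw instantaneous SNRs and count outage events.
trials = 200_000
outages = sum(random.expovariate(1 / avg_snr) < thresh for _ in range(trials))
mc = outages / trials
# mc agrees with the closed form to within Monte Carlo noise.
```

The paper's analysis follows the same validation pattern, but with the exact per-hop PDFs of the selected-relay SNR in place of this single exponential link.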
SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES
Directory of Open Access Journals (Sweden)
S.ZIBAEI
2016-12-01
Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. A non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors of the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
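A minimal sketch of the kind of Mickens-type NSFD step the abstract refers to, applied to the classical (integer-order) Lotka-Volterra prey-predator system; the paper's fractional-order construction adds a memory term not shown here, and all parameter values are illustrative assumptions.

```python
import math

# Mickens-type NSFD step for the classical Lotka-Volterra system
#   x' = x(a - b y),  y' = y(d x - c)
# Growth terms are evaluated at the old time level and loss terms at the new
# level, which keeps x and y positive for any step size.
a, b, c, d = 1.0, 0.5, 0.75, 0.25   # illustrative parameters
h = 0.05
phi = math.expm1(h)                  # denominator function phi(h) = e^h - 1

def nsfd_step(x, y):
    x_new = x * (1 + phi * a) / (1 + phi * b * y)
    y_new = y * (1 + phi * d * x_new) / (1 + phi * c)
    return x_new, y_new

x, y = 2.0, 1.0
for _ in range(2000):
    x, y = nsfd_step(x, y)
    assert x > 0 and y > 0           # positivity preserved by construction
print(f"state after 2000 steps: prey={x:.3f}, predator={y:.3f}")
```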
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition, the degree of knowledge that subjects have about the correctness of their decisions, for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, which has a simple form, fulfills the particle number, momentum, and energy conservation laws and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it effectively assures small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the model is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible the theoretical bias in the experimental analyses. Recently, significant progress has been made in Next-to-Leading-Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, which aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I will illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, and describe the future plans.
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Multidimensional flux-limited advection schemes
International Nuclear Information System (INIS)
Thuburn, J.
1996-01-01
A general method for building multidimensional shape-preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion.
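In one dimension, the limited upwind fluxes the abstract describes reduce to the familiar TVD construction below. This is a generic minmod-limited scheme for linear advection on a periodic grid, offered as an illustrative sketch, not the paper's multidimensional limiter.

```python
import numpy as np

# 1D flux-limited advection of u_t + a u_x = 0 (a > 0, periodic grid)
# using the minmod limiter. The limited slope keeps the scheme shape
# preserving: no new extrema are created beyond the initial data.
n, a, cfl = 100, 1.0, 0.5
dx = 1.0 / n
dt = cfl * dx / a
x = (np.arange(n) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse

def minmod(p, q):
    return np.where(p * q > 0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

for _ in range(int(0.2 / dt)):
    # limited slope in each cell, then the upwind value at the right face
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * (1 - a * dt / dx) * du    # value at face i+1/2
    flux = a * u_face                            # upwind flux through face i+1/2
    u = u - dt / dx * (flux - np.roll(flux, 1))

print(u.min(), u.max())   # stays within the initial bounds [0, 1]
```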
Finite-volume scheme for anisotropic diffusion
Energy Technology Data Exchange (ETDEWEB)
Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in the individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
The new WAGR data acquisition scheme
International Nuclear Information System (INIS)
Ellis, W.E.; Leng, J.H.; Smith, I.C.; Smith, M.R.
1976-06-01
The existing WAGR data acquisition equipment was inadequate to meet the requirements introduced by the installation of two additional experimental loops and was in any case due for replacement. A completely new scheme was planned and implemented based on mini-computers which, while preserving all the useful features of the old scheme, provided additional flexibility and improved data display. Both the initial objectives of the design and the final implementation are discussed without introducing detailed descriptions of hardware or the programming techniques employed. Although the scheme solves a specific problem, the general principles are more widely applicable and could readily be adapted to other data checking and display problems. (author)
Kinematic reversal schemes for the geomagnetic dipole.
Levy, E. H.
1972-01-01
Fluctuations in the distribution of cyclonic convective cells, in the earth's core, can reverse the sign of the geomagnetic field. Two kinematic reversal schemes are discussed. In the first scheme, a field maintained by cyclones concentrated at low latitude is reversed by a burst of cyclones at high latitude. Conversely, in the second scheme, a field maintained predominantly by cyclones in high latitudes is reversed by a fluctuation consisting of a burst of cyclonic convection at low latitude. The precise fluid motions which produce the geomagnetic field are not known. However, it appears that, whatever the details are, a fluctuation in the distribution of cyclonic cells over latitude can cause a geomagnetic reversal.
Autonomous Droop Scheme With Reduced Generation Cost
DEFF Research Database (Denmark)
Nutkani, Inam Ullah; Loh, Poh Chiang; Wang, Peng
2014-01-01
... This objective might, however, not suit microgrids well since DGs are usually of different types, unlike synchronous generators. Other factors like cost, efficiency, and emission penalty of each DG at different loading must be considered, since they contribute directly to the total generation cost (TGC) of the microgrid. To reduce this TGC without relying on fast communication links, an autonomous droop scheme is proposed here, whose resulting power sharing is decided by the individual DG generation costs. Comparing it with the traditional scheme, the proposed scheme retains its simplicity and it is hence more...
Cognitive radio networks dynamic resource allocation schemes
Wang, Shaowei
2014-01-01
This SpringerBrief presents a survey of dynamic resource allocation schemes in Cognitive Radio (CR) Systems, focusing on the spectral-efficiency and energy-efficiency in wireless networks. It also introduces a variety of dynamic resource allocation schemes for CR networks and provides a concise introduction of the landscape of CR technology. The author covers in detail the dynamic resource allocation problem for the motivations and challenges in CR systems. The Spectral- and Energy-Efficient resource allocation schemes are comprehensively investigated, including new insights into the trade-off
Algebraic K-theory of generalized schemes
DEFF Research Database (Denmark)
Anevski, Stella Victoria Desiree
Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov-theoretic objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes Sp\ec ZN, appearing in the construction of \Spec Z.
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. This is because travelers prefer the route in the best condition when given accurate information, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
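The boundedly rational choice rule described above can be sketched as a toy two-route simulation: travelers see travel-time feedback and treat the routes as equivalent when the difference is below the threshold BR. All numbers are illustrative assumptions, not the paper's model.

```python
import random

def choose_route(t1, t2, br):
    """Return 0 or 1; indifferent (random) when |t1 - t2| < br."""
    if abs(t1 - t2) < br:
        return random.randint(0, 1)
    return 0 if t1 < t2 else 1

random.seed(1)
n_travelers, steps, br = 1000, 50, 5.0
loads = [500, 500]
for _ in range(steps):
    # travel time grows with load (simple linear congestion assumption)
    t = [10 + 0.05 * loads[0], 10 + 0.05 * loads[1]]
    loads = [0, 0]
    for _ in range(n_travelers):
        loads[choose_route(t[0], t[1], br)] += 1
print("final split:", loads)
```

With BR = 0 every traveler piles onto whichever route was faster last step, producing the oscillations the abstract mentions; with a positive BR the split stays near the balanced equilibrium.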
Energy Technology Data Exchange (ETDEWEB)
Lee, Won Woong; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of)
2016-05-15
The existing nuclear system analysis codes such as RELAP5, TRAC, MARS and SPACE use a first-order numerical scheme in both space and time discretization. However, the first-order scheme is highly diffusive and less accurate due to its first-order truncation error. As a result, a numerical diffusion problem, which smooths out gradients in regions where they should be steep, can occur during the analysis, often yielding less conservative predictions than reality. Therefore, the first-order scheme is not always adequate for applications such as boron solute transport. RELAP7, an advanced nuclear reactor system safety analysis code using second-order numerical schemes in temporal and spatial discretization, has been under development at INL (Idaho National Laboratory) since 2011. Therefore, for better predictive performance on the safety of nuclear reactor systems, a more accurate nuclear reactor system analysis code is needed in Korea as well, to follow the global trend of nuclear safety analysis. Thus, this study evaluates the feasibility of applying higher-order numerical schemes to a next-generation nuclear system analysis code, to provide the basis for developing a better nuclear system analysis code. The spatial second-order scheme enhances accuracy and alleviates the numerical diffusion problem, but it exhibits a significantly lower maximum Courant limit and a numerical dispersion issue that produces spurious oscillations and non-physical results. If the spatial scheme is first order, the temporal second-order scheme gives almost the same result as the temporal first-order scheme. However, when the temporal second-order and spatial second-order schemes are applied together, the numerical dispersion can become more severe. For a more in-depth study, the verification and validation of the NTS code built in MATLAB will be conducted further and expanded to handle two
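The diffusion-versus-dispersion trade-off described above can be seen in a generic toy problem: advecting a sharp front (e.g., a boron concentration front) with a first-order upwind scheme versus a second-order (Lax-Wendroff) scheme. This illustrates the generic behavior the abstract refers to, not the RELAP/SPACE discretizations themselves.

```python
import numpy as np

# Advect a step profile on a periodic grid with two schemes and compare
# the first-order scheme's smearing against the second-order scheme's
# spurious oscillations (dispersion).
n, cfl = 200, 0.5
u1 = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # sharp front
u2 = u1.copy()
for _ in range(100):
    u1 = u1 - cfl * (u1 - np.roll(u1, 1))                       # 1st-order upwind
    u2 = (u2 - 0.5 * cfl * (np.roll(u2, -1) - np.roll(u2, 1))   # Lax-Wendroff
             + 0.5 * cfl**2 * (np.roll(u2, -1) - 2 * u2 + np.roll(u2, 1)))

def front_width(u):
    # number of cells with values strictly between 0.05 and 0.95
    return int(np.sum((u > 0.05) & (u < 0.95)))

print("smeared cells, 1st order:", front_width(u1))
print("overshoot, 2nd order:", float(u2.max() - 1.0))  # dispersion ripples
```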
A survey of Strong Convergent Schemes for the Simulation of ...
African Journals Online (AJOL)
We considered strongly convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor expansion, which is the main tool used for the derivation of strongly convergent schemes; the Euler-Maruyama scheme, the Milstein scheme, stochastic multistep schemes, and implicit and explicit schemes were ...
Setting aside transactions from pyramid schemes as impeachable ...
African Journals Online (AJOL)
These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...
Directory of Open Access Journals (Sweden)
Muhammad
2017-01-01
Full Text Available We review harvested energy prediction schemes to be used in wireless sensor networks and explore the relative merits of landmark solutions. We propose enhancements to the well-known Profile-Energy (Pro-Energy) model, the so-called Improved Profile-Energy (IPro-Energy), and compare its performance with the Accurate Solar Irradiance Prediction Model (ASIM), Pro-Energy, and the Weather Conditioned Moving Average (WCMA). The performance metrics considered are the prediction accuracy and the execution time, which measures the implementation complexity. In addition, the effectiveness of the considered models, when integrated in an energy management scheme, is also investigated in terms of the achieved throughput and the energy consumption. Both solar irradiance and wind power datasets are used for the evaluation study. Our results indicate that the proposed IPro-Energy scheme outperforms the other candidate models in terms of prediction accuracy by up to 78% for short-term predictions and 50% for medium-term prediction horizons. For long-term predictions, its prediction accuracy is comparable to the Pro-Energy model but outperforms the other models by up to 64%. In addition, the IPro-Energy scheme is able to achieve the highest throughput when integrated in the developed energy management scheme. Finally, the ASIM scheme reports the smallest implementation complexity.
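A minimal sketch of the WCMA-style prediction idea mentioned above: the next-slot forecast blends the current observation with the mean of the same slot over past days, scaled by how today compares to those days so far. The data, parameters, and exact blending rule are illustrative assumptions, not the published WCMA specification.

```python
import numpy as np

# Synthetic solar profiles: half-sine over the day plus noise, per-day scaling.
rng = np.random.default_rng(42)
days, slots = 10, 48
base = np.clip(np.sin(np.linspace(0, np.pi, slots)), 0, None)
history = np.array([base * rng.uniform(0.6, 1.0) for _ in range(days)])

alpha = 0.5
def wcma_predict(history, today, t):
    """Predict harvested energy in slot t+1 of the current day."""
    mean_next = history[:, t + 1].mean()                 # past days, same slot
    hist_sofar = history[:, : t + 1].sum(axis=1).mean()  # past days' mornings
    # "weather factor": how today's morning compares to past mornings
    gap = today[: t + 1].sum() / hist_sofar if hist_sofar > 1e-6 else 1.0
    return alpha * today[t] + (1 - alpha) * gap * mean_next

today = np.clip(base * 0.8 + rng.normal(0, 0.02, slots), 0, None)
preds = [wcma_predict(history, today, t) for t in range(slots - 1)]
mae = float(np.mean(np.abs(np.array(preds) - today[1:])))
print(f"mean absolute prediction error: {mae:.3f}")
```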
Speeding up Monte Carlo molecular simulation by a non-conservative early rejection scheme
Kadoura, Ahmad Salim
2015-04-23
Monte Carlo (MC) molecular simulation describes fluid systems with rich information, and it is capable of predicting many fluid properties of engineering interest. In general, it is more accurate and representative than equations of state. On the other hand, it requires much more computational effort and simulation time. For that purpose, several techniques have been developed to speed up MC molecular simulations while preserving their precision. In particular, early rejection schemes are capable of reducing computational cost by reaching the rejection decision for undesired MC trials at an earlier stage than the conventional scheme. In a recent work, we introduced a 'conservative' early rejection scheme as a method to accelerate MC simulations while producing exactly the same results as the conventional algorithm. In this paper, we introduce a 'non-conservative' early rejection scheme, which is much faster than the conservative scheme, yet preserves the precision of the method. The proposed scheme is tested for systems of structureless Lennard-Jones particles in both the canonical and NVT-Gibbs ensembles. Numerical experiments were conducted at several thermodynamic conditions for different numbers of particles. Results show that at certain thermodynamic conditions, the non-conservative method is capable of doubling the speed of the MC molecular simulations in both canonical and NVT-Gibbs ensembles. © 2015 Taylor & Francis
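The early-rejection idea can be sketched as follows: draw the Metropolis acceptance random number first, then accumulate the pair-energy change of a trial move and abort as soon as rejection is already certain. This is a simplified conceptual illustration, assuming a bound under which the partial sum cannot decrease; it is not the paper's exact conservative or non-conservative construction.

```python
import math, random

def metropolis_early_reject(delta_e_pairs, beta):
    """Accept/reject a trial move whose energy change is a sum over pairs."""
    log_r = math.log(random.random())    # acceptance threshold drawn up front
    partial = 0.0
    for k, de in enumerate(delta_e_pairs):
        partial += de
        # Illustrative bound: assume the remaining pair terms cannot decrease
        # the sum, so `partial` is a lower bound on the final energy change.
        if -beta * partial < log_r:
            return False, k + 1          # rejection already certain: stop early
    return True, len(delta_e_pairs)      # every prefix passed: accept

random.seed(0)
accepted, n_eval = metropolis_early_reject([10.0, 0.1, 0.1], beta=1.0)
print(accepted, n_eval)   # rejected after evaluating only the first pair
```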
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis of the WRF model to diverse physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space was performed. The combination of parameterizations including the Tiedtke cumulus scheme was again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
On the modelling of compressible inviscid flow problems using AUSM schemes
Directory of Open Access Journals (Sweden)
Hajžman M.
2007-11-01
Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first-order accurate, AUSM (Advection Upstream Splitting Method) schemes have proved to be well suited for the modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating on values from adjacent cells are used to determine the numerical convective fluxes, and pressure splitting is used for the evaluation of the numerical pressure fluxes. Both versions of the AUSM scheme are applied to test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
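The Mach number splitting mentioned above can be sketched with the classic quadratic split functions: full upwinding for supersonic flow and a smooth polynomial blend for subsonic flow. This shows only the basic first-degree splitting; the AUSM+ variant uses higher-degree polynomials not reproduced here.

```python
# Split Mach number functions of the AUSM family. The key consistency
# property is M+(M) + M-(M) == M for all M.
def mach_plus(M):
    if abs(M) <= 1.0:
        return 0.25 * (M + 1.0) ** 2    # subsonic: quadratic blend
    return max(M, 0.0)                  # supersonic: full upwinding

def mach_minus(M):
    if abs(M) <= 1.0:
        return -0.25 * (M - 1.0) ** 2
    return min(M, 0.0)

def interface_mach(M_left, M_right):
    # Interface Mach number from the two adjacent cells; its sign selects
    # the upwind side for the convective flux.
    return mach_plus(M_left) + mach_minus(M_right)

print(interface_mach(0.5, -0.2))   # subsonic blend of both sides
print(interface_mach(2.0, 1.5))    # supersonic: taken fully from the left
```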
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation
Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein
2018-02-01
The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes has been put forward, mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), while keeping the collision frequency and solution accuracy the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if Nsel is set to 1 and N(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
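The pair-selection idea can be sketched schematically: instead of testing all pairs, test only a subset per particle and scale the acceptance probability by the number of skipped candidates so the expected collision count is preserved. This is a toy importance-sampling illustration of the general approach, not the paper's derived GBT probabilities.

```python
import random

def select_collisions(particles, nsel, p_pair):
    """Pick collision pairs in one cell, testing at most nsel partners per
    particle; p_pair is the per-pair acceptance probability (schematic)."""
    collisions = []
    n = len(particles)
    for i in range(n - 1):
        # draw up to nsel distinct partners from the particles after i
        partners = random.sample(particles[i + 1:], min(nsel, n - 1 - i))
        for j in partners:
            # scale acceptance up by the fraction of candidate pairs skipped,
            # so the expected number of collisions is unchanged
            p = p_pair * (n - 1 - i) / len(partners)
            if random.random() < min(p, 1.0):
                collisions.append((particles[i], j))
    return collisions

random.seed(3)
pairs = select_collisions(list(range(10)), nsel=2, p_pair=0.05)
print(pairs)
```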
Betatron tune correction schemes in nuclotron
International Nuclear Information System (INIS)
Shchepunov, V.A.
1992-01-01
Algorithms for betatron tune correction in the Nuclotron with sextupolar and octupolar magnets are considered. Second-order effects caused by chromaticity correctors are taken into account, and sextupolar compensation schemes are proposed to suppress them. 6 refs.; 1 tab
A Directed Signature Scheme and its Applications
Lal, Sunder; Kumar, Manoj
2004-01-01
This paper presents a directed signature scheme with the property that the signature can be verified only with the help of the signer or the signature receiver. We also propose its applications to shared verification of signatures and to threshold cryptosystems.
ONU Power Saving Scheme for EPON System
Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki
PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among plural subscribers. Recently, global climate change has been recognized as a serious near-term problem, and power saving techniques for electronic devices have become important. In PON systems, the ONU (Optical Network Unit) power saving scheme has been studied and defined in XG-PON. In this paper, we propose an ONU power saving scheme for EPON. Then, we present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. Based on this analysis, we propose an efficient provisioning method for the ONU power saving scheme which is applicable to both XG-PON and EPON.
Nigeria's first national social protection scheme | IDRC ...
International Development Research Centre (IDRC) Digital Library (Canada)
2017-06-14
Women and children at an IDP Camp in DRC ... The cash transfer was provided through the Nigerian Ekiti State Social Security Scheme, ... national policy conference to discuss the findings with media and policy stakeholders.
Verifiable Secret Redistribution for Threshold Sharing Schemes
National Research Council Canada - National Science Library
Wong, Theodore M; Wang, Chenxi; Wing, Jeannette M
2002-01-01
.... Our protocol guards against dynamic adversaries. We observe that existing protocols either cannot be readily extended to allow redistribution between different threshold schemes, or have vulnerabilities that allow faulty old shareholders...
Boson expansion theory in the seniority scheme
International Nuclear Information System (INIS)
Tamura, T.; Li, C.; Pedrocchi, V.G.
1985-01-01
A boson expansion formalism in the seniority scheme is presented and its relation with number-conserving quasiparticle calculations is elucidated. Accuracy and convergence are demonstrated numerically. A comparative discussion with other related approaches is given
Designing optimal sampling schemes for field visits
CSIR Research Space (South Africa)
Debba, Pravesh
2008-10-01
Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...
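The spectral angle mapper (SAM) similarity named above is the angle between a pixel spectrum and a reference spectrum, which makes it insensitive to overall illumination scaling. A minimal sketch with illustrative reflectance vectors:

```python
import math

def spectral_angle(pixel, reference):
    """SAM angle (radians) between two spectra; 0 means identical shape."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = (math.sqrt(sum(p * p for p in pixel))
            * math.sqrt(sum(r * r for r in reference)))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

ref = [0.12, 0.35, 0.50, 0.44]
print(spectral_angle([0.24, 0.70, 1.00, 0.88], ref))  # scaled copy: angle ~ 0
print(spectral_angle([0.50, 0.10, 0.20, 0.60], ref))  # different material
```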
Secret Sharing Schemes and Advanced Encryption Standard
2015-09-01
...improvements, and to build upon them to discuss the side-channel effects on the Advanced Encryption Standard (AES). The following questions are asked: ... secret sharing scheme? • Can the improvements to the current secret sharing scheme prove to be beneficial in strengthening/weakening AES encryption?
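For readers unfamiliar with the underlying primitive, the classic Shamir (t, n) threshold scheme that such theses build on can be sketched over a prime field: a degree t-1 polynomial hides the secret in its constant term, and any t shares reconstruct it by Lagrange interpolation. This is a generic textbook sketch, not the document's specific construction or its AES side-channel analysis; the field modulus is an illustrative choice.

```python
import random

P = 2**127 - 1   # Mersenne prime field modulus (illustrative choice)

def make_shares(secret, t, n, rng=random.SystemRandom()):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice
```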
Cost Comparison Among Provable Data Possession Schemes
2016-03-01
...let n be the number of possible challenges, H be a cryptographic hash function, AE be an authenticated encryption scheme, and f be a keyed pseudo-random function. ... a random key k_enc ← K_enc for the symmetric encryption scheme Enc, and a random HMAC key k_mac ← K_mac. The secret key is sk = ⟨k_enc, k_mac⟩ and the public key is pk
A Classification Scheme for Production System Processes
DEFF Research Database (Denmark)
Sørensen, Daniel Grud Hellerup; Brunø, Thomas Ditlev; Nielsen, Kjeld
2018-01-01
Manufacturing companies often have difficulties developing production platforms, partly due to the complexity of many production systems and difficulty determining which processes constitute a platform. Understanding production processes is an important step to identifying candidate processes for a production platform based on existing production systems. Reviewing a number of existing classifications and taxonomies, a consolidated classification scheme for processes in production of discrete products has been outlined. The classification scheme helps ensure consistency during mapping of existing...
A scheme for the hadron spectrum
International Nuclear Information System (INIS)
Hoyer, P.
1978-03-01
A theoretically self-consistent dual scheme is proposed for the hadron spectrum, which follows naturally from basic requirements and phenomenology. All resonance properties and couplings are calculable in terms of a limited number of input parameters. A first application to ππ→ππ explains the linear trajectory and small daughter couplings. The Zweig rule and the decoupling of baryonium from mesons are expected to be consequences of the scheme. (Auth.)
Sellafield site (including Drigg) emergency scheme manual
International Nuclear Information System (INIS)
1987-02-01
This Emergency Scheme defines the organisation and procedures available should there be an accident at the Sellafield Site which results in, or may result in, the release of radioactive material, or the generation of a high radiation field, which might present a hazard to employees and/or the general public. This manual covers the general principles of the total emergency scheme and those detailed procedures which are not specific to any single department. (U.K.)
Signature scheme based on bilinear pairs
Tong, Rui Y.; Geng, Yong J.
2013-03-01
An identity-based signature scheme is proposed using bilinear pairing technology. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, which eliminates the cost of building and managing a public key infrastructure, and it avoids the problem of the private key generating center forging signatures by using the CL-PKC framework to generate the user's private key.
An Optimization Scheme for ProdMod
International Nuclear Information System (INIS)
Gregory, M.V.
1999-01-01
A general-purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for Savannah River Site (SRS) High Level Waste (HLW) complex operations and able to handle different types of optimizations, such as linear and nonlinear. The optimization is performed in a stand-alone FORTRAN-based optimization driver, which is interfaced with the ProdMod simulator for the flow of information between the two.
Employee-referral schemes and discrimination law
Connolly, M.
2015-01-01
Employee-referral schemes ('introduce a friend') are in common usage in recruitment. They carry a potential to discriminate by perpetuating an already unbalanced workforce (say, by gender and ethnicity). With this, of course, comes the risk of litigation and bad publicity, as well as any inherent inefficiencies associated with discrimination. This article is threefold in purpose. First, it examines the present state of the law. Second, it reports a survey of employers who use these schemes. Third, it...
Basis scheme of personnel training system
International Nuclear Information System (INIS)
Rerucha, F.; Odehnal, J.
1998-01-01
The basic scheme of the CEZ-EDU training system for NPP personnel is described in detail. This includes specific training, both basic and periodic, and professional training, i.e. specialized and continuous training. The following schemes are shown: licence acquisition and authorisation for PWR-440 control room personnel; upgrade training for control room job positions; maintenance and refresher training; and module training for certificate acquisition by servicing shift and operating personnel.
Navigators’ Behavior in Traffic Separation Schemes
Directory of Open Access Journals (Sweden)
Zbigniew Pietrzykowski
2015-03-01
Full Text Available One of the areas of decision support in the process of conducting a ship is the Traffic Separation Scheme (TSS). TSSs are established in areas with high traffic density, often near the shore and in port approaches. The main purpose of these schemes is to improve maritime safety by channelling vessel traffic into streams. Traffic regulations, as well as ships' behaviour in real conditions in chosen TSSs, have been analyzed in order to develop decision support algorithms.
Per-Pixel, Dual-Counter Scheme for Optical Communications
Farr, William H.; Birnbaum, Kevin M.; Quirk, Kevin J.; Sburlan, Suzana; Sahasrabudhe, Adit
2013-01-01
Free space optical communications links from deep space are projected to fulfill future NASA communication requirements for 2020 and beyond. Accurate laser-beam pointing is required to achieve high data rates at low power levels. This innovation is a per-pixel processing scheme using a pair of three-state digital counters to implement acquisition and tracking of a dim laser beacon transmitted from Earth for pointing control of an interplanetary optical communications system using a focal plane array of single-photon-sensitive detectors. It shows how to implement dim-beacon acquisition and tracking for an interplanetary optical transceiver with a method that is suitable both for achieving theoretical performance and for supporting the additional functions of high-data-rate forward links and precision spacecraft ranging.
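The per-pixel counter logic can be sketched with a toy model. The three-state saturation, the signal/background slot pairing, and the acquisition rule below are illustrative assumptions for exposition, not the flight design described in the innovation:

```python
# Toy sketch (assumption-laden, not the flight design): each pixel keeps a
# pair of saturating three-state counters (values 0..2). One counter
# accumulates hits during the slot where the beacon pulse is expected, the
# other during a background slot. A pixel declares "beacon present" when the
# signal counter saturates while the background counter stays low.

SATURATE = 2  # three states: 0, 1, 2

class PixelTracker:
    def __init__(self):
        self.signal = 0
        self.background = 0

    def observe(self, hit_in_signal_slot, hit_in_background_slot):
        if hit_in_signal_slot:
            self.signal = min(SATURATE, self.signal + 1)
        else:
            self.signal = max(0, self.signal - 1)   # leak down on misses
        if hit_in_background_slot:
            self.background = min(SATURATE, self.background + 1)
        else:
            self.background = max(0, self.background - 1)

    def acquired(self):
        return self.signal == SATURATE and self.background < SATURATE

# A pixel staring at the beacon sees hits in the signal slot every frame;
# a dark pixel sees only an occasional background hit.
beacon_pixel, dark_pixel = PixelTracker(), PixelTracker()
for frame in range(5):
    beacon_pixel.observe(hit_in_signal_slot=True, hit_in_background_slot=False)
    dark_pixel.observe(hit_in_signal_slot=(frame == 2), hit_in_background_slot=False)

print(beacon_pixel.acquired(), dark_pixel.acquired())  # → True False
```

The saturating counters give hysteresis: a single background photon cannot trigger acquisition, and a single missed pulse does not drop the track.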
A Scheme for Evaluating Feral Horse Management Strategies
Directory of Open Access Journals (Sweden)
L. L. Eberhardt
2012-01-01
Full Text Available Context. Feral horses are an increasing problem in many countries and are popular with the public, making management difficult. Aims. To develop a scheme useful in planning management strategies. Methods. A model is developed and applied to four different feral horse herds, three of which have been quite accurately counted over the years. Key Results. The selected model has been tested on a variety of data sets, with emphasis on the four sets of feral horse data. An alternative, nonparametric model is used to check the selected parametric approach. Conclusions. A density-dependent response was observed in all four herds, even though only eight observations were available in each case. Consistency in the model fits suggests that small starting herds can be used to test various management techniques. Implications. Management methods can be tested on actual, confined populations.
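The kind of density-dependent fit described above can be illustrated with a minimal sketch: simulate noise-free counts from a Ricker model and recover its parameters by linear regression of log growth rate on abundance. The model choice, parameter values, and fitting recipe here are illustrative assumptions, not the paper's actual model:

```python
import math

# Ricker model: N[t+1] = N[t] * exp(r * (1 - N[t]/K))
r_true, K_true = 0.25, 500.0
counts = [80.0]
for _ in range(7):                      # 8 observations, as in the study
    n = counts[-1]
    counts.append(n * math.exp(r_true * (1.0 - n / K_true)))

# log(N[t+1]/N[t]) = r - (r/K) * N[t]  →  ordinary least squares
xs = counts[:-1]
ys = [math.log(b / a) for a, b in zip(counts[:-1], counts[1:])]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

r_est = intercept                       # intrinsic growth rate
K_est = -r_est / slope                  # carrying capacity
print(round(r_est, 3), round(K_est, 1))  # → 0.25 500.0
```

With real counts the regression would be noisy, which is why the authors stress the consistency of fits across independent herds.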
Proper use of colour schemes for image data visualization
Vozenilek, Vit; Vondrakova, Alena
2018-04-01
With the development of information and communication technologies, the volume and variety of available data are increasing exponentially. In today's information society, data is one of the most important inputs for policy making, crisis management, research and education, and many other fields. An essential task for experts is to share high-quality data that provide the right information at the right time. The design of the data presentation can strongly influence user perception and the cognitive aspects of data interpretation. Much of this data can be visualised, and one image can replace a considerable number of numeric tables and texts. The paper focuses on the accurate visualisation of data from the point of view of the colour schemes used. A poor choice of colours can easily confuse the user and lead to misinterpretation of the data; conversely, correctly designed visualisations can make information transfer much simpler and more efficient.
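One reason colour choice matters can be shown in a few lines: a sequential scheme for ordered data should vary monotonically in lightness, so that "larger" always reads as "darker". The two endpoint colours below are arbitrary illustrative choices, and the luminance formula is a simple perceptual proxy:

```python
# Minimal sketch of why sequential colour schemes suit ordered data: the
# ramp varies monotonically in lightness, so larger values always look
# darker. The endpoint colours are arbitrary choices for illustration.

def lerp(a, b, t):
    return a + (b - a) * t

def sequential(v, light=(255, 255, 204), dark=(37, 52, 148)):
    """Map v in [0, 1] to an RGB tuple on a light-to-dark ramp."""
    return tuple(round(lerp(l, d, v)) for l, d in zip(light, dark))

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # simple lightness proxy

ramp = [sequential(i / 10) for i in range(11)]
lums = [luminance(c) for c in ramp]
assert all(a > b for a, b in zip(lums, lums[1:])), "lightness must decrease"
print(ramp[0], ramp[-1])  # → (255, 255, 204) (37, 52, 148)
```

A rainbow scheme fails exactly this monotonicity test, which is one source of the misinterpretation the paper warns about.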
A Classification Scheme for Literary Characters
Directory of Open Access Journals (Sweden)
Matthew Berry
2017-10-01
Full Text Available There is no established classification scheme for literary characters in narrative theory short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal for a systematic psychological scheme for classifying characters from the literary and dramatic fields, based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of TK's two-dimensional scheme for classifying characters.
Canonical, stable, general mapping using context schemes.
Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict
2015-11-15
Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrarily complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. Supplementary data are available at Bioinformatics online.
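The unique-mapping principle can be illustrated with a toy sketch: map a query base only when some window of query sequence around it occurs exactly once in the reference, so every mapping is unambiguous by construction. This is a drastic simplification for illustration; `map_base` and its window-growing search are invented here, not the authors' algorithm:

```python
# Toy illustration of the unique-context principle (a simplification, not
# the authors' algorithm): a query base is mapped only if some window of
# query sequence around it occurs exactly once in the reference.

def count_occurrences(text, pattern):
    return sum(1 for i in range(len(text) - len(pattern) + 1)
               if text[i:i + len(pattern)] == pattern)

def map_base(ref, query, i, max_radius=5):
    """Map query position i to a reference position, or None."""
    for radius in range(1, max_radius + 1):
        lo, hi = max(0, i - radius), min(len(query), i + radius + 1)
        window = query[lo:hi]
        hits = count_occurrences(ref, window)
        if hits == 1:
            return ref.find(window) + (i - lo)
        if hits == 0:
            return None          # context absent: leave the base unmapped
    return None                  # still ambiguous: refuse to guess

ref = "ACGTGACCA"
query = "CGTGA"
# The 'T' at query position 2 has the unique surrounding context "GTG".
print(map_base(ref, query, 2))  # → 3
```

Refusing to map ambiguous bases is what makes the criterion uniform: either the context pins down exactly one reference position, or no call is made.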
Cancelable remote quantum fingerprint templates protection scheme
International Nuclear Information System (INIS)
Liao Qin; Guo Ying; Huang Duan
2017-01-01
With the increasing popularity of fingerprint identification technology, its security and privacy have received much attention. Only if the security and privacy of biometric information are ensured can biometric technology be accepted and widely used by the public. In this paper, we propose a novel quantum-bit (qubit)-based scheme to solve the security and privacy problems of the traditional fingerprint identification system. By exploiting the properties of quantum mechanics, our proposed scheme, the cancelable remote quantum fingerprint templates protection scheme, can achieve unconditional security in an information-theoretical sense. Moreover, this quantum scheme can invalidate most of the attacks aimed at fingerprint identification systems. In addition, the proposed scheme is applicable to remote communication with no need to worry about security and privacy during transmission, which is an absolute advantage compared with traditional methods. Security analysis shows that the proposed scheme can effectively ensure communication security and the privacy of users' information for fingerprint identification. (paper)
Efficient multiparty quantum-secret-sharing schemes
International Nuclear Information System (INIS)
Xiao Li; Deng Fuguo; Long Guilu; Pan Jianwei
2004-01-01
In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] to arbitrarily many parties. An explicit expression for the shared secret bit is given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056], where all the participants choose their measuring bases asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)], where all participants choose their measuring bases according to a control key. Both schemes are asymptotically 100% efficient; hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information.
International Nuclear Information System (INIS)
Kriventsev, Vladimir
2000-09-01
Most thermal-hydraulic processes in nuclear engineering can be described by general convection-diffusion equations, which can often be simulated numerically with the finite-difference method (FDM). An effective scheme for the finite-difference discretization of such equations is presented in this report. The derivation of this scheme is based on analytical solutions of a simplified one-dimensional equation written for every control volume of the finite-difference mesh. These analytical solutions are constructed using linearized representations of both the diffusion coefficient and the source term. As a result, the Efficient Finite-Differencing (EFD) scheme makes it possible to significantly improve the accuracy of the numerical method even on mesh systems with fewer grid nodes, which, in turn, speeds up numerical simulation. EFD has been carefully verified on a series of sample problems for which either analytical or very precise numerical solutions can be found. EFD has been compared with other popular FDM schemes, including novel, accurate (as well as sophisticated) methods. Among the methods compared were the well-known central difference scheme, the upwind scheme, and the exponential differencing and hybrid schemes of Spalding, as well as newly developed finite-difference schemes such as the quadratic upstream (QUICK) scheme of Leonard, the locally analytic differencing (LOAD) scheme of Wong and Raithby, the flux-spline scheme proposed by Varejago and Patankar, and the latest LENS discretization of Sakai. Detailed results of this comparison are given in this report. These tests have shown the high efficiency of the EFD scheme: for most of the sample problems considered, EFD demonstrated a numerical error orders of magnitude lower than that of the other discretization methods; in other words, EFD predicted the numerical solution with the same numerical error using far fewer grid nodes. In this report, the detailed
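The idea of building coefficients from a local analytical solution can be illustrated on steady 1-D convection-diffusion with constant coefficients, where the locally analytic (exponential) coefficients reproduce the exact solution at the nodes. The grid, coefficients, and boundary values below are illustrative choices, and this is classic Patankar-style exponential differencing, not the EFD scheme itself:

```python
import math

# Steady 1-D convection-diffusion, d(F*phi)/dx = d/dx(Gamma*dphi/dx), with
# constant F = rho*u and Gamma, phi(0)=0, phi(1)=1. The exponential
# (locally analytic) scheme is nodally exact here, even on a coarse grid
# where central differencing would oscillate (cell Peclet number 2.5).
F, Gamma, N = 25.0, 1.0, 10
h = 1.0 / N
P = F * h / Gamma                      # cell Peclet number
aE = F / (math.exp(P) - 1.0)           # exponential-scheme coefficients
aW = F * math.exp(P) / (math.exp(P) - 1.0)
aP = aE + aW

# Thomas algorithm for the tridiagonal system at interior nodes 1..N-1.
lower = [-aW] * (N - 1)
diag = [aP] * (N - 1)
upper = [-aE] * (N - 1)
rhs = [0.0] * (N - 1)
rhs[-1] += aE * 1.0                    # phi(1) = 1 boundary contribution
for i in range(1, N - 1):
    m = lower[i] / diag[i - 1]
    diag[i] -= m * upper[i - 1]
    rhs[i] -= m * rhs[i - 1]
phi = [0.0] * (N - 1)
phi[-1] = rhs[-1] / diag[-1]
for i in range(N - 3, -1, -1):
    phi[i] = (rhs[i] - upper[i] * phi[i + 1]) / diag[i]

Pe = F / Gamma                         # global Peclet number
exact = [(math.exp(Pe * (i + 1) * h) - 1.0) / (math.exp(Pe) - 1.0)
         for i in range(N - 1)]
err = max(abs(a - b) for a, b in zip(phi, exact))
print(err < 1e-10)  # → True: nodally exact up to roundoff
```

EFD extends this idea by also linearizing the diffusion coefficient and source term inside each control volume, which is what buys its accuracy on coarse meshes.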
Accurate measurements of neutron activation cross sections
International Nuclear Information System (INIS)
Semkova, V.
1999-01-01
The applications of some recent achievements of the neutron activation method on high-intensity neutron sources are considered from the viewpoint of the errors associated with cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measuring the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV range allows the energy of neutrons from the D-T reaction to be measured with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E d = 9.53 MeV and Be(d,n) at E d = 9.72 MeV were measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)
Spectrally accurate initial data in numerical relativity
Battista, Nicholas A.
Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR that has not yet been confirmed is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be merging binary black hole systems or binary neutron stars. The starting point for computer simulations of black hole mergers is highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDEs). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single and binary black hole systems.
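The core ingredient of a pseudo-spectral collocation solver is the Chebyshev differentiation matrix. A minimal sketch below differentiates a smooth function and exhibits the near machine-precision ("spectral") accuracy the method relies on; the node count and test function are arbitrary illustrative choices:

```python
import math

def cheb(N):
    """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):               # negative row sum on the diagonal
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x

N = 16
D, x = cheb(N)
f = [math.exp(xi) for xi in x]           # f(x) = e^x, so f' = f
df = [sum(D[i][j] * f[j] for j in range(N + 1)) for i in range(N + 1)]
err = max(abs(a - b) for a, b in zip(df, f))
print(err < 1e-9)  # → True: spectral accuracy with only 17 points
```

In an elliptic solver, this matrix (and its square) replaces the continuous derivatives, turning the boundary value problem into an algebraic system on the collocation points; the nonlinear puncture equations would additionally require a Newton-type iteration.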
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
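The flavor of an exponential treatment can be seen on a scalar stiff linear test problem. The scheme below is first-order exponential Euler, a far simpler relative of the integrator described above, and the problem, step size, and stiffness parameter are illustrative choices:

```python
import math

# Stiff linear test problem: y' = lam*(y - cos t) - sin t, y(0) = 1,
# whose exact solution is y(t) = cos t. We compare plain explicit Euler
# with first-order exponential Euler (a much simpler relative of the
# integrators discussed above).
lam, h, steps = -1000.0, 0.01, 100

def b(t):                      # rewrite the ODE as y' = lam*y + b(t)
    return -lam * math.cos(t) - math.sin(t)

phi1 = (math.exp(lam * h) - 1.0) / (lam * h)   # phi_1(lam*h)

y_exp, y_eul, t = 1.0, 1.0, 0.0
for _ in range(steps):
    y_exp = math.exp(lam * h) * y_exp + h * phi1 * b(t)   # exponential Euler
    y_eul = y_eul + h * (lam * y_eul + b(t))              # explicit Euler
    t += h

print(abs(y_exp - math.cos(t)) < 0.02)   # → True  (stable and accurate)
print(abs(y_eul) > 1e6)                  # → True  (blown up: |1 + lam*h| = 9)
```

The exponential scheme treats the stiff linear part exactly, so its step size is limited only by accuracy on the slow forcing, which is precisely the insensitivity to stiffness the paper generalizes to full nonlinear elastodynamics.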
Geodetic analysis of disputed accurate qibla direction
Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah
2018-04-01
Performing the prayers facing the correct qibla direction is one of the practical issues linking theoretical studies with practice for Muslims. The concept of facing towards the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth may be the best method for determining the accurate direction of the Kaaba from anywhere on the Earth's surface. A Muslim cannot orient himself towards the qibla precisely when he cannot see the Kaaba; moreover, the setting-out process and certain motions during the prayer can significantly shift the facing direction away from the actual position of the Kaaba. The requirement that Muslims pray facing the Kaaba is thus more a spiritual prerequisite than a matter of physical evidence.
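The simplest of the Earth models, a sphere, gives the great-circle azimuth below. This is a sketch of the principle rather than the paper's computation: the Kaaba coordinates used are the commonly cited values of roughly 21.4225° N, 39.8262° E, and an ellipsoidal model such as WGS-84 would refine the result by up to a fraction of a degree:

```python
import math

# Great-circle (spherical-Earth) azimuth from an observer to the Kaaba.
KAABA_LAT, KAABA_LON = 21.4225, 39.8262   # commonly cited coordinates

def qibla_azimuth(lat, lon):
    """Initial great-circle bearing in degrees clockwise from true north."""
    p1, p2 = math.radians(lat), math.radians(KAABA_LAT)
    dl = math.radians(KAABA_LON - lon)
    y = math.sin(dl)
    x = math.cos(p1) * math.tan(p2) - math.sin(p1) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

# From Jakarta (6.2 S, 106.8 E) the qibla is roughly west-northwest.
az = qibla_azimuth(-6.2, 106.8)
print(294 < az < 297)  # → True (about 295 degrees)
```

Comparing this spherical result with an ellipsoidal geodesic azimuth is exactly the kind of model-to-model difference the study quantifies.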
A new scheme for ATLAS trigger simulation using legacy code
International Nuclear Information System (INIS)
Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner
2014-01-01
Analyses at the LHC which search for rare physics processes or determine Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to represent a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services, and software availability may become an issue when, for example, support for the underlying operating system ends. In this paper we present the problems encountered and the solutions developed, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS, as they also touch on more general aspects of data preservation.
CPSFS: A Credible Personalized Spam Filtering Scheme by Crowdsourcing
Directory of Open Access Journals (Sweden)
Xin Liu
2017-01-01
Full Text Available Email spam consumes significant network resources and threatens many systems because of its unwanted or malicious content. Most existing spam filters target only complete-spam and ignore semispam. This paper proposes a novel and comprehensive scheme, CPSFS (Credible Personalized Spam Filtering Scheme), which classifies spam into two categories, complete-spam and semispam, and filters both. Complete-spam is spam for all users; semispam is email identified as spam by some users but as regular email by others. In CPSFS, Bayesian filtering is deployed at email servers to identify complete-spam, while semispam is identified at the client side by crowdsourcing: an email client distinguishes junk from legitimate email according to spam reports from credible contacts with similar interests. Social trust and interest similarity between users and their contacts are calculated so that spam reports are more accurately targeted to similar users. The experimental results show that the proposed CPSFS improves the accuracy of distinguishing spam from legitimate email compared with a Bayesian filter alone.
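The client-side semispam decision can be sketched as a trust- and similarity-weighted vote over contacts' spam reports. The weighting formula, the example values, and the 0.5 threshold are illustrative assumptions, not the paper's exact model:

```python
# Sketch of a crowdsourced semispam decision: weight each contact's spam
# report by social trust and interest similarity, then threshold the
# normalized score. Formula and threshold are illustrative assumptions.

def semispam_score(reports, trust, similarity):
    """reports[u] is 1 if contact u flagged the email as spam, else 0."""
    num = sum(trust[u] * similarity[u] * reports[u] for u in reports)
    den = sum(trust[u] * similarity[u] for u in reports)
    return num / den if den else 0.0

def is_semispam(reports, trust, similarity, threshold=0.5):
    return semispam_score(reports, trust, similarity) > threshold

trust      = {"alice": 0.9, "bob": 0.4, "carol": 0.8}
similarity = {"alice": 0.8, "bob": 0.3, "carol": 0.7}

# Two highly trusted, similar contacts flag the email; bob does not.
reports = {"alice": 1, "bob": 0, "carol": 1}
print(is_semispam(reports, trust, similarity))  # → True
```

The weighting makes the decision personalized: the same reports would carry little weight for a user whose interests differ from alice's and carol's.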
Kuster, Daniel J; Liu, Chengyu; Fang, Zheng; Ponder, Jay W; Marshall, Garland R
2015-01-01
Theoretical and experimental evidence for non-linear hydrogen bonds in protein helices is ubiquitous. In particular, amide three-centered hydrogen bonds are common features of helices in high-resolution crystal structures of proteins. These high-resolution structures (1.0 to 1.5 Å nominal crystallographic resolution) position backbone atoms without significant bias from modeling constraints and identify Φ = -62°, Ψ = -43° as the consensus backbone torsional angles of protein helices. These torsional angles preserve the atomic positions of the α-β carbons of the classic Pauling α-helix while allowing the amide carbonyls to form bifurcated hydrogen bonds, as first suggested by Némethy et al. in 1967. Molecular dynamics simulations of a capped 12-residue oligoalanine in water with AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications), a second-generation force field that includes multipole electrostatics and polarizability, reproduce the experimentally observed high-resolution helical conformation and correctly reorient the amide-bond carbonyls into bifurcated hydrogen bonds. This simple modification of backbone torsional angles reconciles the experimental and theoretical views to provide a unified view of amide three-centered hydrogen bonds as crucial components of protein helices. The reason they have been overlooked by structural biologists is the small crankshaft-like change in orientation of the amide bond, which allows the overall helical parameters (helix pitch (p) and residues per turn (n)) to be maintained. The Pauling 3.6(13) α-helix fits the high-resolution experimental data with the minor exception of the amide-carbonyl electron density, but the previously associated backbone torsional angles (Φ, Ψ) needed slight modification to be reconciled with three-atom centered H-bonds and multipole electrostatics. Thus, a new standard helix, the 3.6(13/10)-, Némethy- or N-helix, is proposed. Due to the use of constraints from
Duru, Kenneth
2014-12-01
In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.
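The energy estimates here rest on summation-by-parts (SBP) operators, i.e. difference operators D = H⁻¹Q with Q + Qᵀ equal to the boundary matrix B, which mimics integration by parts discretely. The sketch below builds the classic second-order SBP first-derivative operator (the textbook one, not the authors' high-order versions) and checks the identity directly:

```python
# Classic second-order SBP first-derivative operator: D = inv(H) @ Q with
# Q + Q^T = B = diag(-1, 0, ..., 0, 1). The SBP identity is what the
# penalty-based discrete energy estimates rely on.
N = 8                       # grid points x_0 .. x_{N-1}
h = 1.0 / (N - 1)

H = [[(0.5 if i in (0, N - 1) else 1.0) * h if i == j else 0.0
      for j in range(N)] for i in range(N)]
Q = [[0.0] * N for _ in range(N)]
Q[0][0], Q[N - 1][N - 1] = -0.5, 0.5
for i in range(N - 1):
    Q[i][i + 1] = 0.5
    Q[i + 1][i] = -0.5

# SBP identity: Q + Q^T must equal the boundary matrix B.
for i in range(N):
    for j in range(N):
        b = -1.0 if i == j == 0 else (1.0 if i == j == N - 1 else 0.0)
        assert abs(Q[i][j] + Q[j][i] - b) < 1e-14

# D = inv(H) Q (H is diagonal) differentiates linear functions exactly.
D = [[Q[i][j] / H[i][i] for j in range(N)] for i in range(N)]
x = [i * h for i in range(N)]
dx = [sum(D[i][j] * x[j] for j in range(N)) for i in range(N)]
print(max(abs(d - 1.0) for d in dx) < 1e-12)  # → True
```

Because Q + Qᵀ = B involves only the boundary points, the discrete energy can only change through boundary and interface terms, which is exactly where the paper's penalties act.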
FASTSIM2: a second-order accurate frictional rolling contact algorithm
Vollebregt, E. A. H.; Wilders, P.
2011-01-01
In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
International Nuclear Information System (INIS)
Smedley-Stevenson, Richard P.; McClarren, Ryan G.
2015-01-01
This paper attempts to unify the asymptotic diffusion limit analysis of thermal radiation transport schemes, for a linear-discontinuous representation of the material temperature reconstructed from cell centred temperature unknowns, in a process known as ‘source tilting’. The asymptotic limits of both Monte Carlo (continuous in space) and deterministic approaches (based on linear-discontinuous finite elements) for solving the transport equation are investigated in slab geometry. The resulting discrete diffusion equations are found to have nonphysical terms that are proportional to any cell-edge discontinuity in the temperature representation. Based on this analysis it is possible to design accurate schemes for representing the material temperature, for coupling thermal radiation transport codes to a cell centred representation of internal energy favoured by ALE (arbitrary Lagrange–Eulerian) hydrodynamics schemes.
Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions
Gordon, Dan; Gordon, Rachel; Turkel, Eli
2015-09-01
We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.
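In one dimension the Sommerfeld condition u′ = iku is an exact absorbing boundary condition, which makes a small sketch possible: solve u″ + k²u = 0 with a Dirichlet condition on the left and a discrete radiation condition (imposed via a ghost point) on the right, recovering the outgoing wave e^{ikx}. All parameter choices are illustrative; the paper's schemes are compact sixth-order ABCs in three dimensions, so this shows only the basic principle:

```python
import cmath

# 1-D absorbing boundary condition: u'' + k^2 u = 0 on [0, 1] with
# u(0) = 1 and the Sommerfeld condition u'(1) = i*k*u(1) (exact in 1-D).
# A ghost point keeps the overall scheme second-order accurate.
k, N = 10.0, 200
h = 1.0 / N

# Unknowns u_1 .. u_N; u_0 = 1. Tridiagonal complex system, Thomas solve.
lower = [1.0] * N
diag = [k * k * h * h - 2.0] * N
upper = [1.0] * N
rhs = [0.0] * N
rhs[0] = -1.0                               # moves the u_0 = 1 term
# Ghost elimination at x = 1: u_{N+1} = u_{N-1} + 2ikh u_N
diag[-1] = k * k * h * h - 2.0 + 2j * k * h
lower[-1] = 2.0

d, r = diag[:], rhs[:]
for i in range(1, N):
    m = lower[i] / d[i - 1]
    d[i] -= m * upper[i - 1]
    r[i] -= m * r[i - 1]
u = [0j] * N
u[-1] = r[-1] / d[-1]
for i in range(N - 2, -1, -1):
    u[i] = (r[i] - upper[i] * u[i + 1]) / d[i]

exact = [cmath.exp(1j * k * (i + 1) * h) for i in range(N)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err < 0.01)  # → True: no visible spurious reflection at the boundary
```

In higher dimensions the exact condition becomes nonlocal, which is why compact high-order approximations, and direction-aware refinements like the gradient method, are needed.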
A New Grünwald-Letnikov Derivative Derived from a Second-Order Scheme
Directory of Open Access Journals (Sweden)
B. A. Jacobs
2015-01-01
Full Text Available A novel derivation of a second-order accurate Grünwald-Letnikov-type approximation to the fractional derivative of a function is presented. This scheme is shown to be second-order accurate under certain modifications to account for poor accuracy in approximating the asymptotic behavior near the lower limit of differentiation. Some example functions are chosen and numerical results are presented to illustrate the efficacy of this new method over some other popular choices for discretizing fractional derivatives.
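The classic first-order Grünwald-Letnikov construction that the paper's second-order scheme improves upon is short enough to verify directly against a known fractional derivative. The function, order, and grid sizes below are illustrative choices:

```python
import math

# First-order Grünwald-Letnikov approximation:
#   D^a f(x) ~ h^(-a) * sum_k g_k * f(x - k*h),
# with weights g_0 = 1, g_k = g_{k-1} * (k - 1 - a) / k, checked against
# the known result D^a x^2 = Gamma(3)/Gamma(3 - a) * x^(2 - a).
def gl_derivative(f, x, alpha, n):
    h = x / n
    g, total = 1.0, f(x)
    for k in range(1, n + 1):
        g *= (k - 1.0 - alpha) / k      # recurrence for (-1)^k * C(alpha, k)
        total += g * f(x - k * h)
    return total / h ** alpha

alpha, x = 0.5, 1.0
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * x ** (2.0 - alpha)

err_coarse = abs(gl_derivative(lambda t: t * t, x, alpha, 64) - exact)
err_fine = abs(gl_derivative(lambda t: t * t, x, alpha, 128) - exact)
print(err_fine < err_coarse)              # error shrinks as h decreases
print(1.5 < err_coarse / err_fine < 2.5)  # ~first order: halving h halves err
```

The halving of the error with h is the first-order behavior; the paper's modified scheme raises this to second order, with the extra corrections near the lower limit of differentiation that the abstract mentions.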
Accurate deuterium spectroscopy for fundamental studies
Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.
2018-07-01
We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
How flatbed scanners upset accurate film dosimetry
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% deviation for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, determined per color channel and per dose delivered to the film.
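The per-channel correction the authors call for can be illustrated with a minimal sketch: model the deviation of measured optical density from the value observed for a known delivered dose as a polynomial in lateral position, then subtract it. The polynomial form, degree, and function names are assumptions for illustration, not the paper's calibrated correction.

```python
import numpy as np

def fit_lse_correction(lateral_pos, measured_od, delivered_od, degree=2):
    """Fit a per-channel polynomial model of the lateral scan effect:
    the deviation of measured optical density from the value expected
    for a known delivered dose, as a function of lateral position."""
    deviation = measured_od - delivered_od
    return np.polyfit(lateral_pos, deviation, degree)

def apply_lse_correction(lateral_pos, measured_od, coeffs):
    """Subtract the fitted lateral deviation from a measurement."""
    return measured_od - np.polyval(coeffs, lateral_pos)
```

In practice one such fit would be made per color channel and per dose level, matching the abstract's conclusion.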
Accurate hydrocarbon estimates attained with radioactive isotope
International Nuclear Information System (INIS)
Hubbard, G.
1983-01-01
To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
Directory of Open Access Journals (Sweden)
Yungeun Kim
2012-01-01
Full Text Available Indoor localization systems typically locate users on their own local coordinates, while outdoor localization systems use global coordinates. To achieve seamless localization from outdoors to indoors, a handover technique that accurately provides a starting position to the indoor localization system is needed. However, existing schemes assume that a starting position is known a priori or uses a naïve approach to consider the last location obtained from GPS as the handover point. In this paper, we propose an accurate handover scheme that monitors the signal-to-noise ratio (SNR of the effective GPS satellites that are selected according to their altitude. We also propose an energy-efficient handover mechanism that reduces the GPS sampling interval gradually. Accuracy and energy efficiency are experimentally validated with the GPS logs obtained in real life.
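A minimal sketch of the kind of logic described: hand over when the mean SNR of the high-elevation ("effective") satellites collapses, and otherwise back off the GPS sampling interval to save energy. All thresholds, the elevation-based selection rule, and the function names are illustrative assumptions, not the paper's calibrated values.

```python
def select_effective_satellites(sats, min_elevation_deg=30.0):
    """Keep only high-elevation satellites, whose SNR drop is a more
    reliable indicator of entering a building (assumed selection rule)."""
    return [s for s in sats if s["elevation"] >= min_elevation_deg]

def handover_step(sats, interval_s, snr_threshold_db=25.0, max_interval_s=16.0):
    """Return (indoors_detected, next_sampling_interval_s)."""
    effective = select_effective_satellites(sats)
    if not effective:
        return True, 1.0            # no usable satellites: assume indoors
    mean_snr = sum(s["snr"] for s in effective) / len(effective)
    if mean_snr < snr_threshold_db:
        return True, 1.0            # SNR collapsed: hand over, reset interval
    # still outdoors: double the sampling interval to save energy
    return False, min(interval_s * 2, max_interval_s)
```

The doubling back-off mirrors the abstract's "reduces the GPS sampling interval gradually" in spirit only; the paper's actual schedule may differ.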
Vogl, Matthias
2014-04-01
The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separated national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view on operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules limit the managerial relevance and transparency of the capital costing scheme. The scheme generates the technical premises for a change from dual financing by insurances (operating costs) and state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to solve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Neil Powell
2017-12-01
Full Text Available This paper considers how to achieve equitable water governance and the flow-on effects it has in terms of supporting sustainable development, drawing on case studies from the international climate change adaptation and governance project (CADWAGO). Water governance, like many other global issues, is becoming increasingly intractable (wicked) with climate change and is, by the international community, being linked to instances of threats to human security, the war in the Sudanese Darfur and more recently the acts of terrorism perpetrated by ISIS. In this paper, we ask the question: how can situations characterized by water controversy (exacerbated by the uncertainties posed by climate change) be reconciled? The main argument is based on a critique of the way the water security discourse appropriates expert (normal) claims about human-biophysical relationships. When water challenges become increasingly securitized by the climate change discourse, it becomes permissible to enact processes that legitimately transgress normative positions through post-normal actions. In contrast, the water equity discourse offers an alternative reading of wicked and post-normal water governance situations. We contend that by infusing norm-critical considerations into the process of securitization, new sub-national constellations of agents will be empowered to enact changes, thereby bypassing vicious cycles of power brokering that characterize contemporary processes intended to address controversies.
Guzzi, Pietro Hiram; Milenković, Tijana
2017-01-05
Analogous to genomic sequence alignment that allows for across-species transfer of biological knowledge between conserved sequence regions, biological network alignment can be used to guide the knowledge transfer between conserved regions of molecular networks of different species. Hence, biological network alignment can be used to redefine the traditional notion of a sequence-based homology to a new notion of network-based homology. Analogous to genomic sequence alignment, there exist local and global biological network alignments. Here, we survey prominent and recent computational approaches of each network alignment type and discuss their (dis)advantages. Then, as it was recently shown that the two approach types are complementary, in the sense that they capture different slices of cellular functioning, we discuss the need to reconcile the two network alignment types and present a recent first step in this direction. We conclude with some open research problems on this topic and comment on the usefulness of network alignment in other domains besides computational biology. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Kulynych, Jennifer; Greely, Henry T
2017-04-01
Widespread use of medical records for research, without consent, attracts little scrutiny compared to biospecimen research, where concerns about genomic privacy prompted recent federal proposals to mandate consent. This paper explores an important consequence of the proliferation of electronic health records (EHRs) in this permissive atmosphere: with the advent of clinical gene sequencing, EHR-based secondary research poses genetic privacy risks akin to those of biospecimen research, yet regulators still permit researchers to call gene sequence data 'de-identified', removing such data from the protection of the federal Privacy Rule and federal human subjects regulations. Medical centers and other providers seeking to offer genomic 'personalized medicine' now confront the problem of governing the secondary use of clinical genomic data as privacy risks escalate. We argue that regulators should no longer permit HIPAA-covered entities to treat dense genomic data as de-identified health information. Even with this step, the Privacy Rule would still permit disclosure of clinical genomic data for research, without consent, under a data use agreement, so we also urge that providers give patients specific notice before disclosing clinical genomic data for research, permitting (where possible) some degree of choice and control. To aid providers who offer clinical gene sequencing, we suggest both general approaches and specific actions to reconcile patients' rights and interests with genomic research.
Mitrovica, Jerry X; Hay, Carling C; Morrow, Eric; Kopp, Robert E; Dumberry, Mathieu; Stanley, Sabine
2015-12-01
In 2002, Munk defined an important enigma of 20th century global mean sea-level (GMSL) rise that has yet to be resolved. First, he listed three canonical observations related to Earth's rotation [(i) the slowing of Earth's rotation rate over the last three millennia inferred from ancient eclipse observations, and changes in the (ii) amplitude and (iii) orientation of Earth's rotation vector over the last century estimated from geodetic and astronomic measurements] and argued that they could all be fit by a model of ongoing glacial isostatic adjustment (GIA) associated with the last ice age. Second, he demonstrated that prevailing estimates of the 20th century GMSL rise (~1.5 to 2.0 mm/year), after correction for the maximum signal from ocean thermal expansion, implied mass flux from ice sheets and glaciers at a level that would grossly misfit the residual GIA-corrected observations of Earth's rotation. We demonstrate that the combination of lower estimates of the 20th century GMSL rise (up to 1990), improved modeling of the GIA process, and correction of the eclipse record for a signal due to angular momentum exchange between the fluid outer core and the mantle reconciles all three Earth rotation observations. This resolution adds confidence to recent estimates of individual contributions to 20th century sea-level change and to projections of GMSL rise to the end of the 21st century based on them.
Cook, Sharon A; Damato, Bertil; Marshall, Ernie; Salmon, Peter
2011-12-01
Influential views on how to protect patient autonomy in clinical care have been greatly shaped by rational and deliberative models of decision-making. Our aim was to understand how the general principle of respecting autonomy can be reconciled with the local reality of obtaining consent in a clinical situation that precludes extended deliberation. We interviewed 22 patients with intraocular melanoma who had been offered cytogenetic tumour typing to indicate whether the tumour was likely to shorten life considerably. They were interviewed before and/or up to 36 months after receiving cytogenetic results. Patients described their decision-making about the test and how they anticipated and used the results. Their accounts were analysed qualitatively, using inconsistencies at a descriptive level to guide interpretative analysis. Patients did not see a decision to be made. For those who accepted testing, their choice reflected trust of what the clinicians offered them. Patients anticipated that a good prognosis would be reassuring, but this response was not evident. Although they anticipated that a poor prognosis would enable end-of-life planning, adverse results were interpreted hopefully. In general, the meaning of the test for patients was not separable from ongoing care. Models of decision-making and associated consent procedures that emphasize patients' active consideration of isolated decision-making opportunities are invalid for clinical situations such as this. Hence, responsibility for ensuring that a procedure protects patients' interests rests with practitioners who offer it and cannot be delegated to patients. © 2010 Blackwell Publishing Ltd.
A promising sword of tomorrow: Human γδ T cell strategies reconcile allo-HSCT complications.
Hu, Yongxian; Cui, Qu; Luo, Chao; Luo, Yi; Shi, Jimin; Huang, He
2016-05-01
Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is potentially a curative therapeutic option for hematological malignancies. In clinical practice, transplantation-associated complications greatly affect the final therapeutic outcomes. Currently, primary disease relapse, graft-versus-host disease (GVHD) and infections remain the three leading causes of high morbidity and mortality in allo-HSCT patients. Various strategies have been investigated in the past several decades, including human γδ T cell-based therapeutic regimens. In different microenvironments, human γδ T cells assume features reminiscent of classical Th1, Th2, Th17, NKT and regulatory T cells, showing diverse biological functions. Cytotoxic γδ T cells can be utilized to target relapsed malignancies, and regulatory γδ T cells have recently been defined as a novel tool for GVHD management. In addition, human γδ T cells facilitate control of post-transplantation infections and participate in tissue regeneration and wound healing processes. These features make γδ T cells a versatile therapeutic agent for targeting transplantation-associated complications. This review focuses on the applicable potential of human γδ T cells for reconciling complications associated with allo-HSCT. We believe an improved understanding of pertinent γδ T cell functions could be further exploited in the design of innovative immunotherapeutic approaches in allo-HSCT, to reduce mortality and morbidity, as well as improve quality of life for patients after transplantation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Crespin, Silvio J; Simonetti, Javier A
2018-05-11
Land has traditionally been spared to protect biodiversity; however, this approach has not succeeded by itself and requires a complementary strategy in human-dominated landscapes: land-sharing. Human-wildlife conflicts are rampant in a land-sharing context where wildlife co-occur with crops or livestock, but whose resulting interactions adversely affect the wellbeing of land owners, ultimately impeding coexistence. Therefore, true land-sharing only works if coexistence is also considered an end goal. We reviewed the literature on land-sharing and found that conflicts have not yet found their way into the land-sharing/sparing framework, with wildlife and humans co-occurring without coexisting in a dynamic process. To successfully implement a land-sharing approach, we must first acknowledge our failure to integrate the body of work on human-wildlife conflicts into the framework and work to implement multidisciplinary approaches from the ecological, economic, and sociological sciences to overcome and prevent conflicts. We suggest the use of Conflict Transformation by means of the Levels of Conflict Model to perceive both visible and deep-rooted causes of conflicts as opportunities to create problem-solving dynamics in affected socio-ecological landscapes. Reconciling farming and nature is possible by aiming for a transition to landscapes that truly share space by virtue of coexistence.
Directory of Open Access Journals (Sweden)
Gottesman Irving I
2006-05-01
Full Text Available Abstract Background Two large independent studies funded by the US government have assessed the impact of the Vietnam War on the prevalence of PTSD in US veterans. The National Vietnam Veterans Readjustment Study (NVVRS) estimated the current PTSD prevalence to be 15.2%, while the Vietnam Experience Study (VES) estimated the prevalence to be 2.2%. We compared alternative criteria for estimating the prevalence of PTSD using the NVVRS and VES public use data sets collected more than 10 years after the United States withdrew troops from Vietnam. Methods We applied uniform diagnostic procedures to the male veterans from the NVVRS and VES to estimate PTSD prevalences based on varying criteria, including one-month and lifetime prevalence estimates, combat and non-combat prevalence estimates, and prevalence estimates using both single and multiple indicator models. Results Using a narrow and specific set of criteria, we derived current prevalence estimates for combat-related PTSD of 2.5% and 2.9% for the VES and the NVVRS, respectively. Using a more broad and sensitive set of criteria, we derived current prevalence estimates for combat-related PTSD of 12.2% and 15.8% for the VES and NVVRS, respectively. Conclusion When comparable methods were applied to available data we reconciled disparate results and estimated similar current prevalences for both narrow and broad definitions of combat-related diagnoses of PTSD.
International Nuclear Information System (INIS)
Ladant, J.B.; Donnadieu, Y.; Dumas, C.
2014-01-01
The timing of the onset of the Antarctic Circumpolar Current (ACC) is a crucial event of the Cenozoic because of its cooling and isolating effect over Antarctica. It is intimately related to the glaciations occurring throughout the Cenozoic, from the Eocene-Oligocene (EO) transition (∼34 Ma) to the middle Miocene glaciations (∼13.9 Ma). However, the exact timing of the onset remains debated, with evidence for a late Eocene setup contradicting other data pointing to an occurrence closer to the Oligocene-Miocene (OM) boundary. In this study, we show the potential impact of the Antarctic ice sheet on the initiation of a strong proto-ACC at the EO boundary. Our results reveal that the regional cooling effect of the ice sheet increases sea ice formation, which disrupts the meridional density gradient in the Southern Ocean and leads to the onset of a circumpolar current and its progressive strengthening. We also suggest that subsequent variations in atmospheric CO2, ice sheet volumes and tectonic reorganizations may have affected the ACC intensity after the Eocene-Oligocene transition. This allows us to build a hypothesis for the Cenozoic evolution of the Antarctic Circumpolar Current that may provide an explanation for the second initiation of the ACC at the Oligocene-Miocene boundary while reconciling evidence supporting both early Oligocene and early Miocene onset of the ACC. (authors)
Class of unconditionally stable second-order implicit schemes for hyperbolic and parabolic equations
International Nuclear Information System (INIS)
Lui, H.C.
The linearized Burgers equation is considered as a model: u_t + a u_x = b u_xx, where the subscripts t and x denote the partial derivatives of the function u with respect to time t and space x; a and b are constants (b greater than or equal to 0). Numerical schemes for solving the equation are described that are second-order accurate, unconditionally stable, and dissipative of higher order. (U.S.)
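As one illustration of this class of schemes, a sketch of the Crank-Nicolson discretization of the same model equation u_t + a u_x = b u_xx, which is second-order accurate and unconditionally stable (the paper's own schemes are not reproduced here):

```python
import numpy as np

def crank_nicolson_step(u, a, b, dx, dt):
    """One Crank-Nicolson step for u_t + a*u_x = b*u_xx on a periodic
    grid: second-order accurate in time and space, unconditionally
    stable. Dense matrices are used for clarity, not efficiency."""
    n = len(u)
    eye = np.eye(n)
    up = np.roll(eye, 1, axis=1)    # picks out u[i+1] (periodic wrap)
    um = np.roll(eye, -1, axis=1)   # picks out u[i-1] (periodic wrap)
    # spatial operator: central differences for u_x and u_xx
    L = -a * (up - um) / (2 * dx) + b * (up - 2 * eye + um) / dx**2
    # trapezoidal average of L between time levels n and n+1
    return np.linalg.solve(eye - 0.5 * dt * L, (eye + 0.5 * dt * L) @ u)
```

Because the scheme is implicit, the step remains bounded even for time steps far beyond the explicit diffusion limit dt ≤ dx²/(2b).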
Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion
Jiandong Duan; Xinyu Qiu; Wentao Ma; Xuan Tian; Di Shang
2018-01-01
In recent years, with the deepening of China's electricity sales side reform and the gradual opening up of the electricity market, forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically remain key research topics. In this paper, we propose a novel prediction scheme based on the least-square support vector machine (LSSVM) model with a...
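The abstract is truncated, but the LSSVM core it builds on is compact: with equality constraints, training reduces to a single linear solve rather than a quadratic program. A sketch of standard LSSVM regression follows, without the paper's maximum-correntropy extension; the kernel choice and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Train LSSVM regression: the KKT conditions of the equality-
    constrained problem form one (n+1)x(n+1) linear system in the
    bias b and the dual weights alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xq, X, alpha, b, sigma=1.0):
    """Predict at query points Xq from training inputs X."""
    return rbf_kernel(Xq, X, sigma) @ alpha + b
```

The regularization parameter gamma trades fit against smoothness; replacing the squared-error loss with a correntropy criterion, as the paper proposes, changes this training objective.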
A stable higher order space time Galerkin marching-on-in-time scheme
Pray, Andrew J.
2013-07-01
We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.
Dynamic spectro-polarimeter based on a modified Michelson interferometric scheme.
Dembele, Vamara; Jin, Moonseob; Baek, Byung-Joon; Kim, Daesuk
2016-06-27
A simple dynamic spectro-polarimeter based on a modified Michelson interferometric scheme is described. The proposed system can extract the spectral Stokes vector of a transmissive anisotropic object. The detailed theoretical background is derived and experiments are conducted to verify the feasibility of the proposed novel snapshot spectro-polarimeter. The proposed dynamic spectro-polarimeter enables us to extract a highly accurate spectral Stokes vector of any transmissive anisotropic object at a frame rate of more than 20 Hz.
A comparison of resampling schemes for estimating model observer performance with small ensembles
Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.
2017-09-01
In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.
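As a concrete illustration of one combination studied, a sketch of the half-train/half-test estimate of observer AUC, using a plain Hotelling observer on raw features rather than a channelized one; the names and details are simplified assumptions, not the paper's implementation.

```python
import numpy as np

def auc_mann_whitney(t0, t1):
    """AUC from observer test statistics via the rank-sum identity:
    the fraction of (signal, background) pairs ordered correctly."""
    t0, t1 = np.asarray(t0, float), np.asarray(t1, float)
    return (t1[:, None] > t0[None, :]).mean()

def hotelling_auc_half_half(x0, x1, rng):
    """Half-train/half-test AUC: estimate the Hotelling template on one
    half of each class's ensemble, then score the held-out half."""
    n0, n1 = len(x0) // 2, len(x1) // 2
    i0, i1 = rng.permutation(len(x0)), rng.permutation(len(x1))
    tr0, te0 = x0[i0[:n0]], x0[i0[n0:]]
    tr1, te1 = x1[i1[:n1]], x1[i1[n1:]]
    S = 0.5 * (np.cov(tr0.T) + np.cov(tr1.T))          # pooled covariance
    w = np.linalg.solve(S, tr1.mean(0) - tr0.mean(0))  # Hotelling template
    return auc_mann_whitney(te0 @ w, te1 @ w)
```

A leave-one-out variant would instead retrain the template n times, scoring each left-out image with a template built from the rest.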
A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.
Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K
2014-05-01
Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious, time consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be an accurate way to segment structures in 4-D data series. However, directly applying registration-based segmentation to a 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme, based on spatiotemporal information, for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving segmentations of accuracy comparable to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and potentially the tracking of the tumor during radiation delivery.
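The core idea of registration-based segmentation can be sketched crudely: segment one volume, then propagate the mask through the series by registering consecutive volumes. Real schemes use deformable registration; the translation-only, FFT cross-correlation stand-in below is purely illustrative, and the function names are assumptions.

```python
import numpy as np

def translation_by_cross_correlation(fixed, moving):
    """Estimate the integer circular shift aligning `moving` to `fixed`
    via FFT cross-correlation (a crude stand-in for the deformable
    registration used by real segmentation schemes)."""
    corr = np.fft.ifftn(np.fft.fftn(fixed) * np.conj(np.fft.fftn(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    return tuple(p if p <= d // 2 else p - d for p, d in zip(peak, corr.shape))

def propagate_labels(volumes, seed_mask):
    """Registration-based segmentation sketch: segment the first 3-D
    volume once (seed_mask), then warp the mask through the 4-D series
    by registering each volume to its predecessor."""
    masks = [seed_mask]
    for prev_vol, cur_vol in zip(volumes, volumes[1:]):
        shift = translation_by_cross_correlation(cur_vol, prev_vol)
        masks.append(np.roll(masks[-1], shift, axis=(0, 1, 2)))
    return masks
```

Because only consecutive volumes are registered, the cost grows linearly with the series length, which is the kind of saving a spatiotemporal scheme exploits.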
Financial incentive schemes in primary care
Directory of Open Access Journals (Sweden)
Gillam S
2015-09-01
Full Text Available Stephen Gillam Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, UK Abstract: Pay-for-performance (P4P schemes have become increasingly common in primary care, and this article reviews their impact. It is based primarily on existing systematic reviews. The evidence suggests that P4P schemes can change health professionals' behavior and improve recorded disease management of those clinical processes that are incentivized. P4P may narrow inequalities in performance comparing deprived with nondeprived areas. However, such schemes have unintended consequences. Whether P4P improves the patient experience, the outcomes of care or population health is less clear. These practical uncertainties mirror the ethical concerns of many clinicians that a reductionist approach to managing markers of chronic disease runs counter to the humanitarian values of family practice. The variation in P4P schemes between countries reflects different historical and organizational contexts. With so much uncertainty regarding the effects of P4P, policy makers are well advised to proceed carefully with the implementation of such schemes until and unless clearer evidence for their cost–benefit emerges. Keywords: financial incentives, pay for performance, quality improvement, primary care
2007-01-01
As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in http://Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by save leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...
International Nuclear Information System (INIS)
Harrop, J.
1998-01-01
Is it possible to reconcile the aspirations of community participants in a wind energy project with the requirements imposed by the Non-Fossil Fuel Obligation legislation and procedure? This paper considers the practical experience of the framework that was adopted at Harlock Hill wind farm for community participation and the legal structures that were required to ensure that the project retained the full benefit of the premium price arrangements with the Non-Fossil Purchasing Agency Limited. (Author)
McKenzie, Emily; Potestio, Melissa L; Boyd, Jamie M; Niven, Daniel J; Brundin-Mather, Rebecca; Bagshaw, Sean M; Stelfox, Henry T
2017-12-01
Providers have traditionally established priorities for quality improvement; however, patients and their family members have recently become involved in priority setting. Little is known about how to reconcile priorities of different stakeholder groups into a single prioritized list that is actionable for organizations. To describe the decision-making process for establishing consensus used by a diverse panel of stakeholders to reconcile two sets of quality improvement priorities (provider/decision maker priorities n=9; patient/family priorities n=19) into a single prioritized list. We employed a modified Delphi process with a diverse group of panellists to reconcile priorities for improving care of critically ill patients in the intensive care unit (ICU). Proceedings were audio-recorded, transcribed and analysed using qualitative content analysis to explore the decision-making process for establishing consensus. Nine panellists including three providers, three decision makers and three family members of previously critically ill patients. Panellists rated and revised 28 priorities over three rounds of review and reached consensus on the "Top 5" priorities for quality improvement: transition of patient care from ICU to hospital ward; family presence and effective communication; delirium screening and management; early mobilization; and transition of patient care between ICU providers. Four themes were identified as important for establishing consensus: storytelling (sharing personal experiences), amalgamating priorities (negotiating priority scope), considering evaluation criteria and having a priority champion. Our study demonstrates the feasibility of incorporating families of patients into a multistakeholder prioritization exercise. The approach described can be used to guide consensus building and reconcile priorities of diverse stakeholder groups. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
A fast resonance interference treatment scheme with subgroup method
International Nuclear Information System (INIS)
Cao, L.; He, Q.; Wu, H.; Zu, T.; Shen, W.
2015-01-01
A fast Resonance Interference Factor (RIF) scheme is proposed to treat the resonance interference effects between different resonance nuclides. This scheme utilizes the conventional subgroup method to evaluate the self-shielded cross sections of the dominant resonance nuclide in the heterogeneous system and the hyper-fine energy group method to represent the resonance interference effects in a simplified homogeneous model. In this paper, the newly implemented scheme is compared to the background iteration scheme, the Resonance Nuclide Group (RNG) scheme and the conventional RIF scheme. The numerical results show that the errors of the effective self-shielded cross sections are significantly reduced by the fast RIF scheme compared with the background iteration scheme and the RNG scheme. In addition, the fast RIF scheme consumes less computation time than the conventional RIF scheme. The speed-up ratio is ~4.5 for MOX pin cell problems. (author)
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim
2014-01-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
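The EnKF component of the hybrid scheme described above can be sketched as follows. This is a generic stochastic EnKF analysis step with perturbed observations, not the authors' hybrid EnKF-OI code; the state dimension, observation operator, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step (perturbed observations).

    X : (n, N) forecast ensemble of state vectors
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Kalman gain built from sample covariances
    Pyy = HA @ HA.T / (N - 1) + R
    Pxy = A @ HA.T / (N - 1)
    K = Pxy @ np.linalg.inv(Pyy)
    # perturb observations so the analysis spread is statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)

# toy example: 2-component state, only the first component is observed
X = rng.normal(5.0, 2.0, size=(2, 200))          # forecast ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([3.0]), H, R)
```

With a tight observation-error variance relative to the prior spread, the analysis mean of the observed component is pulled strongly toward the observation.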
How update schemes influence crowd simulations
International Nuclear Information System (INIS)
Seitz, Michael J; Köster, Gerta
2014-01-01
Time discretization is a key modeling aspect of dynamic computer simulations. In current pedestrian motion models based on discrete events, e.g. cellular automata and the Optimal Steps Model, fixed-order sequential updates and shuffle updates are prevalent. We propose to use event-driven updates that process events in the order they occur, and thus better match natural movement. In addition, we present a parallel update with collision detection and resolution for situations where computational speed is crucial. Two simulation studies serve to demonstrate the practical impact of the choice of update scheme. Not only do density-speed relations differ, but there is a statistically significant effect on evacuation times. Fixed-order sequential and random shuffle updates with a short update period come close to event-driven updates. The parallel update scheme overestimates evacuation times. All schemes can be employed for arbitrary simulation models with discrete events, such as car traffic or animal behavior. (paper)
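The event-driven update advocated above can be illustrated with a minimal single-file corridor model. This toy sketch is not the Optimal Steps Model; the cell layout, per-agent speeds, and blocked-retry interval are all assumptions. Each agent's next step is an event, and events are processed in timestamp order with a heap.

```python
import heapq

def evacuate_event_driven(positions, speeds, exit_pos):
    """Event-driven update: each agent's next step is a timestamped event.
    positions: dict agent -> integer cell; speeds: dict agent -> cells/s.
    Returns the time at which the last agent reaches exit_pos."""
    occupied = set(positions.values())
    events = [(1.0 / speeds[a], a) for a in positions]   # first step times
    heapq.heapify(events)
    t_done = 0.0
    while events:
        t, a = heapq.heappop(events)         # earliest event first
        nxt = positions[a] + 1
        if nxt == exit_pos:                  # agent leaves the corridor
            occupied.discard(positions[a])
            del positions[a]
            t_done = t
        elif nxt not in occupied:            # step forward if the cell is free
            occupied.discard(positions[a])
            occupied.add(nxt)
            positions[a] = nxt
            heapq.heappush(events, (t + 1.0 / speeds[a], a))
        else:                                # blocked: retry shortly
            heapq.heappush(events, (t + 0.1, a))
    return t_done

# two agents, the slower one in front; exit at cell 3
t = evacuate_event_driven({0: 0, 1: 1}, {0: 1.0, 1: 2.0}, exit_pos=3)
```

A fixed-order sequential or parallel update of the same model would process all agents at shared clock ticks instead, which is exactly the difference in update schemes the abstract examines.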
An adaptive Cartesian control scheme for manipulators
Seraji, H.
1987-01-01
An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
A Traffic Restriction Scheme for Enhancing Carpooling
Directory of Open Access Journals (Sweden)
Dong Ding
2017-01-01
Full Text Available For the purpose of alleviating traffic congestion, this paper proposes a scheme to encourage travelers to carpool by traffic restriction. Using a variational inequality we describe travelers’ mode (solo driving or carpooling) and route choice under the user equilibrium principle in the context of fixed demand, and examine the performance of a simple network with various restriction links, restriction proportions, and carpooling costs. Then the optimal traffic restriction scheme aiming at minimal total travel cost is designed through a bilevel program and applied to a Sioux Falls network example with a genetic algorithm. According to various requirements, optimal restriction regions and proportions for restricted automobiles are captured. From the results it is found that a traffic restriction scheme can enhance carpooling and alleviate congestion. However, higher carpooling demand is not always helpful to the whole network. The topology of the network, OD demand, and carpooling cost are among the factors influencing the performance of the traffic system.
Quantum Watermarking Scheme Based on INEQR
Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-04-01
Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the embedding carrier image. Secondly, the swap and XOR operations are used on the processed pixels. Since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark image extraction and embedding operations are described, where the key image, swap operation and LSB algorithm are used. When the embedding is made, the binary image key is changed, which indicates that the watermark has been embedded. Conversely, to extract the watermark image, the key's state needs to be detected: when the key's state is |1>, the extraction operation is carried out. Finally, for validation of the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
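A classical analogue may clarify the role of the XOR encryption and LSB embedding steps described above. The sketch below operates on ordinary integers rather than INEQR quantum images, and all pixel values, the key, and the helper names are illustrative assumptions, not the paper's construction.

```python
def embed(carrier, watermark, key):
    """Embed watermark bits into carrier-pixel LSBs, XOR-encrypted with a key.
    carrier: list of 8-bit ints; watermark, key: equal-length bit lists."""
    out = list(carrier)
    for i, (w, k) in enumerate(zip(watermark, key)):
        out[i] = (out[i] & ~1) | (w ^ k)   # overwrite LSB with encrypted bit
    return out

def extract(stego, key, n):
    """Recover the watermark by XOR-ing the stego LSBs with the key again."""
    return [(stego[i] & 1) ^ key[i] for i in range(n)]

carrier   = [200, 135, 76, 41]
watermark = [1, 0, 1, 1]
key       = [0, 1, 1, 0]
stego     = embed(carrier, watermark, key)
recovered = extract(stego, key, len(watermark))
```

Because only the least significant bit of each pixel changes, the embedded watermark perturbs each pixel value by at most 1, which is why such schemes are visually imperceptible.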
Improvement of One Quantum Encryption Scheme
Cao, Zhengjun; Liu, Lihua
2012-01-01
Zhou et al. proposed a quantum encryption scheme based on quantum computation in 2006 [N. Zhou et al., Physica A 362 (2006) 305]. Each qubit of the ciphertext is constrained to two pairs of conjugate states, so its implementation is feasible with existing technology. But it is inefficient, since it entails six key bits to encrypt one message bit, and the resulting ciphertext for one message bit consists of three qubits. In addition, its security cannot be directly reduced to the well-known BB84 protocol. In this paper, we improve it using the technique developed in the BB84 protocol. The new scheme entails only two key bits to encrypt one message bit, and the resulting ciphertext is composed of just two qubits. It saves about half the cost without loss of security. Moreover, the new scheme is probabilistic instead of deterministic.
Wang, Yu-Nu; Shyu, Yea-Ing Lotus; Chen, Min-Chi; Yang, Pei-Shan
2011-04-01
This paper is a report of a study that examined the effects of work demands, including employment status, work inflexibility and difficulty reconciling work and family caregiving, on role strain and depressive symptoms of adult-child family caregivers of older people with dementia. Family caregivers also employed for pay are known to be affected by work demands, i.e. excessive workload and time pressures. However, few studies have shown how these work demands and reconciliation between work and family caregiving influence caregivers' role strain and depressive symptoms. For this cross-sectional study, secondary data were analysed for 119 adult-child family caregivers of older people with dementia in Taiwan using hierarchical multiple regression. After adjusting for demographic characteristics, resources and role demands overload, family caregivers with full-time jobs (β=0.25) and those with more difficulty reconciling work and caregiving roles (β=0.36) reported greater role strain than those working part-time or unemployed. Family caregivers with more work inflexibility reported more depressive symptoms (β=0.29). Work demands affected family caregivers' role strain and depressive symptoms: working full-time and having more difficulty reconciling work and caregiving roles predicted role strain; work inflexibility predicted depressive symptoms. These results can help clinicians identify high-risk groups for role strain and depression. Nurses need to assess family caregivers for work flexibility when screening for high-risk groups and encourage them to reconcile working with family-care responsibilities to reduce role strain. © 2010 Blackwell Publishing Ltd.
Directory of Open Access Journals (Sweden)
Rivanda Meira Teixeira
2016-03-01
Full Text Available Women have gained more and more space in various professional areas, and this development also occurs in the field of entrepreneurship. In Brazil, GEM 2013 identified for the first time that the number of new women entrepreneurs was higher than that of male entrepreneurs. However, it is recognized that women entrepreneurs face many difficulties when trying to reconcile their companies with the family. The main objective of this research is to analyse the challenges faced by women entrepreneurs of travel agencies in reconciling the conflict between work and family. The study adopted a multiple-case research strategy: seven women founders and managers of travel agencies in the cities of Aracaju and Barra dos Coqueiros, in the state of Sergipe (east coast of Brazil), were selected. In attempting to reconcile their multiple roles, these women often face frustration and guilt; here the emotional support of husbands and children proves important. The search for balance between these conflicting demands generates emotional and/or physical distress.
DEFF Research Database (Denmark)
Alrabadi, Osama; Papadias, C.B.; Kalis, A.
2009-01-01
A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO sys...
A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes
DEFF Research Database (Denmark)
Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis
2009-01-01
array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...
Carbon trading: Current schemes and future developments
International Nuclear Information System (INIS)
Perdan, Slobodan; Azapagic, Adisa
2011-01-01
This paper looks at the greenhouse gas (GHG) emissions trading schemes and examines the prospects of carbon trading. The first part of the paper gives an overview of several mandatory GHG trading schemes around the world. The second part focuses on the future trends in carbon trading. It argues that the emergence of new schemes, a gradual enlargement of the current ones, and willingness to link existing and planned schemes seem to point towards geographical, temporal and sectoral expansion of emissions trading. However, such expansion would need to overcome some considerable technical and non-technical obstacles. Linking of the current and emerging trading schemes requires not only considerable technical fixes and harmonisation of different trading systems, but also necessitates clear regulatory and policy signals, continuing political support and a more stable economic environment. Currently, the latter factors are missing. The global economic turmoil and its repercussions for the carbon market, a lack of the international deal on climate change defining the Post-Kyoto commitments, and unfavourable policy shifts in some countries, cast serious doubts on the expansion of emissions trading and indicate that carbon trading enters an uncertain period. - Highlights: → The paper provides an extensive overview of mandatory emissions trading schemes around the world. → Geographical, temporal and sectoral expansion of emissions trading are identified as future trends. → The expansion requires considerable technical fixes and harmonisation of different trading systems. → Clear policy signals, political support and a stable economic environment are needed for the expansion. → A lack of the post-Kyoto commitments and unfavourable policy shifts indicate an uncertain future for carbon trading.
Pressure correction schemes for compressible flows
International Nuclear Information System (INIS)
Kheriji, W.
2011-01-01
This thesis is concerned with the development of semi-implicit fractional step schemes for the compressible Navier-Stokes equations; these schemes are part of the class of pressure correction methods. The chosen spatial discretization is staggered: non-conforming mixed finite elements (Crouzeix-Raviart or Rannacher-Turek) or the classic MAC scheme. An upwind finite volume discretization of the mass balance guarantees the positivity of the density. The positivity of the internal energy is obtained by discretizing the internal energy balance by an upwind finite volume scheme and by coupling the discrete internal energy balance with the pressure correction step. A special finite volume discretization on dual cells is performed for the convection term in the momentum balance equation, and a renormalisation step for the pressure is added to the algorithm; this ensures the control in time of the integral of the total energy over the domain. All these a priori estimates imply the existence of a discrete solution by a topological degree argument. The application of this scheme to the Euler equations raises an additional difficulty. Indeed, obtaining correct shocks requires the scheme to be consistent with the total energy balance, a property which we obtain as follows. First of all, a local discrete kinetic energy balance is established; it contains source terms which we compensate in the internal energy balance. The kinetic and internal energy equations are associated with the dual and primal meshes respectively, and thus cannot be added to obtain a total energy balance; its continuous counterpart is however recovered at the limit: if we suppose that a sequence of discrete solutions converges when the space and time steps tend to 0, we indeed show, in 1D at least, that the limit satisfies a weak form of the equation. These theoretical results are corroborated by numerical tests. Similar results are obtained for the barotropic Navier-Stokes equations. (author)
EPU correction scheme study at the CLS
Energy Technology Data Exchange (ETDEWEB)
Bertwistle, Drew, E-mail: drew.bertwistle@lightsource.ca; Baribeau, C.; Dallin, L.; Chen, S.; Vogt, J.; Wurtz, W. [Canadian Light Source Inc. 44 Innovation Boulevard, Saskatoon, SK S7N 2V3 (Canada)
2016-07-27
The Canadian Light Source (CLS) Quantum Materials Spectroscopy Center (QMSC) beamline will employ a novel double period (55 mm, 180 mm) elliptically polarizing undulator (EPU) to produce photons of arbitrary polarization in the soft X-ray regime. The long period and high field of the 180 mm period EPU will have a strong dynamic focusing effect on the storage ring electron beam. We have considered two partial correction schemes, a 4 m long planar array of BESSY-II style current strips, and soft iron L-shims. In this paper we briefly consider the implementation of these correction schemes.
Verification of an objective analysis scheme
International Nuclear Information System (INIS)
Cats, G.J.; Haan, B.J. de; Hafkenscheid, L.M.
1987-01-01
An intermittent data assimilation scheme has been used to produce wind and precipitation fields during the 10 days after the explosion at the Chernobyl nuclear power plant on 25 April 1986. The wind fields are analyses, the precipitation fields have been generated by the forecast model part of the scheme. The precipitation fields are of fair quality. The quality of the wind fields has been monitored by the ensuing trajectories. These were found to describe the arrival times of radioactive air in good agreement with most observational data, taken all over Europe. The wind analyses are therefore considered to be reliable. 25 refs.; 13 figs
Optimal powering schemes for legged robotics
Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.
2010-04-01
Legged robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. A control scheme designed to travel from point A to point B in the least time differs from one designed to do so with the least energy. In this paper, we seek a balance between these extremes by considering both efficiency and speed. We model a walking robot as a rimless wheel and, using Pontryagin's Maximum Principle (PMP), find an "on-off" control for the model and describe the switching curve between these control extremes.
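The "on-off" structure that PMP typically yields can be illustrated on the classic time-optimal double-integrator problem, a much simpler stand-in for the rimless-wheel model above. The switching curve below is the standard textbook result for this problem, not the paper's; the tolerances and step size are illustrative assumptions.

```python
def bang_bang_u(x, v):
    """Time-optimal control of a double integrator (x' = v, v' = u, |u| <= 1).
    PMP yields bang-bang control; the switching curve is x = -v*|v|/2."""
    s = x + 0.5 * v * abs(v)
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -1.0 if v > 0 else (1.0 if v < 0 else 0.0)  # on the curve

def simulate(x, v, dt=1e-3, t_max=20.0):
    """Drive the state to (a small neighborhood of) the origin."""
    t = 0.0
    while t < t_max and (abs(x) > 1e-2 or abs(v) > 1e-2):
        u = bang_bang_u(x, v)
        v += u * dt            # semi-implicit (symplectic) Euler
        x += v * dt
        t += dt
    return t, x, v

t, x, v = simulate(2.0, 0.0)   # analytic optimum is 2*sqrt(2) s
```

The control takes only its extreme values and switches exactly once along each optimal trajectory, which is the "on-off" character the abstract refers to.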
System Protection Schemes in Eastern Denmark
DEFF Research Database (Denmark)
Rasmussen, Joana
…outages in the southern part of the 132-kV system introduce further stress in the power system, eventually leading to a voltage collapse. The local System Protection Scheme against voltage collapse is designed as a response-based scheme, which is dependent on local indication of reactive and active power… effective measures, because they are associated with large reactive power losses in the transmission system. Ordered reduction of wind generation is considered an effective measure to maintain voltage stability in the system. Reactive power in the system is released due to tripping of a significant amount… system. In that way, the power system capability could be extended beyond normal limits…
Group Buying Schemes : A Sustainable Business Model?
Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus
2013-01-01
The authors examine whether group buying schemes, such as those offered by the companies Groupon and Dein Deal, are a sustainable business model. By means of the Groupon case study and a System Dynamics model, they find that the business model must be changed if the company is to remain viable in the long term.
New Imaging Operation Scheme at VLTI
Haubois, Xavier
2018-04-01
After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.
Hilbert schemes of points and Heisenberg algebras
International Nuclear Information System (INIS)
Ellingsrud, G.; Goettsche, L.
2000-01-01
Let X^[n] be the Hilbert scheme of n points on a smooth projective surface X over the complex numbers. In these lectures we describe the action of the Heisenberg algebra on the direct sum of the cohomologies of all the X^[n], which has been constructed by Nakajima. In the second half of the lectures we study the relation of the Heisenberg algebra action and the ring structures of the cohomologies of the X^[n], following recent work of Lehn. In particular we study the Chern and Segre classes of tautological vector bundles on the Hilbert schemes X^[n]. (author)
Security problem on arbitrated quantum signature schemes
International Nuclear Information System (INIS)
Choi, Jeong Woon; Chang, Ku-Young; Hong, Dowon
2011-01-01
Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.
Optimal sampling schemes applied in geology
CSIR Research Space (South Africa)
Debba, Pravesh
2010-05-01
Full Text Available Presentation outline (Debba, CSIR, "Optimal Sampling Schemes Applied in Geology", UP 2010): introduction to hyperspectral remote sensing; objective of Study 1; study area; data used; methodology; results; background and research question for Study 2; study area and data; methodology; results; conclusions.
Quadratically convergent MCSCF scheme using Fock operators
International Nuclear Information System (INIS)
Das, G.
1981-01-01
A quadratically convergent formulation of the MCSCF method using Fock operators is presented. Unlike earlier formulations based on Fock operators, the present one is quadratically convergent. In contrast to other quadratically convergent schemes, as well as the one based on the generalized Brillouin theorem, this method leads easily to a hybrid scheme in which the weakly coupled orbitals (such as the core) are handled purely by Fock equations, while the rest of the orbitals are treated by a quadratically convergent approach with a truncated virtual space obtained by the use of the corresponding Fock equations.
Clocking Scheme for Switched-Capacitor Circuits
DEFF Research Database (Denmark)
Steensgaard-Madsen, Jesper
1998-01-01
A novel clocking scheme for switched-capacitor (SC) circuits is presented. It can enhance the understanding of SC circuits and the errors caused by MOSFET (MOS) switches. Charge errors, and techniques to make SC circuits less sensitive to them, are discussed.
Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification
Energy Technology Data Exchange (ETDEWEB)
Blottner, F.G.; Lopez, A.R.
1998-10-01
This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure, as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
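The Richardson-extrapolation step of such a verification procedure can be sketched as follows. The three-grid data here are synthetic (a manufactured second-order convergence curve, not the report's results), and a constant refinement ratio is assumed.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three systematically
    refined grids (spacings h, h/r, h/r^2) with constant refinement ratio r:
        p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)
    """
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# synthetic grid-convergence data: f(h) = 1.0 + 0.5*h**2, i.e. a scheme
# whose discretization error is exactly second order in h
f = lambda h: 1.0 + 0.5 * h**2
p = observed_order(f(0.4), f(0.2), f(0.1), r=2)

# Richardson-extrapolated estimate of the exact (h -> 0) value
f_exact = f(0.1) + (f(0.1) - f(0.2)) / (2**p - 1)
```

When the observed order p matches the formal order of the scheme, the mesh refinement study supports the consistency of the implementation; a mismatch signals coding errors, insufficiently fine meshes, or a problem that is not in the asymptotic range.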
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
Zhu, Wuming; Trickey, S B
2017-12-28
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Zhu, Wuming; Trickey, S. B.
2017-12-01
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Analysis of Program Obfuscation Schemes with Variable Encoding Technique
Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi
Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
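The variable-encoding idea compared in the paper can be illustrated with a toy XOR-encoding transformation: a variable is never stored in the clear, and arithmetic is performed by decoding, operating, and re-encoding. This is a hedged sketch of the general technique only, not the authors' actual scheme; the fixed mask `MASK` and the helper names are assumptions (a real obfuscator would hide or derive the mask and inline the transformations).

```python
MASK = 0x5A5A5A5A  # hypothetical fixed 32-bit key; real schemes conceal this

def encode(x):
    """Store a variable in XOR-encoded form."""
    return x ^ MASK

def decode(e):
    """Recover the plain value from its encoded form."""
    return e ^ MASK

def add_encoded(ea, eb):
    """Addition on encoded operands: decode, operate, re-encode,
    so the plain values never persist in memory."""
    return encode(decode(ea) + decode(eb))

a, b = encode(7), encode(35)
result = decode(add_encoded(a, b))  # plain sum recovered only at the end
```

The scheme's security rests on how hard it is for an analyst to recover the mask from data-dependency or brute-force attacks, which is exactly the comparison the paper quantifies.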
Investigation of schemes for incorporating generator Q limits in the ...
Indian Academy of Sciences (India)
Handling generator Q limits is one such important feature needed in any practical load flow method. This paper presents a comprehensive investigation of two classes of schemes intended to handle this aspect, i.e. the bus-type switching scheme and the sensitivity scheme. We propose two new sensitivity based schemes ...
Sirenko, Kostyantyn; Liu, Meilin; Bagci, Hakan
2013-01-01
A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing
Sources of funding for community schemes
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-11-01
There is an increasing level of interest amongst community groups in the UK to become involved in the development of renewable energy schemes. Often however these community groups have only limited funds of their own, so any additional funds that can be identified to help fund their renewable energy scheme can be very useful. There are a range of funding sources available that provide grants or loans for which community groups are eligible to apply. Few of these funding sources are targeted towards renewable energy specifically, nevertheless the funds may be applicable to renewable energy schemes under appropriate circumstances. To date, however, few of these funds have been accessed by community groups for renewable energy initiatives. One of the reasons for this low take-up of funds on offer could be that the funding sources may be difficult and time-consuming to identify, especially where the energy component of the fund is not readily apparent. This directory draws together details about many of the principal funding sources available in the UK that may consider providing funds to community groups wanting to develop a renewable energy scheme. (author)
The QKD network: model and routing scheme
Yang, Chao; Zhang, Hongqi; Su, Jinhai
2017-11-01
Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as the distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trusted-relay QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trusted-relay QKD network in detail. This paper focuses on the routing issues, builds a model of the trusted-relay QKD network for easily analysing and understanding this network, and proposes a dynamical routing scheme for it. Following the design of dynamical routing schemes in classical networks, the proposed scheme consists of three components: a Hello protocol that helps share the network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
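The path-selection component of such a routing scheme can be sketched as ordinary shortest-path routing over the relay topology, with edge weights reflecting link quality. This is an illustrative sketch, not the paper's algorithm: the topology, the choice of Dijkstra, and the weighting of each link by the inverse of an assumed residual key rate are all assumptions.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a trusted-relay QKD topology. Edge weight models
    link cost, e.g. the inverse of the link's residual secret-key rate."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from dst to recover the relay chain.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Hypothetical 4-node relay topology; weight = 1 / key rate (kbit/s).
topo = {"A": {"B": 1/5, "C": 1/2},
        "B": {"A": 1/5, "D": 1/4},
        "C": {"A": 1/2, "D": 1/1},
        "D": {}}
```

With these assumed rates the A-to-D traffic is relayed through B, the higher-rate chain; the paper's link-state update mechanism would correspond to refreshing the weights as key pools drain.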
SYNTHESIS OF VISCOELASTIC MATERIAL MODELS (SCHEMES)
Directory of Open Access Journals (Sweden)
V. Bogomolov
2014-10-01
Full Text Available The principles of constructing structural viscoelastic schemes for materials with linear viscoelastic properties, in accordance with given experimental creep-test data, are analyzed. It is shown that there can be only four types of materials with linear viscoelastic properties.
BPHZL-subtraction scheme and axial gauges
Energy Technology Data Exchange (ETDEWEB)
Kreuzer, M.; Rebhan, A.; Schweda, M.; Piguet, O.
1986-03-27
The application of the BPHZL subtraction scheme to Yang-Mills theories in axial gauges is presented. In the auxiliary mass formulation we show the validity of the convergence theorems for subtracted momentum space integrals, and we give the integral formulae necessary for one-loop calculations. (orig.)
The data cyclotron query processing scheme
R.A. Goncalves (Romulo); M.L. Kersten (Martin)
2010-01-01
Distributed database systems exploit static workload characteristics to steer data fragmentation and data allocation schemes. However, the grand challenge of distributed query processing is to come up with a self-organizing architecture, which exploits all resources to manage the hot
THE DEVELOPMENT OF FREE PRIMARY EDUCATION SCHEME ...
African Journals Online (AJOL)
user
Education scheme in Western Region and marked a radical departure from the hitherto ... academic symposia, lectures, debates, reputable journals and standard .... Enrolment in Primary Schools in the Western Region by Sex, 1953 – 1960. Year Boys .... “Possibly no single decision of the decade prior to independence had.
High Order Semi-Lagrangian Advection Scheme
Malaga, Carlos; Mandujano, Francisco; Becerra, Julian
2014-11-01
In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must correctly compute the advection terms appearing in the equations governing fluid flow. Here we present a high order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when the scheme is implemented on the massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
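The basic semi-Lagrangian idea, tracing grid points along characteristics and interpolating, can be shown in one dimension. Note the assumptions: this sketch uses the simpler backward variant with first-order (linear) interpolation on a fixed periodic grid, not the paper's high-order forward scheme on deforming grids.

```python
def semi_lagrangian_step(f, u, dx, dt):
    """One backward semi-Lagrangian step for the linear advection equation
    f_t + u f_x = 0 on a periodic grid: trace each node back along its
    characteristic to the departure point and interpolate f there."""
    n = len(f)
    out = [0.0] * n
    for i in range(n):
        xd = (i - u * dt / dx) % n          # departure point, in grid units
        j = int(xd) % n                     # left neighbour of departure point
        frac = xd - int(xd)                 # fractional offset for interpolation
        out[i] = (1 - frac) * f[j] + frac * f[(j + 1) % n]
    return out
```

When the Courant number `u*dt/dx` is an integer the interpolation is exact and the profile shifts without error; for non-integer values the linear interpolation introduces the numerical diffusion that the paper's arbitrary-order, moment-preserving interpolant is designed to avoid.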
An HFB scheme in natural orbitals
International Nuclear Information System (INIS)
Reinhard, P.G.; Rutz, K.; Maruhn, J.A.
1997-01-01
We present a formulation of the Hartree-Fock-Bogoliubov (HFB) equations which solves the problem directly in the basis of natural orbitals. This provides a very efficient scheme which is particularly suited for large scale calculations on coordinate-space grids. (orig.)
A classification scheme for LWR fuel assemblies
Energy Technology Data Exchange (ETDEWEB)
Moore, R.S.; Williamson, D.A.; Notz, K.J.
1988-11-01
With over 100 light water nuclear reactors operating nationwide, representing designs by four primary vendors, and with reload fuel manufactured by these vendors and additional suppliers, a wide variety of fuel assembly types are in existence. At Oak Ridge National Laboratory, both the Systems Integration Program and the Characteristics Data Base project required a classification scheme for these fuels. This scheme can be applied to other areas and is expected to be of value to many Office of Civilian Radioactive Waste Management programs. To develop the classification scheme, extensive information on the fuel assemblies that have been and are being manufactured by the various nuclear fuel vendors was compiled, reviewed, and evaluated. It was determined that it is possible to characterize assemblies in a systematic manner, using a combination of physical factors. A two-stage scheme was developed consisting of 79 assembly types, which are grouped into 22 assembly classes. The assembly classes are determined by the general design of the reactor cores in which the assemblies are, or were, used. The general BWR and PWR classes are divided differently but both are based on reactor core configuration. 2 refs., 15 tabs.
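A two-stage class/type scheme like the one described maps naturally onto a small lookup structure: the class is fixed by the reactor-core design, and the type refines it to a specific vendor design. The sketch below is illustrative only; the class and type names in `CATALOG` are assumptions, not the actual 22-class, 79-type ORNL taxonomy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssemblyID:
    """Two-stage identifier: an assembly class determined by the core
    design, plus a specific assembly type within that class."""
    assembly_class: str
    assembly_type: str

# Hypothetical excerpt of a class -> types catalog.
CATALOG = {
    "GE BWR/4-6": ["GE 7x7", "GE 8x8"],
    "W 17x17":    ["W 17x17 Std", "W 17x17 OFA"],
}

def classify(assembly_class, assembly_type):
    """Validate a class/type pair against the catalog before use."""
    if assembly_type not in CATALOG.get(assembly_class, []):
        raise ValueError("unknown class/type combination")
    return AssemblyID(assembly_class, assembly_type)
```

Keeping the class as the first-stage key mirrors the report's observation that core configuration, not vendor, is the natural grouping for reload fuel from multiple suppliers.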
The EU Greenhouse Gas Emissions Trading Scheme
Woerdman, Edwin; Woerdman, Edwin; Roggenkamp, Martha; Holwerda, Marijn
2015-01-01
This chapter explains how greenhouse gas emissions trading works, provides the essentials of the Directive on the European Union Emissions Trading Scheme (EU ETS) and summarizes the main implementation problems of the EU ETS. In addition, a law and economics approach is used to discuss the dilemmas
International Nuclear Information System (INIS)
Pinto, H.V.
1976-02-01
Calibration in energy and efficiency of the system used. Acquisition of singles gamma-ray spectra of low and high energy. Reduction of the data obtained in the spectrometer by means of a computer: localization and determination of the peak areas, and analysis of peak shapes for identification of doublets. Checking of the decay scheme.
Parallel knock-out schemes in networks
Broersma, H.J.; Fomin, F.V.; Woeginger, G.J.
2004-01-01
We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997 in which each vertex eliminates exactly one of its neighbours in each round. We consider cases in which all vertices are eliminated after a finite number of rounds, where the minimum such number is called the parallel
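One round of such a scheme is easy to simulate: every surviving vertex simultaneously fires at one surviving neighbour, and all targets are removed together. The sketch below is a toy greedy version (each vertex shoots its smallest-numbered live neighbour), which is an assumed firing rule for illustration; it does not compute the optimal parallel knock-out number.

```python
def knockout_rounds(adj):
    """Simulate a greedy parallel knock-out process on a graph given as an
    adjacency dict. Each round, every surviving vertex eliminates its
    smallest-numbered surviving neighbour; all hits land simultaneously.
    Returns (rounds played, set of survivors when no vertex can fire)."""
    alive = set(adj)
    rounds = 0
    while len(alive) > 1:
        targets = set()
        for v in alive:
            nbrs = [w for w in adj[v] if w in alive]
            if nbrs:
                targets.add(min(nbrs))
        if not targets:
            break          # no surviving vertex has a surviving neighbour
        alive -= targets   # simultaneous elimination
        rounds += 1
    return rounds, alive
```

On a 4-cycle this greedy rule happens to empty the graph in two rounds; in general the minimum number of rounds over all valid firing choices is the quantity the abstract's schemes analyze.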
Nonclassical lightstates in optical communication schemes
International Nuclear Information System (INIS)
Mattle, K. U.
1997-11-01
The present thesis is the result of theoretical and experimental work on quantum information and quantum communication. The first part describes a new high-intensity source of polarization-entangled photon pairs. The high quality of the source is clearly demonstrated by violating a Bell inequality by 100 standard deviations in less than 5 minutes. This new source is a powerful tool for new experiments in fundamental as well as applied physics. The next chapter presents an experimental implementation of an optical dense quantum coding scheme. The combination of Bell-state generation and analysis of these entangled states leads to a new nonclassical communication scheme in which the channel capacity is enhanced: a single two-state photon can be used for coding and decoding 1.58 bits instead of the 1 bit possible for classical two-state systems. The following chapter discusses two-photon interference effects for two independent light sources; in an experiment, two independent fluorescence pulses show this kind of interference effect. The fifth chapter describes 3-photon interference effects. This nonclassical interference effect is the elementary process underlying the quantum teleportation scheme, in which an unknown particle state is transmitted from A to B without sending the particle itself. (author)
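The 1.58-bit figure follows directly from counting distinguishable signal states: a classical two-state photon distinguishes 2 messages, while the dense-coding experiment with linear-optics Bell analysis distinguishes 3 of the 4 Bell states, so each photon carries log2(3) bits. A one-line check (the interpretation of 3 distinguishable states is taken from the standard account of this experiment, stated here as an assumption):

```python
import math

# Capacity = log2(number of distinguishable signal states per photon).
classical_capacity = math.log2(2)      # 2 polarization states -> 1 bit
dense_coding_capacity = math.log2(3)   # 3 distinguishable Bell states -> ~1.58 bits
```

Distinguishing all 4 Bell states would give the full log2(4) = 2 bits, which is not achievable with linear optics alone; hence the intermediate 1.58-bit channel capacity quoted in the abstract.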